Twine is an integration platform that connects your business systems - HR, payroll, time tracking, identity providers, and more - and keeps their data in sync. You define what data should flow where, and Twine handles the rest.

How data flows

1. Fetch - Twine reads data from a source system as-is, in whatever shape that system exposes it.
2. Map - Each property from the source is mapped to a Twine property. There are three ways a mapping can work:
  • Direct (1:1) - the source value is used as-is.
  • 1:1 with converters - the source value passes through one or more converters before being stored. Converters handle type coercions (string to date, float to percentage) as well as system-specific transformations that only make sense for a particular integration.
  • Data Engine (or Graph Engine for legacy setups) - a node-based pipeline derives the final value from zero or more source properties. Because the engine can emit a constant with no inputs at all, it covers cases that can’t be expressed as a simple field mapping.
The result of this stage is a fully mapped entity in Twine’s data model, stored in Twine’s database.
3. Replicate - Once an entity is stored, Twine spawns a replication job for each destination system configured in the domain mapping. Each destination receives the data in the format it expects, independently of how other destinations consume the same data.
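The three stages can be sketched in a few lines of Python. Everything here is illustrative - the names (`sync`, `apply_converters`, the mapping shape) are assumptions for the sketch, not Twine’s actual API:

```python
from typing import Any, Callable

Converter = Callable[[Any], Any]

def apply_converters(value: Any, converters: list[Converter]) -> Any:
    """A 1:1 mapping with converters: the value passes through each in order."""
    for convert in converters:
        value = convert(value)
    return value

def sync(source_record: dict,
         mappings: dict[str, tuple[str, list[Converter]]],
         destinations: list[Callable[[dict], None]]) -> dict:
    # Map: translate each fetched source property into a Twine property.
    entity = {
        twine_prop: apply_converters(source_record[src_prop], converters)
        for twine_prop, (src_prop, converters) in mappings.items()
    }
    # Replicate: every configured destination receives the stored entity,
    # independently of the others.
    for deliver in destinations:
        deliver(dict(entity))
    return entity

# A record fetched as-is from a hypothetical source system:
record = {"hired": "2024-01-15", "fte": 0.8}
mappings = {
    "start_date": ("hired", []),                          # direct 1:1
    "workload_pct": ("fte", [lambda f: round(f * 100)]),  # float -> percentage
}
entity = sync(record, mappings, destinations=[])
```

A direct mapping is just a converter chain of length zero, which is why the same code path handles both cases.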

Configuration primitives

Twine exposes a small set of building blocks that you combine to describe any integration:

Data Domains

Domains segment data by type - for example, employee profiles, employment terms, or salary. Each domain defines what data Twine tracks and how it is structured.

Domain Mappings

A domain mapping connects a source system to a destination for a given domain. It tells Twine which data to fetch, how to map fields, and where to send the result.
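As a mental model, a domain mapping bundles the what, how, and where of a sync. The field names below are assumptions made up for illustration, not Twine’s configuration schema:

```python
# Illustrative shape of a domain mapping; every key name here is an
# assumption, not part of Twine's real configuration format.
domain_mapping = {
    "domain": "employment_terms",          # which data to fetch
    "source": "hr_system",
    "destinations": ["payroll", "time_tracking"],
    "property_mappings": {                 # how to map fields
        "start_date": {"from": "hired", "converters": ["string_to_date"]},
    },
    "trigger": {"type": "schedule", "cron": "0 * * * *"},  # hourly sync
}
```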

Property Mapping

Property mappings translate fields from a source system’s data model into Twine’s internal representation, and from there into whatever the destination expects.

Sync Triggers

Triggers control when Twine runs a sync - on a schedule, in response to a webhook, or manually on demand.

Conditions

Conditions filter data entities before they are processed or distributed, so that only relevant records reach each destination.
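Conceptually, a condition is a predicate applied per entity before replication. A minimal sketch, assuming a hypothetical `status` property and a payroll destination that should only see active employees:

```python
# Hypothetical condition: only active employees reach the payroll destination.
def is_active(entity: dict) -> bool:
    return entity.get("status") == "active"

entities = [
    {"id": 1, "status": "active"},
    {"id": 2, "status": "terminated"},
]

# Entities failing the condition are filtered out before replication.
eligible = [e for e in entities if is_active(e)]
```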

Data Engine

A node-based DAG editor for transforming data. Nodes perform discrete operations - mapping, filtering, arithmetic, branching - and are wired together into a pipeline.
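The evaluation model can be sketched as nodes that pull values from their inputs. This is a toy illustration of the idea, not the Data Engine’s actual node set or wiring format:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Node:
    op: Callable[..., Any]                      # the operation this node performs
    inputs: list["Node"] = field(default_factory=list)

    def evaluate(self) -> Any:
        # Each node evaluates its input nodes, then applies its operation.
        return self.op(*(n.evaluate() for n in self.inputs))

# A constant node has zero inputs, so the engine can emit a value
# derived from no source properties at all.
const = Node(op=lambda: 40)
double = Node(op=lambda x: x * 2, inputs=[const])
result = double.evaluate()
```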

The data model

Twine’s internal data model is built around employees as the primary entity. Most data that flows through Twine describes some aspect of a person’s employment - their role, their salary, their schedule, their identity. A key characteristic of this model is that many properties are date-tracked: rather than storing only the current value, Twine records values by when they take effect. This makes it possible to represent historical data and future-dated changes - though the depth of history available depends on what each source system exposes. Refer to the relevant integration page for details.
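A date-tracked property can be pictured as a sorted list of (effective-from, value) pairs, where a lookup returns the latest value that has taken effect. The storage shape and lookup below are a sketch, not Twine’s actual implementation:

```python
import datetime as dt
from bisect import bisect_right

# A date-tracked salary: values keyed by the date they take effect,
# including a future-dated change. Figures are made up for illustration.
salary_history = [
    (dt.date(2023, 1, 1), 60000),
    (dt.date(2024, 7, 1), 65000),
    (dt.date(2025, 1, 1), 70000),  # future-dated change
]

def value_on(history: list[tuple[dt.date, int]], day: dt.date):
    """Return the value in effect on `day`: the latest entry not after it."""
    dates = [d for d, _ in history]
    i = bisect_right(dates, day)
    return history[i - 1][1] if i else None  # None: no value in effect yet
```

Storing history this way is what lets a single model represent past values and scheduled future changes with the same lookup.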

Data Model

Learn how Twine models employee data, including how dated properties work.