With flows you determine what data is transferred from system A to system B and vice versa. Within an integration you can, in most cases, not set up a flow that goes both ways, as this would trigger an infinite loop of updates between both systems.
The basic principle of a flow is that every functionality always has a get side and a send side. That means that if a system has, for example, a getProduct, it can talk with every system in our platform that has a sendProduct. As mentioned in the integrations part, this also means that you can connect the same type of systems to each other, like ecommerce to ecommerce or ERP to ERP.
Not all systems support the same combinations; this depends on what is available via the API. For example, a POS where we cannot create products via the API but can retrieve them will have a getProduct but not a sendProduct. Always check the corresponding documentation of the application on the website or in the application portal for the most up-to-date overview of the available flows.
A flow is a batch of jobs that run sequentially and process the data from system A to system B. This is done based on different parameters; we will go more in depth on this topic in the dedicated chapters. As an example, a generic product flow (a short code sketch follows the steps below):
Example
Get
- We retrieve a list of products from system A.
- From these products we select 10 articles that we want to handle and we start the processing.
- First we get the product details.
- Then we check combined data from different endpoints to make the product complete.
- At this stage we have all raw data for a product available from the source.
- We apply filters.
- We then hand over the data to our core.
Core
- We transform the received data to our datamodel.
- We adjust data content based on the mapping.
- We add fields that are not provided in the source data.
- We prepare the complete dataset for the target system in our datamodel.
Send
- We receive the information as per our datamodel.
- We transform this data on the send side to the format the target system expects.
- We sequence the jobs in a logical order so we can handle the application-specific requirements.
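To make the three stages concrete, here is a minimal, self-contained sketch of a generic product flow in Python. All function names, field names and values are illustrative assumptions, not the platform's actual code or API.

def get_stage(source_products, batch_size=10):
    """GET side: select a batch of products and apply filters."""
    batch = source_products[:batch_size]
    # Filter example: only handle active products.
    return [p for p in batch if p.get("active")]

def core_stage(raw_products, warehouse_mapping):
    """CORE: transform the raw source data into the internal datamodel."""
    items = []
    for raw in raw_products:
        items.append({
            "product.sku": raw["inventoryNumber"],        # field mapping
            "product.name": raw["description"].strip(),   # adjust data content
            # Field not provided by the source, filled via a mapping table:
            "product.warehouse": warehouse_mapping.get(raw.get("site"), "MAIN"),
        })
    return items

def send_stage(items):
    """SEND side: transform datamodel items into the target format."""
    return [{"sku": i["product.sku"], "title": i["product.name"],
             "warehouse": i["product.warehouse"]} for i in items]

source = [{"inventoryNumber": "A-1", "description": " Chair ", "active": True,
           "site": "NL"}]
print(send_stage(core_stage(get_stage(source), {"NL": "WH-NL"})))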
Overview
In the overview you can see whether the flow is enabled, and here you can resume the flow if necessary. If there are authorization issues the flow can stop automatically. Normally it will be resumed after 5-10 minutes. If the authorization issue still occurs, it will wait for an hour and restart. After this the flow will be disabled until you have taken a look at it.
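As an illustration, the resume behaviour described above roughly follows this pattern (a hypothetical Python sketch; the platform's real implementation is not exposed):

import time

RETRY_DELAYS_SECONDS = [10 * 60, 60 * 60]   # ~5-10 minutes, then 1 hour

def run_with_auto_resume(run_flow):
    """Retry a flow on authorization errors, then disable it."""
    for delay in RETRY_DELAYS_SECONDS:
        try:
            return run_flow()
        except PermissionError:     # stand-in for an authorization issue
            time.sleep(delay)       # wait before the automatic resume
    try:
        return run_flow()           # final attempt after the hour-long wait
    except PermissionError:
        print("Flow disabled until you have taken a look at it")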
Settings
Settings on the flow level relate to both sides of the integration. Next to every setting there is a mark where you can see more details of what that setting does.
Settings are set up in categories; if you used the wizard, the most common settings will already be set. In this overview more settings are available. Within the settings we make a difference between true/false statements, default values/fallback values, mapping tables, categories and attributes.
True/False
A true/false setting can enable or disable a setting or functionality. Examples:
Do you want to use a default value? Do you want to force and overwrite, so we always use this value?
Do you want to update categories? Do you want to update images? Do you want to update product texts? If these settings are enabled, they trigger a functionality in the target system so we do not overwrite, for example, your updated categories or images. If you enrich your product data in your ecommerce system, you don't want it overwritten with flat text lines from the ERP.
Default values / fallback value
If we enable a default value, you can adjust this value for your integration. For example, we send orders to a certain administration ID by default; if you want to change this, you can adjust the default value.
If a mapping table is added to this, the default value behaves as a fallback: when the mapping is not found, the default value is used.
If there is a default value, you can also force this value so it is always used. For example, if the standard warehouse differs per product in your ERP but you want to send everything from 1 warehouse, forcing the value makes sure it is always used, as the sketch below shows.
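A minimal sketch of how a default value, fallback and force setting interact (names and values are illustrative assumptions, not platform code):

def resolve_value(source_value, mapping, default, force=False):
    """Pick the value to send: forced default, mapped value, or fallback."""
    if force:                      # "force and overwrite": always use default
        return default
    if source_value in mapping:    # mapping table hit
        return mapping[source_value]
    return default                 # fallback when no mapping is found

warehouses = {"DEPOT-B": "WH-2"}
print(resolve_value("DEPOT-B", warehouses, default="WH-1"))              # WH-2
print(resolve_value("UNKNOWN", warehouses, default="WH-1"))              # WH-1 (fallback)
print(resolve_value("DEPOT-B", warehouses, default="WH-1", force=True))  # WH-1 (forced)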
Tables
Within the concept of tables we make a difference between mapping tables, configuration tables and custom tables.
Mapping tables
Mapping tables are always related to a field in the data mapping. You can find the corresponding field in the description of the table. What a mapping table basically does is transform value X into value Y. For example, we retrieve a country code from the shipping address and use it as the input to determine the revenue account:
NL → 8000
BE → 8010
This is a 1-1 relation.
If you want to make use of more complex logic, business rules should be added to do this.
Configuration tables
Configuration tables are used for transforming data that does not meet the criteria to be put in the standard data mapping, or does not fit into a mapping table. For example, we use configuration tables for mapping additional fields to attributes. Neither side is part of the standard mapping of the two sides of the integration, and in this way we can still transfer and map the data accordingly.
The second example of a configuration table is the category table that sometimes appears. This is used when the source itself cannot pass an identifier for the level on which a category is used. For example, you want the following structure created in your ecommerce system:
Main category
Sub category
Sub sub category
Field A → Main category
Field B → Sub category
Field C → Sub sub category
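As a sketch (with hypothetical field names), such a category table can be thought of as assigning each source field to a level and building the category path from it:

category_table = {            # source field -> category level
    "field_a": "main",
    "field_b": "sub",
    "field_c": "subsub",
}

product = {"field_a": "Furniture", "field_b": "Chairs", "field_c": "Office"}

levels = ["main", "sub", "subsub"]
by_level = {category_table[f]: v for f, v in product.items() if f in category_table}
print(" > ".join(by_level[lvl] for lvl in levels if lvl in by_level))
# Furniture > Chairs > Office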
Business Rules
Within custom tables you can transform data while still mapping it to the expected outcome. For example:
You want to make sure the correct revenue account is selected. With the normal mapping table you can create a mapping like:
Country → Revenue
NL → 8000
BE → 8010
Within the custom table you can add additional values to the equation. For example:
((Country = NL) and (Company name contains a value)) → 8010
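A sketch of this rule in Python, reusing the mapping table from above (illustrative, not how business rules are configured in the platform):

revenue_mapping = {"NL": "8000", "BE": "8010"}

def revenue_account(country, company_name):
    # Business rule: Dutch orders with a company name go to 8010.
    if country == "NL" and company_name:
        return "8010"
    return revenue_mapping.get(country, "8000")   # fall back to the 1-1 table

print(revenue_account("NL", ""))          # 8000 (plain mapping)
print(revenue_account("NL", "ACME BV"))   # 8010 (rule applies)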
Data mapping
In the data mapping we map the fields between the two systems via our datamodel. As you can see in the picture below, we receive an inventory number from the source and translate it to product.sku. Please be aware that you can map fields multiple times on the get side of the flow, but you should not map fields multiple times on the send side, as we use the last known value to send, even if it is empty.
As you might notice, a field name can contain the following characters: the . and the :
These characters are based upon the JSON structure we retrieve from the source. In this way we can map in a generic way and distinguish where the data is located within the JSON.
JSON towards dataformat
In the example below we receive a JSON with this format; please keep in mind that the values are case sensitive.
{
  "item": {
    "id": "string",
    "values": [
      {
        "name": "string",
        "value": "string"
      }
    ]
  }
}
We translate this into the following format in our datamodel:
- item.id
- item.values:name
Explanation
Symbols in the JSON have several functions. A { starts what we call an associative array: an array where every key/field has a name. In the above JSON you can identify this as item.id.
A [ starts a non-associative array: an array where we retrieve multiple lines of the same structure, for example multiple order lines.
If we analyze item.id: item is the first value in the array. Because a { follows it, we translate this to a . ; the first field we then find is id, so we put that directly after the dot and get item.id. The actual value in the string field is retrieved automatically.
To get to the name we follow a similar path: we have item, then the . for the {, and then a new array from values that starts with a [. This we transform into a :, which translates to item.values:name. In this non-associative array there can be multiple lines, and in this way we can handle them correctly.
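The following sketch derives these paths from any JSON document using the rules above ({ becomes a dot, [ becomes a colon). It is an illustration of the notation, not platform code:

import json

def field_paths(node, prefix=""):
    """Yield every field path in the dot/colon notation described above."""
    if isinstance(node, dict):                  # { -> associative array
        for key, value in node.items():
            if not prefix:
                new = key
            elif prefix.endswith(":"):
                new = prefix + key              # colon already separates
            else:
                new = prefix + "." + key        # dot between named fields
            yield from field_paths(value, new)
    elif isinstance(node, list):                # [ -> non-associative array
        for element in node:
            yield from field_paths(element, prefix + ":")
    else:
        yield prefix                            # leaf: the actual value

raw = json.loads('{"item": {"id": "A-1", '
                 '"values": [{"name": "color", "value": "red"}]}}')
print(sorted(set(field_paths(raw))))
# ['item.id', 'item.values:name', 'item.values:value']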
Calculated / manipulated fields
As not all sources supply the data we need for the target system, there are some fields where we can deliver a calculated value if it is not provided.
Price fields are an example of calculated fields. Not all source and target systems include both a gross and a net price field, or even a tax percentage or identifier field. In the case below we see a mapping where the fields are available for a direct mapping: on the get side we retrieve 3 out of 4 fields and we need to map them to 2 fields on the send side.
[Screenshot: SEND and GET side of the price mapping]
In this next example you see that only the price including tax and the tax percentage are available on the get side; in this case we calculate the unit price excluding tax for the send side. We always need 2 values to make a correct calculation: if we have an including and an excluding tax amount, for example, we calculate the percentage. The same holds for the tax amount and the including and excluding tax amounts.
[Screenshot: SEND and GET side of the price mapping]
In case of only 1 variable on the get side we can still calculate the value needed, but then we can only use 1 percentage, which is defined on the integration level with the default tax rates.
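A sketch of these price calculations (the default rate and the rounding are illustrative assumptions):

DEFAULT_TAX_RATE = 21.0   # assumed integration-level default (percent)

def unit_price_excl(price_incl, tax_rate=None, price_excl=None):
    """Derive the unit price excluding tax from the values that are present."""
    if price_excl is not None:
        return price_excl                          # direct mapping possible
    if tax_rate is None:
        tax_rate = DEFAULT_TAX_RATE                # only 1 value: use default
    return round(price_incl / (1 + tax_rate / 100), 2)

print(unit_price_excl(121.00, tax_rate=21.0))  # 100.0 (2 values: calculated)
print(unit_price_excl(121.00))                 # 100.0 (1 value: default rate)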
A second example where we manipulate data is the addresses. As not all systems deliver addresses in the way we expect, we transform them into our datamodel. Names are provided in different ways: as a single line, split up into different fields, and with company names. The same as for prices: as long as we have values, we cut them up to fill all the applicable fields in the datamodel. It can occur that an infix is not passed correctly, as this is not always clear when it is provided from the source.
The actual addresses have multiple possibilities, as the differences between the systems are bigger.
When we have an address with a house number, it can be populated into the street field. In case of a complete address that needs to be populated in 1 field, you can use the fullAddress field in Apicenter.
A third mapping that does not need a 1-1 mapping is the country identifier. Most systems supply this as an ISO 2- or 3-letter standard value. We translate this within the model to the correct expected value for the target system. In addition, we maintain custom tables for certain applications that do not follow the ISO standard, so we can translate the name to an ISO standard value. For the country names we have tried to include all available translations.
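A simplified sketch of this country normalization; the tables are small illustrative samples, not the full translation tables we maintain:

ISO3_TO_ISO2 = {"NLD": "NL", "BEL": "BE", "DEU": "DE"}
CUSTOM_NAMES = {"Holland": "NL", "The Netherlands": "NL", "Belgie": "BE"}

def normalize_country(value):
    """Return an ISO 2 code from an ISO 2/3 code or a known country name."""
    value = value.strip()
    if len(value) == 2 and value.isalpha():
        return value.upper()                      # already ISO 2
    if value.upper() in ISO3_TO_ISO2:
        return ISO3_TO_ISO2[value.upper()]        # ISO 3 -> ISO 2
    return CUSTOM_NAMES.get(value)                # custom table fallback

print(normalize_country("NLD"))       # NL
print(normalize_country("Holland"))   # NL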
Custom fields
We have two types of custom fields: string fields and value fields. On the string fields you can map fields that contain a string, while the value fields expect an integer. You can use the microfunctions next to the mapping to make sure the field is transformed into the correct format.
Custom fields are never mapped by default.
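As an illustration of the kind of transformation such a microfunction performs (a hypothetical sketch, not an actual microfunction):

def to_int(raw):
    """Strip formatting and cast to int, e.g. "1.250 pcs" -> 1250."""
    digits = "".join(ch for ch in str(raw) if ch.isdigit())
    return int(digits) if digits else None

print(to_int("1.250 pcs"))   # 1250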
What to do when fields are not available?
You can also add fields that are not available in the standard dataformat. These fields need to be present in the requests we make, so you should check whether they are available in the raw data. When adding them, please make sure you follow the structure described in the JSON towards dataformat chapter.
If these fields need to be added to the dataformat because they became part of the standard output of a system, please contact support so they can create a request for change.
Why are not all fields mapped?
A lot of fields are automatically filled by the target system, so we do not need to map them. For example the ID of the customer: in 99% of the cases you want the target system to generate it, to make sure you get no double entries.
There are also a few known fields that cause issues, for example state codes. There is no standard for these in Europe, so we are not able to map them accordingly.
VAT IDs/numbers are not mapped by default, because most ERP systems only accept valid IDs, not random values. Most ecommerce systems do not check VAT numbers, so any value can be filled in here, and those values are then not accepted on the sendOrder, for example.
The same is true for chamber of commerce identifiers.
If you are missing standard fields in the mapping, please let us know.
Endpoints
Under endpoints you can see which endpoints we use for retrieving or sending data. These endpoints are adjustable, but we recommend not to change them, as this might interfere with the way our integrations work.
A reason you might want to change an endpoint is to add an additional filter to the URL to reduce the load on the source system, or to make sure a certain criterion is met.
In the picture above you see that there are values between { }; these are values that come from fields in the source data. Below that you see the type of request: we distinguish between GET, PUT, POST, PATCH and DELETE. The DELETE function we do not use.
With GraphQL integrations you will probably see only 1 or 2 endpoints, as all data comes through the same endpoint.
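A sketch of how the { } placeholders are filled from the source data, and how an extra URL filter could be appended (the endpoint and parameter names are illustrative assumptions):

def build_endpoint(template, record, extra_filter=None):
    url = template.format(**record)      # replace {field} with source values
    if extra_filter:
        sep = "&" if "?" in url else "?"
        url += sep + extra_filter        # e.g. reduce load on the source system
    return url

record = {"productId": "A-1"}
print(build_endpoint("/api/products/{productId}", record,
                     extra_filter="updatedSince=2024-01-01"))
# /api/products/A-1?updatedSince=2024-01-01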
Filter Options
Sometimes it is difficult to filter the data we retrieve; for example, there are systems where we cannot filter within the URL, or the documented filter options are not sufficient. For this we added filter options. Within a filter option you can apply a filter where we check on the completed raw data whether certain criteria are met.
For this you start with the condition: which field do we need to check, and which value does it need to be equal or not equal to. You can add multiple values that need to be checked to create a complex filter, as the sketch below shows. For more filter examples, please check the documentation about filter options.
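A sketch of how such a filter option evaluates conditions against the raw data (the operators and field names are illustrative assumptions):

def passes_filter(record, conditions):
    """Each condition is (field, operator, value); all must match."""
    for field, op, value in conditions:
        actual = record.get(field)
        if op == "equal" and actual != value:
            return False
        if op == "not_equal" and actual == value:
            return False
    return True

conditions = [("status", "equal", "completed"), ("channel", "not_equal", "POS")]
print(passes_filter({"status": "completed", "channel": "web"}, conditions))  # True
print(passes_filter({"status": "open", "channel": "web"}, conditions))       # False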
Activity
Within the activity log you can monitor what is happening within the integration. As a partner or administrator you can also see debug logs; as a customer/end user you can only see the main logs.
As an admin or partner you can enable the debug logs under the settings. When you enable them as a partner, they will be turned off automatically after 1 day; as an admin they will stay active until you close them.
There are 5 symbols:
- Warning: this is not per se an error, but it can happen that it needs attention.
- Failed: there is an error somewhere in the flow; some of the activities can be retried when there is an error.
- Pending: we did not get a success or failure response back from the system we are connecting to. This can also happen when there is a timeout.
- Success: the complete process has finished without errors or warnings.
- Debug: these logs are not visible for everybody; here you can see step by step what is happening and where in the process something stops.
Admin
In this panel there is admin functionality that is available for partners and admins.
Here you can trigger the flow manually, so you do not need to wait for a cronjob or webhook.
In case of the order flow you can also remove orders from our database to resend them to the target system.