I’m pondering how to get my existing data into my new Squidex schemas. I see that the CLI allows for import from CSV, which is great, but I have a couple of additional questions:
- Some of my schema fields contain images and audio files. Can the CLI handle this somehow, or do I need to write a custom tool?
- Can the CLI handle references from one schema to another, and if so, what is the format or method?
Thanks for the help!
Thanks for your reply, Sebastian.
About my second question, here’s what I mean. Let’s say I have a “customer” schema that is already populated in Squidex and it includes a customerId field. And I have a “sales” schema of sales transactions that includes a reference type field that references the customer schema.
I have new sales transactions in a CSV that includes the customer id. The sales transactions have never been in Squidex before.
Is there a way to import the sales transactions such that the proper reference is made for each entry to the customer schema, matching on customerId?
Not automatically; you have to resolve the reference id first with a query, which is what the CLI does as well. I can tell you more about this tomorrow.
Here is what I remember:
The Sync tool of the CLI creates something like a workspace with different JSON files: one file per schema, one file per rule, one file for all your settings, and a number of files for your content. Each content item is structured like this:
{
  "schema": "my-schema",
  "filter": {
    "path": "data.id.iv",
    "op": "eq",
    "value": 1
  },
  "data": {
    "id": {
      "iv": 1
    },
    "text": {
      "iv": "Text"
    }
  }
}
The problem with Squidex until the end of last year was that IDs were generated automatically. Therefore you have to use something that makes your content easy to identify. In this example we have used a field "id" that is unique for our content. So the CLI makes a query first and checks whether the filter returns exactly one item, and then it either updates or creates the content item. For this the bulk endpoint is used in Squidex, so it is only one API call. If the filter returns multiple items we have a problem and cannot do the sync for this content item.
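The matching rule above can be sketched as a small helper (my own illustration, not actual CLI code): the filter query must return exactly one item, otherwise the sync cannot decide which content item to update.

```python
def resolve_content_id(results):
    """Return the Squidex content id if the query matched exactly one item,
    otherwise fail, mirroring the CLI's behavior described above."""
    if len(results) != 1:
        raise ValueError(
            f"filter matched {len(results)} items, expected exactly 1"
        )
    return results[0]["id"]

# Exactly one match: the upsert targets this content id.
content_id = resolve_content_id([{"id": "abc123"}])

# Zero or multiple matches: the sync fails for this content item.
try:
    resolve_content_id([{"id": "a"}, {"id": "b"}])
    ambiguous_ok = True
except ValueError:
    ambiguous_ok = False
```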
To solve the problem with references between schemas, the syntax was like this:
{
  "schema": "my-schema",
  "filter": {
    "path": "data.id.iv",
    "op": "eq",
    "value": 1
  },
  "data": {
    "id": {
      "iv": 1
    },
    "text": {
      "iv": "Text"
    },
    "reference": {
      "iv": ["MY-REF1"]
    }
  },
  "references": {
    "MY-REF1": {
      "schema": "my-schema",
      "filter": {
        "path": "data.id.iv",
        "op": "eq",
        "value": 1
      }
    }
  }
}
So the idea was that you use a placeholder in your data and then you define how the placeholder should be resolved.
Then the sync tool builds a dependency graph between your schemas and executes them in the right order.
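The ordering step can be sketched with Python's stdlib `graphlib`, assuming the graph maps each schema to the schemas it references (the schema names here are made-up example data, not from the CLI):

```python
from graphlib import TopologicalSorter

# Each schema maps to the set of schemas it references (its predecessors).
deps = {
    "sales": {"customer"},  # sales has a reference field to customer
    "customer": set(),
}

# Referenced schemas come first, so their ids exist when the
# referencing content is synced.
order = list(TopologicalSorter(deps).static_order())
```

Note that a cycle in this graph (e.g. a schema referencing itself, as in tree structures) raises `CycleError`, which matches the schema-level limitation mentioned below.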
There are a few problems with this:
- It can only build the dependencies on schema level, not on content level, so it does not work for tree-structures.
- It cannot export the contents to JSON files.
With the custom IDs we could improve that and extend the CLI to sync contents and scripts as well. Perhaps we would also need more features, e.g. to avoid conflicts when a content item is newer in the target system than in the source system.
Thanks for that detailed reply!
Assuming I use a query to get the reference id first, which in my example would be the reference id of the “matching” customer record, would the path for the import be something like this:
salesCustomer.iv.0.schemaIds=referenceId
If the CSV importer supports it, it would be salesCustomer.iv.0 only.
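To illustrate what that path maps to, here is a sketch of building the data payload for one CSV row, assuming a hypothetical "amount" field and a `customer_ids` lookup filled beforehand by querying the customer schema (customerId to Squidex content id):

```python
import csv
import io

def row_to_data(row, customer_ids):
    """Build the Squidex data payload for one sales transaction row."""
    return {
        "amount": {"iv": float(row["amount"])},
        # salesCustomer.iv is an array of referenced content ids,
        # which is why the flat import path ends in .0
        "salesCustomer": {"iv": [customer_ids[row["customerId"]]]},
    }

rows = csv.DictReader(io.StringIO("customerId,amount\nC1,9.99\n"))
payload = row_to_data(next(rows), {"C1": "resolved-customer-content-id"})
```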
Got it. Thanks very much! Now I’ll do some experimentation.