The events show the same problem again. There is something missing in between, as if part of a buffer was not sent.
Deleting events is not a long-term solution; you might lose the ability to change the content items that are affected.
Hi @Sebastian,
We are trying to write a small console app that can run the steps below automatically whenever data corruption happens. Can you please check the steps and validate that they will not cause any other issues in Squidex? If you can suggest a better approach, please do.
1 - getAllContent for the schema that got corrupted
1.1 - Get the latest version of each content item
1.2 - Get one version before that: latest - 1
1.3 - Collect all GUIDs for which the above two steps throw an exception (500 error)
2 - Print all such GUIDs to the console output
3 - Connect to MongoDB and query the Events2 collection: { EventStream: "content-AppID--GUID" } (see the sketch after this list)
4 - Save the output of the above query to Blob Storage or a local file
5 - Try to deserialise each event payload and mark the ones that cannot be deserialised as corrupted
6 - Delete all records returned by the query in step 3 from the Events2 collection
7 - Query MongoDB States_Contents_All3 and States_Contents_Published3 with { _id: "AppID--GUID" }
8 - Take a backup of the above records to Blob Storage or a local file
9 - Delete the records found in step 7 from Mongo
10 - Use the deleted (and not corrupted) events to recreate the content with the same GUID
11 - Apply a final update with the latest content that was backed up in step 8
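For the MongoDB part (steps 3 to 9), a minimal sketch could look like the one below. It assumes the MongoDB.Driver package and a .NET 6+ console app; the connection string, database name and ids are placeholders, and the stream name and _id formats (content-{appId}--{id} and {appId}--{id}) are taken from the queries above, so please verify them against your own collections before deleting anything.

```csharp
// Minimal sketch of steps 3-9 (backup first, delete afterwards).
// Assumptions: MongoDB.Driver NuGet package, .NET 6+ top-level statements,
// placeholder connection string, database name and ids.
using MongoDB.Bson;
using MongoDB.Driver;

var mongo = new MongoClient("mongodb://localhost:27017");   // assumption: your MongoDB endpoint
var db = mongo.GetDatabase("Squidex");                      // assumption: your Squidex database name

var events = db.GetCollection<BsonDocument>("Events2");
var snapshots = new[]
{
    db.GetCollection<BsonDocument>("States_Contents_All3"),
    db.GetCollection<BsonDocument>("States_Contents_Published3")
};

var appId = "<APP-ID>";                                     // placeholder
var contentId = "<CONTENT-GUID>";                           // placeholder (one of the ids from step 1.3)

// Step 3: query Events2 by stream name.
var streamFilter = Builders<BsonDocument>.Filter.Eq("EventStream", $"content-{appId}--{contentId}");
var eventDocs = await events.Find(streamFilter).ToListAsync();

// Step 4: back up the events to a local file before touching anything.
await File.WriteAllTextAsync($"events-{contentId}.json", eventDocs.ToJson());

// Step 6: delete the events (only after the backup has been verified).
await events.DeleteManyAsync(streamFilter);

// Steps 7-9: back up and delete the content snapshots.
var idFilter = Builders<BsonDocument>.Filter.Eq("_id", $"{appId}--{contentId}");

foreach (var collection in snapshots)
{
    var docs = await collection.Find(idFilter).ToListAsync();
    await File.WriteAllTextAsync($"{collection.CollectionNamespace.CollectionName}-{contentId}.json", docs.ToJson());
    await collection.DeleteManyAsync(idFilter);
}
```

Step 5 (the deserialisation check) and steps 10-11 (the replay) would go through the Squidex API rather than MongoDB, so they are left out of this sketch.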
Sounds like a plan, but it only addresses the symptoms. I still think it must be some kind of hardware problem.
We just want to make sure the above approach won't cause any other problems. We are also hoping we never have to use this in the production environment.
@Sebastian, one of my team members suggested that we could just fix the corrupted events with a good payload (maybe the latest one). That way we would not even need to delete them in Mongo, and we could also avoid recreating the content from all events. What do you think about fixing the corrupted events' payload information?
How would you do that? Where do you get the original from? You can try to take the latest data and reconstruct the event, but I would never go with this into production.
Thanks for checking this. We will stick to the old design plan.
While making a GET call, it got stuck at one point with the same error. When I ran it again, everything worked fine. It was not even data corruption or anything; Squidex itself was fine.
@Sebastian, when I delete all events and content from MongoDB for a GUID, I use those events to replay and recreate the content. Everything works fine on the first run. When I run the process a second time with exactly the same steps, I get the error below. Somehow, somewhere, this GUID is already present and I get an OBJECT_CONFLICT error. Can you please suggest what could be wrong?
I don't want to change the GUID, as it might be referenced in some other schema. If I change the GUID in my recreate process, it works every time, but I want to keep the same GUID.
2022-07-15 14:45:28,775 [NPVL201554] [14] Information logger [XpsSquidexDataFix.Service.ApplicationRunner] - filterval: 8f4f18b8-8080-447a-a741-94c188927e89--d58ad080-e417-4491-966e-03388935d222
2022-07-15 14:45:29,169 [NPVL201554] [14] Information logger [XpsSquidexDataFix.Service.ApplicationRunner] - Trying to Deserialize event
System.AggregateException: One or more errors occurred. (Squidex Request failed: {"message":"Entity (8f4f18b8-8080-447a-a741-94c188927e89--d58ad080-e417-4491-966e-03388935d222) already exists.","errorCode":"OBJECT_CONFLICT","traceId":"00-9e9cb2f57e3fa2531638641aeb883dea-5c7b6317b71740a5-01","type":"https://tools.ietf.org/html/rfc7231#section-6.5.8","statusCode":409})
 ---> Squidex.ClientLibrary.SquidexException: Squidex Request failed: {"message":"Entity (8f4f18b8-8080-447a-a741-94c188927e89--d58ad080-e417-4491-966e-03388935d222) already exists.","errorCode":"OBJECT_CONFLICT","traceId":"00-9e9cb2f57e3fa2531638641aeb883dea-5c7b6317b71740a5-01","type":"https://tools.ietf.org/html/rfc7231#section-6.5.8","statusCode":409}
How do you know that it is not corruption? It is more or less the same error. Because it looks pretty random which part of the string is not written to MongoDB (or returned?), the exception could happen basically anywhere.
I need more context: what are you doing here? The logs are not really helpful and the formatting is difficult to read. OBJECT_CONFLICT basically means that a content item with the same ID already exists. This is expected if you recreate content, but it also depends on how you create it. For example, Upsert should work fine, while Create throws this error if the ID is already taken.
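For illustration, a minimal sketch with Squidex.ClientLibrary (which your stack trace already uses) could look like this. It assumes a recent client version where the contents client exposes UpsertAsync; type and method names differ between client versions (older versions use SquidexClientManager), so verify them against the version you reference. All URLs, credentials and names are placeholders.

```csharp
// Minimal sketch, assuming a recent Squidex.ClientLibrary.
// All names (URL, app, client id/secret, schema, id) are placeholders.
using Squidex.ClientLibrary;

var client = new SquidexClient(new SquidexOptions
{
    Url = "https://your-squidex-host",   // placeholder
    AppName = "your-app",                // placeholder
    ClientId = "your-app:default",       // placeholder
    ClientSecret = "your-secret"         // placeholder
});

var contents = client.Contents<DynamicContent, DynamicData>("your-schema");

// Data restored from the backed-up snapshot / replayed events.
var data = new DynamicData();

// Creating with a fixed id fails with OBJECT_CONFLICT (409) as soon as the id exists,
// which is what the second run of the tool hits.
// Upsert creates the item on the first run and updates it on every later run,
// so it is the safer call when replaying with the same GUID.
await contents.UpsertAsync("<CONTENT-GUID>", data);
```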