Our service extension is successfully writing several concurrent records and a statistic item at the end of our commands. However, the overall runtime is very slow – 1500ms or longer. Most of this time is in reading the records and stats first, but I’ve drastically improved that with the bulk cloud save read API.
My problem is with our concurrent writes. There is no bulk concurrent write API, so I implemented a parallel path using the Task API (C#): I create a task for each concurrent write and stat write and then call Task.WaitAll(writeTasks). The async path appears to take just as long as the sync path. There are no errors and the saved data is correct in both paths, but I'd like to speed up the writes – they usually take ~400 ms in total, and sometimes as much as 1500 ms.
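One thing worth ruling out is that the writes are actually executing sequentially despite being wrapped in tasks – e.g. each call is awaited before the next one starts, or the HTTP connection pool is capped. A minimal sketch of the timing difference between the two shapes, with a simulated WriteRecordAsync standing in for the real SDK call (the ~100 ms delay is an assumed per-request latency, not a measured one):

```csharp
using System.Diagnostics;
using System.Threading.Tasks;

public static class WriteTiming
{
    // Stand-in for the real concurrent-write SDK call; here it just
    // simulates one network round-trip of ~100 ms.
    static Task WriteRecordAsync(string key) => Task.Delay(100);

    // Awaiting each write before starting the next: total ≈ N round-trips.
    public static async Task<long> RunSequentialAsync(string[] keys)
    {
        var sw = Stopwatch.StartNew();
        foreach (var key in keys)
            await WriteRecordAsync(key);
        return sw.ElapsedMilliseconds;
    }

    // Starting all writes first, then awaiting them together: total ≈ 1 round-trip.
    public static async Task<long> RunConcurrentAsync(string[] keys)
    {
        var sw = Stopwatch.StartNew();
        var writeTasks = new Task[keys.Length];
        for (int i = 0; i < keys.Length; i++)
            writeTasks[i] = WriteRecordAsync(keys[i]);
        await Task.WhenAll(writeTasks);
        return sw.ElapsedMilliseconds;
    }
}
```

Task.WaitAll blocks the calling thread but otherwise completes at the same time as awaiting Task.WhenAll, so it shouldn't be the source of the slowdown by itself. If the concurrent shape still takes as long as the sequential one against the real service, it may be worth checking for a client-side connection limit (on .NET Framework, ServicePointManager.DefaultConnectionLimit historically defaults to 2 per host) or server-side per-user rate limiting.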
Do you have any suggestions on how to speed up concurrent writes?
Could you share what kind of record you use?
Is it a game record, player record, admin player record, or admin game record?
For player records: to write cloudsave records for multiple user IDs, there is the following bulk endpoint:
To write multiple cloudsave records under one user ID, there is the following bulk endpoint:
Other than that, could you share the size of your cloud save records? I believe the size of the request payload affects the latency of the endpoint itself.
The goal is to write multiple records (concurrently) to the same user ID. We have the possibility of users playing from a phone and a computer (or two phones) at once and getting write corruption, which is why we want to use the concurrent API in the first place.
A quick look says we have 5 records with the following elapsed write time, dictionary length, and size in bytes:
89 ms, 3 entries, 64 bytes
89 ms, 8 entries, 988 bytes
89 ms, 4 entries, 168 bytes
89 ms, 1 entry, 12 bytes
85 ms, 3 entries, 44 bytes
It would be great if we could combine these as our use case is pretty sensitive to latency, hence my question.
Thank you for raising this request. Currently the cloud save service doesn't have a bulk concurrent update and only supports single updates. However, after reviewing our team's current workload and availability, we unfortunately won't be able to support this in the near future. The expectation is that we could revisit it next year at the earliest.
As an alternative, may I know how often players play from multiple devices at once? If it is rare, I suggest using the bulk PUT write endpoint to minimize the I/O overhead. But if it happens often, you can keep using the single concurrent update while decreasing the number of records being updated.
Other than that, we will also discuss this with the C# SDK team in case they have any other suggestions.
The problem is we don’t know how often this case would come up, but it only has to happen once to lose data unless we use the concurrent API. That’s why we’re using it even though it’d be more convenient to use the put bulk write.
These records can’t easily be combined because they have different domains: per player or per character or per zone, etc. However, I can look into it.