Lag...again

VeNoM0619
"Good" Mastery Member
Posts: 85
Joined: Wed Apr 19, 2017 2:07 am

Re: Lag...again

Postby VeNoM0619 » Mon May 01, 2017 6:41 am

Just a suggestion that may reduce CPU/Network/Disk load:
Change your trade house so we can do bulk buy/sell.

Me buying 700 artifacts = 1400 network packets, 700 commits, and 700 verifications/processing, which could all turn into 2 packets, 1 commit, 1 (bigger, but still small) verification, and my wrists/hands wouldn't get tired afterwards either.

Oh... and maybe I won't complain if the server lags... because it's not 700 x 10 seconds (lag) each, it's just 1 x 10 seconds. That's 2 hours (i.e. log out and don't play) vs. waiting 10 seconds.
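
Roughly what I mean, as a toy sketch in Python (sqlite3 and a made-up "tradehouse" table standing in for whatever database/schema the game actually uses, so this is illustration only, not their code):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE tradehouse (item_id INTEGER PRIMARY KEY, price INTEGER, sold INTEGER DEFAULT 0)")
    conn.executemany("INSERT INTO tradehouse (item_id, price) VALUES (?, ?)",
                     [(i, 150000) for i in range(700)])
    conn.commit()

    def buy_one_at_a_time(ids):
        # roughly what 700 single buys cost: 700 round trips, 700 transactions, 700 commits
        for item_id in ids:
            conn.execute("UPDATE tradehouse SET sold = 1 WHERE item_id = ? AND sold = 0", (item_id,))
            conn.commit()

    def buy_in_bulk(ids):
        # the suggestion: one round trip, one transaction, one (bigger, but still small) commit
        conn.executemany("UPDATE tradehouse SET sold = 1 WHERE item_id = ? AND sold = 0",
                         [(item_id,) for item_id in ids])
        conn.commit()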

Shiroe
"Legendary" Mastery Member
Posts: 2300
Joined: Sun Aug 23, 2015 11:05 am
Location: the Netherlands

Re: Lag...again

Postby Shiroe » Tue May 02, 2017 2:30 am

VeNoM0619 wrote:Just a suggestion that may reduce CPU/Network/Disk load:
Change your trade house so we can do bulk buy/sell.

Me buying 700 artifacts = 1400 network packets, 700 commits, and 700 verifications/processing, which could all turn into 2 packets, 1 commit, 1 (bigger, but still small) verification, and my wrists/hands wouldn't get tired afterwards either.

Oh... and maybe I won't complain if the server lags... because it's not 700 x 10 seconds (lag) each, it's just 1 x 10 seconds. That's 2 hours (i.e. log out and don't play) vs. waiting 10 seconds.

It's likely database load. There are likely hundreds of queries fired at the database every second (crafting, questing, fusing, tradehouse, etc.).
If you implement bulk buy/sell as more complex database queries, you end up with queries that run longer. That makes it more relevant that the local display is, of course, out of sync with the remote database (you've likely seen the red "this item is no longer available" popup plenty of times when mass single-buying/selling at the tradehouse). Transactions could also fail altogether: another transaction locks the row/table for the item/artifacts, scoops them all up, and by the time the lock is released for your transaction to process, none within the wanted price range are left.

Bulk might lower database load, but it will likely increase transaction failures. With single buys, a "this item is no longer available" costs you maybe 0.1 seconds as a player; with bulk, you'd likely end up setting up a whole new bulk buy/sell.
And if you instead keep single item/artifact database transactions when doing bulk, you don't reduce database load at all and just add CPU/RAM load, because the server software would need to do the bulk -> many-singles conversion...
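
A rough sketch of that failure mode in Python (sqlite3 and a toy "tradehouse" table as stand-ins, so purely an assumption about how a bulk buy might be written, nothing to do with their actual schema): the whole batch is one transaction, and if another buyer scooped up the cheap items first, the entire thing rolls back instead of costing you one 0.1-second popup.

    import sqlite3

    conn = sqlite3.connect(":memory:", isolation_level=None)   # autocommit; BEGIN/COMMIT are explicit below
    conn.execute("CREATE TABLE tradehouse (item_id INTEGER PRIMARY KEY, price INTEGER, sold INTEGER DEFAULT 0)")
    conn.executemany("INSERT INTO tradehouse (item_id, price) VALUES (?, ?)",
                     [(i, 150000) for i in range(500)])         # only 500 cheap items left

    def bulk_buy(max_price, quantity):
        try:
            conn.execute("BEGIN IMMEDIATE")      # take the write lock for the whole batch
            rows = conn.execute(
                "SELECT item_id FROM tradehouse WHERE price <= ? AND sold = 0 LIMIT ?",
                (max_price, quantity)).fetchall()
            if len(rows) < quantity:             # another buyer got there first
                conn.rollback()                  # the whole batch fails, not just one item
                return False
            conn.executemany("UPDATE tradehouse SET sold = 1 WHERE item_id = ?", rows)
            conn.commit()
            return True
        except sqlite3.OperationalError:         # e.g. the lock is held by another transaction
            conn.rollback()
            return False

    bulk_buy(200000, 700)                        # -> False: fewer than 700 left, player buys nothing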

(I myself don't have experience with high-volume simple database queries, just low-volume high-complexity ones, and with trawling through MySQL logs on reported failures/slowness to find the deadlocked/slow queries to report to the coders...)
as of 2016-09-11: Player level: 44, City: Eolythes, Blueprints: 517, Mastered: 419, Crafted: 78.61K
(except for tier 1 and some tier 2 artifacts mostly running my shop/gearing self sufficient)

VeNoM0619
"Good" Mastery Member
Posts: 85
Joined: Wed Apr 19, 2017 2:07 am

Re: Lag...again

Postby VeNoM0619 » Tue May 02, 2017 4:10 pm

Interesting thoughts.

Hopefully the buy/sell requests are single-threaded/single-queue anyway, so I doubt there would be any deadlocks (transaction failures). But even on a multi-threaded setup, bulk could still be implemented by just doing "1 at a time" server side until complete (for-each loops are not that costly). That already prevents the user-clicking/network-traffic buildup, and I wouldn't mind a "please wait" while it processes 700+ purchases (because what I'm saying is: buy 700 @ <200k, and the first failure bails out completely and returns the 412 items or whatever).
Either I click 700 times and the server processes 700 requests, or I click once and the server processes 700 requests (if they still chose that route).
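
The server-side loop I'm imagining is about this simple (purchase_one() is a hypothetical stand-in for whatever single-buy code path the server already has, not anything real):

    def process_bulk_buy(player, item_ids, max_price, purchase_one):
        # reuse the existing single-buy path, one item at a time, inside a single request
        bought = []
        for item_id in item_ids:
            item = purchase_one(player, item_id, max_price)
            if item is None:     # first failure bails out of the rest of the batch...
                break
            bought.append(item)
        return bought            # ...and the player still gets the 412 (or however many) that succeeded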


The commit could be thrown on a timer (once a second/minute/whatever) to reduce disk load. Committing every single transaction is NEVER recommended anyway (and yet so many programmers do it, and it's always the first thing I clean up...). Even in a multi-threaded setup this is doable: one server queue handles item IDs < 1000 while the other handles IDs > 1000. No deadlocks, no transaction failures. But once again, I just don't see the need for a multi-server setup; peak player count on Steam isn't that high at 1,700-4,200: http://steamcharts.com/app/506140
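
Something like this very loose Python-threads sketch (assuming nothing about their actual server code): every item ID maps to exactly one queue, each queue has exactly one worker, so two workers can never fight over the same row.

    import queue
    import threading

    queues = [queue.Queue(), queue.Queue()]      # e.g. item IDs < 1000 vs. >= 1000

    def handle(request):
        pass                                     # stand-in for the real single-buy transaction

    def route(request):
        queues[0 if request["item_id"] < 1000 else 1].put(request)

    def worker(q):
        while True:
            request = q.get()
            handle(request)                      # each queue is single-threaded: no deadlocks
            q.task_done()

    for q in queues:
        threading.Thread(target=worker, args=(q,), daemon=True).start()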

But chances are they have each geo's server submitting its own commit requests to a central DB, which leads right back to the "committing every transaction" problem. And at that point, why bother with a server in every geo if it still funnels to a central DB and just adds another stop for the packet?

Either way, having all the servers connect to a central DB that "holds ALL the rows" means trouble eventually; you want to "shard" the data: have a login server, a tradehouse server for half the items, another for the 2nd half, a "craft" server, etc. Multiple connections are nice when you aren't contending for the same piece of data; otherwise you put that data access in a single-threaded queue and commit once in a while. Commits are super expensive. I wish I could find a highly detailed article on why; the best I found was http://stackoverflow.com/questions/33042679/is-committing-empty-transactions-expensive . I know delaying commits can break absolute atomicity, but atomic transactions are exactly why things are slow (everything needs to sync, ALWAYS), so delaying/batching them by a few seconds can give surprising results if disk load is a big factor, and we aren't dealing with people's lives or flying space shuttles here.
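
And the delayed/batched commit I mean is nothing fancier than this (sqlite3 again as a stand-in, thresholds pulled out of thin air): writes still happen immediately, but the expensive commit/disk sync only happens every N writes or every couple of seconds, whichever comes first.

    import sqlite3
    import time

    class BatchedCommits:
        def __init__(self, conn, max_batch=500, max_delay=2.0):
            self.conn = conn
            self.max_batch = max_batch           # commit after this many writes...
            self.max_delay = max_delay           # ...or after this many seconds
            self.pending = 0
            self.last_commit = time.monotonic()

        def write(self, sql, params):
            self.conn.execute(sql, params)
            self.pending += 1
            if self.pending >= self.max_batch or time.monotonic() - self.last_commit >= self.max_delay:
                self.conn.commit()               # one disk sync covers the whole batch
                self.pending = 0
                self.last_commit = time.monotonic()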

Sorry if this is long/rambling/incoherent; I attempted to address everything. It's always interesting for me to think about/discuss/learn these things, and yeah, I feel your pain: high-complexity queries are never any fun for a DB, and full table scans always pop up.

