TRIAL BALANCE slow on Server 2019

Dear Sage City users,

I have an interesting case to solve.

I have two environments:

Sage v24 on Windows Server 2012 R2, virtualised on our local ESXi server: 32 GB RAM, 8 cores at 2.8 GHz, standard HDD.
Sage v27 on Windows Server 2019, virtualised on the MS Azure platform: 32 GB RAM, 8 cores at 2.5 GHz, Premium SSD.

When I run the transactional trial balance on the local server, it completes in about 3 minutes.
The same report on the much more powerful server takes 25 minutes.

From what I have found so far, the difference in time is down to the speed at which a temporary file is built:

When I run the report, the data files are first copied to a temp folder; this runs at full speed in both environments.

Then a file with the prefix "STG-" and a .000 extension is created in the folder C:\USERS\USERNAME\APPDATA\LOCAL.



The old server processes this file at a full speed of around 50,000,000 bytes per second, while the new server manages only around 600,000 bytes per second at most.

The finished file is around 200 MB in total.
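
If it helps anyone trying to reproduce this, below is a rough benchmark sketch of my own (assuming Python 3 is available on both servers; the file name is made up) that writes a 200 MB dummy file into the same %LOCALAPPDATA% folder and reports the raw sequential write throughput, so the disk and OS can be compared directly between the two environments:

```python
# Rough benchmark (not a Sage tool): write a 200 MB dummy file into the same
# local app data folder the STG- file lands in and report the raw sequential
# write throughput, so the disk/OS can be ruled in or out.
import os
import time

CHUNK = 1024 * 1024              # write in 1 MB chunks
TOTAL = 200 * 1024 * 1024        # roughly the size of the finished STG- file

target_dir = os.environ.get("LOCALAPPDATA", ".")
path = os.path.join(target_dir, "stg_write_test.bin")   # made-up file name
buf = os.urandom(CHUNK)

start = time.perf_counter()
with open(path, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += CHUNK
    f.flush()
    os.fsync(f.fileno())         # make sure the data actually reaches the disk
elapsed = time.perf_counter() - start

os.remove(path)
print(f"Wrote {TOTAL // (1024 * 1024)} MB in {elapsed:.1f} s "
      f"({TOTAL / elapsed / 1_000_000:.1f} MB/s)")
```

If both servers manage this at full disk speed, the bottleneck is in how the STG- file is being written rather than in the storage itself.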

What I have tried already:

Disabled UAC
Enabled ELC
Enabled oplocks on both the server and client sides.
Disabled SMB Signing
Disabled AV
Disabled Firewall
Changed the temp folder to an extremely quick SSD rated at 10,000 IOPS, without any difference.
Changed the default printer to XPS, PDF, etc.

Do you know what else might be causing this file to be written so slowly, please?

Any ideas are greatly appreciated.

Kind Regards,
Rafal


 

Top Replies

  •

    Hi Rafal

    The one thing that stands out to me here is that we're comparing two different servers with two different versions of Sage.

    So that you can get a fair test, this is what I'd consider trying:

    Install Sage Accounts v27 on Windows Server 2012 R2

    &

    Sage v24 on Windows Server 2019

    and run your test again.

    Kind Regards,

    Ian

  • In reply to Ian C

    Hi Ian,

    I have found out what the problem is. It is not version related, as I tested that a long time ago.

    The number one performance blocker is the RDS server role. Once you install Remote Desktop Services, the Sage Data Object Service slows to a crawl. I am not sure whether this is a restriction put in by the Sage devs as a feature, or a bug.

    To me it now looks like the Sage devs are forcing you to move to Sage 200, making Sage 50 not fit for purpose on a multi-user server. I went even further and hid the installed RDS role from SageDataService by putting an ACL on the registry keys that SDO was reading to detect the RDS installation. It worked.

    In essence, this finding really opened my eyes to the business model of Sage 50.

    Thanks for your help.

  • In reply to Rafal Witek
    > Sometimes the SDO process slows down significantly

    Just to confirm the terminology here: when you say SDO Service, do you mean the "Sage 50 Accounts Data Service"? It's just that we don't have a service called that, and SDO is short for Sage Data Objects, which is part of our 3rd-party SDK, so I wanted to be sure we're not talking about a 3rd-party service implementation here. I suspect that you are referring to the Accounts Data Service, but wanted to be 100% sure. The only thing that makes me pause is the example you note of running a report, but reports are not generated by the data service at all; they are only ever generated by the client process (SBDDesktop.exe) or directly through Report Designer (SageReportDesigner.exe). So I'd not expect the data service process to be doing anything with temp files when running a report.

    > These are the first 3 topics I was able to find. I can find you many more users who suffer from this problem, but no one has found a solution yet.

    Looking at the links, it seems like only the first one could potentially be related to the issue you are reporting. However, I'm not sure that it is the UK version of Sage 50 Accounts being discussed there. It sounds a little more like the US version, which is a very different product/implementation from the UK version, given the mention of Sage 100 and also the 'Sage DB'. It's hard to say with any degree of confidence whether it is the UK or US version. Unfortunately, there is not really enough detail to confirm if it is the same thing, or to identify a common pattern/cause even if it is related to the UK version.

    The other 2 links don't look to be the same thing as your issue. The 2nd link is related to the UK version of Accounts, but is saying that the auto-update process is sitting at 20% CPU (at least, that is the only specific thing noted), whereas your example is about running reports and I/O-bound performance issues when doing so. The 3rd link is just saying that they had slow performance due to large data volumes and that reducing this resolved the issue. It is also referring to the US version of Sage 50 rather than the UK version; with this one I can be sure, given the reference to the product name 'Sage 50 Accounting 2017 & 2018' and the mention of Peachtree.

    Happy to look at any other links you might have, but so far nothing looks like the same issue or a potential pattern, I'm afraid.

    > Let's use the TB report as an example, but the same bug affects any report, remits or any process that requires recalculation at the TEMP FOLDER.

    Use of the temp folder when running a report is fully expected, although I'm unsure what you mean by "recalculation at the temp folder". When running a report, the content/layout rarely matches what is on disk in the actual data, so we perform the necessary filtering, calculations and transformations and make use of temporary files during this process. However, as noted above, the data service would not be doing this work; it would come from SBDDesktop.exe or SageReportDesigner.exe.
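
    If it helps to double-check which process is actually writing the STG- temp file on the affected server, here is a rough sketch of my own (assuming Python 3 with the third-party psutil package installed; the "STG-" prefix is taken from the original post) that polls the running processes and reports which one has such a file open:

```python
# Hypothetical helper (not a Sage tool): while the report is running, poll the
# process list and report which process has an "STG-" temp file open.
import os
import time
import psutil

def find_stg_owners():
    owners = []
    for proc in psutil.process_iter(["name"]):
        try:
            for f in proc.open_files():
                if os.path.basename(f.path).upper().startswith("STG-"):
                    owners.append((proc.info["name"], proc.pid, f.path))
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue  # some system processes can't be inspected
    return owners

# Poll for a couple of minutes while the report is generated.
for _ in range(60):
    for name, pid, path in find_stg_owners():
        print(f"{name} (pid {pid}) has {path} open")
    time.sleep(2)
```

    If the owner shows up as SBDDesktop.exe, that would confirm it is the client process rather than the data service doing the temp file work.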

    > Once you uninstall the RDS role from the server the speed goes back to its full potential. This is why I was asking about inbuilt obsolescence.

    Definitely nothing built into the code that has any special handling for, or even any knowledge of, RDS. As mentioned in my other reply, my theory is that enabling RDS results in some changes to how Windows deals with I/O operations, but what that might be and why it has such a negative impact on performance is impossible to say. It's all just theory at this stage.

    I've asked our testers if we can try to replicate the issue. Could you provide details of the registry keys you mentioned changing access to please?

  • In reply to Darron Cockram

    Actually, one other question on this: are we running the Accounts application directly on the server in both cases (as in an interactive console session)? Or is it being run as a console session when RDS is not enabled, and via an RDP client afterwards?
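
    For what it's worth, a quick way to confirm how Windows classifies the session the client is started from is the documented SM_REMOTESESSION system metric; here is a small sketch of my own using ctypes (run it from the same session you launch Sage in):

```python
# Hypothetical check (not part of Sage): ask Windows whether this is being run
# from a remote (RDP) session or a local console session.
import ctypes

SM_REMOTESESSION = 0x1000  # documented GetSystemMetrics index

is_remote = ctypes.windll.user32.GetSystemMetrics(SM_REMOTESESSION) != 0
print("Remote (RDP) session" if is_remote else "Local console session")
```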

  • In reply to Darron Cockram

    I have tried both ways: running directly on the server and over RDP, with the same results.
    And I will get back to you on the previous questions soon, as I am re-creating the issue to make some screenshots.

  • In reply to Darron Cockram

    I have done absolutely zero Windows development, but some under *nix (and other multi-user systems, e.g. Pick, OS4000).

    Is there a Windows equivalent of the *nix strace utility which tracks all syscalls so that you could track what is happening when the temp file is created and filled (especially those made from libraries used)?

  • In reply to Robert N

    Process Monitor is the closest equivalent that I am aware of, but I'm not sure if this would shed any light on the issue. Certainly no harm in trying, though.
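
    If anyone wants to script a capture around a single report run, here is a rough sketch of my own (assuming Python 3 and that Procmon.exe from Sysinternals is on the PATH; the switches used are the standard Procmon command-line options, but it's worth double-checking them against the Sysinternals documentation for your version):

```python
# Rough sketch (not from Sage): wrap a Process Monitor capture around a single
# report run so the file I/O on the STG- temp file can be inspected afterwards.
import os
import subprocess
import tempfile

trace = os.path.join(tempfile.gettempdir(), "sage_report_trace.pml")

# Start capturing in the background, logging to a backing file.
subprocess.Popen(["Procmon.exe", "/AcceptEula", "/Quiet", "/Minimized",
                  "/BackingFile", trace])

input("Run the trial balance report now, then press Enter to stop the capture...")

# Stop the capture; open the .pml afterwards and filter the path on "STG-".
subprocess.run(["Procmon.exe", "/Terminate"])
print(f"Trace saved to {trace}")
```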

  • In reply to Darron Cockram

    I don't know if this is a related issue, but when I use "Send to Excel" from a list (e.g. transactions), it is extremely fast at first but slows down dramatically. I'm currently exporting 2,010 transactions and for the first 1,100 to 1,200 it zipped along, but it has now slowed right down.

    PS: If there isn't a utility like strace, can you use a debugger and get it to stop at every syscall (like the AIDA debugger I used on OS4000)?

  • In reply to Robert N
    > I'm currently exporting 2,010 transactions and for the first 1,100 to 1,200 it zipped along, but it has now slowed right down.

    Is that also with RDS enabled, and is it faster when it's not enabled?

    > PS: If there isn't a utility like strace, can you use a debugger and get it to stop at every syscall (like the AIDA debugger I used on OS4000)?

    If we can replicate it in-house then it should be possible to get a debugger attached, although given that the code paths should be identical, I'm not sure this will help. If we can replicate it and get a debugger attached, then I should be able to monitor the calls and confirm whether there is any difference in the number of calls or in the amount of time it takes Windows to respond to them.

  • In reply to Darron Cockram

    It appears RDS is enabled. Disabling remote access makes no difference.

    However, trying the same export on a colleague's machine, it didn't slow down (even though it has RDS enabled). One difference is the SSDs that the machines have.

    I'm wondering if it's an SSD issue; they're going to replace mine and see if it has any effect.

  • In reply to Robert N

    > It appears RDS is enabled. Disabling remote access makes no difference.

    > However, trying the same export on a colleague's machine, it didn't slow down (even though it has RDS enabled).

    That is as I'd expect. As per other messages in the thread, there's nothing in the code that knows about or reacts to RDS, so it should be doing the same thing regardless. It is interesting that you don't see a difference with RDS enabled/disabled but Rafal does. Having said that, it is a different operation, and Send to Excel doesn't have the same kind of heavy temp file/disk access requirements that running a report does, so perhaps it's not comparing like with like.

    No obvious reason for a slowdown part way through a Send to Excel operation that I can see from a quick look. There are, of course, potential external factors, such as available memory/disk paging and other processes including anti-virus, that could cause things to slow down at different points in the export, but I am not a fan of assuming things like that are to blame without further evidence to back it up. I guess the initial question would be: is it repeatable, and does it happen every time on export?

  • In reply to Darron Cockram

    There *is* something about having the RDS role installed which could potentially affect I/O, as it uses a Fair Share algorithm to try and balance CPU, I/O (and probably some other stuff). There are reports out there of various software performance degradations which seem to point to Fair Share as their cause. Here's one:

    https://community.dynamics.com/gp/b/dynamicsgp/posts/tuning-remote-desktop-services-2012-for-microsoft-dynamics-gp

    and here's another:

    https://kb.parallels.com/en/128691


    Might be worth disabling the Disk Fair Share and seeing if that helps? Set the value of the following Registry key to 0 and reboot the server:

    HKLM\SYSTEM\CurrentControlSet\Services\TSFairShare\Disk\EnableFairshare
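
    For anyone who prefers to script that change, here is a minimal sketch of my own (assuming Python 3 run from an elevated prompt on the server; the usual caution about editing the registry applies, and the value only takes effect after a reboot):

```python
# Hypothetical helper (equivalent to setting the value in regedit): disable the
# RDS disk Fair Share throttling by setting EnableFairshare to 0.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\TSFairShare\Disk"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                    winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
    current, _ = winreg.QueryValueEx(key, "EnableFairshare")
    print(f"EnableFairshare is currently {current}")
    winreg.SetValueEx(key, "EnableFairshare", 0, winreg.REG_DWORD, 0)
    print("EnableFairshare set to 0; reboot the server for it to take effect")
```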

    (FWIW, we've got some Sage 50 clients using RDS (with Fair Share enabled, presumably) and they aren't reporting any problems like this)

  • In reply to Chris Burke
    > There *is* something about having the RDS role installed which could potentially affect I/O, as it uses a Fair Share algorithm to try and balance CPU, I/O (and probably some other stuff).

    Thanks Chris. That is just the kind of thing I was wondering about: that simply enabling RDS could affect file access in some way. It sounds a very likely cause of the problems. Can you try again with Fair Share disabled and see if that resolves your issue?