Sage 500 RAID Configuration

Hello Everyone,
We are in the process of setting up our new server and were wondering if Sage 500 will work in a RAID environment. The SQL Server will be running in a VM, and we have 16 1TB SSD drives to "play" with. What RAID configuration would be best, or is it even possible?

Also, is it still common to separate the Data, log, and index onto separate drives?

Thanks in advance for any advice.

Jason

  • 0

    Sage 500 is hardware agnostic in that respect; you can array the drives SQL Server uses however you like. Depending on the storage array you're using, RAID 1+0 will probably work best. Data, logs, and tempdb should generally reside on different arrays in a simple RAID solution. Where you locate indexes is your choice, since the Sage tools will only create them on the primary filegroup.

    The more configuration features you enable within the engine or database, the more you need to test and maintain the solution yourself. For example, the Sage database upgrade doesn't like memory-optimized objects, natively compiled or encrypted code, foreign key constraints, or columnstore indexes within the app database. Version 2023 databases are set to SQL 2012 compatibility out of the box and begin to encounter performance problems if you go above that in an upgraded database, due to the legacy code constructs (especially with a large database). There are ways around the performance issues, including some that Sage builds into the code base these days; you just sometimes have to be creative with your solutions. Don't forget about the memory and CPU allocated to the VMs.
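    If you want to verify the compatibility level yourself, here's a quick sketch. `Sage500App` is a placeholder name, since the actual app database name varies by install; 110 is the level corresponding to SQL Server 2012:

    ```sql
    -- Check the current compatibility level (110 = SQL Server 2012).
    -- 'Sage500App' is a placeholder; substitute your actual app database name.
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = 'Sage500App';

    -- If an upgrade has raised it above 110 and you hit plan regressions,
    -- it can be set back explicitly:
    ALTER DATABASE Sage500App SET COMPATIBILITY_LEVEL = 110;
    ```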

  • 0 in reply to Contefication

    Thank you for the quick response. Our "IT" company has set up the hypervisor on a RAID 5 array with 2 hot spare drives. They want to keep it this way and set up virtual drives for the data, log, and temp drives. I'm sure I read in one of the compatibility sheets that Sage 500 is not compatible with RAID 5, but I can't find the documentation now. We have a relatively small database (less than 10GB) and 20 users. Our new server is probably overkill: more than enough cores and 128GB of RAM. All drives are SSD. Should I keep the configuration the way they have it, or insist that they recreate the drives? Thoughts?

  • 0 in reply to Ostrem

    Seems like pretty low volume and a small db. With modern storage devices even RAID 5 should be fine. More than enough memory, so it should perform more than adequately, but only you can really be the judge of that.

    Sage 500 Compatibility Guides:

    kb.sage.com/.../viewContent.do

  • 0 in reply to Ostrem

    As Conte mentions, RAID 5 is fine and supported. It is still a SQL Server database, so think about specification and sizing in terms of SQL Server best practices. You mention having more than enough cores, and that raises a flag for me about your configuration. The reality is that you don't need more than 8: even if the VM sees more than 8 cores, SQL Server best practice (due to its inner workings) is to set Max Degree of Parallelism to no more than 8.
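    As a sketch, that cap can be applied with `sp_configure` (8 here per the guidance above; adjust down if the VM has fewer cores):

    ```sql
    -- MAXDOP is an advanced option, so expose it first.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Cap Max Degree of Parallelism at 8.
    EXEC sp_configure 'max degree of parallelism', 8;
    RECONFIGURE;
    ```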

    You will want to set the SQL Server Min Server Memory setting, which I typically recommend to be 50% of the total RAM in the server (in your case, the virtual server). Then I would set SQL Server Max Server Memory to about 80-85% of the total RAM. This leaves headroom for the OS and anything else running on the server.
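    Translated into T-SQL for a 128GB (131072 MB) server, with the MB figures below being approximations of the 50% and roughly 80% marks:

    ```sql
    -- The server memory options are advanced options.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Min server memory: 50% of 128 GB = 65536 MB.
    EXEC sp_configure 'min server memory (MB)', 65536;

    -- Max server memory: ~80% of 128 GB (106496 MB is roughly 81%),
    -- leaving headroom for the OS and other processes.
    EXEC sp_configure 'max server memory (MB)', 106496;
    RECONFIGURE;
    ```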

    There are other SQL Server best practices that can assist with performance and reliability, but these are some low-hanging fruit that can reduce some initial pain.

  • 0 in reply to jnoll

    Joe, why would you recommend a minimum 50% on a 128GB system with a < 10GB Sage 500 database? I would want to know if this is the only data store or application being served before doing that. As a general rule I always determine the min and max memory according to the activity level, data store sizes and types, resources available and the other applications the virtual server is hosting.

    We really don't know if it needs to be considered at all in this case. SQL is going to adjust itself for memory pressure exerted by applications the Windows kernel recognizes but you may have additional overhead because of VM management, backups, replication, etc. If SQL is only serving a 10GB Sage 500 OLTP database though, overhead and paging are not a concern unless there is excessive activity against the host.

    I've primarily encountered DOP problems with distributed data stores and queries, or when there is insufficient memory, slow storage devices, or poorly maintained indexing. A lot of those slow tempdb spooling problems really don't occur any longer, and it's been many years since I've seen a DOP bug hit a 500 server; SQL has managed DOP just fine. This setting should really only be adjusted after testing SQL under load.
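    If you do want evidence before touching the setting, one hedged approach is to inspect the parallelism-related wait types first (note that CXCONSUMER only exists on SQL Server 2017 SP1 and later):

    ```sql
    -- Look at parallelism-related waits before changing MAXDOP.
    -- High wait_time_ms here relative to other wait types suggests
    -- parallelism is worth investigating; otherwise leave it alone.
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type IN ('CXPACKET', 'CXCONSUMER')
    ORDER BY wait_time_ms DESC;
    ```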

  • 0 in reply to jnoll

    Thank you. I meant that the server is more than capable in the CPU department. We will only be using 4 cores for the SQL VM.