Sage 100 Premium 2020 SQL Server Transactional Replication for Reporting

Version: Sage 100cloud Premium 2020 (Version 6.20.0.01)

Additional software: Scanco Operations Management & Multi-Bin, some custom DSD programming

Hello all! First post here. Looking for some assistance with replicating data for reporting. We currently have 18+ companies running across (at the moment) four different installations/environments on separate servers, meaning separate application and database servers for each environment. Naturally, this is an issue when it comes to consolidated reporting. We currently have SQL Server Transactional Replication configured for each production company in its respective environment, with the data for each production company replicated to one reporting SQL Server. We have noticed that this interferes with certain utilities (e.g., Delete and Change Customers, Delete and Change Items) and with reinitializing data files. I have added screenshots of the errors we get for those. Here are some high-level questions:

  • What are the best practices for using SQL Server Transactional Replication with our version of Sage?
  • Will this interfere with normal Sage application logic and/or cause corruption?
  • Are there alternative solutions that can be deployed in-house using SQL Server tools or other methods?
    • If not, what are the recommended additional solutions?

Any and all help is appreciated!

  • My general approach is to use scheduled MERGE scripts to keep mirror tables up to date (focused on the specific data needed), instead of using the built-in SQL replication tools (which can interfere, as you are experiencing). Since MERGE scripts are really just queries against the source data, they shouldn't have a functional impact. A rough sketch is below.
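    Something along these lines, where the column list is trimmed down for illustration and it assumes the production database is reachable from the reporting server (same instance, or via a linked server):

    ```sql
    -- Refresh a reporting-side mirror of the customer master from one production
    -- company database. This is just a query against the source, so it doesn't
    -- hold anything open the way replication does.
    MERGE Reporting.dbo.AR_Customer AS tgt
    USING Prod_ABC.dbo.AR_Customer AS src
        ON  tgt.ARDivisionNo = src.ARDivisionNo
        AND tgt.CustomerNo   = src.CustomerNo
    WHEN MATCHED THEN
        UPDATE SET tgt.CustomerName = src.CustomerName,
                   tgt.DateUpdated  = src.DateUpdated
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ARDivisionNo, CustomerNo, CustomerName, DateUpdated)
        VALUES (src.ARDivisionNo, src.CustomerNo, src.CustomerName, src.DateUpdated)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;  -- keep the mirror in sync with rows removed from production
    ```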

  • in reply to Kevin M

    Kevin,

    Thanks for the quick response and info! I will look into using scheduled MERGE scripts. A few questions, though. How are you saving the scripts and kicking them off? Saving them as stored procedures or SSIS packages and triggering them with SQL Agent jobs? As to frequency, is there a recommended limit before performance suffers? Also, are there any considerations when running multiple Sage environments?

    One primary issue at the moment is that a lot of what we are currently doing entails a fair amount of ad hoc querying; in other words, the tables that are "needed" change frequently. That is why the replication was so great. It's understandable if it interferes with actual Sage application logic or could potentially cause corruption. However, if it simply blocks the use of certain utilities or the reinitialization of data files (most of which we schedule after hours anyway), it shouldn't be an issue to drop and recreate the replication around those tasks. Do you know of any issues that would directly affect the Sage application itself while replication is in place?
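    For reference, the teardown before after-hours maintenance looks simple enough. Something like this is what I had in mind (the publication and subscriber names are just placeholders for ours):

    ```sql
    -- Run in the publication (production) database before the utility / reinit.
    -- Subscriptions have to go before the publication itself.
    EXEC sp_dropsubscription
         @publication = N'Prod_ABC_Reporting',
         @subscriber  = N'REPORTSQL01',
         @article     = N'all';
    EXEC sp_droppublication @publication = N'Prod_ABC_Reporting';
    -- Afterwards, recreate from scripted definitions (Generate Scripts under
    -- Replication in SSMS) and let a new snapshot reinitialize the subscriber.
    ```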

  • in reply to Forest Franzen

    I typically have very modest mirror-table requirements: inventory QoH for a specific function, or pulling a bunch of ProvideX data into SQL to speed up a complex custom report. You could technically do all the tables in each database, refreshing some tables daily and others hourly... whatever you need. Since I usually have something specific in mind, I often use SQLCMD to trigger SPs that update just what I need, with everything on the same network.
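    For example, something like this, where the server, database, and proc names are all made up:

    ```sql
    -- A thin wrapper proc on the reporting server that runs the MERGE refreshes
    -- for just the mirror tables a given report needs.
    CREATE OR ALTER PROCEDURE dbo.usp_RefreshInventoryQOH
    AS
    BEGIN
        SET NOCOUNT ON;
        -- MERGE statements for the handful of mirror tables go here.
    END;
    GO

    -- Kicked off on whatever schedule fits (Task Scheduler or a SQL Agent job step):
    --   sqlcmd -S REPORTSQL01 -d Reporting -E -Q "EXEC dbo.usp_RefreshInventoryQOH"
    ```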

    I'm not a programmer, but I think the problem with some forms of replication/mirroring is the locking used to prevent concurrency issues. The delete/change utilities process a huge volume of small transactions, and waiting extra time on each change while SQL Server holds things until they are verified as copied somewhere else... ugh. You could easily hit the same kind of issue while posting a large invoice batch, which would be really bad.

    That being said, there are probably others here who are more versed in options for SQL replication.  I haven't been a DBA in about 20 years, so my knowledge of current tools / options is not great.

  • in reply to Kevin M

    Again, thanks for the info. Having read up on SQL Server Transactional Replication, it appears the magic happens in the Log Reader Agent, which reads the transactions (INSERT, UPDATE, DELETE) on the publisher and propagates them to the subscriber. Pretty cool stuff... It does rely on at least an initial snapshot to determine the articles to replicate, and I am wondering if that is where some of the strange errors we noticed (e.g., duplicate key or index violations) come from. Seeing as the Log Reader Agent only propagates committed transactions, and we are using standard transactional replication (publisher = production database, subscriber = reporting database), I would imagine there shouldn't be any concurrency issues.
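    As a sanity check on delivery and latency, it looks like you can post a tracer token and read back the timings. Something like this (the publication, publisher, and database names are placeholders for ours):

    ```sql
    -- On the publisher, in the publication database: drop a token into the log.
    EXEC sys.sp_posttracertoken @publication = N'Prod_ABC_Reporting';

    -- On the distributor, in the distribution database: list tokens to get an ID,
    -- then pull the publisher -> distributor -> subscriber timings for that token.
    EXEC sys.sp_helptracertokens
         @publication  = N'Prod_ABC_Reporting',
         @publisher    = N'PRODSQL01',
         @publisher_db = N'Prod_ABC';
    EXEC sys.sp_helptracertokenhistory
         @publication  = N'Prod_ABC_Reporting',
         @publisher    = N'PRODSQL01',
         @publisher_db = N'Prod_ABC',
         @tracer_id    = 42;  -- ID returned by sp_helptracertokens
    ```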

    I think for us, given the number of companies we have and our reporting requirements, scripting everything out is neither a viable nor a scalable solution. I would love to explore options for getting data replicated as close to real time as possible, with a setup as easy as SQL Server Transactional Replication. Do you know of any resources that can help with this? I'd be glad to reach out to whoever can provide more insight.

    Additionally, if anyone is a database guru in this forum, I'd love to hear from you and soak up that knowledge :)