Sage 500+Scanco+Intermec-CN3

Hello Sage City,

My company currently uses Sage 500 ERP 2014 January Update (Version 7.60.5.0) with Scanco Warehouse Automation on Intermec CN3 devices. The issue we are having is extreme waits for pulling and updating picklists. A picklist with 10-12 items takes 12+ seconds just to load; one with 58 items took over 11 minutes. Updating is unusably slow as well.

I have been working with Scanco, my Sage vendor, and my datacenter to try to resolve this, with very limited improvement in performance.

My question:

Have any of you experienced a similar issue, and if so, what did you do to resolve it?

I know you may have questions, and I have data, so if you need info/clarification, just ask.

Thanks

  • 0
    Well, not being all that familiar with Scanco's processes, it is going to have to be a process of elimination to determine whether the problem is SQL Server and the code run there, the network, or the machine/software requesting the data. I would suggest first looking at the statements that retrieve and update the pick lists to ensure they are optimized and not causing the issue. I am assuming the code being run is specific to Scanco's processes and not standard Sage 500 code.

    A SQL Profiler trace with the performance events selected while the task is run can help. The Showplan XML event is especially useful and will display a graphical execution plan in Profiler for each statement. Thick pipes between the various sections of the plan indicate a large amount of data being read. Within Profiler you can hover over these pipes to see how many records are in play. Here's a link further explaining this.

    www.mssqltips.com/.../

    Other useful columns in the Profiler trace are CPU, Reads, Writes, and Duration. Large numbers here can also point to a problem with the statement that was executed; a rough sketch of pulling the same numbers from the plan cache is shown below.
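
    If running a trace is not practical, the plan cache DMVs give a similar picture after the pick has been run at least once. This is only a generic sketch using standard SQL Server DMVs (nothing here is Scanco- or Sage-specific), listing the statements with the highest cumulative duration along with their CPU, reads and writes:

        -- Top statements by total duration, with CPU, reads and writes,
        -- pulled from the plan cache.
        SELECT TOP (25)
               qs.total_elapsed_time / 1000 AS total_duration_ms,
               qs.total_worker_time  / 1000 AS total_cpu_ms,
               qs.total_logical_reads,
               qs.total_logical_writes,
               qs.execution_count,
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                             WHEN -1 THEN DATALENGTH(st.text)
                             ELSE qs.statement_end_offset
                           END - qs.statement_start_offset) / 2) + 1) AS statement_text,
               qp.query_plan
        FROM sys.dm_exec_query_stats qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
        CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
        ORDER BY qs.total_elapsed_time DESC;

    Clicking the query_plan column in Management Studio opens the same graphical plan you would get from the Showplan XML event, so the scans and thick pipes can be reviewed there as well.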

    If, after reviewing this data, everything appears to be working normally and within expected times, the next thing to check is the path of the data from SQL Server to the machine that requested it, to see if there are any bottlenecks in the network that might be causing the issue. A network person should be able to put something together using the operating system's Performance Monitor to see if there are issues within the network itself or on the machine SQL Server is running on. Here is a link that explains a bit more on this; a quick wait-stats check from the SQL Server side is also sketched below the link.

    www.sql-server-performance.com/.../
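
    Before handing this to a network person, the server's own wait statistics can hint at whether SQL Server is spending its time waiting on the client/network rather than on disk or CPU. This is just a quick sketch against the standard sys.dm_os_wait_stats DMV; for example, a large amount of ASYNC_NETWORK_IO usually means the client is slow to consume the rows it requested:

        -- Top wait types accumulated since the last restart (or since the
        -- stats were last cleared), with a few benign background waits filtered out.
        SELECT TOP (15)
               wait_type,
               waiting_tasks_count,
               wait_time_ms,
               signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'XE_TIMER_EVENT',
                                'REQUEST_FOR_DEADLOCK_SEARCH', 'LOGMGR_QUEUE',
                                'CHECKPOINT_QUEUE', 'WAITFOR', 'BROKER_TASK_STOP')
        ORDER BY wait_time_ms DESC;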

    If both SQL Server and the network are working as they should, then the problem lies with the machine/software that is calling for the records and how it handles the data once it is received.

    I hope this helps.
  • 0 in reply to LouDavis

    I am in a situation where our picks using the scan process set up by Scanco are taking some 30 minutes for a simple 10-20 line picklist. This is causing a lot of trouble here at the organization, and I have been tasked with figuring out how to fix/optimize the process. I just came onboard, but they have been struggling with this issue for the past couple of YEARS. The first time a Blytheco technician began looking at the issue was in 2015. They have never been able to provide us with any help that actually made a difference. They ran traces, they looked at the network, they added indexes, triggers, etc.

    Being at a point where we can't simply switch to a new system without spending an arm and a leg, and given that both our reseller and Scanco have looked at the process and could make no improvements, I'm left with a pretty big task. Like I said, I am new to the organization, so figuring out what the whole process is (what objects and stored procedures are getting called, etc.) is a challenge all in itself. I can see in the trace the various objects being hit, so I can make a list of everything that gets called during a pick, no problem. The problem becomes: how do I debug and troubleshoot a process that is dynamically passing variable values through complex layers of transactional processing? I can pull one stored proc out, declare and set the variables for an ad hoc run, and maybe do some performance tuning on it, but how on earth do I debug the whole process without breaking it apart and tuning each individual piece? To me, it would seem that some of the performance loss is because this is all happening in an automated fashion, and tuning each piece might or might not even help our situation.

    I feel pretty alone on this, as the last guy to have my job could not figure this out, even with Blytheco's/the reseller's help, over the course of three-plus years.

    If any progress has been made on this, could you all please share?

    I know one of the steps Blytheco had us take in the past was to trim records from certain high-traffic tables involved in the picking process, but I checked them and they are still trimmed, and there is no difference in performance.

    Please help!

  • 0 in reply to Cameron Loepker

    This sounds like bad queries. I've seen places where statements in stored procedures, triggers, and client calls end up pulling all records from tables with little to no filtering of the data. If you have a lot of data, this can cause all sorts of performance issues in various modules. We had a customer with similar issues (they don't use Scanco) and tracked it down to bad code within the tI_timInvtTran trigger. In a small database like Sage's demo database, the problem wouldn't be seen. However, this customer had well over a million rows in timInvtTran. As the table grew, performance became worse and worse. By rewriting the trigger's logic a bit, we cleared up this customer's problems and improved performance significantly.

    How we found it was to run a SQL trace using the completed events to monitor the SQL statements and stored procedures and determine which statements were taking the most time and had high reads, writes, CPU and IO. Reviewing this trace, we were able to identify the statements in the trigger causing the customer's issues. I don't know if you or Blytheco have done this or not, but I would suggest doing so. I would also suggest using Extended Events instead of SQL Profiler, as it has additional events you can use and doesn't impact performance the way Profiler does. A sketch of an Extended Events session along those lines is below.
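
    For what it's worth, something along these lines is a reasonable starting point for the Extended Events session. This is only a sketch, not the exact session we used: the session name and file path are placeholders, and the duration filter and actions should be adjusted to suit your system.

        -- Capture completed batches and RPC calls that ran longer than ~100 ms,
        -- writing them to an event file for later review in SSMS.
        CREATE EVENT SESSION [PicklistTuning] ON SERVER
        ADD EVENT sqlserver.sql_batch_completed (
            ACTION (sqlserver.database_name, sqlserver.client_app_name, sqlserver.sql_text)
            WHERE duration > 100000),   -- duration is in microseconds
        ADD EVENT sqlserver.rpc_completed (
            ACTION (sqlserver.database_name, sqlserver.client_app_name, sqlserver.sql_text)
            WHERE duration > 100000)
        ADD TARGET package0.event_file (SET filename = N'C:\Temp\PicklistTuning.xel');
        GO

        ALTER EVENT SESSION [PicklistTuning] ON SERVER STATE = START;
        -- ...run a pick on the handheld, then stop the session and open the .xel file...
        ALTER EVENT SESSION [PicklistTuning] ON SERVER STATE = STOP;

    The rpc_completed events are the ones that will show the stored procedure calls along with their duration, reads, writes and CPU, so you can sort on those columns the same way you would in a Profiler trace.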

  • 0 in reply to LouDavis

    Lou, 

    Thanks for the reply. I did check, and there are over 9 million records in that table you referenced. This, along with some bad queries, could very well be our problem. I will repost when we make forward progress.

    Thanks!

  • 0 in reply to Cameron Loepker

    I was looking at the mod we did for this issue, and it also came down to a large number of records in timInvtTranDist, timInvtTranCost and timCostTier. What we found was that the following statements in tI_timInvtTran were doing an index scan of all records for each record being inserted. On large tables this was effectively a table scan, and since it runs during an insert, it locks the table until it is done.

    -- Check that the ItemKey matches the distribution ItemKey.
     if exists (select 1
                     from inserted it
                     join timInvtTranDist itd WITH (NOLOCK) on it.InvtTranKey = itd.InvtTranKey
                     where it.ItemKey <> itd.ItemKey)
          begin
          select @errno  = 50008,
                      @errparam1 = 'INSERT', @errparam2 = 'ItemKey', @errparam3 = 'timInvtTranDist.ItemKey'
                raiserror (@errno, 16, 1, @errparam1, @errparam2, @errparam3)
          end 

       -- Check that the WhseKey matches the distribution WhseKey.
          if exists (select 1
                      from inserted it
                      join timInvtTranDist itd WITH (NOLOCK) on it.InvtTranKey = itd.InvtTranKey
                      where it.WhseKey <> itd.WhseKey)
          begin
           select @errno  = 50008,
                       @errparam1 = 'INSERT', @errparam2 = 'WhseKey', @errparam3 = 'timInvtTranDist.WhseKey'
                raiserror (@errno, 16, 1, @errparam1, @errparam2, @errparam3)
          end

    -- Check that the ItemKey matches the cost tier ItemKey.
         if exists (select 1
                     from inserted it
                     join timInvtTranCost itc WITH (NOLOCK) on it.InvtTranKey = itc.InvtTranKey
                     join timCostTier ct WITH (NOLOCK) on itc.CostTierKey = ct.CostTierKey
                     where it.ItemKey <> ct.ItemKey)
          begin
            select @errno  = 50008,
                       @errparam1 = 'INSERT', @errparam2 = 'ItemKey', @errparam3 = 'timCostTier.ItemKey'
                raiserror (@errno, 16, 1, @errparam1, @errparam2, @errparam3)
          end

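    -- Check that the WhseKey matches the cost tier WhseKey.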
    if exists (select 1
                      from inserted it
                      join timInvtTranCost itc WITH (NOLOCK) on it.InvtTranKey = itc.InvtTranKey
                      join timCostTier ct WITH (NOLOCK) on itc.CostTierKey = ct.CostTierKey
                      where it.WhseKey <> ct.WhseKey)
          begin
           select @errno  = 50008,
                       @errparam1 = 'INSERT', @errparam2 = 'WhseKey', @errparam3 = 'timCostTier.WhseKey'
                raiserror (@errno, 16, 1, @errparam1, @errparam2, @errparam3)
          end

    How we worked around the performance issue was to preload the records we needed into table variables and modify the statements to use those, as the following example shows:

    DECLARE @_lDistMatch Table
       (
      InvtTranKey  INTEGER
        ,ItemKey   INTEGER
        ,WhseKey   INTEGER
       );

       INSERT INTO @_lDistMatch
       (InvtTranKey, ItemKey, WhseKey)
       SELECT itd.InvtTranKey, itd.ItemKey, itd.WhseKey
       FROM inserted it
       inner join timInvtTranDist itd WITH (NOLOCK)
      on it.InvtTranKey = itd.InvtTranKey

       -- Check that the ItemKey matches the distribution ItemKey.
          if exists (select 1
                      from inserted it
                      join @_lDistMatch itd on it.InvtTranKey = itd.InvtTranKey
                      where it.ItemKey <> itd.ItemKey)
          begin
           select @errno  = 50008,
                       @errparam1 = 'INSERT', @errparam2 = 'ItemKey', @errparam3 = 'timInvtTranDist.ItemKey'
                raiserror (@errno, 16, 1, @errparam1, @errparam2, @errparam3)
          end

    As the table variables only hold the records associated with the inserted InvtTranKey values, the record count is much smaller and performance improved dramatically.

    I hope that helps.

  • 0 in reply to LouDavis

    Thank you so much for posting, and thank you for your quick response. This could really impact my organization.

    Is this the exact code in your trigger now? 

    Did you have to purge those records? Were you able to do it yourself, or did you just use this data-limiting technique in the trigger to handle all the records?

  • 0 in reply to Cameron Loepker

    This is only a partial code snippet of how we resolved the issue. Basically, in any statements using timInvtTranDist, we substituted our pre-populated table variable @_lDistMatch, which is a much smaller data set and allowed for quicker processing since there are no longer any index scans of timInvtTranDist. We didn't have to purge any data, but we saw immediate improvement. The cost tier checks follow the same pattern; a rough sketch is below.
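
    To be clear, the sketch below is not the exact code from our mod, just an illustration of applying the same pattern to the cost tier checks; the table variable name @_lCostTierMatch is made up for the example, and the column names come from the joins in the original trigger code above.

        -- Preload only the cost tier rows tied to the inserted transactions.
        DECLARE @_lCostTierMatch TABLE
           (
            InvtTranKey  INTEGER
           ,ItemKey      INTEGER
           ,WhseKey      INTEGER
           );

        INSERT INTO @_lCostTierMatch
           (InvtTranKey, ItemKey, WhseKey)
        SELECT itc.InvtTranKey, ct.ItemKey, ct.WhseKey
        FROM inserted it
        inner join timInvtTranCost itc WITH (NOLOCK)
           on it.InvtTranKey = itc.InvtTranKey
        inner join timCostTier ct WITH (NOLOCK)
           on itc.CostTierKey = ct.CostTierKey

        -- Check that the ItemKey matches the cost tier ItemKey.
        if exists (select 1
                    from inserted it
                    join @_lCostTierMatch ct on it.InvtTranKey = ct.InvtTranKey
                    where it.ItemKey <> ct.ItemKey)
        begin
            select @errno  = 50008,
                   @errparam1 = 'INSERT', @errparam2 = 'ItemKey', @errparam3 = 'timCostTier.ItemKey'
            raiserror (@errno, 16, 1, @errparam1, @errparam2, @errparam3)
        end

    The WhseKey check against the cost tier works the same way, just comparing it.WhseKey to ct.WhseKey against the same table variable.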

    This change may not resolve all of the performance problems, but it should help. I would recommend working with your reseller to properly troubleshoot and document the areas where you are having performance problems. If they are unable to assist, have your reseller contact and work with a company that has experience with this, such as E2b Teknologies or RKL.

  • 0 in reply to LouDavis

    Can you include the full code, by chance? We have a spending freeze and I am not able to pay for the help we need... I would really appreciate your help with this.