Executing OPENQUERY against Linked Sage 100 from SQL stops at IDBInitialize::Initialize

I'm not familiar with how OLE DB works, so I hope someone here can help.

We have Sage 100 ERP 5.2 set up as a linked server in MSSQL 2008.  Some scripts in Sage execute stored procs in SQL at post-write and post-validate events.  Most of the time there is no problem, but about one in every 100-1,000 executions results in an indefinite OLEDB wait, and the instance has to be restarted.  Since it's sporadic, it took me a while to capture it in Profiler, but I found that every time it happens, execution stops at a particular method, IDBInitialize::Initialize.  After that, SQL keeps waiting for the response from Sage, which I believe should come back almost instantly.

The question is....what could be causing Sage to not respond?  Where else should I look?  Both are running on the same server, and the scripts are server-side. 
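
For context, the failing call is essentially a pass-through read over the linked server. This is an illustrative sketch, not our actual code: MAS90 is a stand-in for the linked-server name, and the column names are made up.

```sql
-- Hypothetical sketch of the pass-through read that hangs (MAS90 is an
-- assumed linked-server name). OPENQUERY sends the inner string verbatim
-- to the Sage ODBC driver; when the hang occurs, the SQL session sits in
-- an OLEDB wait while the provider is stuck in IDBInitialize::Initialize.
SELECT *
FROM OPENQUERY(MAS90,
    'SELECT SalesOrderNo, OrderDate
     FROM SO_SalesOrderHeader
     WHERE SalesOrderNo = ''0000123''');
```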

Many thanks.

Ken

  • 0
    I've had scripts stall waiting for a record to be available. If I'm updating a value in a Sales Order, and someone has that specific order open in SO Entry, the record is locked and the script just hangs.
  • 0 in reply to Kevin M
    Thanks for the input Kevin!

    One of my scripts that stalls fires at Table Post-Write on SO_SalesOrderHeader. It makes a connection to SQL Server and executes a stored proc that runs a query against MAS and imports the sales order into a table in SQL (or updates it if it's an existing record). It's basically an integration routine.

    I tested the potential locking issue based on your input. I opened a sales order in the Sales Order Entry UI to lock the record, then ran a statement in SQL against MAS to see if it would wait for the release. It looks like the read-only ODBC driver really doesn't care and reads it fine whether the record is locked or not. Back to square one....

    One thing that keeps me thinking is a particular DSN option, burstmode, since I don't fully understand how it could affect a remote read-only query like mine. Do you have any input on that as well?
  • 0 in reply to msgking

    The scripts that I have lock up occasionally are writing (usually setting a UDF flag for an automated / stepped process), not just running read-only queries.
    I have also experienced issues with OPENQUERY, but they seem to have gone away when I updated the 64-bit ODBC driver to the version that came with v2014 (...I was originally working with the driver that came with v2013). Another issue I had was the user context of the SQL statement execution. Using a trusted connection often let me down, but with a native SQL login it has been more consistent.
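
    For what it's worth, switching the linked server from a trusted (pass-through) connection to a fixed login can be sketched like this; MAS90 and the remote credentials are illustrative, not the actual values.

    ```sql
    -- Map all local logins to one fixed remote login on the linked server,
    -- instead of passing through Windows credentials. Server name and
    -- credentials here are placeholders.
    EXEC sp_addlinkedsrvlogin
        @rmtsrvname  = N'MAS90',
        @useself     = N'FALSE',    -- do not use the caller's own credentials
        @locallogin  = NULL,        -- applies to all local logins
        @rmtuser     = N'sageuser', -- illustrative remote user
        @rmtpassword = N'********';
    ```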

    Edit:

    For your burstmode question, I set up the connection as documented elsewhere in SageCity.  I'm sure you've seen that article already, but if not let me know and I'll hunt down a link.

  • 0 in reply to Kevin M

    Thanks again for the info.   I wonder why you had issues with a trusted connection.  Our stored procs are executed as owner, and all the sessions belong to the schema owner, so this might not be relevant to me.  But I'm curious: did you run any trace in Profiler or Extended Events when you had your issues?  What kind of issues were you having?   I'm thinking of tracing through the DSN, but I'm scared of the probably huge trace file...

    As for burstmode....I found some explanations:

    With Burst mode set, the ODBC driver locks the file header for either 20 file operations or three-tenths of a second, whichever occurs first. This decreases the number of times the file must be locked, and the number of times that internal buffers may need to be reloaded.

    Normally, when the ODBC driver accesses data files, it must place a temporary lock on the file. This temporary lock guarantees that the driver reads key tables and structures that are in a consistent state and not in the process of being altered. Once the temporary lock is established, the driver checks the file header to see if it has been changed since the last time the file was accessed. If the file has not been altered, then the ODBC driver can use any of the data still maintained in its buffers. If the file has been altered, then all data in the buffers is discarded. When the driver has completed its access to the data file, the temporary lock is released.

    The process is repeated for each file accessed by the driver, for each operation on the file.

    I kinda get the implied performance boost and the potential penalty in other areas, but I'm not certain how this could impact my transaction.

    I looked at the connection string for the linked server (not the System DSN setting), and I see burstmode=1.  I set it to zero to see if this helps. Thanks again!
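
    For anyone comparing notes, the change amounts to flipping one key in the linked server's provider string. The exact key names and other entries vary by driver version and setup, so this fragment is illustrative only:

    ```
    DSN=SOTAMAS90;Directory=\\server\MAS90;DirtyReads=0;BurstMode=0
    ```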

  • 0 in reply to msgking
    In a document on creating a silent DSN based on the SOTAMAS90 DSN, this was noted: "A very, very important note is on the ODBC connection string: make sure that DirtyReads=1 and BurstMode=1 are removed. It is critical to omit these options in the connection string."
  • 0 in reply to BigLouie

    Hi BigLouie,

    Thanks for chiming in. I've read much of the expert advice you've left on this forum, some of it religiously, and I want you to know that I have lots of respect for you (that's not to say I don't respect you, Kevin. You've helped me a few times already and I owe you big time!). I completely understand the need for DirtyReads to be off. Except in development and troubleshooting, and possibly in very specialized queries, I can't think of a good reason for it. As for BurstMode, I saw you mentioned it somewhere after I last responded to Kevin. Thanks for sharing the info again here. Did Sage or anyone ever make it clear why burstmode should be off in this sort of application? I'm wondering what the risk is when, say, two Silent DSNs on the network, one with burstmode on and the other off, do heavy read operations at the same time. With my limited knowledge, I can't think of anything other than some performance penalty.

    That said....I've tested the new connection string with BurstMode set to zero to turn it off (DirtyReads was zero already). And just to avoid any funny business, I made sure the System DSN with the same name had the same options. I was really hoping this would help, but it didn't. I ran a very simple OPENQUERY statement, e.g. 'select * from so_soheader where So# IS NULL', 100 times concurrently, spread over 4 different sessions. All but one of them got stuck within seconds.

    At least I might be on to something on how to replicate the issue at will. If you have any suggestion, even a shot in the dark, I'd very much appreciate it! Many thanks again.
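
    To make the repro concrete, the batch each session runs is essentially this (using MAS90 as a stand-in for the linked-server name; the inner query is the one quoted above, which returns no rows, so the hang is in connection setup rather than data transfer):

    ```sql
    -- Run this batch simultaneously in 2+ sessions; with more than two
    -- sessions, at least one reliably hangs in an OLEDB wait.
    -- MAS90 stands in for the actual linked-server name.
    DECLARE @i INT = 0;
    WHILE @i < 100
    BEGIN
        SELECT * FROM OPENQUERY(MAS90,
            'select * from so_soheader where So# IS NULL');
        SET @i = @i + 1;
    END
    ```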

  • 0 in reply to msgking
    My OPENQUERY lockups were different from your situation: overnight chained stored procedures pulling data from Sage 100, which would lock up partway through (requiring the SQL service to be restarted to allow the connection to work again).
    Are you using the latest version of the 64-bit ODBC driver?
  • 0 in reply to Kevin M

    I'm using the 32-bit driver 4.40.0004. The 64-bit driver is installed, but with SQL Server 2008 being 32-bit, I don't think the 64-bit driver is a viable option for me....unfortunately.

    Last night I confirmed that I could replicate the issue at will using the method I described above, by having multiple sessions run concurrently. Running the batch of 100 executions at the same time in 2 separate sessions, or one right after another, will lock up one session. Each session can successfully get data back for at least one execution.  If more than 2 sessions are involved, at least one of them will hang while one or none finishes.

    Now I know how to cause it. I still need help on what causes it, and on what else (logs, etc.) might give me a clue on how to circumvent it. A bit hopeless at the moment....
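
    In case it helps the next person, the hung sessions can at least be identified from the waiting-task DMVs (standard on SQL Server 2008) without restarting anything, e.g.:

    ```sql
    -- Find sessions currently stuck in an OLEDB wait, how long they have
    -- been waiting, and the statement they were running.
    SELECT r.session_id,
           r.wait_type,
           r.wait_time AS wait_ms,
           t.text      AS running_sql
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.wait_type = 'OLEDB';
    ```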

  • 0 in reply to Kevin M
    Just to give you an update: I found out that the version of SQL Server is R2 SP1. Since this is out of support, and the problem happens only through the linked server and in no other environment, I'm concluding this is a bug on the SQL Server side. Bummer.