Salvatore,
thank you for your explanation. I read your document on performance and found that we're trying a 'big data' variation of scenarios 5 & 6. In general terms, our problem is scaling background processing while doing online processing concurrently on the same server. Since we cannot control which transaction will run on each instance, our approach was to split the productive system into multiple systems, each one responsible for part of the load:
System 1: user processing - does not have scheduled transactions
System 2: batch processing part 1 - scheduled transactions for sites A and B
System 3: batch processing part 2 - scheduled transactions for site C (part 1)
System 4: batch processing part 3 - scheduled transactions for site C (part 2)
System 5: batch processing part 4 - scheduled transactions for site D
This architecture has many obvious support drawbacks, but it is the only way we found to separate user and batch loads on MII. On ABAP systems we have job target control.
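To make the split above concrete, here is a minimal sketch (illustrative only, not MII code) of the routing we are effectively doing by hand: user requests always land on the dedicated online system, while each site's scheduled transactions are pinned to their own batch system. All the names are assumptions taken from the list above.

```python
from typing import Optional

# Hypothetical site-to-system mapping, mirroring the split described above.
SYSTEM_FOR_SITE = {
    "A": "System 2",
    "B": "System 2",
    "C-part1": "System 3",
    "C-part2": "System 4",
    "D": "System 5",
}

def target_system(request_type: str, site: Optional[str] = None) -> str:
    """Return which system handles a request (sketch of our manual routing)."""
    if request_type == "user":
        # Online processing is isolated on a system with no scheduled jobs.
        return "System 1"
    # Scheduled (batch) transactions are pinned per site.
    return SYSTEM_FOR_SITE[site]

print(target_system("user"))        # System 1
print(target_system("batch", "D"))  # System 5
```

This is essentially a poor man's job target control: the mapping lives in the landscape itself rather than in a scheduler setting.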
I wonder what other users are doing to separate the load, or maybe they don't have that much processing on one server, as in your scenario. Here at Petrobras we're used to dealing with massive amounts of data, say 5K tags per request, times the varying granularity.
Please advise.
Regards,
Marcos