In Part I of this series we discussed how the instrumentation monitor page in IBM BPM can be used to monitor and troubleshoot performance issues.
In Part II I would like to focus on some additional tricks you can do with the instrumentation monitor page, and on how it can be used to monitor your system during performance benchmarks.
As we discussed in the previous part, the instrumentation page itself is a useful source of statistics that can be monitored over time.
When you're preparing your environment for Production rollout, you want it to perform as expected with X users and Y instances/tasks generated per day. A common way of validating whether your environment is ready for Production is to run an automated and/or manual performance benchmark test simulating the expected load on the system. The most important question is: how can you determine for sure that BPM is performing well during your benchmark test?
You will want to monitor your environment during these tests. For OS-level resources like CPU, RAM, etc., there are a number of utilities in each OS; these are out of scope for this article. You may want to monitor the JVM itself for things like GC, heap, and CPU consumption (I personally suggest using WAIT for this). For application server monitoring, where you would check thread pools, transactions, DB connections, etc., there are a number of built-in tools in WAS and a number of third-party tools; again, these are out of scope for this article as well. Last but not least is monitoring of BPM itself: coaches, caches, web services, services, BPDs, etc. Most of these can be monitored through the instrumentation monitor page.
It's rather difficult to capture/save the data from the instrumentation monitor page in BPM in a readable format, but I found a handy way of doing it.
So, before you run your performance benchmark test, go to the instrumentation monitor page and click on Reset. This will reset all the counters to 0. Now use the following link:
This will start instrumentation logging (it will generate a .dat file) and, in addition, it will print the current stats in XML format in the browser. Save this file - it will be our baseline (before the benchmark).
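If you run these benchmarks regularly, it helps to save each XML dump with a timestamp so the before/after pair is unambiguous. Here is a minimal sketch in Python; the URL is a placeholder for the instrumentation link used in your environment, and the `baseline_filename`/`save_stats` helper names are my own invention for illustration.

```python
# Sketch: save the instrumentation XML dump to a timestamped file.
# Substitute the actual instrumentation start/stop link for your
# environment for the placeholder URL -- it is not reproduced here.
import datetime
import urllib.request

def baseline_filename(prefix="instrumentation", when=None):
    """Build a timestamped filename such as before-20200102-030405.xml."""
    when = when or datetime.datetime.now()
    return "{}-{}.xml".format(prefix, when.strftime("%Y%m%d-%H%M%S"))

def save_stats(url, filename):
    """Fetch the XML stats printed by the instrumentation link and save them."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    with open(filename, "wb") as f:
        f.write(data)
    return filename

# Usage (placeholder URL):
# save_stats("https://bpm-host:9443/<instrumentation-link>", baseline_filename("before"))
```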
Run your performance benchmark test and, once it's complete, use this link:
It will stop logging and will also print the same type of XML dump of stats in your browser. Save this file - it will be our after-benchmark file.
Now we have two sets of data:
1) The inst00X.dat file in the profile_root/logs directory - this can be analyzed following the steps from the technote mentioned in Part I.
2) Two XML files: before-benchmark and after-benchmark. You may use a tool like XML2CSV to easily convert the data to CSV format, then compare the numbers before and after the run and make your conclusions and corresponding tuning changes based on the differences, using the techniques described in Part I.
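If you would rather script the comparison than go through a CSV detour, the delta between the two dumps can be computed directly. The sketch below is a rough illustration only: the element name `stat` and the attributes `name`/`count` are assumptions I made up for the example, so adjust the parsing to whatever structure your XML dump actually contains.

```python
# Sketch: compute per-stat deltas between the before and after XML dumps.
# The "stat"/"name"/"count" names are assumed for illustration -- adapt
# them to the real structure of your instrumentation XML.
import xml.etree.ElementTree as ET

BEFORE = """<instrumentation>
  <stat name="Coach rendering" count="120"/>
  <stat name="EJB calls" count="45"/>
</instrumentation>"""

AFTER = """<instrumentation>
  <stat name="Coach rendering" count="980"/>
  <stat name="EJB calls" count="450"/>
</instrumentation>"""

def stat_counts(xml_text):
    """Map stat name -> count from one instrumentation XML dump."""
    root = ET.fromstring(xml_text)
    return {s.get("name"): int(s.get("count")) for s in root.findall("stat")}

def diff(before_xml, after_xml):
    """Per-stat growth during the benchmark (after minus before)."""
    before, after = stat_counts(before_xml), stat_counts(after_xml)
    return {name: after.get(name, 0) - count for name, count in before.items()}

print(diff(BEFORE, AFTER))  # each value is the growth during the benchmark run
```

In practice you would read the two saved files instead of the inline strings; the point is that once the counters are in a dict, the before/after comparison is a one-liner.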
If you run WAIT or any other tool to monitor the JVM in parallel, then you will likely have javacores generated at the same time (the WAIT tool stores them inside the zip file that it generates at the end).
If that is the case, you can correlate the javacores with the instrumentation data (once you convert the .dat file to .txt format): threads in the javacores correspond to threads in the instrumentation log file. That way you can tell if/when something is blocked/running/waiting; the instrumentation data tells you for how long, whereas the javacores show the whole execution stack.
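The correlation step can be partly automated by pulling thread names and states out of the javacore and then searching for those names in the converted instrumentation log. The `3XMTHREADINFO` line format below is the one IBM javacore dumps use; the excerpt itself is invented sample data for the sketch.

```python
# Sketch: extract thread names and states from a javacore so they can be
# matched against thread names in the converted instrumentation log.
# The excerpt is made-up sample data in the IBM 3XMTHREADINFO line format.
import re

JAVACORE_EXCERPT = """\
3XMTHREADINFO      "WebContainer : 0" J9VMThread:0x0000000001, state:R, prio=5
3XMTHREADINFO      "WebContainer : 1" J9VMThread:0x0000000002, state:B, prio=5
3XMTHREADINFO      "JMS Session" J9VMThread:0x0000000003, state:CW, prio=5
"""

THREAD_RE = re.compile(r'3XMTHREADINFO\s+"([^"]+)".*\bstate:(\w+)')

def thread_states(javacore_text):
    """Map thread name -> state (R=runnable, B=blocked, CW=condition wait)."""
    return dict(THREAD_RE.findall(javacore_text))

# Threads in state B (blocked) or CW (waiting) are the ones worth looking up
# in the instrumentation log to see how long they spent there.
blocked = [n for n, s in thread_states(JAVACORE_EXCERPT).items() if s in ("B", "CW")]
print(blocked)
```

Feeding the `blocked` list into a simple grep over the instrumentation .txt file gives you, per suspicious thread, both the duration (from instrumentation) and the full stack (from the javacore).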