Neches is BP3's static code analyzer for IBM BPM solutions. One of the goals of the tool was to provide a raw "score" for every BPM solution that articulates the complexity (or, conversely, the maintainability) of the underlying solution. This turned out to be far more complex than we thought it would be when we started.
I've hit on an analogy that I believe helps people understand why this is such a complex problem. It also happens to give you a helpful framework for understanding the core of how Neches works.
For the analogy, let's assume we have created a tool similar to Neches that can take any piece of writing (e.g. a poem, an e-mail, a book) and assess both the spelling and the grammar used in that work. Every misspelled word or grammar violation it detects adds one point to your overall score. We will call our new tool "Webster".
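To make the scoring model concrete, here is a toy sketch of how a Webster-style scorer might tally points. The tiny dictionary and the single grammar check are made-up stand-ins for illustration, not part of any real tool:

```python
# Hypothetical Webster-style scorer: one point per detected violation.
# KNOWN_WORDS and the capitalization check are invented examples.

KNOWN_WORDS = {"the", "cat", "sat", "on", "mat", "a"}

def webster_score(text: str) -> int:
    score = 0
    for sentence in text.split("."):
        words = sentence.lower().split()
        if not words:
            continue
        # Spelling: one point per word missing from our (tiny) dictionary.
        score += sum(1 for w in words if w not in KNOWN_WORDS)
        # Grammar: one point if the sentence doesn't start with a capital.
        if not sentence.strip()[0].isupper():
            score += 1
    return score

print(webster_score("The cat sat on the mat. the dog barked."))  # -> 3
```

The second sentence earns one grammar point (no capital) and two spelling points ("dog" and "barked"), so the total is 3. Note that nothing here judges whether the text says anything sensible; it only counts rule violations, which is the whole point of the analogy.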
Now any text you put into this tool will get a "Webster" score. What you can easily see is that there is really no upper bound on your possible Webster score. But we have some interesting findings that directly relate to how Neches scores work. Specifically:
- If you put in an answer to an essay question, Webster cannot tell you whether you actually answered the question asked; it can only tell you the spelling and grammar quality of the answer you provided. Likewise, Neches cannot tell you whether your solution solves your process problem; it can only tell you how maintainable the solution is.
- It should be readily evident that the Webster score is effectively unbounded. The larger a document is, the more opportunities the author has to violate a grammar or spelling rule. A 10-line poem will likely have a much lower score than an 800-page novel. The same is true for Neches: the more artifacts in your solution, the higher your score is likely to be.
- Webster is likely to apply rules that you decide don't apply to your work. For example, a poem is likely to break a number of grammar rules, and a Dr. Seuss book will violate spelling rules. The author needs to decide whether a rule matters in the context of the writing. Likewise, Neches may flag something as difficult to maintain, but your team may decide it is the best way to meet the underlying process need.
One of the biggest things we struggled with in Neches is that people want to understand what a "good" score is. To continue the Webster analogy, you would likely want to know, "How does the score for my 10 page essay compare to other 10 page essays you have seen?" That is, what you really want is not the raw Webster score but a grade. Since most of us are familiar with percentile rankings, Webster could pick the 10 essays most similar to yours and give you a percentile rank. If there are 5 essays with better scores and 5 with worse, Webster would say you are in the 50th percentile. If all the essays were worse than yours, you would be in the 100th.
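The grading step described above boils down to a simple percentile calculation. A minimal sketch, assuming lower raw scores are better and that a tie counts as neither better nor worse (the peer scores here are invented for illustration):

```python
def percentile_rank(my_score: int, peer_scores: list[int]) -> float:
    """Percent of peer scores worse (higher) than mine; lower raw scores
    are better, so the 100th percentile is the best possible ranking."""
    worse = sum(1 for s in peer_scores if s > my_score)
    return 100.0 * worse / len(peer_scores)

# Five peers score better than 20 and five score worse -> 50th percentile.
peers = [10, 12, 14, 16, 18, 22, 24, 26, 28, 30]
print(percentile_rank(20, peers))  # -> 50.0
```

If your score were lower than every peer's, all ten would count as worse and you would land in the 100th percentile, matching the essay example above.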
Starting this week, Neches will do away with the artificial upper bound placed on Neches scores and instead show you a raw, unbounded score. If you have the BPM equivalent of an 800-page novel, your score will be high. To get to the core of "how am I doing overall", we will pick snapshots from other solutions that we feel are similar to yours in terms of how many, and what types of, assets are in the solution. We will then tell you where you rank compared to these other solutions.
It is important to understand that unless our rules change (or you upload a new snapshot), your absolute score will remain unchanged. Your percentile ranking, however, can fluctuate over time as the population of matching solutions grows.