Weighted Micro Function Points

Weighted Micro Function Points (WMFP) is a modern software sizing algorithm introduced by Logical Solutions[1] in 2009 as a successor to established scientific methods such as COCOMO, COSYSMO, the maintainability index, cyclomatic complexity, function points, and Halstead complexity. It produces more accurate results than traditional software sizing methodologies,[2] while requiring less configuration and knowledge from the end user, as most of the estimation is based on automatic measurements of existing source code. Whereas many of its ancestor methods use source lines of code (SLOC) to measure software size, WMFP uses a parser to understand the source code, breaking it down into micro functions and deriving several code complexity and volume metrics, which are then dynamically interpolated into a final effort score. In addition to being compatible with the waterfall software development life cycle methodology, WMFP is also compatible with newer SDLCs, such as Six Sigma, the Boehm spiral, and Agile (AUP/Lean/XP/DSDM) methodologies, thanks to its differential analysis capability, made possible by its higher-precision measurement elements.[3]

Measured elements

The WMFP measured elements are several different software metrics deduced from the source code by the WMFP algorithm analysis. They are represented as a percentage of the whole unit's (project or file) effort, and are translated into time.

Flow complexity (FC) – Measures the complexity of a program's flow control path in a similar way to traditional cyclomatic complexity, but with higher accuracy, using weights and relation calculations.
Object vocabulary (OV) – Measures the quantity of unique information contained in the program's source code, similar to the traditional Halstead vocabulary, with dynamic language compensation.
Object conjuration (OC) – Measures how frequently the information contained in the program's source code is used.
Arithmetic intricacy (AI) – Measures the complexity of arithmetic calculations across the program.
Data transfer (DT) – Measures the manipulation of data structures inside the program.
Code structure (CS) – Measures the amount of effort spent on program structure, such as separating code into classes and functions.
Inline data (ID) – Measures the amount of effort spent on embedding hard-coded data.
Comments (CM) – Measures the amount of effort spent on writing program comments.
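The element shares and their translation into time can be illustrated with a minimal sketch. This is not the WMFP implementation; the element abbreviations come from the list above, while the percentage shares, the helper `element_hours`, and the 120-hour total are invented for illustration.

```python
# Illustrative sketch only: the eight WMFP measured elements are reported
# as percentage shares of a unit's (project or file) total effort, which
# are then translated into time. All numeric values here are invented.
ELEMENTS = ["FC", "OV", "OC", "AI", "DT", "CS", "ID", "CM"]

def element_hours(shares, total_hours):
    """Translate per-element percentage shares into programmer hours."""
    assert abs(sum(shares.values()) - 100.0) < 1e-6, "shares must cover the whole unit"
    return {name: total_hours * shares[name] / 100.0 for name in ELEMENTS}

# Hypothetical breakdown for one source file, assuming 120 hours of total effort.
shares = {"FC": 30, "OV": 15, "OC": 10, "AI": 5, "DT": 15, "CS": 10, "ID": 5, "CM": 10}
hours = element_hours(shares, 120.0)  # e.g. flow complexity accounts for 36 of the 120 hours
```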

Calculation

The WMFP algorithm uses a three-stage process: function analysis, APPW transform, and result translation. A dynamic algorithm balances and sums the measured elements to produce a total effort score. The basic formula is:

∑_{i=1..N} (W_i M_i) × ∏_{q=1..K} D_q
M = the source metric value measured by the WMFP analysis stage
W = the adjusted weight assigned to metric M by the APPW model
N = the count of metric types
i = the current metric type index (iteration)
D = the cost driver factor supplied by the user input
q = the current cost driver index (iteration)
K = the count of cost drivers
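Under the definitions above, the basic formula can be sketched directly. This is an illustration only: the real APPW weights and the dynamic balancing are proprietary, so the metric values, weights, and cost drivers below are placeholders.

```python
from math import prod

def wmfp_effort(metrics, weights, drivers):
    """Basic WMFP effort score: the sum of W_i * M_i over the N metric
    types, multiplied by the product of the K cost driver factors D_q."""
    assert len(metrics) == len(weights), "one weight per metric type"
    weighted_sum = sum(w * m for w, m in zip(weights, metrics))
    return weighted_sum * prod(drivers)

# Placeholder inputs: two metric types and a single cost driver.
score = wmfp_effort(metrics=[10.0, 4.0], weights=[1.2, 0.8], drivers=[1.1])
# (1.2*10 + 0.8*4) * 1.1 = 16.72
```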

This score is then transformed into time by applying a statistical model called average programmer profile weights (APPW), a proprietary successor to COCOMO II 2000 and COSYSMO. The resulting time in programmer work hours is then multiplied by a user-defined cost per hour of an average programmer to produce an average project cost, translated into the user's currency.

Downsides

The basic elements of WMFP, when compared to those of traditional sizing models such as COCOMO, are more complex, to the degree that they cannot realistically be evaluated by hand, even on smaller projects, and require software to analyze the source code. As a result, it can only be used for analogy-based cost predictions, not theoretical educated guesses.


This article is issued from Wikipedia (version of 30 March 2013). The text is available under the Creative Commons Attribution/Share Alike license; additional terms may apply for the media files.