Geek Stuff: Function Point Analysis
greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread
(Techno-Geek posting ahead. Abandon hope all ye who read on. Last chance to turn back for 4,000 arcane acronyms. Consider yourself warned.)
Most software developers agree that the 'lines of code' (LOC) metric is not a very good indication of software complexity.
I've been aware of the concept of using function point analysis (FPA) to measure software complexity for a while, but I don't myself have extensive experience with it. In all the large projects I've been part of, including my work on GPS for a major defense contractor, FPA never even entered into the equation. (Though I suspect it should have.)
The December 1998 issue of Scientific American contained an excellent overview of FPA by Capers Jones. It can be read online at http://www.sciam.com/1998/1298issue/1298jones.html
In that article, Mr. Jones acknowledges that FPA is not widely used in the industry:
"Despite such helpful information, the regrettable fact is that most organizations do not perform any kind of useful software measurements at all. Although more than 100,000 projects have been sized with function points to date, that number is below 1 percent of the total number of software applications in use. Among smaller firms, especially those with fewer than 100 software professionals, neither function points nor any other measurement techniques are yet widespread. But function-point usage has become quite common among large companies with more than 10,000 people involved in developing, testing and maintaining computer programs. For such corporations, software is a major cost, one that demands accurate measurement and estimation."
It seems to me that if many of the large programs being remediated today were examined using this metric, most of the program managers would be in for quite a shock. Indeed, I believe that failure to apply such a metric directly leads to a huge under-estimation of the effort/cost required to successfully complete such projects. Capers's graph of cost per function point (broken down by general application type) is one more reason I believe you will see remediation costs skyrocket this year - especially in the military apps.
My questions are for those of you with significant programming experience. Do any of you have significant FPA experience? (Are any of you certified?) Do you find the metric useful in practice? Is anyone applying this to Y2K remediation? What are your general feelings about FPA? Is my impression (that FPA is a much more useful and accurate measure of software complexity) correct?
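For anyone who hasn't run across FPA before, the core of the technique is counting a system's externally visible components (inputs, outputs, inquiries, logical files, interfaces) and weighting each by complexity. Here's a rough sketch of the standard unadjusted-function-point arithmetic using the usual IFPUG-style weights; the "payroll subsystem" counts below are purely made-up illustration, not real data:

```python
# IFPUG-style weights for (simple, average, complex) instances of each
# component type. These are the commonly published weight tables.
WEIGHTS = {
    "external_inputs":     (3, 4, 6),
    "external_outputs":    (4, 5, 7),
    "external_inquiries":  (3, 4, 6),
    "internal_files":      (7, 10, 15),
    "external_interfaces": (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts maps component name -> (n_simple, n_average, n_complex)."""
    total = 0
    for name, ns in counts.items():
        total += sum(n * w for n, w in zip(ns, WEIGHTS[name]))
    return total

# Hypothetical payroll subsystem - the counts are invented for illustration.
example = {
    "external_inputs":     (6, 4, 2),
    "external_outputs":    (3, 5, 1),
    "external_inquiries":  (4, 2, 0),
    "internal_files":      (2, 3, 1),
    "external_interfaces": (1, 1, 0),
}
print(unadjusted_fp(example))  # prints 181
```

The full method then multiplies this unadjusted count by an adjustment factor derived from general system characteristics, but the counting step above is where most of the work (and the tedium people complain about) lives.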
-- Arnie Rimmer (firstname.lastname@example.org), December 28, 1998
Oy vey! I didn't even want to open this can of worms by mentioning function points (FP) in the essay I just wrote; but since the figures came from Capers, I felt that it was only ethical to use the metric units that he used.
As you point out, only a small minority of projects actually use FP's, and there is still great debate about their relevance in real-time systems, maintenance projects, object-oriented projects, etc. (In fairness, it should be pointed out that Capers and his colleagues have eloquent rebuttals to all of those concerns.)
Actually, the FP concept does have some relevance when looking at Y2K from a "macro" perspective, because it helps us avoid comparing "apples and oranges". If you hear that a Y2K project is trying to remediate 10 million lines of code, it's very important to know whether that's COBOL code (relatively easy) or assembler code (relatively hard). In any case, all of the metrics in the Y2K book that Capers has written are expressed in function points.
In casual conversations about this kind of stuff, especially with a non-technical audience, I would normally convert everything to the equivalent number of statements in a common 3rd-generation language like COBOL. But as noted above, I felt that a direct attribution of tables of figures from a Capers book demanded the use of his own metric unit.
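The conversion Ed describes - going between function points and statements in a given language - is what the FP literature calls "backfiring". As a sketch only (the published ratios vary by source and these figures are approximations commonly attributed to Capers Jones's tables, not exact values), it works like this:

```python
# Approximate "lines of code per function point" ratios, as commonly
# cited from published backfiring tables. Treat these as illustrative
# ballpark figures, not authoritative constants.
LOC_PER_FP = {
    "assembler": 320,   # basic assembly
    "c":         128,
    "cobol":     107,
    "cpp":        53,
}

def approx_function_points(loc, language):
    return loc / LOC_PER_FP[language]

# The same "10 million lines" is a very different project by language:
for lang in ("cobol", "assembler"):
    fp = approx_function_points(10_000_000, lang)
    print(f"{lang}: ~{fp:,.0f} function points")
```

This is exactly the apples-and-oranges point above: 10 million lines of assembler represents far less delivered functionality per line, and far more remediation effort, than 10 million lines of COBOL.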
-- Ed Yourdon (email@example.com), December 28, 1998.
Make FPA a rule of the game and about 90% of the PAs would have to go to the sideline, and 99.9% of management would be in bread lines.
-- curtis schalek (firstname.lastname@example.org), December 28, 1998.
I do programming and testing on PC systems (PC & Mac). I also do "white-box" testing, which utilizes the original source code. When I'm preparing an estimate on certain kinds of testing, I utilize both linecounts AND FPA. Neither method of analysis gives a complete picture. This is especially true in the newer object-oriented systems.

The systems we're primarily concerned with regarding Y2K use older "structured" programming techniques. These programs tend to have longer functions and fewer interconnections within the program than the newer object-oriented programs (OOP). That is not to say that they aren't extremely complex. Most of the newer technologies' complexity lies in user interface (UI) support; less attention is paid to UI in the older systems. However, the internals of the program and how it manipulates its data are just as complex.

Many of the problems lie in the database schema created and modified for the system. Relational database theory - which is what rules most of the systems possibly afflicted by Y2K - is pretty much the same now as in the 70's and 80's. One not-too-obvious area of complexity is the way relational databases store their data broken up into many, many pieces spanning many different files, and even different machines and locations.

Using a simple metric such as counting lines of code is still valuable, because the project's size and scope can be evaluated relative to other projects of its type. My point is that defining complexity is in itself a complex task, and any one tool or method is inadequate for the task - but should also not be ignored.
Just a thought.
-- N.A. Cornelius (email@example.com), December 29, 1998.
"One not-to-obvious area of complexity is the way Relational Databases store their data broken up into many many pieces spanning many different files and even in different machines and locations."
Another complexity introduced in the context of what you posted above is the level of normalization of the RDBMS tables. Denormalizing can, as you know, greatly increase the redundancy of stored values, with the trade-off of fewer joins when doing multiple-table queries and updates.
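To make that trade-off concrete, here is a tiny sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration. The denormalized table repeats the department name on every employee row (redundancy, and more places to update), while the normalized design stores it once but needs a join to recover it:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    -- Normalized: department name stored once, referenced by id.
    CREATE TABLE dept (dept_id INTEGER PRIMARY KEY, dept_name TEXT);
    CREATE TABLE emp  (emp_id INTEGER PRIMARY KEY, name TEXT,
                       dept_id INTEGER REFERENCES dept(dept_id));
    -- Denormalized: department name repeated on every employee row.
    CREATE TABLE emp_denorm (emp_id INTEGER PRIMARY KEY, name TEXT,
                             dept_name TEXT);
""")
db.execute("INSERT INTO dept VALUES (1, 'Payroll')")
db.executemany("INSERT INTO emp VALUES (?, ?, 1)",
               [(1, 'Ada'), (2, 'Grace')])
db.executemany("INSERT INTO emp_denorm VALUES (?, ?, 'Payroll')",
               [(1, 'Ada'), (2, 'Grace')])

# Normalized design: a join per query, but 'Payroll' is stored once.
joined = db.execute("""SELECT e.name, d.dept_name
                       FROM emp e JOIN dept d ON e.dept_id = d.dept_id
                       ORDER BY e.name""").fetchall()
# Denormalized design: no join, but 'Payroll' duplicated per row.
flat = db.execute(
    "SELECT name, dept_name FROM emp_denorm ORDER BY name").fetchall()
print(joined == flat)  # prints True: same answer, different costs
```

Both designs answer the query identically; the difference shows up in storage, and in how many rows a rename of the department would have to touch.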
In one shop where I worked, we tried using Function Point Analysis and, to be truthful, found it very tedious and boring. It was a pain in the butt. Basically, it was only a trial, but I still remember the groans, and it never did get off the ground, so I have just that limited experience to go by.
-- Rob Michaels (firstname.lastname@example.org), December 29, 1998.