What happened.

greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread

Let me start by confessing that I'm a $3,500 doomer. I was one of the very first to post to Ed Yourdon's forum (anyone remember surprises and history books?), although I mostly lurked for the following three years. I'm at the very center of a major industry (my reports and even some phrases I wrote were in Koskinen's reports. Now THAT's a pretty weird experience: seeing words I wrote relayed through nine levels of management, committees, government reporting, and finally reporters' comments. Just proves to me how much "cut and paste" we all do.)

I've never posted much about my industry because it's too easy to trace it back to the source. (Although some citizens from the original Gary North forums might remember). There are only a few people that actually have the raw, unsummarized data, and I was clearly the one with Doomer connections and tendencies.

I've been trying to analyze why our predictions were so off, and I believe I have some insight. I thought Flint's comments about embedded chips may have hit close to the mark. (Flint, sometimes you're brilliant and insightful, and other times a pain in the @$$. Guess that makes you human.)

I think something similar may have happened in big systems.

Let's start by looking at the "facts".

There was an awful lot of code out there, and all of it was suspect. Getting all the components (source code, compilers, copylibs, etc.) was difficult or impossible. The early "best practices" methodology described the "right" way to do things: inventory code, analyze, remediate, test, and implement.

What happened? To summarize: we all knew we didn't do it "right", but we did it "good enough".

I think the key "suspect" was the methodology. Like most methodologies, it described the ideal - but not practical - way to do the project. According to the methodology, we had to freeze all of our code for the three years it took to do the project. In the real world, you just can't do that. So as normal maintenance happened, the code got cleaned up. We upgraded the compilers. We didn't worry about the old stuff; if it couldn't be compiled it was scrapped and re-written. Probably wrong, but it was implemented in 1998 and we fixed the problems afterwards.
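The "cleaned up during normal maintenance" repair the post describes usually meant the classic two-digit-year fix. As a minimal sketch (none of this code is from the post, and the pivot year of 50 is an assumption; real shops picked a pivot per application), here is the bug and the common "windowing" remediation:

```python
# Illustrative only: the classic Y2K two-digit-year bug and a windowed fix.

def age_buggy(birth_yy, current_yy):
    """Pre-remediation logic: two-digit arithmetic goes negative after 1999.
    e.g. year 00 minus birth year 60 gives -60 instead of 40."""
    return current_yy - birth_yy

def expand_windowed(yy, pivot=50):
    """Windowing: two-digit years below the (assumed) pivot map to 20xx,
    the rest to 19xx. Avoids widening every stored date field."""
    return 2000 + yy if yy < pivot else 1900 + yy

def age_fixed(birth_yy, current_yy):
    """Post-remediation logic: expand both years before subtracting."""
    return expand_windowed(current_yy) - expand_windowed(birth_yy)
```

Windowing was popular for exactly the reason the post gives: it could be patched into code during routine maintenance without freezing the system or converting the databases.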

Testing was a royal pain in the neck. Getting all the databases, configuration files, and pre- and post-rollover test plans created, filed, and approved was difficult. Although we fell way short of the mark, the fear of litigation meant that the testing and documentation on the Y2K projects far exceeded what our normal production systems get.

Part of the anxiety was because we all rely on other vendors: hardware/operating systems, DBMS, application software, utilities, and tools. These vendors were facing the same obstacles that we were: how do you prove that everything is perfect when it's never perfect normally?

Did we waste the money? I'm not sure about that one.

The remediation -- clearly -- was worth the money. But I'll bet that true remediation was less than 10% (maybe even 5%) of the total cost. The test plans, time machines, test suites, documentation, contingency plans (and contingency plan seminars), vendor compliance statements, and rollover bunkers were all the "right" thing to do. But they added dramatically to the expense without changing the outcome of the event. The FOF (fix-on-failure) crowd avoided all this expense. And even with all that expense, we still experienced minor problems. But we had 100 people on-site who knew how to fix things. Everything was fixed by Monday.

Or rather, back to normal. In fact, that became the joke over the rollover weekend. Every single mission critical and mission important application was tested and designated "pass" or "fail". The joke was that we should have had "normal" or "abnormal" categories. The applications and systems that experienced problems were the crotchety old systems that always have problems.

Gary North preached that Y2K would spell the end of the division of labor. I think we now have proof that society's only hope is the division of labor. It worked.

As far as my industry goes, there were (and still are) problems. But early on there was discussion about "fault tolerance" -- would the Y2K problems overwhelm our problem solving abilities or not? In my industry, the answer is clearly "not".

It's over folks. The problems will become fewer, not more. We've proved we can handle what has already failed. As a society, we succeeded.

I think that's good news.

-- Jim Smith (JDSmith@hotmail.com), January 07, 2000


Well said, Jim! I think you have come close to the mark: we fixed it using old standby methods that are not acceptable to the University crowd, but in the end we are the ones that get things done, not just talk about it. Yes, we did spend more money than was necessary for the day-to-day function of life, but if disaster hits we are now far better prepared than ever. Y2K was a great exercise in understanding the limits of technology and the need for humanity to have a way out to keep life and limb together. Y2K brought us closer to spirituality and closer to our families. It gave me a true insight into the internet: what it can and cannot do, and what it should not do. We are all better for it now.

My own personal preps were around $500 US, and most of that was for a generator that I wanted anyway. The food will be eaten and the wood will be burned. The next several months will prove interesting and prove out my original theory that Y2K would be 1000's of mosquito bites but not the end of the world. Thanks for your comments.

-- justthinkin com (justthinkin@y2kaok.com), January 07, 2000.


Very insightful post. The observation that the methodology described the ideal, not the practical, is brilliant. Too many industry people realized that the ideal was unattainable, so they assumed the worst. They either forgot about the practical, roll-up-the-sleeves-and-just-fix-it method (if they were in the computer/IT industry) or just assumed that was impossible (if they weren't). The idea to re-categorize from "pass" and "fail" to "normal" and "abnormal" is also right on the money. On Monday morning, our systems here happened to be better than "normal" - no glitches at all! ;^)

-- Bemused (and_amazed@you.people), January 07, 2000.

It's not over. Ed Yourdon said so. We won't know for months, if not years.

Keep preparing, unless you want to end up dead.

-- (britle@hertref.net), January 07, 2000.

P.S. - Jim, you mentioned that some of your past phrases on Y2K made it up through the adminisphere and into the Koskinen reports - I'd say your post here is worthy of the same ascension!

-- Bemused (and_amazed@you.people), January 07, 2000.

Good post, Jim.

Still don't understand the embedded systems issue; that is, why it hasn't been more of a problem. I can see now why Cory kept his distance from that topic, though -- better to stick with what you really know. Guess I need to learn more about embeddeds (must've missed Flint's posting).

-- =DSA. (dsangal@attglobal.net), January 07, 2000.

I'm pretty sure that it was Flint who enumerated why the chips weren't a big deal. I'm paraphrasing, and may be mixing answers, but it came down to this: not all chips used dates; of those that did, not all had problems; of those with problems, not all were serious; most that were serious were caught during rudimentary testing; rarely does a technology failure cause immediate catastrophic error; and volumes were down (shut down or reduced load) and monitoring staffs were up during and after the rollover.
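That chain of filters compounds multiplicatively, which is why the surviving fraction ends up so small. A back-of-envelope sketch, using purely hypothetical fractions (none of these numbers appear in the thread; they only illustrate the shape of the argument):

```python
# Hypothetical funnel: each filter keeps only a fraction of the
# remaining chips. The numbers below are invented for illustration.
filters = {
    "uses dates at all": 0.10,
    "of those, has a Y2K fault": 0.20,
    "of those, fault is serious": 0.10,
    "of those, missed by testing": 0.05,
}

surviving = 1.0
for fraction in filters.values():
    surviving *= fraction  # compound the filters multiplicatively

print(f"Fraction causing a serious, uncaught failure: {surviving:.6f}")
```

Even with generous guesses at each stage, the product collapses toward zero, which matches the observed outcome: scattered nuisances rather than cascading failures.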

"Adminisphere"...I like that!

-- Jim Smith (JDSmith@hotmail.com), January 07, 2000.

The fat lady is singing, time to get on with life as we know it.

-- Rasty (Rasty@bulldoggg.xcom), January 07, 2000.
