Re-corruption thread was...corrupted! answers here : LUSENET : TimeBomb 2000 (Y2000) : One Thread

Gee, if data corruption can occur here on Ed's forum.....

Thanks, bw and others, for tentative answers (maybe all answers are tentative?) to the re-corruption question. Here are the responses, for those who may want to add to them. The previous thread got mixed up with another publication.

bw's answers do not inspire great confidence, given that so much of the failure or success of remediation appears to depend on human responsibility and skill. The Wired article from months ago about how chaotic programming is was a good example of the same worrisome facts of programming. Corruption is essentially a question of computerized trust. Each system might or might not have careful screening of incoming data, called "edit/capture". Typically, once you have data in a system and then pass it from program to program, there is virtually no screening. For instance, the motor vehicle transaction that takes keyed data (from the person at the front counter) has very careful edits to make sure that all fields are correct, and then inserts a new record into the database. Later that night, the program that prints the new license is not going to have real tough screening - it assumes that the database record is valid. The programmer who wrote the batch print program trusted the person who wrote the transaction. Might be the same person.
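The edit/capture pattern described above can be sketched in a few lines. Everything here is invented for illustration - the field names, rules, and function names are hypothetical, not anything from a real motor vehicle system - but it shows the asymmetry: strict edits at the point of capture, blind trust in the nightly batch.

```python
# Hypothetical sketch of "edit/capture vs. trust". The front-counter
# transaction screens every field; the batch print program assumes the
# stored record is already valid and does no re-screening.

def capture_vehicle_record(fields):
    """Front-counter transaction: careful edits before the insert."""
    errors = []
    if not fields.get("plate", "").strip():
        errors.append("plate is required")
    if not fields.get("expiry_year", "").isdigit():
        errors.append("expiry_year must be numeric")
    if errors:
        raise ValueError("; ".join(errors))
    return dict(fields)  # stands in for the database insert

def print_license(record):
    """Nightly batch print: trusts that edit/capture already ran."""
    # No screening here -- the batch programmer trusted the
    # transaction programmer (might be the same person).
    return f"LICENSE {record['plate']} exp {record['expiry_year']}"
```

If a remediation bug ever puts a bad record into the database, `print_license` will happily print it - which is exactly the trust chain the paragraph above is worried about.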

Y2k remediation can cause problems in this situation, because the remediation may have been done by outside contractors. The original writer may be long gone. It might be VERY appropriate now to have tougher edits of the data, but that would not normally be part of the contract. In a situation with bad specs and loose control, one contractor may have fixed one program one way, not matching how another was fixed. Could get pretty nasty.
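One concrete way two contractors could "fix" programs in non-matching ways - assuming, as a hypothetical, that both chose two-digit-year windowing but picked different pivot years - looks like this:

```python
# Invented illustration of the mismatch risk: two programs in the same
# shop, remediated by different contractors, each windowing two-digit
# years -- but with different pivots.

def expand_year_program_a(yy):
    # Contractor A chose pivot 30: 00-29 -> 2000s, 30-99 -> 1900s.
    return 2000 + yy if yy < 30 else 1900 + yy

def expand_year_program_b(yy):
    # Contractor B chose pivot 50: 00-49 -> 2000s, 50-99 -> 1900s.
    return 2000 + yy if yy < 50 else 1900 + yy
```

For a record with year "35", program A reads 1935 and program B reads 2035 - the same field, two interpretations, and no error message anywhere. With good specs and tight control, both contractors would have used the same pivot; with loose control, it could get pretty nasty.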

Each system has some degree of this editing when it gets data from outside, as when the motor vehicle system passes some information about new vehicles to the driver licensing system. How much editing goes on depends on who wrote each system, whether the programmers in one thought that the programmers in the other knew how to code, and so on. If they had good standards, they'd do good edits. In the real world, you screen what you don't trust.

So you typically do a little editing on data coming from sibling systems - motor vehicle to driver licensing, for instance. And you do tougher edits on files coming from outside your organization, particularly if you've been burned. An example here would be records of vehicle sales coming from a trade group - you'd check that more carefully. Data corruption from Y2k is not as likely as some suggest, because screening is typically fairly good.
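A minimal sketch of that tougher screening on an outside feed - the file format (VIN plus sale date, comma-separated) and the plausibility window are assumptions made up for this example, not any real trade-group format:

```python
# Hypothetical screen on an external feed of vehicle-sale records.
# Each line is "VIN,YYYY-MM-DD". Records with an implausible year
# (e.g. "1900" from a botched two-digit-year fix upstream) are
# rejected rather than loaded.

def screen_external_feed(lines):
    accepted, rejected = [], []
    for line in lines:
        vin, sale_date = line.split(",")
        year = int(sale_date[:4])
        if 1980 <= year <= 2005:   # plausible window for a 1999 system
            accepted.append((vin, sale_date))
        else:
            rejected.append(line)
    return accepted, rejected
```

The point is that the screen exists at all: the shop that has been burned writes it, the shop that trusts everyone doesn't.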

Some sloppy shops are going to go right down the tubes on bad data, going to load it directly into databases without so much as an EOJ crossfoot. This is the recorruption scenario. It's gonna happen. Some will reject the input, halt the process, down the feed line, and sail ahead without that data. Some can't live without that data, and will die. Some won't even notice the bad data coming in, until they've been fatally wounded.
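The "EOJ crossfoot" mentioned above is an end-of-job batch control check: do the detail records sum to the total carried in the batch trailer? A sketch, with an invented record shape:

```python
# Hypothetical end-of-job crossfoot. Records are tuples whose first
# element is an amount; the trailer carries the total the sender
# computed. A careful shop halts the load when they disagree; a
# sloppy shop never runs this check at all.

def crossfoot_ok(records, trailer_total):
    """Return True only if the detail amounts sum to the trailer total."""
    return sum(amount for amount, *_ in records) == trailer_total
```

A shop that runs this rejects the input and downs the feed line; a shop that skips it loads the corrupt batch straight into the database and finds out later, if ever.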

Notice the precise terms I've used: "pretty tough", "not as likely", "fairly good". It all depends on the programming standards, on the attitudes of the system designers and programmers, on the atmosphere of the place. Notice particularly the extensive use of the term "some". We have no clue how many will fall into each category.

Gosh, this is a fascinating year.

-- bw (home@puget.sound), July 16, 1999.

Without specifics of exactly what data is supplied in what circumstances, it's impossible to refute (or confirm). However, there's a chain of circumstances that seems relatively unlikely. First, data entering organisation A has to be corrupted as a result of a Y2K bug, without the management at A being aware that the corruption is going on. Then, the resulting corrupt data has to become part of a transaction with B, and again fail to be detected as corrupt, in this case by B. It seems to me much more probable that A's system will crash, or A's corrupt data will fail to pass B's data-validation, or that someone at B will notice the problem quite early and "pull the plug" on A until they prove that they've fixed their bugs.

Also it's exceptionally unlikely that a telco problem could corrupt data passed from A to B. In general, bad problems at the telco will be seen by A and B as a complete failure of service, with no data exchange being possible.

These aside, the question is basically "how bad will it be"? I don't think anyone can know until next year.

-- Nigel Arnot (, July 16, 1999.

-- Walter Skold (, July 16, 1999



``There is a potential risk that a lot of countries (and) companies, would refuse to interface with countries that have not brought their year 2000 readiness to the same level as they are,'' Phillip Wong, Intel's year 2000 programme manager for the Asia-Pacific, told Reuters.

``There is very little information'' about Pakistan's readiness ``and it is very important that companies dealing with Pakistan should have the confidence that the whole of Pakistan is year 2000 capable,'' he said.


``If you have a system that is so-called non-compliant trying to interface with a system that is compliant, the non-compliant could very well contaminate'' the compliant, Wong said.

``A lot of financial institutions in the world, especially in the West...will have very stiff criteria in terms of banking transactions.''

Many of these institutions would question whether Pakistani counterparts' internal systems were compliant and would not take the risk that those systems could introduce a bug or erroneous data, he said.

Wong said that because there is a general perception that a lot of Asian countries lag behind in working on the problem, many companies around the world might avoid them or prefer manual dealing to avoid computer system contamination.


-- Linkmeister (, July 16, 1999.
