Will noncompliant systems reinfect compliant systems?
greenspun.com : LUSENET : TimeBomb 2000 (Y2000) : One Thread
Will the noncompliant systems reinfect the compliant systems' programs, or will they only introduce wrong numbers and thereby cause errors in other systems relying on that data? In other words, granted that wrong numbers entered will put wrong numbers into the accounts in the compliant system, will these wrong numbers also make it necessary to re-remediate the compliant system? If the answer is yes, this would appear to make firewalls critical to preclude the re-corruption of the fixed systems, or require small businesses etc. to take their systems offline to protect the existing data; if the answer is yes, the banks have a big problem. If the answer is no, there is hope that the error rate will eventually be reduced to an acceptable level as more systems become compliant. Comments please. I will be interested to see whether there is general agreement on the impact on the programs of entering data from noncompliant systems into compliant ones.
-- Steve (firstname.lastname@example.org), January 02, 1999
This is discussed elsewhere on this forum, in the thread "VISA is toast".
Summary: yes, it's going to be a very big problem, all by itself.
-- a (email@example.com), January 02, 1999.
I dislike the term "infected"/"infection". Virtually all data systems have some kind of edit-checking process (usually an extensive one) prior to writing data onto an active database. These audit functions are among the first things remediated in Y2K work. If the incoming dates are 2 digits, then they should pass through either an encapsulation, windowing or expansion routine prior to hand-off to the rest of the system. If 4-digit dates are expected and 2-digit dates are received, the data should fail the audit check and the sender of the data is notified of the failure. (This is the most likely form of failure IMHO.) Now that does cause a 'failure' in the sense that the data isn't processed correctly, but it isn't a corruption as such. Truthfully, I see far more interface failure occurring than "infection". Also, another problem set that will occur is non-coordinated windows as data is handed between program systems. (The term 'windowing' refers to an arbitrary decision to interpret 2-digit dates against a 'pivot' date: e.g. with pivot 80, two-digit years 80-99 are taken to mean 1980-1999, and 00-79 are taken to mean 2000-2079.)
Therefore, I expect lots of 'hand-off' problems, but not a lot of "bad data" skipping from one system to another. That's not much comfort if it's your mortgage payment or tax bill that gets seriously screwed.
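For anyone unfamiliar with windowing, here is a minimal sketch of the pivot rule described above (pivot 80; the function name is just illustrative, not from any real remediation package):

```python
PIVOT = 80  # the example pivot from above: 80-99 -> 19xx, 00-79 -> 20xx

def expand_year(yy: int, pivot: int = PIVOT) -> int:
    """Expand a two-digit year to four digits against a pivot date."""
    if not 0 <= yy <= 99:
        raise ValueError("expected a two-digit year")
    # Years at or above the pivot are interpreted as 1900s,
    # years below it as 2000s.
    return 1900 + yy if yy >= pivot else 2000 + yy

expand_year(85)  # -> 1985
expand_year(5)   # -> 2005
```

Note the decision is arbitrary: nothing in the two digits themselves tells you which century was intended.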
-- RD. ->H (firstname.lastname@example.org), January 02, 1999.
Steve - the simple answer is yes.
If I were you I would read all of the articles/links on the imported-data problem at the following link. This will give you the main background knowledge to do your own further research. IMHO this, and the embedded chip problem, will be our undoing.
The crux of the problem:-
"Examination of data exchanges is essential to every Year 2000 program. Even if an agency's--or company's--internal systems are Year 2000 compliant, unless external entities with which data are exchanged are likewise compliant, critical systems may fail. The first step is to inventory all data exchanges. Exchange partners, once inventoried, must be contacted; agreements must be reached as to what corrections must be made, by whom, and on what schedule; and requisite testing must be defined and performed to ensure that the corrections do, in fact, work."
This, in my view, is the biggest unsolvable problem of the y2k challenge. If a company somehow revises its computer systems' legacy code, tests it by parallel testing, does not crash its systems during the testing, and transports all of its old data to the newly compliant system, it faces a problem: it is part of a larger system. Computers transfer data to other computers. If the compliant computer imports data from a noncompliant computer, the noncompliant data will corrupt the compliant system's data. A company may have spent tens of millions on its repair, but once it imports noncompliant data, or extrapolations based on bad data, it has the y2k problem again.
Understand, this is a strictly hypothetical problem. There is no compliant industry anywhere on earth. I am aware of no company in any industry that (1) has 10 million lines of code and (2) claims to be compliant. I argue that there is not going to be a compliant industry, where the participants are all compliant. But if there were one where half the participants were compliant -- and we will not see this -- the other half would pass bad data on to the others. And if the others could somehow identify and block all noncompliant data based on noncompliant code, the industry would collapse. The data lockout would bankrupt the industry. Banking is the obvious example.
This has been denied by a few of my critics (Gary North), though not many. These people are in y2k denial. Here is the assessment of Action 2000, which the British government has set up to warn businesses about y2k. The problem is not just software; faulty embedded chips/systems can transmit bad data:
"In the most serious situation, embedded systems can stop working entirely, sometimes shutting down equipment or making it unsafe or unreliable. Less obviously, they can produce false information, which can mislead other systems or human users."
In short, a noncompliant computer's data can corrupt a compliant computer's data. But those in charge of the compliant computer may not recognize this when it happens. They may then allow their supposedly compliant computer to share the bad data with other systems. Like a virus, the bad data will reinfect the system. I describe this dilemma as "reinfection vs. quarantine."
Every organization that operates in an environment of other organizations' computers is part of a larger system. If it imports bad data from other computers in the overall network, the y2k problem reappears. But if it locks out all data from noncompliant sources, it must remove itself from the overall system until that system is compliant. This threatens the survival of the entire system. Only if most of the participants in a system are compliant will the system survive.
Consider banking. A bank that is taken out of the banking system for a month -- possibly a week -- will go bankrupt. But if it imports noncompliant data, it will go bankrupt. A banking system filled with banks that lock out each other is no longer a system.
There is no universally agreed-upon y2k compliance standard. There is also no sovereign authority possessing negative sanctions that can impose such a standard. Who can oversee the repairs, so that all of the participants in an interdependent system adopt a technical solution that is coherent with others in the system?
Corrupt data vs. no system: here is a major dilemma. Anyone who says that y2k is a solvable problem ought to be able to present a technically workable solution to this dilemma, as well as a politically acceptable way to persuade every organization on earth to adopt it and apply it in the time remaining, including all those that have started their repairs using conflicting standards and approaches.
-- Andy (2000EOD@prodigy.net), January 02, 1999.
Well, I think the answer is maybe.
Data is received from a non-compliant system by a compliant system. That data is a bunch of digital levels, "1s" and "0s". It is checked to see that --
i. No transmission errors occur (parity checks, error detecting codes, error correcting codes, etc. are used to determine that what was received is what was transmitted.)
ii. It is in correct format (has the proper number of bits in the right places).
iii. Whatever else the designer needs (wants) to check.
If it passes these checks, it is accepted and used. If the data contains date fields, and the date fields are not acceptable, then the data is discarded, as R. D. indicates.
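To make the distinction concrete, here is a sketch of the kind of date-field audit check being described (the function name and the ISO-format assumption are mine, purely illustrative):

```python
import re

def audit_date_field(value: str) -> bool:
    """Accept only a four-digit-year ISO date (YYYY-MM-DD).

    A record that fails this check is discarded and the sender
    notified. But note what the check *cannot* see: a date that is
    correctly formatted yet wrongly calculated passes right through.
    """
    return re.fullmatch(r"\d{4}-\d{2}-\d{2}", value) is not None

audit_date_field("2000-01-06")  # -> True  (well-formed, accepted)
audit_date_field("00-01-06")    # -> False (two-digit year, rejected)
audit_date_field("1900-01-06")  # -> True  (well-formed but possibly garbage)
```

The third case is the dangerous one: format audits catch bad parameters, not bad calculations.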
If the data has been processed, in some manner that makes garbage out of it, prior to transmission, but otherwise passes the firewall tests, it can be accepted by the receiver. The compliant system has accepted garbage.
Will this "infect" the receiving system? That depends on your definition of "infect". It may cause errors in whatever calculations are associated with that particular field that's in error, but it shouldn't cause the need for re-coding of the receiving system, if that's what you mean.
I don't know how many of this last type of error we can expect, but I don't call them 're-infection.'
-- De (email@example.com), January 02, 1999.
"Will this "infect" the receiving system? That depends on your definition of "infect". It may cause errors in whatever calculations are associated with that particular field that's in error, but it shouldn't cause the need for re-coding of the receiving system, if that's what you mean."
We should all remember that we use computers to make our lives easier, to increase productivity etc. etc. We are all *users*. In other words we expect computers to *work*, we expect to *use* them on a daily basis.
Now, if as stated above the correctly formatted and parametered *BUT STILL CORRUPT* data passes from computer A to computer B, B will process this corrupt data (assuming as it does that it is perfectly accurate and valid as it has got through all the firewalls, edits, parameter validation routines etc. etc.) and give the result to a *USER*. That's you and me. And the result we will get from the computer will be garbage, wrong, useless. In other words, the computer has not *worked*.
Worse still, if, when, that result is spread to other computers, be they compliant or not, you have EBOLA spreading like wildfire. If the data is corrupt, all the perfect code in the known universe ain't gonna do you no good...
I realise that I am oversimplifying this drastically, but I hope you get my point.
Here's how somebody explained this problem better than I can now...
"Hmm. I work with "communicating computers" and must say that Andrew is precisely right. Date-dependent calculations have nothing to do with data-interchange validation routines. What Andrew is pointing out is that non-compliant programs will produce data that is wrongly calculated; these errors will spread in magnitude throughout the global financial system. Validation routines between data interchanges simply verify that the parameters are correct: not the calculations forming the data. This is the meaning of corrupt data: bad information, not bad parameters. Andrew (and Gary North) are precisely correct. You are espousing the misguided, unsupported idea of "corrupt data" equalling bad parameter transfers. That is incorrect and a straw dummy. Corrupt data = data correctly parametered yet wrongly calculated. Wrong calculations beget wrong calculations ad nauseam. Within 24 hours of the turnover, the Global Financial System will either A) be completely corrupt or B) be completely shut down so as to avoid A. The result is the same in either case; even if we don't go Milne, you are going to see a mess bigger than you can imagine. Alan Greenspan was entirely correct when he stated that 99% is not good enough. We will be nowhere close -- not even in the ballpark. The engines have shut down; the plane is falling -- we simply haven't hit the ground yet. Scoff if you must; as a professional working with professionals, I know the score. It's going down. This is why at least 61% of IT professionals are pulling their money out before it hits -- of course, in 10 or 11 months, that number will rise to 100%; but then, it will be too late. We know for a fact that 50% of all businesses in this, the best prepared of countries, will not perform real-time testing.
As a Programmer/Test Engineer, I can therefore assure you that at least 50% of all businesses in this, the best prepared of countries, are going to experience mission-critical failures, Gartner's new optimistic spin notwithstanding. Remediation sans testing is not remediation. The code will still be broken, just in new and unknown ways."
In other words the computers *do not work*. Users, that's you and me, cannot use them, 'cause they're broke, dangnabbit.
What happens to the world as we know it if this happens endemically next year?
-- Andy (2000EOD@prodigy.net), January 02, 1999.
R.D. made a good point about pivot logic - this is one of the problems that I have already seen at work. Multiple internal organizations that share data between applications have stupidly used different pivot dates due to a lack of adequate inter-org communication. This results in a situation where data flowing between interfaced apps is interpreted differently depending on the pivot logic of the application processing it. What also happens is finger-pointing, and questions get raised like: OK, who is going to change (nobody wants to)? Or should we build an interface translator between the apps and just leave the two apps with their own pivot logic?
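A quick sketch of the mismatch (the pivot values 30 and 80 are hypothetical, just to show the mechanism):

```python
def expand_year(yy: int, pivot: int) -> int:
    """Expand a two-digit year against an application's pivot date."""
    return 1900 + yy if yy >= pivot else 2000 + yy

# App 1 uses pivot 30, App 2 uses pivot 80. The same two-digit
# year '50' crossing the interface is read two different ways:
expand_year(50, pivot=30)  # -> 1950 in App 1
expand_year(50, pivot=80)  # -> 2050 in App 2
```

Both apps are "compliant" in isolation, and both records pass format audits; the century only diverges when the data crosses the interface.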
-- Rob Michaels (firstname.lastname@example.org), January 02, 1999.
And let's remember, assuming that we at least have the basics of electric power, clean water, and food: one of the biggest Y2K impacts will be a mistrust of computers -- you will simply not be able to have confidence in what banks and other financial institutions say. We always seem to talk in terms of Y2K causing computer failures, but in all probability the computers will work -- it's just that they will cease to be reliable. Which, effectively, is pretty much the same thing.
-- Jack (email@example.com), January 02, 1999.
Steve, I generally agree with those who predict that it will be not the obviously-bad data (detectable by edit checks), but the not-so-noticeably-bad data (looks good, but is incorrect) that will cause the majority of problems.
Example -- Bank A sends Bank B a funds transfer with the following data:
Date: 2000-01-06
Time: 09:00:00
Type of transaction: Transfer checking-to-checking
Source account: Bank A account number 123-456-789
Destination account: Bank B account number 987-654-321
Amount: $5,435.43
Looks okay. All field data correctly formatted, all numeric fields within bounds. Date is in ISO format with four-digit year. Bank A account 123-456-789 had to have sufficient funds in order for the transfer software to send the transfer request.
Except --- What is not apparent in the transfer data is that Bank A account 123-456-789 was incorrectly credited on 2000-01-06 at 08:31:00 with interest in the amount of $5,555.55 instead of the correct amount, $5.55, because of a Y2k bug. Thus its available balance at 09:00:00 is $5,550.00 larger than it should be. At 08:00:00 the Bank A account's correct balance was $321.00, and at 08:59:59 its balance should have been $326.55, quite insufficient for a transfer of $5,435.43 out of it. But because of that not-yet-detected Y2k bug, that Bank A account's balance appeared to have been $5,876.55 at the time the transfer was requested, and thus was deemed to have had sufficient funds.
Oh, and the Bank A account owner wasn't trying to get away with ill-gotten gains -- she didn't yet know that her account balance was so high, and she was trying to transfer merely $5.43 (an automated request to pay a monthly bill). A Y2k bug related to the other Y2k bug caused her requested transfer amount to be changed from $5.43 to $5,435.43!
To recap the amounts:
Bank A account 123-456-789 balance as of 08:00:00 = $321.00
Correct interest amount to credit at 08:31:00 = $5.55
Correct account balance at 08:59:59 = $326.55
Correct transfer amount at 09:00:00 = $5.43
Correct account balance at 09:00:01 = $321.12

Actual (incorrect) interest amount credited at 08:31:00 = $5,555.55
Actual (incorrect) account balance at 08:59:59 = $5,876.55
Actual (incorrect) transfer amount at 09:00:00 = $5,435.43
Actual (incorrect) account balance at 09:00:01 = $441.12
Bank B account 987-654-321 is credited with $5,430.00 too much as a result of the Y2K bugs. Bank A account 123-456-789 winds up with $120.00 too much as a result of the Y2K bugs.
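The arithmetic above can be checked in a few lines (this is only a recap of the worked example, not anything a real bank runs):

```python
from decimal import Decimal as D

opening = D("321.00")                                   # balance at 08:00:00
correct_interest, wrong_interest = D("5.55"), D("5555.55")
correct_transfer, wrong_transfer = D("5.43"), D("5435.43")

correct_balance = opening + correct_interest - correct_transfer  # $321.12
actual_balance = opening + wrong_interest - wrong_transfer       # $441.12

bank_b_excess = wrong_transfer - correct_transfer  # Bank B credited $5,430.00 too much
bank_a_excess = actual_balance - correct_balance   # Bank A left with $120.00 too much
```

Every intermediate value here passed Bank B's edit checks; the only way to catch it is to know the interest calculation was wrong in the first place.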
-- No Spam Please (firstname.lastname@example.org), January 03, 1999.
In the given example, Bank A was not Y2K-compliant. Bank B was Y2K-compliant, but now has incorrect data not detectable by edit-checking of the transfer from Bank A.
-- No Spam Please (email@example.com), January 03, 1999.
"And let's remember, assuming that we at least have the basics of electric power, clean water, and food: one of the biggest Y2K impacts will be a mistrust of computers -- you will simply not be able to have confidence in what banks and other financial institutions say. We always seem to talk in terms of Y2K causing computer failures, but in all probability the computers will work -- it's just that they will cease to be reliable. Which, effectively, is pretty much the same thing."
Jack, I think you are on the right track but I am going to be more definitive here. I have to disagree with a few points outlined above.
"but in all probability the computers will work" - I don't buy this at all. Leaving aside the three obvious grenades - power, banking (salary? the odds are a banking crisis will hit before 2k) and human personnel issues (i.e. Joe Shmuck the Operator and his girlfriend Joeline the Network Tech, driving through blizzards and dark streets and looters on the East Coast to a Data Centre conveniently situated "downtown", running on UPS (splutter) in a freezing building, whilst their respective families are at home shivering in the burbs - yeah, right) - I believe the mainframes will crash due to a number of factors, such as: basic mainframe fragility, i.e. the UPS not kicking in properly; power surges; telecomms problems taking it down; RTC problems; job streams hosing it due to y2k bugs, year-end bugs, millennium-end bugs; imported data problems; embedded chip problems; human error switching to backup systems; other computer systems swamping the mainframe with too much data; the unknown factor? The list goes on and on and on.
I've worked with mainframes for over 20 years, they are finicky beasts, and nowadays Operations are more automated than ever before. Taking control of a large Data Centre hardware *AND* software debacle at the same time is not going to be a walk in the park. It will be a nightmare for those unfortunate enough to be on duty. Multiply this mess globally, with all the attendant interconnectivity, and you have another fine mess Ollie...
"its just that they will cease to be reliable" - incorrect. Computers are reliable now, by and large, and they will be reliable in 2000 if the power is there. They will just be *wrong* - they won't do their job as designed - they are literal animals and will do no more or less than the programs and infrastructure dictate *in 2000*.
"one of the biggest Y2K impacts will be a mistrust of computers" - understatement of the millennium.
If, when, TSHTF - what would you do in the initial stages of looting, you're fed for the time being, this is fun, you have lots of "stuff" and a hatred of Computers - why - hey! there's a lot of *GOLD* in mainframes - think about it - payback time! Woo hoo!!! Pass me the crowbar...
-- Andy (2000EOD@prodigy.net), January 03, 1999.