Do bank vault doors have embedded chips?


It seems that "somewhere" I had read that bank vault doors contain "impossible" to reach/replace chips...The bank buildings are built "around" the vault...Is this TRUE? Any authoritative opinions on the subject? If TRUE...Would this be the case for all or most banks? Comments please...

-- Robert E. Bowman (bbowman@gte.net), December 02, 1998

Answers

Certain chips are very difficult to reach, probably more so on drilling platforms etc. However, the word impossible is stretching things too far.

A friend of mine is a locksmith and he often has to open safe doors and get to things in hard places. This notion that a building is built around a safe and you have to demolish the building if a chip malfunctions is almost funny...........

Locksmiths use very powerful drills, and although it may even take a few hours or a few days to drill through a few inches of steel, it is not only possible, it is already being done. Occasionally the locking mechanism buggers up and drilling through the steel is one way of getting in to fix it.

-- Craig (craig@ccinet.ab.ca), December 02, 1998.


Doesn't matter. There won't be any money in them after Dec. 31, 1999 when the chips start to freak.

-- infoman (infoman@web.com), December 02, 1998.

>>Certain chips are very difficult to reach, probably more so on drilling platforms etc. However, the word impossible is stretching things too far. <<

So far, so good, Craig.

>>A friend of mine is a locksmith and he often has to open safe doors and get to things in hard places. This notion that a building is built around a safe and you have to demolish the building if a chip malfunctions is almost funny........... <<

Here, however, you should avoid the hyperbole. Banks ARE built around the vaults, to be sure. And no one, to my knowledge, has stated that a building had to be demolished to reach bad chips. There are accounts, however, that some vault-makers are disclaiming any liability for *fixing* any Y2K-related malfunctions.

>>Locksmiths use very powerful drills, and although it may even take a few hours or a few days to drill through a few inches of steel...<<

People will be pretty twitchy come the turn of the millennium, and you think they won't be alarmed if their bank closes for "a few days" in order to repair the vault? Can you say "Bank Run"?

-- Elbow Grease (Elbow_Grease@AutoShop.com), December 02, 1998.


This is the same as the questions about refineries built on top of embedded systems.

Look, ladies and gentlemen, electronic devices fail. Routinely. On a daily basis. So, designers allow for failure, and for replacement.

This is a general statement, of course, and undoubtedly somewhere in the world some nut actually got away with designing and installing a bank vault door with an embedded chip (system) inside, and that system will fail.........

Wouldn't even matter what year it was, would it? But, really, I wouldn't expect this to be a problem. Of course, I wouldn't lock myself in the vault over New Years, either.

-- rocky (rknolls@hotmail.com), December 02, 1998.


I asked the branch manager of my bank that very question and was told that there is no computer technology of any kind in the vault door at this bank. The opening time for the next business day is set mechanically by a bank employee. Ask your banker.

-- cody varian (cody@y2ksurvive.com), December 02, 1998.


I'm not a locksmith, but if you think about a safe or vault with an electronic combination lock and time-lock features, the answer is almost certainly yes. Also by definition, it's date-aware and therefore potentially Y2K-affected. Most likely failure if any would be to wrap to 1900 and then get day-of-week calculations wrong, meaning a 2-day lockout and insecurity at week-ends.
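
To illustrate that failure mode, here is a purely hypothetical sketch - no real vault firmware is being quoted, and every name in it is invented. It shows a time lock deriving the day of week from a two-digit year while assuming the century is 19xx:

    /* Hypothetical sketch only - not real vault firmware.  A time lock
       that derives day-of-week from a two-digit year, assuming 19xx. */
    #include <stdio.h>

    /* Sakamoto's method: returns 0 = Sunday .. 6 = Saturday */
    static int day_of_week(int y, int m, int d)
    {
        static const int t[] = {0, 3, 2, 5, 0, 3, 5, 1, 4, 6, 2, 4};
        if (m < 3)
            y -= 1;
        return (y + y / 4 - y / 100 + y / 400 + t[m - 1] + d) % 7;
    }

    int main(void)
    {
        const char *name[] = {"Sun","Mon","Tue","Wed","Thu","Fri","Sat"};
        int yy = 0; /* the two-digit year register after the 99 -> 00 wrap */

        /* Saturday, 1 January 2000 - but the lock computes it as 1900: */
        printf("real date: %s\n", name[day_of_week(2000, 1, 1)]);      /* Sat */
        printf("time lock: %s\n", name[day_of_week(1900 + yy, 1, 1)]); /* Mon */
        return 0;
    }

The lock's weekend lockout then lands on the wrong two real days (Thursday and Friday), while the real Saturday and Sunday look like ordinary weekdays to it - the 2-day lockout and the weekend insecurity in one.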

The control electronics *must* be inside the safe. If it were outside, you could break the safe by breaking the (exposed) electronics. It's a special case of embedded system, where the design is explicitly NOT for easy maintenance, at least not while the door is shut! Any external override - whether mechanical or electrical - creates a security weakness for bad guys to attack.

Practical advice: contact the safe manufacturer. They should be able to tell you whether its electronics are Y2K-tested, and to arrange to replace the electronics - not the whole vault! - if not. Do this AS SOON AS POSSIBLE!!

-- Nigel Arnot (nra@maxwell.ph.kcl.ac.uk), December 03, 1998.


Nigel's post points out something to remember about Y2K failures: the magnitude of the effect.

NA> Most likely failure if any would be to wrap to 1900 and then get day-of-week calculations wrong, meaning a 2-day lockout and insecurity at week-ends.

If a Y2K failure results in catastrophe (off-by-a-weekday error causes pressure valve to stay closed 24 hours when it shouldn't), that's one thing. If it results in a day's delay and insecurity on weekends (off-by-a-weekday error keeps vault closed on Monday but opens on Tuesday and can be corrected once the door is open), that's another.

Work on the worst things first.

-- No Spam Please (anon@ymous.com), December 03, 1998.


I keep pointing this out to folks: a professionally-designed safety-critical system won't know *ANYTHING* about dates, unless that is required for its proper function. A pressure valve - whatever that may be - won't be directly locked to a clock. It'll be in some sort of feedback loop with pressure sensors. Then there will be over-pressure sensors that cause a controlled shutdown if the loop fails, and last-resort mechanical or electromechanical trip devices to release the pressure in a somewhat controlled way rather than allow an explosion.
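
As a toy illustration of the point (everything here is invented - no real controller is being quoted), such a loop looks roughly like this, and note that no date appears anywhere in it:

    /* Invented toy model of a pressure control loop - illustration only. */
    #include <stdio.h>

    #define SETPOINT   100.0 /* desired pressure, arbitrary units          */
    #define TRIP_LIMIT 140.0 /* over-pressure: trigger controlled shutdown */

    static double pressure = 90.0; /* crude stand-in for the real plant */

    static double read_pressure(void) { return pressure; }

    static void set_valve(double frac) /* 0.0 = shut .. 1.0 = wide open */
    {
        /* crude plant model: opening the feed valve raises pressure */
        pressure += (frac - 0.5) * 4.0;
    }

    int main(void)
    {
        for (int step = 0; step < 10; step++) {
            double p = read_pressure();
            if (p > TRIP_LIMIT) { /* loop failed: fail safe, shut down */
                printf("over-pressure trip: controlled shutdown\n");
                return 0;
            }
            /* proportional feedback - no clock, no calendar, no date */
            double valve = 0.5 + (SETPOINT - p) * 0.05;
            if (valve < 0.0) valve = 0.0;
            if (valve > 1.0) valve = 1.0;
            set_valve(valve);
            printf("step %d: pressure %.1f, valve %.2f\n", step, p, valve);
        }
        return 0;
    }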

In short, apart from rare cases where a safety-critical system was designed by complete amateurs ignoring every rule in the book, the thing will fail safe. No blow-up. However, that also means no production; this is itself dangerous if it's a widespread and extended state of affairs. This is something that most of the TEOTWAWKI folks have actually got right. If you hear someone ranting about blow-ups and airplanes falling out of the sky, they are fairly clueless.

Back to the safe: this is also designed to fail safe. You'd far rather have your valuables imprisoned in a vault for a couple of days while extra 24-hour security guards and a locksmith are located, than have the vault fail open in the presence of dishonest folks.

-- Nigel Arnot (nra@maxwell.ph.kcl.ac.uk), December 04, 1998.


Nigel -- Sheesh! I didn't say the valve was locked to a clock. I wrote, "... error causes ... valve to stay closed ...". "Causes" was deliberately chosen to be vague because an exact mechanism wasn't important to the point I was trying to make (which I intended as complimentary to you).

Okay, try this: Chemical processing plant has a valve designed to meter an ingredient into a chemical reactor with a _minimum_ limit. That is, the valve is supposed to guarantee that _at least_ a minimum proportion of the ingredient is added. Without this minimum proportion, the resulting chemical mixture has an undesirable property -- say, toxicity to humans.

The valve position is computer-controlled because the proper position depends on a combination of measurements of other, variable ingredients and the environment, integrated over time intervals. There is a Y2K bug in the software (circa 1975), such that for the 24 hours of January 1, 2000 only, the integration calculation will mistakenly result in an instruction for the valve to go all the way closed. Also, the Y2K bug causes any downstream sampling to check the amount of the metered ingredient to be silently bypassed for those same 24 hours.

The result is that 24 hours' worth of chemical output is toxic to humans.
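
For concreteness, here is one invented way the integration step could misbehave at the rollover (every name and constant below is made up to fit the hypothetical, not drawn from any real plant):

    /* Invented sketch: a metering loop whose interval timestamps carry a
       two-digit year.  At the 99 -> 00 wrap the computed interval goes
       hugely negative and the commanded valve position clamps shut. */
    #include <stdio.h>

    /* hours since an epoch, computed from a two-digit year (the bug) */
    static long hours(int yy, int day_of_year, int hour)
    {
        return (long)yy * 8760L + (long)day_of_year * 24L + hour;
    }

    int main(void)
    {
        /* previous reading: 23:00, 31 Dec 1999 (yy = 99, day 365) */
        long t_prev = hours(99, 365, 23);
        /* current reading: 00:00, 1 Jan 2000 - yy has wrapped to 0 */
        long t_now = hours(0, 1, 0);

        long dt = t_now - t_prev;   /* should be 1; is about -876000 */
        double demand = 5.0 * dt;   /* ingredient demand, integrated */
        double valve = demand > 0.0 ? demand / 100.0 : 0.0;
        if (valve > 1.0) valve = 1.0;

        printf("dt = %ld hours, commanded valve = %.2f\n", dt, valve);
        /* negative demand -> valve commanded fully closed all day */
        return 0;
    }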

If you retort that a professionally-designed safety-critical system wouldn't allow the valve to close completely, I'll counter that a) well, there shouldn't have been a Y2K bug in a professionally-designed system, either, and b) it is necessary to allow the valve to close completely when the chemical reactor is periodically shut down for maintenance. I can counter other retorts, too, even though that wasn't the point of my preceding post.

As for software not knowing *ANYTHING* about dates, unless that is required for its proper function: It is _not necessary_ for application software to use date functions or to "know" anything about dates in order for a Y2K bug to cause it to produce erroneous results!!!!

Example: A computer's BIOS stores current date in YYMMDD, ASCII character format, in memory addresses 42-47. These characters are generated by a BIOS binary-to-ASCII conversion routine whose input is the system clock. Unfortunately, that BIOS routine, written in 1975, has a Y2K bug such that when the year rolls from 1999 to 2000, it will output three ASCII characters, "100", instead of two, for the year subfield. It stores the characters right-justified from location 43. This results in storing character "1" in location 41, something not intended or envisioned by the BIOS designers long ago.

Those designers designated location 41 to be an error flag of some sort (if you want, I can construct more specificity).

When January 1, 2000 arrives, the BIOS conversion routine stores an ASCII "1" in location 41. The next application program, which _does not use date functions in any way_, checks location 41, finds it non-zero, and branches accordingly in a way that it wouldn't have if it were not for that BIOS Y2K bug, and produces erroneous results. QED.
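
In compilable form, the mechanism looks like this (a hypothetical reconstruction: locations 41-47 are from the story above, everything else is invented):

    /* Hypothetical reconstruction of the BIOS scenario described above. */
    #include <stdio.h>
    #include <string.h>

    static unsigned char mem[64]; /* stand-in for low BIOS memory      */
    /* location 41: designers' error flag; 42..47: "YYMMDD" date field */

    /* 1975-era conversion: writes the year right-justified so that its
       last digit lands at location 43.  Fine for "00".."99"; with "100"
       one digit spills into location 41 - the error flag. */
    static void store_year(int years_since_1900)
    {
        char buf[8];
        int len = sprintf(buf, "%d", years_since_1900); /* "99" or "100" */
        memcpy(&mem[43 - (len - 1)], buf, (size_t)len); /* right-justify */
    }

    int main(void)
    {
        store_year(99);  /* 1999: mem[41] stays zero         */
        printf("1999: error flag = %d\n", mem[41]);

        store_year(100); /* 2000: ASCII '1' lands in mem[41] */
        printf("2000: error flag = %d ('%c')\n", mem[41], mem[41]);
        return 0;
    }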

Satisfied?

>In short, apart from rare cases where a safety-critical system was designed by complete amateurs ignoring every rule in the book, the thing will fail safe.

Or, in more common cases where a safety-critical system was designed by complete professionals who followed every rule in the book, but didn't realize that _two or three rules were missing from the book_ ... maybe the thing will fail safe through 1999, but not fail safe in 2000.

>However, that also means no production

Not necessarily - as my example shows, a Y2K bug could lead to erroneous production, with the error possibly going undetected for a while because no one in 1975 thought of checking for what weird circumstances might happen in 2000.

>this is itself dangerous if it's a widespread and extended state of affairs.

Ditto. I agree with you on this clause -- Undetected erroneous results may be widespread and extended, possibly causing the majority of Y2K damage, compared to obvious Y2K-caused failures and shutdowns.

-- No Spam Please (anon@ymous.com), December 04, 1998.

