I am ambivalent about nuclear power. In theory, I think it could be a good thing if properly handled. I don't have a knee-jerk anti-nuclear reflex, and I'm quite willing to acknowledge that all energy solutions have their failings, whether those involve pollution or problems of scale.

And I'm not impressed by the quality of a lot of the nuclear debate. A lot of people are willing to decry nuclear power simply because the worst-case scenario is very very bad, without attempting to look at the whole of the risk/benefit tradeoff: nuclear has a risk of killing a lot of people and rendering countryside uninhabitable in a spectacular fashion, whereas coal kills a small number of people every day in an unexciting sort of way that doesn't sell newspapers. (And, via climate change, coal has the potential to kill a large number of people in an indirect, deniable sort of way.)

So in theory, my view is that if nuclear works out better than coal when all possible scenarios are taken into account (weighted by their probability), we should replace coal with nuclear. That's the theory.

The problem is... well, let's look at that disaster that happened in 1986.

No, not Chernobyl, the other one.

In January 1986, Space Shuttle Challenger broke apart shortly after launch. The disaster was investigated by the Rogers Commission, which included Richard Feynman. The best-known part of his role in that investigation is the bit where he dunked a piece of O-ring material in ice water and demonstrated that it wasn't suitable for a cold launch, but the full story went a lot further than that; you can find a lot more detail in his memoir What Do You Care What Other People Think? and in Appendix F of the commission's report.

One of the issues Feynman discusses is the question of the Shuttle's reliability. NASA management had estimated the risk of a 'catastrophic failure' at one per 100,000 launches.

Now, there are two ways to get that sort of number. One is by observation: if you run a million shuttle launches and ten of them blow up, it's reasonable to conclude that the risk is about one per 100,000 launches. This is a pretty reliable way to estimate risk (at least, under conditions similar to test conditions) but it has a couple of obvious drawbacks. You can only get the data by exposing yourself to the risk, and for a rare event it takes a very long time to get reliable data. When the last Shuttles are retired later this year, they'll have clocked up a total of 135 launches (and, fingers crossed, 133 landings).
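(As a rough illustration of just how slowly observational confidence accumulates, here's a sketch in Python. It uses the "rule of three", a standard bit of statistics: after n trials with zero failures, a 95% confidence upper bound on the true failure rate is about 3/n. The launch count is from above; everything else is textbook arithmetic.)

    # Rule of three: after n failure-free trials, a 95% upper bound on the
    # true per-trial failure rate is roughly 3/n.
    for n in [135, 3_000, 300_000]:
        print(f"{n:>7} clean launches -> risk below ~1/{n // 3} (95% confidence)")

    # 135 clean launches would only establish "below about 1 in 45";
    # confirming a 1-in-100,000 risk by observation alone would take
    # around 300,000 failure-free launches.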

So, until you have enough of a track record to observe risk, and especially when you're deciding whether to start down that path in the first place, you need to take the other approach: probabilistic risk assessment ('PRA'). This boils down to "identify the possible chains of events via which things could go wrong, calculate the probability of each of those happening, and do the maths to calculate the resulting risk of a catastrophic failure". PRA is especially important when the probability of failure is low, but the consequences of failure are severe - airliner design, nuclear power, etc etc.
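To make that concrete, here's a toy sketch of the calculation in Python. The failure chains and their probabilities are entirely invented for illustration; a real PRA involves thousands of event-tree branches, but the basic arithmetic looks like this:

    # Toy PRA: enumerate the chains of events that lose the vehicle and
    # combine their per-launch probabilities. All numbers are invented.
    failure_chains = {
        "engine failure during ascent":    1e-4,
        "booster joint leak":              3e-4,
        "debris strike on thermal shield": 2e-4,
    }

    # Assuming the chains are independent:
    # P(catastrophe) = 1 - P(every chain survives)
    p_survive = 1.0
    for p_fail in failure_chains.values():
        p_survive *= 1.0 - p_fail

    p_catastrophe = 1.0 - p_survive
    print(f"Estimated risk: about 1 in {round(1 / p_catastrophe)}")  # ~1 in 1,667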

The problem with PRA is that it's hard to do right. You need to be imaginative enough to identify all the possible things that could lead to a system failure, and you need enough information to figure out how likely they are. That takes a mixture of background knowledge and advanced probability theory: it's not enough just to know the probability that one engine will fail; you need to know the probability of combinations of events happening. And those combinations are inevitably influenced by human factors (as the old saying goes, "if you make a system foolproof, Nature will invent a better fool").
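The "combinations" point is where naive PRA goes badly wrong, and it's worth a concrete sketch (invented numbers again). If you assume failures are independent when they actually share a common cause, your answer can be off by orders of magnitude - which is uncomfortably relevant to a tsunami flooding every backup generator at once:

    # Four backup generators, each failing on demand with probability 0.01.
    p_single = 0.01

    # If their failures were truly independent, losing all four at once
    # would be spectacularly rare:
    p_independent = p_single ** 4
    print(f"all four fail (independent): {p_independent:.0e}")   # 1e-08

    # Now add one shared cause - say, a flood that takes out the whole
    # generator hall - with probability 1e-4 per demand:
    p_common_cause = 1e-4
    p_total = p_common_cause + p_independent
    print(f"all four fail (with common cause): {p_total:.0e}")   # ~1e-04

    # The single common-mode path dominates: the "independent" answer
    # was optimistic by a factor of about 10,000.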

Making those calls is difficult, and when you have a lot of uncertainty it's very easy for bias to creep in - especially when you know the answer you WANT to get. Feynman found the 1/100,000 risk figure hard to believe, and polled NASA engineers, whose estimates were more like 1/50 to 1/100 (and with two disasters out of 100-odd launches, those estimates look pretty plausible). As it turned out, the 1/100,000 figure seems to have been created in reverse: management decided that 1/100,000 was an acceptable level of risk, worked backwards to determine what the component failure rates ought to be, and somewhere along the line "ought to be" got blurred into "is".
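That parenthetical aside is easy to check with a quick binomial sanity test in Python, using the 135-launch, two-disaster record from above (and noting, to be fair, that Columbia failed in a different way from Challenger):

    from math import comb

    n, k = 135, 2  # shuttle launches flown, catastrophic failures

    def likelihood(p):
        """P(exactly k failures in n launches) if the true per-launch risk is p."""
        return comb(n, k) * p**k * (1 - p) ** (n - k)

    print(f"management's 1/100,000: {likelihood(1 / 100_000):.1e}")  # ~9e-07
    print(f"engineers'   1/100:     {likelihood(1 / 100):.2f}")      # ~0.24
    print(f"engineers'   1/50:      {likelihood(1 / 50):.2f}")       # ~0.25

    # The record we actually saw is hundreds of thousands of times more
    # probable under the engineers' estimates than under management's.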

Now let's move on from 1986 to 2002, at which stage Fukushima I had been running for 31 years (which, incidentally, is a good long time for complacency to set in).

In 2002, the Japanese Nuclear Safety Organization published a report on "Severe Accident and Accident Management". (Site seems to be down, you may need to look at this cached version.) That report says:

PSA was performed on all [Japanese] nuclear power reactor facilities by 2002, and the results showed that the frequency of occurrence of a core damage accident is 1/100,000 or less per one year for one reactor and the frequency of occurrence of an accident leading to containment damage is 1/1,000,000 or less per one year for one reactor...

Safety goals: "Occurrence frequency of a core damage accident shall be 1/100,000 or less per one nuclear reactor for one reactor and that of an accident which results in a containment failure shall be 1/1,000,000 or less per one nuclear reactor for one reactor..."

Electric utilities issued the results of PSAs after [accident management] measures were established. The results satisfy the safety goals and some of those are as follows:

Frequency of core damage (/reactor year): 1.6 × 10⁻⁷ (an example for existing BWR-4) [Most of the reactors at Fukushima I are BWR-4s; #1 is a BWR-3 and #6 is a BWR-5.]

Frequency of containment failure (/reactor year): 1.2 × 10⁻⁸ (an example for existing BWR-4)

Frequency of core damage (/reactor year): 2.4 × 10⁻⁸ (an example for existing BWR-5)

Frequency of containment failure (/reactor year): 5.5 × 10⁻⁹ (an example for existing BWR-5)


A couple of things to note here:

(1) It's not clear whether those chosen examples are typical or exceptional. From context they ought to be typical, but the report doesn't actually say so; it's quite possible that out of Japan's 50-odd reactors they picked the best.

(2) The risks are given to two significant figures - not 1.5 × 10⁻⁷, not 1.7 × 10⁻⁷, but 1.6 × 10⁻⁷. In scientific usage, this implies that these are not just ballpark numbers but precise estimates.

This alone should start ringing warning bells. By definition, the sort of things likely to cause core damage are rare and extreme events. Do we really believe that the operators can predict the chances of (say) a catastrophic earthquake, or a high-powered terrorist attack, to two significant figures?

But for now, let's suppose those figures are exceptional, and not relevant to Fukushima. Let's go back to the blanket figure given earlier, which states that every Japanese power reactor carries a core-damage risk of 1/100,000 per year or less.

How often can a Japanese reactor expect to be hit by a magnitude-9.0 quake and the resulting tsunami? Obviously this is hard to estimate precisely (certainly not to two significant figures!), but the media coverage has been calling it a "thousand-year event", apparently with some basis in science. On average, a 9.0 hits the world about once every 20 years; not all of them are in Japan, but Japan is a seismic hotspot. Once in a thousand years seems pretty plausible; perhaps once in ten thousand if you want to be generous. It's definitely not a hundred-thousand-year event.
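You can put the mismatch in one piece of arithmetic. Here's the comparison in Python; the reactor count and operating period are round numbers, and the quake frequency is the generous end of the estimates above:

    # Claimed core-damage frequency vs. the frequency of the external
    # event that just caused core damage. Round numbers throughout.
    claimed_cdf = 1 / 100_000   # claimed core damage per reactor-year (JNSO)
    quake_freq  = 1 / 10_000    # a 9.0 plus tsunami, taking the generous estimate

    print(f"quake is {quake_freq / claimed_cdf:.0f}x more frequent than "
          f"the claimed core-damage ceiling")   # 10x, even being generous

    # Fleet-wide: with ~50 reactors over ~40 years, the claimed numbers
    # predict essentially zero core-damage accidents in Japan's history...
    print(f"expected accidents at claimed rate: {50 * 40 * claimed_cdf:.2f}")  # 0.02

    # ...whereas Fukushima I alone appears to have several damaged cores.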

So when I hear people saying "nobody could have predicted this, it's unreasonable to expect them to withstand a 9.0", I call bullshit. According to their owners, as reported by JNSO above, core damage at any Japanese power reactor is at most a once-in-a-hundred-thousand-years event.

If their owners didn't realise that 9.0s come along more often than this, then they are horrendously incompetent. If they knew this, and knew their reactors weren't built to withstand a 9.0, and yet still claimed a 1/100,000 year safety standard, then they are bald-faced liars. TEPCO's handling of the current crisis suggests there might be a bit of both involved.

If nuclear operators are incapable of providing realistic figures for the safety of old reactors - which at least have the benefit of some track record behind them - why on earth would anybody trust them when they tell us how safe the newer designs are?

To quote the last words of Feynman's appendix:

"For a successful technology, reality must take precedence over public relations, for nature cannot be fooled."