Since the catastrophic fire in Kensington in June, it has become clear that England has fundamental problems with building safety. This is an opportunity to fix one critical but subtle problem in how we make sure products are safe - and not just in buildings, but more widely.
Imagine, for a moment, I have set up a company that makes the component parts of some very efficient and elegant cladding systems. But some of the components do not meet the requirements for automatic use on the outside of a tall building because they do not have good fire-safety ratings.
To get this stuff approved for use, it needs to be used in a design that will keep the combustible parts shielded from fire and, if it cannot do that, stop fire spread. To make sure that this is done properly, any design my components are used in will need to be tested in a fire lab. That means exactly what it sounds like: we mock up each candidate design in a laboratory and start a fire underneath it. If the design resists the fire, it can be used. If it does not, it cannot.
Let us suppose that there are five proposed designs that I want to get my components approved for use in. And let us call them designs A to E.
Each is similar, but has a slightly different set of materials for use in slightly different scenarios.
It might be that I run tests on designs A to C, and the designs pass comfortably. It is, however, expensive to keep running these tests. So, to save money, I commission a so-called "desktop study" to look at designs D and E.
What does that mean? I go to an accredited fire engineering company and ask them if they think D and E will pass. Since the results on A, B and C are clear passes and because the designs of D and E are so similar to them, they agree the designs look safe. They use their expertise to declare there is no need for a test.
To recap: we now have five approved designs. Three, A to C, have actually been tested. Two, D and E, have not. But a qualified person believes that the test data for A, B and C implies the similar designs D and E are safe. That is how the system is supposed to work, and it all seems to roughly make sense.
But it is not a safe system. Because, hidden in the weeds, there is a set of fundamental problems.
The biggest of these is that the system is opaque and secretive. I paid for the tests, so only I and the testing house know what is in them. Not the government. Not building inspectors. Not unless I say so. If you - or the government - ring the test houses and ask if a certain combination has passed or failed, they will refuse to answer.
This creates fundamental problems. These desktop studies and the original reports are all confidential, so there is no way for an outsider to know whether designs A, B and C are really close enough to designs D and E to allow my fire engineer to draw the conclusions they have.
This is not a hypothetical: we have published excerpts from desktop studies which, in the view of experts, extrapolate apples into oranges. One of them assumes that an aluminium panel would behave in the same way as a ceramic tile during a fire.
Furthermore, what if a third party were to run a test on, say, design D - and it failed? There is no system to make sure I hear about it, and no process, even if I do, to make sure I act on the failure by withdrawing my desktop study.
That is why the government's tests of seven designs, released earlier this year, had such enormous impact: they were published. No-one could say that they did not know that certain designs now appear to be unsafe.
The fire test is also intrinsically unpredictable. The fire labs are, ultimately, building a wall and seeing how it copes with a bonfire - a complex, non-replicable phenomenon. And you only need to pass once.
Small issues can make the difference between a pass and a fail. Sometimes the installation of the wall design will be perfect; sometimes not. We have heard from several sources that ventilation differences make one of the rigs within one of the test centres easier to pass on than the others.
Luck matters: what if design A would pass only one time in five - and we just got lucky? And what if I took luck out of it? I am allowed to retake the tests. Suppose I repeated the test until it passed, then stopped - and never told anyone about the failures.
The test labs might refuse to run identical tests one after another, but what if you made a cosmetic change to the design - perhaps just changing the name of one of the products - so the test lab would treat it as a different design? As the component producer, only I know the full test history - and I can hide failures, so it looks like a perfect first-time pass.
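To see why retest-until-pass matters, here is a toy calculation - not drawn from any real test data, and using the article's purely hypothetical one-in-five pass rate - showing how quickly quiet retesting turns a marginal design into an apparent pass:

```python
# Toy illustration with hypothetical numbers: if a marginal design
# would pass a fire test only one time in five (20%), how likely is
# at least one pass if the manufacturer quietly retests until it works?
def chance_of_a_pass(pass_rate: float, attempts: int) -> float:
    """Probability of at least one pass in `attempts` independent tests."""
    return 1 - (1 - pass_rate) ** attempts

for n in (1, 3, 5, 10):
    print(f"{n} attempt(s): {chance_of_a_pass(0.2, n):.0%} chance of a 'pass'")
# After 10 hidden attempts, a design that fails four times out of
# five is more likely than not to have a "pass" on record.
```

The point of the sketch: because only the final pass is disclosed, the paper record looks identical whether the design passed first time or tenth time.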
Time to open up testing
Remember, too, that the secrecy means I can withhold data from desktop study authors.
So what if I actually ran multiple tests on all five designs - including D and E - but could not get D and E to pass? I could take the test passes for A, B and C to a fire engineer and ask them to write a desktop study for D and E, without telling them that those designs had already been tested and failed.
Note, by the way, that if I were going to do this, I would need to use a different fire engineering company from the one that did the testing. Someone who knew about the failed tests of D and E could never, in good conscience, write such a report.
It was, therefore, a safeguard that only a handful of accredited fire engineering companies could produce these studies. The building industry, however, has dropped the requirement that desktop studies be written by accredited companies. These days, there are no stipulations about who writes them.
We have found a dozen companies producing them. But even who they are is secret: the accredited companies do not really know who their competitors are. We do not know what goes into the reports being produced. We do not know what the authors' qualifications are. We do not know on what basis they are arguing designs are safe. But their studies can be used to get material onto a building.
If, after the regulations review, tests remain a major part of our system, we need to make the whole system open-access.
This is, by the way, not the only place where this sort of problem exists. We need a new mantra to run through everything from pharmaceuticals to insulation: we need to talk about replicability, and about what levels of evidence we need in different scenarios. We need to ask what the proper role of the state is in testing and certification. Above all, though, when safety is at stake, there must be no secrets.