How to science a science with science

The purest description of the scientific method I ever saw was in a novel by Kurt Vonnegut. A workman discovers that if he puts a bucket full of nuts and bolts on one of the many supporting struts of the Tevatron supercollider, he can talk to the dead. He shares his finding with a scientist who, rather than scoffing as one might be inclined to do, says “show me”. That’s it. Not a method as such: more an attitude of mind.

No one has yet found a scientific method that can regularly mint scientific truths, but somehow scientists go on finding things out anyway. Of course, they get a lot of stuff wrong too, but the willingness to be wrong – to find, despite everything life has taught you up to this point, that one can communicate with the dead using nothing more than a bucket full of bolts and a synchrotron – is perhaps the most important part: being right is inseparable from being wrong. To learn is – among other things – to move from a state of ignorance to a state of slightly better constrained ignorance, or perhaps, if you are lucky, even of being demonstrably wrong. It gives you the impetus to move on to the next thing. The next thing might even be right. Most people are looking for confirmation of their ideas. A scientist, in contrast, is someone who can’t take yes for an answer. Their gift horses all turn out to have rotten teeth.

Often, the corrective factor is reality. “No amount of experiments can prove me right, but it just takes the one to prove me wrong”. This quote, and variations on it, generally gets attributed to whichever scientist the quoter most reveres – usually Einstein or Feynman. Whenever it was that whichever one said it – if indeed either one did – it was already a well-established idea. It’s appealing: if an idea disagrees with experiment then it can’t be right – and if you ask a random group of scientists what the scientific method is, more than one will likely say something along these lines. The scientist has an hypothesis; they then contrive an experiment that will either falsify the hypothesis or not. They perform the experiment. Lather, rinse, repeat.

Alas, it leaves a lot out, and even what’s left in isn’t quite so simple. An experiment pits one set of ideas against another. Reality might be in there somewhere, or not (reality itself might just be an idea and not one that’s needed to do science1). The experiment might have gone wrong, the apparatus might be broken or unplugged2, there may be unknown influences upon the target, or perhaps the sample size wasn’t big enough. It might be that the experiment doesn’t test what you thought it did. The list goes on. These too are hypotheses, which can lead to further experiments and further hypotheses. There is an infinite regress buried in this overly neat summary that threatens to blow the whole thing apart.

Where does the infinite regress stop? Practically speaking, it stops when you agree that it stops, people run out of new objections, or everyone gets bored3. While a single scientist might quickly tire of trying to work out why they are wrong, thus slowing this important process to a stop, scientists are fortunately well furnished with colleagues who like nothing better than casually (sometimes innocently) demolishing other people’s pet theories. It is a quirk of psychology that it is much easier to find fault in someone else’s reasoning than in our own4, and science leverages this quirk to maximal effect5. It is hard to be a scientist in isolation, but it’s hard to be a scientist in the society of other scientists too, just for very different reasons. Anyone who has encountered reviewer 2 would be hard put to disagree. Smiling over gritted teeth, we can treat reviewer 2 as an opportunity to be the better person and sometimes, often, grudgingly, we have to admit that reviewer 2 might just have a point6. We also have scientific friends, those with whom we share a much wider common basis, who can be companions for the exploration of new and wilder ideas, or with whom it is a pleasure to argue. It takes all sorts.

So, our colleagues and peers are the second corrective factor. But we can ask by what standards they are assessing or judging our work. If there is no “method”, then what are we all doing? One might say that while there is no single method, there are lots of methods, plural. Each discipline has a unique set. Each person in each discipline has a unique set. It won’t be completely their own, because people and disciplines share ideas (and borrow and steal), but each person has their own idiosyncratic take on science, as they have on any complex topic. Individual disciplines evolve different ways of carrying on according to their needs at any particular time.

One of the fantastic things about science is we – scientists – make it up as we go along with whatever we have at hand. It’s a magnificent never ending riff on the comical idea that we might actually know something. The punchlines come thick and fast. As with any improv, the engine is “yes, and”, to keep putting up new ideas as fast as reality can tear them down. Eventually some will stick.

In a similar vein, it has been said that in science “anything goes” – at least as an historical observation – but I feel like that’s not quite right. I might offer in its place “whatever works” or “whatever you can get away with”, but the latter doesn’t really sound better – it sounds, in fact, like an invitation to fraud, and some people, it’s true, try to get away with far too much – but if there is something wrong in what you say or do, you won’t get away with it for long7. Science is a social enterprise and cheaters – as in any social enterprise – are not well tolerated. No one likes being lied to, and in a competitive endeavour8 anyone who wins by deceit or chicanery is despised. Or should be: it’s a social enterprise, so there will be those who cheat and bend the rules but get away with it because of other personal qualities, or – more usually – because they have power over those who might object. If we are thinking about how science works, then we need to deal not only with the impenetrable mysteries of the world itself, but also with the foibles of those laying the siege and the society in which they find themselves.

As scientists we are not forced to accept the criticisms of our peers. While they can judge our work, their judgement isn’t final. We can argue back. The process, though, has to take the ideas under debate seriously, and that is not always the case. Ideas need advocates, particularly in the early stages. Some of the best ideas started life as absurdities, dismissed with a laugh and an airy wave. If we are not careful, we can throw out the good ideas with the bad before they ever get a chance to prove themselves. It can take a lot of preparatory work to make even a good idea seem plausible.

A lot is made of consensus in science but it’s a rather hard thing to define. Or else, being easy to define, it is almost impossible to write down. Every scientist will have an idea of what the consensus in their field encompasses, but not everyone would draw the boundary the same, and some would mark “here be dragons” on the map where others have been busy getting the lay of the land. Indeed, in a field of any appreciable size, there may be things the specialist would include that the generalist might not (or vice versa). Outside of the core consensus and its wispier fringes, there is a cloud of ideas and beliefs that are more provisional – held by some but not all – and, even further out, a seething chaos of ideas, the raw stuff from which knowledge is formed. Vital to that formative process – the dark matter that holds the galaxy together, if you like – is a cloud of eclectic knowledge and experience brought to the task by each individual researcher, not a part of science in any formal sense, but essential anyway.

If we define a consensus in terms of the things that are well known and understood (or in the negative sense of things we know are not understood), we have to ask: by whom? There is no consensus independent of a particular community of scientists, but the knowledge and capability embodied by that community is far broader than a mere consensus suggests. It includes, for example, an enormous store of knowledge about how the science in that field is conducted, the specific techniques and methods that have been found to work (and, usually, a wonderful store of things that went spectacularly wrong). Not all of this is written down and some of it can’t be. Some of it takes practice and experience9. If we are trying to work out what scientific method is, then anything beyond the broadest of definitions – that is to say, even the simplest applications – will depend on practical skills and experience that can be transmitted from one person to another, but only by example and through careful training and practice. In addition there are the small accommodations, habits of thought and action that individually are perhaps negligible, but add up to making each scientist who they are.

The other way our colleagues can help us (or at least help science) is by repeating experiments. When Vonnegut’s scientist asked the workman to show him how a bucket of bolts could be used to talk to the dead, he was replicating the experiment. A lot of scientific papers are like this. The promise is that if you follow the instructions in the paper, you will get the same answer. It’s not always the case. Many results are statistical in nature in which case it’s possible that a particular result was a quirk of chance. The experiment might be arranged to reduce the likelihood of such quirks, but even so experiments can fail to replicate for many reasons. Some of these are bad – perhaps someone made up their data – but others are good, or at least, useful. A failure to replicate might be due to an undocumented difference in procedure, or due to a variation not thought to have a bearing on the experiment.

Replication is no guarantee of correctness either. An experiment might be perfectly repeatable, but the theory behind it can still be wrong. Alternatively, a bad experiment might always give the same results, but do so for reasons unrelated to the theory under test. My favourite example is the method used to demonstrate the Dunning-Kruger effect, which states that people with limited domain knowledge tend to overestimate their abilities. The result – which is replicable – might simply be due to a statistical artifact. The replicability therefore might tell us nothing other than that the method is bad. This, of course, is disputed: the method can be bad, but the effect can still be real. Everything is incremental; what one experiment or paper tells us is strictly limited, and the increments are typically smaller than we fondly believe them to be.
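The statistical-artifact critique is easy to demonstrate for yourself. The sketch below is my own toy illustration, not the method of the original study or of any particular critique: it draws actual performance and self-assessment as independent random variables, so by construction nobody systematically over- or under-estimates, yet grouping people into quartiles by actual performance still produces the classic pattern, with the bottom quartile appearing to overestimate and the top quartile appearing to underestimate, purely through regression to the mean.

```python
import random

random.seed(1)

# Actual skill and self-assessment drawn independently from the same
# distribution: there is, by construction, no real Dunning-Kruger effect.
n = 10_000
actual = [random.gauss(50, 10) for _ in range(n)]
perceived = [random.gauss(50, 10) for _ in range(n)]  # pure noise

# Rank people by actual performance and cut into quartiles, mimicking
# the usual presentation of the effect.
people = sorted(zip(actual, perceived))
quartiles = [people[i * n // 4:(i + 1) * n // 4] for i in range(4)]

for q, group in enumerate(quartiles, start=1):
    mean_actual = sum(a for a, _ in group) / len(group)
    mean_perceived = sum(p for _, p in group) / len(group)
    print(f"Q{q}: actual {mean_actual:5.1f}, perceived {mean_perceived:5.1f}")
```

Because perceived scores are independent of actual ones, every quartile's mean perceived score sits near 50 while the extreme quartiles' actual means do not, so the bottom quartile "overestimates" and the top "underestimates" even though no real effect was put into the data at all.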

More compelling – though it can be quite hard to say exactly why – is when a theory is tested in multiple different ways, using a variety of methods in different times and places, or is shown to be consistent with a broad range of different phenomena which have each been independently tested. While a single experiment can be inconclusive for many reasons, other experiments (each with their own shortcomings) can help to build confidence, though only so far.

Substituting many methods for one method doesn’t feel like it’s got us anywhere. If there’s not a single scientific method, what are all these other methods exactly? It multiplies the problem rather than solving it. The answer – which might be unsatisfying – is that the methods are subject to the same process as the ideas and theories they are employed to find, which is to say a constant process of revision. New methods are dreamed up, tested, opened to critique, refined, and occasionally or frequently (I don’t know what the rates are) discarded. At one end of the spectrum, we have tools like arithmetic and logic, which are more or less indisputable10 – if the sums don’t add up the fault is likely with the arithmetic and not with arithmetic itself11 – but as we move away from such certainties, we find methods which have been tested to varying degrees, that rely on understanding or assumptions that have been tested to varying degrees, and some of it is more tentative and shakier than we might care to admit. In this broad sense, science is self-correcting, but it is not always clear when an error has been made, and scientists might legitimately disagree on the interpretation.

We also need to stare long and hard at “correcting”. The corrections when they come are corrections in the negative sense of rejecting something we believe to be unhelpful or wrong. It doesn’t follow that the alternatives are error free. All methods come with caveats.

If we go far enough, or deep enough down into the gubbins of science, there are profound questions about the nature of the scientific enterprise itself. Lots of scientists shy away from these questions as they can seem rather philosophical, a term that causes many practicing scientists to shudder12, but they are there nevertheless. Part of science is that not everyone has to worry about everything, we can continue building work on the 74th floor even as the philosophers in the basement are fretting about how such a towering edifice was got up on foundations of thin air13.

The scientific treatment of the tools and methods of science is a part of the whole thing, but it is sometimes – mistakenly – treated as something separate, as if the rules of science can be imposed from outside or above. Science education gives this impression. It has to, to a certain extent, but we shouldn’t mistake pedagogic necessity for some kind of philosophical truth. The facts and methods we are taught in classrooms, lecture halls, and laboratories are all questionable, but if we insist on questioning every single one as it comes up then we would have to speedrun the whole history of modern thought and sooner or later (probably sooner) we would come up against something our teacher didn’t know. Still, some things get in without being questioned at all, which can cause problems later down the line.

Even if the teacher knew everything that is known, there would still be gaps because, helpfully, some of the things we know are what we don’t know – the known unknowns. Saying “gaps” might suggest Swiss cheese, a dense and tasty matrix in which occasional bubbles of unknowing are to be found. It may be the other way round, with local areas of cohesion separated by vast gulfs of ignorance like dust in space. Or something else entirely14: our mental models of science are debatable, contingent, and tentative too.

The term given to the scientific study of science is metascience. Of course, metascience can – and should – be studied scientifically itself, and so, in a sense, it’s just science even if the meta sometimes gives it ideas above itself. Nothing in it is separate from the ongoing endless argument; it’s just a difference in emphasis. In so far as particular methods are used in disparate disciplines of science, metascience can have a claim to wide applicability, but we must take care to understand not just the limitations, strengths and weaknesses of any particular method, but also what the consequences of those actually are. They might not be as wide ranging as their champions claim.

While the description I have given here is perhaps unsatisfying, even unedifying15 (and almost certainly wrong in some important or trivial way), it’s not the only one. There are others. Of course there are: this is science. This is my own oddball take on the matter, but you can find hundreds of others if you look or ask. Some will be obviously odder, others might seem less so. Whether I carry on like this myself, or only wish I did (or indeed only wish you did), is a question that needs to be borne in mind.

Like some Greek paradox that has resisted 2000 years of explanation, progress can sometimes feel infinitesimal even when science is racing along. The small gains add up and eventually they add up to an avalanche. Still, there are those who are concerned that progress isn’t fast enough. They might be right, but, like everything else, if they have a plan to fix it – or at least to speed it up – they need to do the work to prove that it works.

Some of the solutions being touted are simplistic. They risk reiterating the errors they seek to address without exactly repeating them: a simple method applied without deeper thought.

Everyone has an idea about how we might do better, and I suspect doing better means following a thousand different ideas rather than enforcing a select few. Those seeking a scientific method – one that can be written down and followed mechanically, perhaps by a machine, or else concocted and dispensed by a machine without human intervention – betray a kind of childish impatience with a process they clearly don’t understand.

-fin-

  1. There are those who think we can do without the idea of reality entirely. Really, science is a broad church. ↩︎
  2. In a job interview once, I was asked how I would test an algorithm for turning radar data into rainfall estimates. I said I would unplug the radar and see what happened. This was the wrong answer. ↩︎
  3. Don’t underrate boredom as a driving force. ↩︎
  4. And, if you are not careful, very much more satisfying. Of course, the satisfaction is short-lived and it’s easy to forget that you are probably overlooking your own flaws. ↩︎
  5. There are lots of psychological aspects to science. Any notion of science that doesn’t treat people as people is bound to fail. Occasionally people will feel the need to point out that scientists are just regular people. While this is true in one sense, I think it misses an important role that science plays which is to keep the kind of people who become scientists out of more important jobs. ↩︎
  6. They might, in fact, be the better person. ↩︎
  7. Science, they say, is self-correcting. There can be no counter-examples to this statement. As soon as the error is spotted, a correction has started. There are errors, and admitting the possibility is – among other things – what science is. But there are errors and errors. Something can be wrong and useful. General relativity and quantum mechanics are based on fundamentally different views of reality. They can’t both be right about reality, but both still have astonishing predictive powers. By deploying them both, we can work out that most of the mass of the universe is missing or invisible, which is pretty wild. ↩︎
  8. Admittedly science is a weird competition. Everyone has the same goal: to increase the size of the heap of things we think we know (or remove stuff we now know we don’t know). The winners in this process – though you might disagree with that designation – are chosen by some very strange processes like how many things they manage to add to the heap (regardless of quality), how many times someone refers to stuff they added to the heap, adding stuff to the heap in the right place, being the last person to touch stuff before it landed on the heap, etc. ↩︎
  9. As an undergraduate I was compelled to perform experiments. These had been devised, one must assume, by people considered wise and knowledgeable within their domain, but they had not reckoned with my capacity for chaos. Electronic circuits I created would oscillate wildly for a brief moment then never work again. When working with a powerful laser, I would get close enough to the beam that it passed through the far corner of my spectacles, briefly illuminating the whole laboratory like an ill-advised disco. And, on one fateful day, I wandered around the lab holding a glass rod that had been dipped in a polymer solution with a mozzarella-like capacity to stretch out into fantastically long, but nearly invisible, strings. By the time I (or anyone else) realised what was happening, I had managed to construct a gigantic spider’s web of glistening strands that bound together everyone and everything in the lab. Other people’s experiments went perfectly and I was always a little jealous. ↩︎
  10. You can’t argue with logic, but you can’t argue without it either. ↩︎
  11. If logic is wrong, how would we even know? ↩︎
  12. The best scientific conference I ever went to was the IMSC meeting in Edinburgh. The mix of climate scientists and statisticians is a marvellously fruitful one, but the IMSC in Edinburgh also had bona fide philosophers. As philosophers, they got to say obvious things about the relationship of – say – output from climate model ensembles to actual uncertainty in the future state of the climate that sounded very much like someone describing an elephant in a room formerly believed to be empty. ↩︎
  13. Part of the answer, of course, is that we were never starting from scratch. Before the first human uttered a word, they already knew a lot about how the world worked. Science has built upwards, dug down and sprawled. ↩︎
  14. One of my favourite science fiction ideas, offered by M John Harrison. His space-going civilizations had all discovered superluminal travel, but each one had discovered something completely different that was incompatible with all the others. Why not? ↩︎
  15. I’ve been told I don’t take these things seriously enough so I assure you I take these things very seriously indeed and <blows raspberry that lasts an uncomfortably long time>. ↩︎

#education #history #philosophy #religion #science
