Key Questions

What is the Scientific Method?
The scientific method is a systematic approach to understanding the world through observation, hypothesis formation, experimentation, and evidence-based conclusions. It relies on empirical evidence, reproducibility, and falsifiability to distinguish scientific knowledge from non-scientific claims.

How Does Falsifiability Determine Scientific Validity?
Falsifiability means a scientific theory must be capable of being proven false through testing. If a claim cannot potentially be disproven, it is not considered scientifically valid. This principle helps separate scientific hypotheses from pseudoscientific or unfalsifiable assertions.

What Separates Science from Pseudoscience?
Key distinctions include reliance on empirical evidence, a peer review process, reproducibility of results, willingness to modify theories based on new evidence, transparent methodology, and the predictive power of theories.

Why is Peer Review Critical in Scientific Research?
Peer review provides quality control of research, validates methodological rigour, identifies potential errors, maintains research integrity, and advances collective scientific understanding.

How Do Scientists Maintain Research Integrity?
Through transparent research methods, declaring potential conflicts of interest, reproducible experimental designs, ethical data collection and reporting, open access to research findings, and continuous critical evaluation.

What Role Does Critical Thinking Play in Scientific Inquiry?
Critical thinking allows researchers to question existing assumptions, evaluate evidence objectively, avoid confirmation bias, develop robust hypotheses, challenge established theories, and promote intellectual scepticism.

Why is Reproducibility Important in Scientific Research?
Reproducibility validates research findings, allows independent verification, guards against fraudulent or incorrect research, ensures scientific reliability, and supports cumulative knowledge development.

How Can Non-Scientists Develop Scientific Literacy?
By understanding basic scientific methods, critically evaluating information sources, recognising the difference between evidence and opinion, reading peer-reviewed research, attending scientific lectures, and following reputable scientific publications.

What Challenges Exist in Distinguishing Science from Non-Science?
Challenges include the complexity of emerging research fields, limitations of current scientific methodologies, potential bias in research, the interdisciplinary nature of modern science, and public misunderstanding of scientific processes.

How Does Open Access Publishing Support Scientific Communication?
Open access increases research visibility, democratises scientific knowledge, accelerates scientific discovery, reduces publication barriers, and promotes global scientific collaboration.
What distinguishes science from non-science?
This may seem like a silly question at first, but
it is one that has been debated for many centuries. Science has made an immense
contribution to our world. The "science" label commands authority and respect and
can attract financial gain. There is therefore an incentive for some people to pose as
scientists and claim that their ideas or products are "scientifically proven"
when they are not. Here we explore what separates science from non-science; the
distinction may seem obvious, but close to the boundary the lines become blurred.
Merriam-Webster defines science as “knowledge or
a system of knowledge covering general truths or the operation of general laws
especially as obtained and tested through scientific method”. This is a pretty wordy definition, but the key
part of it is the term “scientific method”.
The scientific method as we know it today is a
series of steps. You observe something you can't explain, form a
hypothesis, meaning you come up with a plausible explanation, and then
conduct experiments to test the hypothesis. The data gathered from these
experiments helps to form a conclusion.
Yet this definition is still not specific
enough, as was demonstrated by the phrenologists of the late 18th and 19th centuries.
Phrenology was a "science" that claimed to measure the mental traits of individuals from
the shape of their skulls. The principle was that the brain could be divided
into anywhere between 27 and 40 sectors, each with its own distinct function.
The relative size of each sector represented how capable an individual was
in the function it was associated with.
Since the shape of the skull is determined by
the shape of the brain, phrenologists predicted the characteristics of a person’s
mind simply by feeling their head. Some traits they searched for were wit,
self-esteem, aggression, and generosity. According to phrenology, a person with
a large forehead was bound to be intelligent, as the sector responsible for
reasoning was located at the front of the brain.
However, a major blow was struck against phrenology
by a researcher named William Hamilton. Phrenologists held that a part of
the brain known as the cerebellum was responsible for sexual activity. They
concluded that men experience sexual arousal more often and more easily than
women because men have a larger cerebellum.
Yet, Hamilton discovered that women were the
ones with a larger cerebellum. Today, we know that Hamilton’s measurements were
wrong, and women do have a smaller cerebellum, but the phrenologists at the
time did not respond by conducting their own experiments. Instead, they defended
phrenology, claiming that the best anyone could do was make estimates, and that the
true science lay in the "bumps" felt by the phrenologist's hands.
This revealed a fundamental flaw in phrenology
as a science. When its truth and authority were threatened, the phrenologists
resorted to offering excuses for phrenology’s shortcomings. Their theories were
not based on facts and could not be tested through experiments, because the response
to any contrary observation was to move the goal posts. Yet while phrenology was
dismissed as non-science, there was still no clear account at that time of
what constitutes science.
Perhaps the best attempt to tackle this issue
was made by a famous philosopher of the 20th century named Karl Popper. Popper
was familiar with Marxist theory and had initially believed it to be scientific.
Marx had made observations and predicted that eventually the working
class would revolt against the existing capitalist systems and establish
communist nations.
Decades after Marx's death, the workers of the industrialised capitalist
countries had not revolted. The global wealth gap had seemingly worsened, yet
communism of the kind Marx predicted was nowhere in sight; no regime was controlled
by the working class. Marxists offered explanations, stating that the workers lacked
class consciousness and that Marx's predictions were still valid.
Popper was bothered by this sort of reasoning.
According to the Marxists, Marxism was true regardless of whether the
revolution occurred or not. There was no way to disprove the initial theory.
Karl Popper believed that any good scientist must at least allow their theories to be
disproven, and in his opinion, Albert Einstein was the embodiment of this idea.
Karl Popper's introduction to Einstein came at one
of the great physicist's lectures on relativity in Vienna.
Popper was amazed by the distinction between Einstein and the Marxists of his
time. Einstein was making bold predictions that contradicted Newtonian physics.
If Einstein’s predictions failed, his entire theory of relativity would fall
apart. This “risky” nature of Einstein’s approach to science appealed to Karl
Popper.
Thus, the risk of being wrong became the criterion
that Popper would use to separate science from pseudoscience. He stated that a
theory is scientific if it risks being disproved by an observation that
contradicts its predictions. This property is called "falsifiability": if a
theory's predictions can, in principle, be proven false, the theory is scientific.
In Einstein's case, his theory of general relativity
predicted that gravity could deflect rays of light. This was confirmed in 1919 by
the British astronomer Arthur Eddington, who was not directly associated
with Einstein. Had Eddington instead observed that the light from far-away
stars was not deflected by the sun, Einstein would have been proven
wrong.
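The prediction was sharply quantitative. General relativity implies that light grazing the Sun's limb bends by an angle of 4GM/(c²R), about 1.75 arcseconds, roughly twice what a naive Newtonian argument gives. A minimal Python sketch of the arithmetic, using standard reference values for the constants:

```python
import math

# Deflection of starlight grazing the Sun, as predicted by general relativity:
# theta = 4 G M / (c^2 R).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30   # solar mass, kg
c = 2.998e8        # speed of light, m/s
R_sun = 6.963e8    # solar radius, m (ray grazing the limb)

theta_rad = 4 * G * M_sun / (c**2 * R_sun)       # deflection in radians
theta_arcsec = math.degrees(theta_rad) * 3600.0  # radians -> arcseconds

print(f"Predicted deflection: {theta_arcsec:.2f} arcseconds")  # ~1.75
```

Eddington's 1919 eclipse measurements were consistent with this value and clearly inconsistent with no deflection at all.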
While falsifiability now serves as an effective
filter to separate science from pseudoscience, falsification itself is also a driver of
scientific innovation. Science consists of forming explanations
for natural phenomena, so new theories are required whenever old theories are
falsified. Some of the most important experiments in history were, in effect, tools of
falsification.
One of the best examples of the use of
falsification to develop new theories is something we all learned in high
school: The structure of the atom. In the 1800s, scientists had believed for
many years that matter was made up of atoms, but they did not know what an atom
looked like. The first person to put forward a model of the atom's structure backed by
experimental evidence was J. J. Thomson.
Thomson had discovered the electron in 1897
through his experiments with cathode rays. He suggested a structure for the atom
which some referred to as the “plum pudding model”. The “pudding” was a cloud
of positive charge, with electrons embedded in it like little plums.
Thomson had a student named Ernest Rutherford,
who aimed to test his professor's predictions. Rutherford's gold foil
experiment involved researchers firing positively charged alpha particles at
a thin sheet of gold. If Thomson was correct, the alpha particles would pass
through the "pudding" essentially undeflected. But the gold foil experiment showed a small
fraction of the particles being deflected by more than 90 degrees. Some of them
bounced off the foil right back in the direction of the source.
Thus, Thomson's model was falsified, and
Rutherford proposed a new atomic model. The mass of an atom had to be
concentrated at the centre in a “nucleus”. The nucleus would account for only a
tiny portion of the volume of an atom, but it would contain nearly all of the
atom’s mass. Rutherford predicted that an atom would be mostly empty space,
with the electrons surrounding the nucleus being held in place by electrostatic
forces.
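Rutherford's inference can be made quantitative with a back-of-the-envelope calculation: for an alpha particle that bounces straight back, all of its kinetic energy is momentarily converted into Coulomb potential energy, so the distance of closest approach bounds the size of the nucleus from above. A minimal sketch, assuming a typical alpha energy of about 5 MeV:

```python
# Head-on alpha-gold collision: kinetic energy E equals the Coulomb potential
# energy k Z1 Z2 e^2 / d at the turning point, so d = k Z1 Z2 e^2 / E.
k = 8.988e9          # Coulomb constant, N m^2 C^-2
e = 1.602e-19        # elementary charge, C
Z_alpha, Z_gold = 2, 79
E = 5.0e6 * e        # assumed ~5 MeV alpha energy, converted to joules

d = k * Z_alpha * Z_gold * e**2 / E
r_atom = 1.4e-10     # approximate radius of a gold atom, m

print(f"Nucleus no larger than ~{d:.1e} m")        # ~4.5e-14 m
print(f"Atom radius / bound: ~{r_atom / d:.0f}")   # a few thousand
```

The nucleus is thousands of times smaller than the atom, which is why most alpha particles sailed straight through while a rare few rebounded.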
Many improvements, such as predicting the
specific positions where an atom's electrons reside, have since been made to
Rutherford's model. Currently, the prevailing theory of the behaviour of subatomic
particles is quantum mechanics: the electron is both a wave and a particle, and its
domain around an atom is probabilistic, with orbitals whose shapes can be described
as spherical or dumbbell-like. Quantum mechanics is one of the most testable scientific
theories ever created and has passed many experimental tests. However,
physics is in a quandary: its two main theories, quantum mechanics
(the theory of tiny particles) and general relativity (the theory of gravity and
large bodies), are philosophically incompatible. Neither theory provides
useful predictions in the other's domain, and this has driven the search for
a "theory of everything", a single theory that can explain both. Physicists are
understandably uncomfortable with having two universal but incompatible theories; in some
ways each might seem to move the goal posts for the other, since both
purport to be universal. Another question is whether quantum
mechanics makes predictions that contradict general relativity and vice versa.
General relativity fails on small scales, where the uncertainty in a particle's
position becomes significant; as for quantum mechanics on large scales, the
Schrödinger's cat thought experiment highlights that we do not observe superposition
of large objects.
Physics may distinguish itself from other sciences in its universal outlook: it sets
itself in the most perilous position by creating theories of the widest scope
and testability, with the most opportunities for contradiction. Much of this stems
from Newton's thinking. Before he postulated his laws, he started from the
notion that the laws would govern every object in the universe, no matter how
big or small, without exception, and this is something we largely still
believe today. Such is the scope of physics that it interacts with or underpins
almost every other science. Relativity superseded Newton's laws because it could explain
everything that had been experimentally observed in support of Newton's laws as well as
the observations that contradicted them at the time, such as the constancy
of the speed of light in a vacuum regardless of the motion of the observer. It also made
many other predictions that could be tested in favour of relativity over
Newton's laws.
It is remarkable that so many scientific
theories seem quite elegant. There is an unwritten law of science that a theory
should make many testable predictions while the theory itself has limited
degrees of freedom; otherwise it could be made compatible with any observation.
The more elegant and simpler the theory, the more remarkable it is. You might think
that physics would be immune from this problem, as it provides huge scope for
observation and testability, but one of the leading candidate theories of everything,
string theory, has recently come under scrutiny because it currently produces no
new testable predictions, though this may change in the future. One
problem that has been pointed out with string theory is that it
postulates 10 dimensions, 6 of which have to be removed to bring
about our observable world [1]. However, the number of ways of removing
these dimensions is practically infinite, leading some to regard string
theory, appealing though it may be, as more of a philosophy than a
scientific theory. Some proponents even argue that its elegance, rather than
the empirical support for its postulates, should be sufficient to elevate it to
a scientific theory, which is contrary to the scientific process.
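The point about degrees of freedom can be illustrated numerically. A "theory" with as many adjustable parameters as there are observations can fit anything, even pure noise, so a perfect fit carries no evidential weight. A minimal sketch using polynomial curve-fitting as a stand-in for an over-flexible theory (the data are random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = rng.normal(size=10)              # pure noise: there is no law here to discover

rigid = np.polyfit(x, y, deg=1)      # 2 free parameters: cannot fit noise
flexible = np.polyfit(x, y, deg=9)   # 10 free parameters: fits the noise exactly

for name, coeffs in [("2-parameter line", rigid), ("10-parameter curve", flexible)]:
    worst = np.abs(np.polyval(coeffs, x) - y).max()
    print(f"{name}: worst residual = {worst:.1e}")
# The flexible "theory" can be tuned to match any data set after the fact,
# which is exactly the unfalsifiability Popper warned about.
```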
The
scientific process means being open to the evidence: any theory is held tentatively,
since new evidence may arise that contradicts it. Thanks to technological progress, we
are making observations now that we could not have made in the past, and the
science which underpinned these technologies may itself succumb to contrary
observations yielded by the technology that it produced. In this sense no
theory is final; each may be usurped at some time in the future. This does not mean
that superseded theories were not useful. If contrary observations accrue,
a scientific theory is not discarded instantly, because it needs to be replaced
by a superior theory; if none exists, we still need the
prevailing theory, as we still benefit from the observations that it predicts
and from its framework of thinking and philosophy. Such a theory is better than no
theory at all, even though we know it is imperfect, and we keep searching for a better
explanation, elusive as it may be. But if we simply ignore contrary evidence, redefine
the language, or move the goal posts (increasing the degrees of freedom), then
we are departing from science.
The Scientific Method: A Step-by-Step Process
1. Observation: identify a question or problem based on careful observation of a phenomenon.
2. Research: gather background information and existing knowledge related to the observed phenomenon.
3. Hypothesis: formulate a testable explanation or prediction based on observations and research.
4. Experimentation: design and conduct experiments to test the hypothesis.
5. Data Analysis: examine and interpret the collected data, often using statistical methods.
6. Conclusion: draw conclusions based on the analysis, and either accept or reject the hypothesis.
7. Communication: share results through peer-reviewed publications, presentations, or reports.
8. Replication: other scientists attempt to reproduce the results to validate the findings.
9. Refinement: based on new evidence or failed replications, refine or revise the hypothesis and repeat the process.
This iterative process ensures that scientific knowledge is self-correcting and continuously evolving. It is important to note that the scientific method is not always linear and may involve cycling back through steps as new information emerges; a toy sketch of this loop follows.
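Because the method is a loop rather than a straight line, it can be caricatured in code. The sketch below is purely illustrative: the "phenomenon" is noisy falling-body data and the "hypothesis" is a guessed value of the constant g. Every name and number in it is an assumption made for the example, not a real library or protocol:

```python
import random

def run_experiment(t):
    """Hypothetical measurement: distance fallen after t seconds, with noise."""
    true_g = 9.81                      # nature's hidden value
    return 0.5 * true_g * t**2 + random.gauss(0.0, 0.05)

hypothesis_g = 5.0                     # step 3: an initial (bad) hypothesis
for trial in range(200):               # steps 4-9, repeated
    t = random.uniform(1.0, 3.0)       # step 1: observe a new condition
    observed = run_experiment(t)       # step 4: experimentation
    predicted = 0.5 * hypothesis_g * t**2
    error = observed - predicted       # step 5: data analysis
    if abs(error) < 0.2:
        continue                       # step 6: the hypothesis survives this test
    # step 9: refinement -- nudge the hypothesis toward the evidence
    hypothesis_g += 0.1 * error / (0.5 * t**2)

print(f"Refined estimate of g: {hypothesis_g:.2f}")   # converges toward 9.81
```

Note that the loop never "proves" the hypothesis; it merely fails to falsify it, which is exactly the tentative status Popper assigns to scientific theories.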
References
[1] Why String Theory Is Not A Scientific Theory (forbes.com)