The Socratic Method and Futures Studies

All of the methodologies presented in the Future Studies section of this website are of paramount importance to futurists, particularly the Socratic method. As futurists, we have a particular gift: the ability to see what the world may look like under a variety of assumptions, variables, and probabilistic outcomes. It is natural for us to see the fluidity of reality and the inevitability of change. The future is something to be guided and shaped by our actions today, toward a more positive and deliberate tomorrow.

The gift of forward-thinking ability is bestowed neither widely nor equally. In fact, common sense suggests that most people are evolutionarily disposed to fear change. The tribesman who ate the odd-looking berry or departed down the road less traveled occasionally did meet his demise. Yet without that tribesman and his kind, humans would never have left the cave. The population is split between those who prefer the comfort of the past and those who prefer the possibilities of the future. Those of us in the minority must be prepared to engage, rationally, in a progressive discourse with those who fear change.



What is a Socratic seminar or discussion?

A Socratic Seminar is a method of understanding information by creating a dialectic around a specific source. In a Socratic Seminar, participants seek deeper understanding of complex ideas in the material through rigorously thoughtful dialogue, rather than by memorizing bits of information.

An Appropriate Discussion Source

Socratic Seminar sources are chosen for their richness in ideas, issues, and values and their ability to stimulate extended, thoughtful dialogue. A seminar text can be drawn from readings in literature, history, science, math, health, and philosophy, or from electronic media, works of art, or musical compositions. A good source raises important questions in the participants’ minds, questions for which there are no right or wrong answers. At the end of a successful Socratic Seminar, participants often leave with more questions than they brought with them.

The Question

A Socratic Seminar opens with a question either posed by the leader or solicited from participants as they acquire more experience in seminars. An opening question has no right answer; instead, it reflects a genuine curiosity on the part of the questioner. A good opening question leads participants back to the text as they speculate, evaluate, define, and clarify the issues involved. Responses to the opening question generate new questions from the leader and participants, leading to new responses. In this way, the line of inquiry in a Socratic Seminar evolves on the spot rather than being pre-determined by the leader.

The Leader

In a Socratic Seminar, the leader plays a dual role as leader and participant. The seminar leader consciously demonstrates habits of mind that lead to a thoughtful exploration of the ideas in the text by keeping the discussion focused on the text, asking follow-up questions, helping participants clarify their positions when arguments become confused, and involving reluctant participants while restraining their more vocal peers.

What does Socratic mean?

Socratic comes from the name Socrates. Socrates (ca. 470-399 B.C.) was a Classical Greek philosopher who developed a Theory of Knowledge.

What was Socrates’ Theory of Knowledge?

Socrates was convinced that the surest way to attain reliable knowledge was through the practice of disciplined conversation. He called this method dialectic.

What does dialectic mean?

di-a-lec-tic (noun) means the art or practice of examining opinions or ideas logically, often by the method of question and answer, so as to determine their validity.

What is the difference between dialogue and debate?

  • Dialogue is collaborative: multiple sides work toward shared understanding. Debate is oppositional: two opposing sides try to prove each other wrong.
  • In dialogue, one listens to understand, to make meaning, and to find common ground. In debate, one listens to find flaws, to spot differences, and to counter arguments.
  • Dialogue enlarges and possibly changes a participant’s point of view. Debate defends assumptions as truth.
  • Dialogue creates an open-minded attitude: an openness to being wrong and an openness to change. Debate creates a close-minded attitude, a determination to be right.
  • In dialogue, one submits one’s best thinking, expecting that other people’s reflections will help improve it rather than threaten it. In debate, one submits one’s best thinking and defends it against challenge to show that it is right.
  • Dialogue calls for temporarily suspending one’s beliefs. Debate calls for investing wholeheartedly in one’s beliefs.
  • In dialogue, one searches for strengths in all positions. In debate, one searches for weaknesses in the other position.
  • Dialogue respects all the other participants and seeks not to alienate or offend. Debate rebuts contrary positions and may belittle or deprecate other participants.
  • Dialogue assumes that many people have pieces of answers and that cooperation can lead to a greater understanding. Debate assumes a single right answer that somebody already has.
  • Dialogue remains open-ended. Debate demands a conclusion.

Dialogue is characterized by:

  1. suspending judgment
  2. examining our own work without defensiveness
  3. exposing our reasoning and looking for limits to it
  4. communicating our underlying assumptions
  5. exploring viewpoints more broadly and deeply
  6. being open to disconfirming data
  7. approaching someone who sees a problem differently not as an adversary, but as a colleague in common pursuit of better solution.

Sample questions that demonstrate constructive participation in Socratic Seminars.

  • Here is my view and how I arrived at it. How does it sound to you?
  • Do you see gaps in my reasoning?
  • Do you have different data?
  • Do you have different conclusions?
  • How did you arrive at your view?
  • Are you taking into account something different from what I have considered?

Information for this article came from the following sources:

> Murphy, J. (2000). Professional Development: Socratic Seminars. Regions 8 and 11 Professional Development Consortia, Los Angeles County Office of Education.

> Stumpf, S. E. (1999). Socrates to Sartre: A History of Philosophy, 6th ed. McGraw-Hill.

Future Studies and Critical Thinking

The intellectual roots of critical thinking are as ancient as its etymology, traceable, ultimately, to the teaching practice and vision of Socrates 2,500 years ago, who discovered by a method of probing questioning that people could not rationally justify their confident claims to knowledge. Confused meanings, inadequate evidence, or self-contradictory beliefs often lurked beneath smooth but largely empty rhetoric. Socrates established the fact that one cannot depend upon those in “authority” to have sound knowledge and insight. He demonstrated that persons may have power and high position and yet be deeply confused and irrational. He established the importance of asking deep questions that probe profoundly into thinking before we accept ideas as worthy of belief.

Socrates established the importance of seeking evidence, closely examining reasoning and assumptions, analyzing basic concepts, and tracing out implications not only of what is said but of what is done as well. His method of questioning is now known as “Socratic questioning” and is the best known critical thinking teaching strategy. In his mode of questioning, Socrates highlighted the need in thinking for clarity and logical consistency.

Socrates set the agenda for the tradition of critical thinking, namely, to reflectively question common beliefs and explanations, carefully distinguishing those beliefs that are reasonable and logical from those which, however appealing they may be to our native egocentrism, however much they serve our vested interests, however comfortable or comforting they may be, lack adequate evidence or rational foundation to warrant our belief.


Critical Thinking for the Future of Human Evolution

Socrates’ practice was followed by the critical thinking of Plato (who recorded Socrates’ thought), Aristotle, and the Greek skeptics, all of whom emphasized that things are often very different from what they appear to be and that only the trained mind is prepared to see through the way things look to us on the surface (delusive appearances) to the way they really are beneath the surface (the deeper realities of life). From this ancient Greek tradition emerged the need, for anyone who aspired to understand the deeper realities, to think systematically, to trace implications broadly and deeply, for only thinking that is comprehensive, well-reasoned, and responsive to objections can take us beyond the surface.

In the Middle Ages, the tradition of systematic critical thinking was embodied in the writings and teachings of thinkers such as Thomas Aquinas (Summa Theologica), who, to ensure his thinking met the test of critical thought, always systematically stated, considered, and answered all criticisms of his ideas as a necessary stage in developing them. Aquinas heightened our awareness not only of the potential power of reasoning but also of the need for reasoning to be systematically cultivated and “cross-examined.” Of course, Aquinas’ thinking also illustrates that those who think critically do not always reject established beliefs, only those beliefs that lack reasonable foundations.

In the Renaissance (15th and 16th Centuries), a flood of scholars in Europe began to think critically about religion, art, society, human nature, law, and freedom. They proceeded with the assumption that most of the domains of human life were in need of searching analysis and critique. Among these scholars were Colet, Erasmus, and More in England. They followed up on the insight of the ancients.

Francis Bacon, in England, was explicitly concerned with the way we misuse our minds in seeking knowledge. He recognized explicitly that the mind cannot safely be left to its natural tendencies. In his book The Advancement of Learning, he argued for the importance of studying the world empirically. He laid the foundation for modern science with his emphasis on the information-gathering processes. He also called attention to the fact that most people, if left to their own devices, develop bad habits of thought (which he called “idols”) that lead them to believe what is false or misleading. He called attention to “Idols of the tribe” (the ways our mind naturally tends to trick itself), “Idols of the market-place” (the ways we misuse words), “Idols of the theater” (our tendency to become trapped in conventional systems of thought), and “Idols of the schools” (the problems in thinking when based on blind rules and poor instruction). His book could be considered one of the earliest texts in critical thinking, for his agenda was very much the traditional agenda of critical thinking.

Some fifty years later, in France, Descartes wrote what might be called the second text in critical thinking, Rules for the Direction of the Mind. In it, Descartes argued for the need for a special systematic disciplining of the mind to guide it in thinking. He articulated and defended the need in thinking for clarity and precision. He developed a method of critical thought based on the principle of systematic doubt. He emphasized the need to base thinking on well-thought-through foundational assumptions. Every part of thinking, he argued, should be questioned, doubted, and tested.

In the same time period, Sir Thomas More developed a model of a new social order, Utopia, in which every domain of the present world was subject to critique. His implicit thesis was that established social systems are in need of radical analysis and critique. The critical thinking of these Renaissance and post-Renaissance scholars opened the way for the emergence of science and for the development of democracy, human rights, and freedom of thought.

In the Italian Renaissance, Machiavelli’s The Prince critically assessed the politics of the day and laid the foundation for modern critical political thought. He refused to assume that government functioned as those in power said it did. Rather, he critically analyzed how it did function and laid the foundation for political thinking that exposes both, on the one hand, the real agendas of politicians and, on the other hand, the many contradictions and inconsistencies of the hard, cruel world of the politics of his day.

Hobbes and Locke (in 16th and 17th Century England) displayed the same confidence in the critical mind of the thinker that we find in Machiavelli. Neither accepted the traditional picture of things dominant in the thinking of their day. Neither accepted as necessarily rational that which was considered “normal” in their culture. Both looked to the critical mind to open up new vistas of learning. Hobbes adopted a naturalistic view of the world in which everything was to be explained by evidence and reasoning. Locke defended a common-sense analysis of everyday life and thought. He laid the theoretical foundation for critical thinking about basic human rights and the responsibilities of all governments to submit to the reasoned criticism of thoughtful citizens.

It was in this spirit of intellectual freedom and critical thought that people such as Robert Boyle (in the 17th Century) and Sir Isaac Newton (in the 17th and 18th Century) did their work. In his Sceptical Chymist, Boyle severely criticized the chemical theory that had preceded him. Newton, in turn, developed a far-reaching framework of thought which roundly criticized the traditionally accepted world view. He extended the critical thought of such minds as Copernicus, Galileo, and Kepler. After Boyle and Newton, it was recognized by those who reflected seriously on the natural world that egocentric views of the world must be abandoned in favor of views based entirely on carefully gathered evidence and sound reasoning.

Another significant contribution to critical thinking was made by the thinkers of the French enlightenment: Bayle, Montesquieu, Voltaire, and Diderot. They all began with the premise that the human mind, when disciplined by reason, is better able to figure out the nature of the social and political world. What is more, for these thinkers, reason must turn inward upon itself, in order to determine weaknesses and strengths of thought. They valued disciplined intellectual exchange, in which all views had to be submitted to serious analysis and critique. They believed that all authority must submit in one way or another to the scrutiny of reasonable critical questioning.

Eighteenth Century thinkers extended our conception of critical thought even further, developing our sense of the power of critical thought and of its tools. Applied to the problem of economics, it produced Adam Smith’s Wealth of Nations. In the same year, applied to the traditional concept of loyalty to the king, it produced the Declaration of Independence. Applied to reason itself, it produced Kant’s Critique of Pure Reason.

In the 19th Century, critical thought was extended even further into the domain of human social life by Comte and Spencer. Applied to the problems of capitalism, it produced the searching social and economic critique of Karl Marx. Applied to the history of human culture and the basis of biological life, it led to Darwin’s Descent of Man. Applied to the unconscious mind, it is reflected in the works of Sigmund Freud. Applied to cultures, it led to the establishment of the field of Anthropological studies. Applied to language, it led to the field of Linguistics and to many deep probings of the functions of symbols and language in human life.

Pink Floyd’s “The Wall” illustrates the dangers of social indoctrination

In the 20th Century, our understanding of the power and nature of critical thinking has emerged in increasingly explicit formulations. In 1906, William Graham Sumner published a ground-breaking study of the foundations of sociology and anthropology, Folkways, in which he documented the tendency of the human mind to think sociocentrically and the parallel tendency for schools to serve the (uncritical) function of social indoctrination:

“Schools make persons all on one pattern, orthodoxy. School education, unless it is regulated by the best knowledge and good sense, will produce men and women who are all of one pattern, as if turned in a lathe…An orthodoxy is produced in regard to all the great doctrines of life. It consists of the most worn and commonplace opinions which are common in the masses. The popular opinions always contain broad fallacies, half-truths, and glib generalizations (p. 630).”

At the same time, Sumner recognized the deep need for critical thinking in life and in education:

“Criticism is the examination and test of propositions of any kind which are offered for acceptance, in order to find out whether they correspond to reality or not. The critical faculty is a product of education and training. It is a mental habit and power. It is a prime condition of human welfare that men and women should be trained in it. It is our only guarantee against delusion, deception, superstition, and misapprehension of ourselves and our earthly circumstances. Education is good just so far as it produces well-developed critical faculty. …A teacher of any subject who insists on accuracy and a rational control of all processes and methods, and who holds everything open to unlimited verification and revision is cultivating that method as a habit in the pupils. Men educated in it cannot be stampeded…They are slow to believe. They can hold things as possible or probable in all degrees, without certainty and without pain. They can wait for evidence and weigh evidence…They can resist appeals to their dearest prejudices…Education in the critical faculty is the only education of which it can be truly said that it makes good citizens (pp. 632, 633).”

John Dewey agreed. From his work, we have increased our sense of the pragmatic basis of human thought (its instrumental nature), and especially its grounding in actual human purposes, goals, and objectives. From the work of Ludwig Wittgenstein we have increased our awareness not only of the importance of concepts in human thought, but also of the need to analyze concepts and assess their power and limitations. From the work of Piaget, we have increased our awareness of the egocentric and sociocentric tendencies of human thought and of the special need to develop critical thought which is able to reason within multiple standpoints, and to be raised to the level of “conscious realization.” From the massive contribution of all the “hard” sciences, we have learned the power of information and the importance of gathering information with great care and precision, and with sensitivity to its potential inaccuracy, distortion, or misuse. From the contribution of depth-psychology, we have learned how easily the human mind is self-deceived, how easily it unconsciously constructs illusions and delusions, how easily it rationalizes and stereotypes, projects and scapegoats.

To sum up, the tools and resources of the critical thinker have been vastly increased in virtue of the history of critical thought. Hundreds of thinkers have contributed to its development. Each major discipline has made some contribution to critical thought. Yet for most educational purposes, it is the summing up of base-line common denominators for critical thinking that is most important. Let us consider now that summation.

The Common Denominators of Critical Thinking Are the Most Important By-products of the History of Critical Thinking

We now recognize that critical thinking, by its very nature, requires the systematic monitoring of thought; that thinking, to be critical, must not be accepted at face value but must be analyzed and assessed for its clarity, accuracy, relevance, depth, breadth, and logicalness. We now recognize that all reasoning occurs within points of view and frames of reference; that all reasoning proceeds from some goals and objectives and has an informational base; that all data, when used in reasoning, must be interpreted; that interpretation involves concepts; that concepts entail assumptions; and that all basic inferences in thought have implications. We now recognize that each of these dimensions of thinking needs to be monitored and that problems of thinking can occur in any of them.

The result of the collective contribution of the history of critical thought is that the basic questions of Socrates can now be much more powerfully and focally framed and used. In every domain of human thought, and within every use of reasoning within any domain, it is now possible to question:

  • ends and objectives,
  • the status and wording of questions,
  • the sources of information and fact,
  • the method and quality of information collection,
  • the mode of judgment and reasoning used,
  • the concepts that make that reasoning possible,
  • the assumptions that underlie concepts in use,
  • the implications that follow from their use, and
  • the point of view or frame of reference within which reasoning takes place.

In other words, questioning that focuses on these fundamentals of thought and reasoning is now baseline in critical thinking. It is beyond question that intellectual errors or mistakes can occur in any of these dimensions, and that students need to be fluent in talking about these structures and standards.

Independent of the subject studied, students need to be able to articulate thinking about thinking that reflects basic command of the intellectual dimensions of thought: “Let’s see, what is the most fundamental issue here? From what point of view should I approach this problem? Does it make sense for me to assume this? From these data may I infer this? What is implied in this graph? What is the fundamental concept here? Is this consistent with that? What makes this question complex? How could I check the accuracy of these data? If this is so, what else is implied? Is this a credible source of information?” and so on.

With intellectual language such as this in the foreground, students can now be taught at least minimal critical thinking moves within any subject field. What is more, there is no reason in principle that students cannot take the basic tools of critical thought which they learn in one domain of study and extend them (with appropriate adjustments) to all the other domains and subjects which they study. For example, having questioned the wording of a problem in math, I am more likely to question the wording of a problem in the other subjects I study.

Because students can learn these generalizable critical thinking moves, they need not be taught history simply as a body of facts to memorize; they can now be taught history as historical reasoning. Classes can be designed so that students learn to think historically and develop skills and abilities essential to historical thought. Math can be taught so that the emphasis is on mathematical reasoning. Students can learn to think geographically, economically, biologically, and chemically in courses within these disciplines. In principle, then, all students can be taught so that they learn how to bring the basic tools of disciplined reasoning into every subject they study. Futuring encompasses nearly all disciplines.


Richard Paul, Linda Elder, and Ted Bartell (1997). California Teacher Preparation for Instruction in Critical Thinking: Research Findings and Policy Recommendations. California Commission on Teacher Credentialing, Sacramento, CA.

The Scientific Method

Scientific Method Diagram

In pursuit of a more deliberate and positive future, a number of methods, procedures, and disciplines are needed to study, formulate, and enact positive change. One of the most important of these is the scientific method. While this method is the best (and only reliable) method for proving the laws that govern physical reality and for confirming our assumptions about the universe around us, it is not without its limitations. Generally, these limitations stem from our inability to accurately measure what we suspect may exist (five or more dimensions and the basic constituents of matter, to name but two), for lack of instrument sophistication.

As a result of these limitations, there are many fundamental questions that science cannot yet answer. We have little doubt that as our collective knowledge increases, as artificial intelligences gain thousands of times our current human processing power, and as instrumentation develops at the molecular level (nanotechnology), many, if not all, of the secrets of the universe may be revealed.

Until that time, we are all well advised to keep an open mind with regard to the nature of the universe and its creation. But there is no need to prevent the unfolding of our future.

Contents on this Page

Introduction to the Scientific Method

I.  The scientific method has four steps
II. Testing hypotheses
III. Common Mistakes in Applying the Scientific Method
IV. Hypotheses, Models, Theories and Laws
V.  Are there circumstances in which the Scientific Method is not applicable?
VI. Conclusion
VII. References

Introduction to the Scientific Method

The scientific method is the process by which scientists, collectively
and over time, endeavor to construct an accurate (that is, reliable, consistent
and non-arbitrary) representation of the world.

Recognizing that personal and cultural beliefs influence both our
perceptions and our interpretations of natural phenomena, we aim through the
use of standard procedures and criteria to minimize those influences when
developing a theory. As a famous scientist once said, “Smart people (like smart
lawyers) can come up with very good explanations for mistaken points of view.”
In summary, the scientific method attempts to minimize the influence of bias or
prejudice in the experimenter when testing an hypothesis or a theory.

I. The scientific method has four steps

1. Observation and description of a phenomenon or group of phenomena.

2. Formulation of an hypothesis to explain the phenomena. In physics, the
hypothesis often takes the form of a causal mechanism or a mathematical
relation.

3. Use of the hypothesis to predict the existence of other phenomena, or to
predict quantitatively the results of new observations.

4. Performance of experimental tests of the predictions by several independent
experimenters and properly performed experiments.

If the experiments bear out the hypothesis it may come to be regarded as a
theory or law of nature (more on the concepts of hypothesis, model, theory and
law below). If the experiments do not bear out the hypothesis, it must be
rejected or modified. What is key in the description of the scientific method
just given is the predictive power (the ability to get more out of the theory
than you put in; see Barrow, 1991) of the hypothesis or theory, as tested by
experiment. It is often said in science that theories can never be proved, only
disproved. There is always the possibility that a new observation or a new
experiment will conflict with a long-standing theory.
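The four steps above can be sketched as a toy simulation. In the Python sketch below, the "law of nature," the noise level, and all of the names are invented purely for illustration; the point is only the shape of the loop: observe, hypothesize, predict, test.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Toy "law of nature": free-fall distance, d = 4.9 * t**2 (SI units).
def run_experiment(t, noise=0.05):
    """Step 1: observe and describe a phenomenon (with measurement noise)."""
    return 4.9 * t**2 + random.gauss(0.0, noise)

# Step 2: formulate an hypothesis -- here, that distance grows as k * t**2.
def hypothesis(t, k):
    return k * t**2

# Estimate the constant k from an initial set of observations.
times = [1.0, 2.0, 3.0]
k_est = sum(run_experiment(t) / t**2 for t in times) / len(times)

# Step 3: use the hypothesis to predict the result of a NEW observation.
t_new = 4.0
predicted = hypothesis(t_new, k_est)

# Step 4: test the prediction with repeated, independent experiments.
trials = [run_experiment(t_new) for _ in range(20)]
mean_observed = sum(trials) / len(trials)
```

If the mean of the independent trials agrees with the prediction to within the experimental error, the hypothesis survives; if not, it must be rejected or modified, exactly as described above.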

II. Testing hypotheses

As just stated, experimental tests may lead either to the confirmation
of the hypothesis, or to the ruling out of the hypothesis. The scientific
method requires that an hypothesis be ruled out or modified if its predictions
are clearly and repeatedly incompatible with experimental tests. Further, no
matter how elegant a theory is, its predictions must agree with experimental
results if we are to believe that it is a valid description of nature. In
physics, as in every experimental science, “experiment is supreme” and
experimental verification of hypothetical predictions is absolutely necessary.
Experiments may test the theory directly (for example, the observation of a new
particle) or may test for consequences derived from the theory using
mathematics and logic (the rate of a radioactive decay process requiring the
existence of the new particle). Note that the necessity of experiment also
implies that a theory must be testable. Theories which cannot be tested,
because, for instance, they have no observable ramifications (such as, a
particle whose characteristics make it unobservable), do not qualify as
scientific theories.

If the predictions of a long-standing theory are found to be in disagreement
with new experimental results, the theory may be discarded as a description of
reality, but it may continue to be applicable within a limited range of
measurable parameters. For example, the laws of classical mechanics (Newton’s
Laws) are valid only when the velocities of interest are much smaller than the
speed of light (that is, in algebraic form, when v/c << 1). Since this is
the domain of a large portion of human experience, the laws of classical
mechanics are widely, usefully and correctly applied in a large range of
technological and scientific problems. Yet in nature we observe a domain in
which v/c is not small. The motions of objects in this domain, as well as
motion in the “classical” domain, are accurately described through the
equations of Einstein’s theory of relativity. We believe, due to experimental
tests, that relativistic theory provides a more general, and therefore more
accurate, description of the principles governing our universe, than the
earlier “classical” theory. Further, we find that the relativistic equations
reduce to the classical equations in the limit v/c << 1. Similarly,
classical physics is valid only at distances much larger than atomic scales (x
>> 10^-8 m). A description which is valid at all length scales
is given by the equations of quantum mechanics.
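The claim that the relativistic equations reduce to the classical ones in the limit v/c << 1 is easy to verify numerically. The short Python sketch below computes the Lorentz factor, gamma = 1/sqrt(1 - (v/c)^2), which governs relativistic corrections; the speeds chosen are illustrative:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - (v/c)^2); approaches 1 in the classical limit."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Everyday speed: a jet airliner at ~250 m/s. Here gamma differs from 1
# by less than one part in a trillion, so classical mechanics applies.
gamma_jet = lorentz_factor(250.0)

# Relativistic speed: 90% of the speed of light. Here gamma is about 2.29,
# and classical mechanics fails badly.
gamma_fast = lorentz_factor(0.9 * C)
```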

We are all familiar with theories which had to be discarded in the face of
experimental evidence. In the field of astronomy, the earth-centered
description of the planetary orbits was overthrown by the Copernican system, in
which the sun was placed at the center of a series of concentric, circular
planetary orbits. Later, this theory was modified, as measurements of the
planets’ motions were found to be compatible with elliptical, not circular,
orbits, and still later planetary motion was found to be derivable from
Newton’s laws.

Errors in experiments have several sources. First, there is error intrinsic to
instruments of measurement. Because this type of error has equal probability of
producing a measurement higher or lower numerically than the “true” value, it
is called random error. Second, there is non-random or systematic error, due to
factors which bias the result in one direction. No measurement, and therefore
no experiment, can be perfectly precise. At the same time, in science we have
standard ways of estimating and in some cases reducing errors. Thus it is
important to determine the accuracy of a particular measurement and, when
stating quantitative results, to quote the measurement error. A measurement
without a quoted error is meaningless. The comparison between experiment and
theory is made within the context of experimental errors. Scientists ask, how
many standard deviations are the results from the theoretical prediction? Have
all sources of systematic and random errors been properly estimated?
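As an illustration of quoting a result with its error, the sketch below uses a hypothetical set of repeated measurements (the numbers are invented for illustration, not drawn from the text) to compute a mean, the standard error of that mean, and the number of standard deviations separating the result from a theoretical prediction:

```python
import math

# Hypothetical repeated measurements of some quantity (arbitrary units).
measurements = [9.9, 10.1, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]
theory = 10.1  # hypothetical theoretical prediction

n = len(measurements)
mean = sum(measurements) / n

# Sample standard deviation: the random error of a single measurement.
variance = sum((x - mean) ** 2 for x in measurements) / (n - 1)
std_dev = math.sqrt(variance)

# Standard error of the mean: repeating a measurement reduces random error.
std_err = std_dev / math.sqrt(n)

# How many standard deviations is the result from the prediction?
n_sigma = abs(mean - theory) / std_err

print(f"result = {mean:.2f} +/- {std_err:.2f}")
print(f"deviation from theory: {n_sigma:.1f} standard deviations")
```

Note that this accounts only for random error; a systematic error (say, a miscalibrated instrument shifting every reading) would not show up in the spread of the data and must be estimated separately.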

III. Common Mistakes in Applying the Scientific Method

As stated earlier, the scientific method attempts to minimize the
influence of the scientist’s bias on the outcome of an experiment. That is,
when testing an hypothesis or a theory, the scientist may have a preference for
one outcome or another, and it is important that this preference not bias the
results or their interpretation. The most fundamental error is to mistake the
hypothesis for an explanation of a phenomenon, without performing experimental
tests. Sometimes “common sense” and “logic” tempt us into believing that no
test is needed. There are numerous examples of this, dating from the Greek
philosophers to the present day.

Another common mistake is to ignore or rule out data which do not support the
hypothesis. Ideally, the experimenter is open to the possibility that the
hypothesis is correct or incorrect. Sometimes, however, a scientist may have a
strong belief that the hypothesis is true (or false), or feels internal or
external pressure to get a specific result. In that case, there may be a
psychological tendency to find “something wrong”, such as systematic effects,
with data which do not support the scientist’s expectations, while data which
do agree with those expectations may not be checked as carefully. The lesson is
that all data must be handled in the same way.

Another common mistake arises from the failure to quantitatively estimate
systematic errors (and, indeed, all errors). There are many
examples of discoveries which were missed by experimenters whose data contained
a new phenomenon, but who explained it away as a systematic background.
Conversely, there are many examples of alleged “new discoveries” which later
proved to be due to systematic errors not accounted for by the “discoverers.”

In a field where there is active experimentation and open communication
among members of the scientific community, the biases of individuals or groups
may cancel out, because experimental tests are repeated by different scientists
who may have different biases. In addition, different types of experimental
setups have different sources of systematic errors. Over a period spanning a
variety of experimental tests (usually at least several years), a consensus
develops in the community as to which experimental results have stood the test
of time.

IV. Hypotheses, Models, Theories and Laws

In physics and other science disciplines, the words “hypothesis,”
“model,” “theory” and “law” have different connotations in relation to the
stage of acceptance or knowledge about a group of phenomena.

An hypothesis is a limited statement regarding cause and effect in
specific situations; it also refers to our state of knowledge before
experimental work has been performed and perhaps even before new phenomena have
been predicted. To take an example from daily life, suppose you discover that
your car will not start. You may say, “My car does not start because the
battery is low.” This is your first hypothesis. You may then check whether the
lights were left on, or if the engine makes a particular sound when you turn
the ignition key. You might actually check the voltage across the terminals of
the battery. If you discover that the battery is not low, you might attempt
another hypothesis (“The starter is broken”; “This is really not my car.”)

The word model is reserved for situations when it is known that the
hypothesis has at least limited validity. An often-cited example of this is the
Bohr model of the atom, in which, in an analogy to the solar system, the
electrons are described as moving in circular orbits around the nucleus. This
is not an accurate depiction of what an atom “looks like,” but the model
succeeds in mathematically representing the energies (but not the correct
angular momenta) of the quantum states of the electron in the simplest case,
the hydrogen atom. Another example is Hooke’s Law (which should be called Hooke’s
principle, or Hooke’s model), which states that the force exerted by a mass
attached to a spring is proportional to the amount the spring is stretched. We
know that this principle is only valid for small amounts of stretching. The
“law” fails when the spring is stretched beyond its elastic limit (it can
break). This principle, however, leads to the prediction of simple harmonic
motion, and, as a model of the behavior of a spring, has been versatile
in an extremely broad range of applications.
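The prediction of simple harmonic motion mentioned above follows directly from the model. Writing Newton’s second law for a mass m on a spring of stiffness k (within the elastic limit):

```latex
% Hooke's model combined with Newton's second law:
m\frac{d^2x}{dt^2} = -kx
% has the general solution
x(t) = A\cos(\omega t + \phi), \qquad \omega = \sqrt{k/m}
```

The oscillation frequency depends on k and m but not on the amplitude A: a quantitative prediction that can be tested experimentally, and one that fails once the spring is stretched beyond its elastic limit.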

A scientific theory or law represents an hypothesis, or a group of
related hypotheses, which has been confirmed through repeated experimental
tests. Theories in physics are often formulated in terms of a few concepts and
equations, which are identified with “laws of nature,” suggesting their
universal applicability. Accepted scientific theories and laws become part of
our understanding of the universe and the basis for exploring less
well-understood areas of knowledge. Theories are not easily discarded; new
discoveries are first assumed to fit into the existing theoretical framework.
It is only when, after repeated experimental tests, the new phenomenon cannot
be accommodated that scientists seriously question the theory and attempt to
modify it. The validity that we attach to scientific theories as representing
realities of the physical world is to be contrasted with the facile
invalidation implied by the expression, “It’s only a theory.” For example, it
is unlikely that a person will step off a tall building on the assumption that
they will not fall, because “Gravity is only a theory.”

Changes in scientific thought and theories occur, of course, sometimes
revolutionizing our view of the world (Kuhn, 1962). Again, the key force for
change is the scientific method, and its emphasis on experiment.

V. Are there circumstances in which the Scientific Method is not applicable?

While the scientific method is necessary in developing scientific
knowledge, it is also useful in everyday problem-solving. What do you do when
your telephone doesn’t work? Is the problem in the hand set, the cabling inside
your house, the hookup outside, or in the workings of the phone company? The
process you might go through to solve this problem could involve scientific
thinking, and the results might contradict your initial expectations.

Like any good scientist, you may question the range of situations (outside of
science) in which the scientific method may be applied. From what has been
stated above, we determine that the scientific method works best in situations
where one can isolate the phenomenon of interest, by eliminating or accounting
for extraneous factors, and where one can repeatedly test the system under
study after making limited, controlled changes in it.

There are, of course, circumstances when one cannot isolate the phenomenon or
when one cannot repeat the measurement over and over again. In such cases the
results may depend in part on the history of a situation. This often occurs in
social interactions between people. For example, when a lawyer makes arguments
in front of a jury in court, she or he cannot try other approaches by repeating
the trial over and over again in front of the same jury. In a new trial, the
jury composition will be different. Even the same jury hearing a new set of
arguments cannot be expected to forget what they heard before.

VI. Conclusion

The scientific method is intricately associated with science, the
process of human inquiry that pervades the modern era on many levels. While the
method appears simple and logical in description, there is perhaps no more
complex question than that of knowing how we come to know things. In this
introduction, we have emphasized that the scientific method distinguishes
science from other forms of explanation because of its requirement of
systematic experimentation. We have also tried to point out some of the
criteria and practices developed by scientists to reduce the influence of
individual or social bias on scientific findings. Further investigations of the
scientific method and other aspects of scientific practice may be found in the
references listed below.

VII. References

1. Wilson, E. Bright. An Introduction to Scientific Research
(McGraw-Hill, 1952).

2. Kuhn, Thomas. The Structure of Scientific Revolutions (Univ. of Chicago Press, 1962).

3. Barrow, John. Theories of Everything (Oxford Univ. Press, 1991).

The Future Human Evolution Website Education Methodology

As stated on our home page, our primary purpose is to stimulate your thinking about humanity’s collective future. Our objective is to do this following a particular methodology and for a particular purpose. In addition to the pure enjoyment we hope you receive from learning about the diverse topics we cover, it is our sincere desire that you should walk away from your visit more aware of the choices we as a society will soon be called upon to make, and the potential implications of those choices.

If you are new to the discipline of “Futuring”, we highly recommend you visit our brief introduction to Future Studies.

Foundational Principles

Speculation on the future is just that: speculation. As we enter an era of uncharted and unprecedented changes in technology, society, the mind, the body, and the law, we must look to the past to help guide our future. By understanding the concepts and various theories on freedom, rights, human dignity, the nature of the universe, possible reasons for existence, and the full range of human activity and imagination, we can begin to identify underlying themes that we can choose to apply to our malleable future.

News and Current Events

So we begin with principles and theories as the foundation, or road if you will, for our journey into the future. What else is needed to help us understand the possible consequences of our choices in applying science and technology to change ourselves? A look at the present provides a mileage marker for where we are now, allowing us to more accurately begin the triangulation process for charting our potential destination(s). By understanding the current applications of technology, the laws governing them, social attitudes, and today’s areas of research, we can more accurately anticipate the consequences (good and bad) and shape social and technical policy toward a more deliberate future.


Speculation

Speculation forms the third point of our triangulation of our future position. Our methods of speculation range from likely scenarios and possible outcomes to highly unlikely events. We use short excerpts and long essays. The purpose is to demonstrate the innumerable ways science and technology might be used, and the possible consequences of those decisions and applications, so that we may better understand our options for guiding our current actions in pursuit of a more deliberate future.

Understanding and Conclusions

After the triangulation of the past, the present, and the future, it is our sincere desire that you should walk away from your visit to our site more aware of the decisions we as a society will soon be called upon to make, and the potential implications of those choices.

The conclusions you draw will be your own. The actions you decide to take will be your own. We only hope that you participate in guiding a more deliberate future for all of us.


The Bell Curve of Possible Futures

This material was inspired by Dr. Peter Bishop’s Introduction to Future Studies at the 2005 WFS Conference; the principles remain relevant today.

Possible Futures: Probable, Plausible, and Highly Unlikely

[Image: Bell Curve of Possible Futures]

The Job of a Futurist and the Goals of this Site

The job of any futurist is to consider the probable, the plausible and the very unlikely in order to challenge our assumptions. There are some futurists who also engage in presenting preferable futures.

The bell curve to the right depicts the range of possibilities. At the apex of the curve we have the most likely future based on current knowledge and expectations. At the tails we have the highly unlikely. The actual future falls somewhere in between.

Then why delve into the highly unlikely if it will not occur? Because to get to the highly unlikely, you have to take into consideration some “far out” assumptions. By doing so, the futurist challenges in-the-box thinking, helping to eliminate surprise.

The Cone of Plausible Futures

[Image: Cone of Plausible Futures]
The image to the left is another view of possible futures, focusing on the plausible. Dr. Bishop uses this image to illustrate:

1. The present is made up of a wide range of converging events.

2. The future has multiple possibilities based on the actions of many.

3. Plausible futures are more narrowly defined than possible futures, but more broadly than probable futures.

4. The most likely future, the straight extrapolation, never happens. There is always an element of unpredictability.

Probable vs. Plausible Futures

[Table: Probable and Plausible Futures]
The table to the right lists several distinctions between futures that are probable (more likely) and plausible (possible but less likely).


The Goal of this Site

So, how does all of this relate to the goal of this site? In accordance with the job of a futurist, we present factual information regarding the probable, we forecast events that are plausible, and we present speculation that is very unlikely in order to challenge your assumptions about the possible. When is a futurist successful? According to Dr. Bishop, “Success means no surprises.” If we challenge your assumptions that life will be the same 20 years from now as it is today, we have been successful. If we motivate you to take positive action toward your preferable future, well, that’s icing on the cake!
[Image: Range of Possible Futures]
What does your preferable future look like?

Please visit our Futures Portal for more on future scenarios and futuring methods.