Wednesday, February 07, 2007

The Scientific Method, part 3

In the previous essays, I explored why and how the scientific method was developed, and how the method addresses the procedural epistemic problems that various philosophers have identified.

The scientific method is a procedure for explaining some set of facts: statements directly verifiable as true. These facts form the foundation of the scientific method. In white-coats-and-expensive-equipment science, these facts are repeatable, verifiable statements about shared perceptions, but the scientific method can apply to any sort of verifiable facts, such as the construction of an individual's view of reality from her private subjective experience.

In the scientific method, we make up simple premises, deduce from those premises theorems which correspond to specific facts, and check the truth of those theorems against our assent to the facts. If they match, good. If they don't match, we change something and try again.
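
To make the procedure concrete, here is a toy sketch of that loop in Python. It is purely illustrative: the function names and the representation of facts as a simple list are assumptions of mine, not a claim about how real science is organized.

    # A toy sketch of the guess-deduce-test loop: propose premises, deduce
    # predictions, keep the theory if the predictions match the facts, and
    # otherwise change something and try again.

    def scientific_method(propose_premises, deduce_predictions, facts, max_tries=1000):
        for _ in range(max_tries):
            premises = propose_premises()               # make up simple premises
            predictions = deduce_predictions(premises)  # deduce theorems from them
            if predictions == facts:                    # do the theorems match the facts?
                return premises                         # if they match, good: keep the theory
            # if they don't match, change something (a new proposal) and try again
        return None                                     # no adequate theory found (yet)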

The scientific method can even operate without any scientists, so long as there's some physical process that eliminates theories which are incompatible with the facts. Evolution, for instance, is a scientific process: variations (mutations) of theoretic constructs (genomes, which entail organisms) either do or do not match the facts (either reproduce or fail to reproduce) and are thus retained or eliminated.

Everything else that scientists do consists of particular techniques for operating the scientific method efficiently. When a single experiment can cost $1,000,000 (or even $1,000), it pays to be very careful and efficient, to use controls, etc. But when "experiments" are performed a thousand times a day by billions of ordinary people (such as those experiences which underlie our rough idea of gravity or causation), we can afford to be less precise and rigorous.

Now I'd like to address some of the objections raised by my readers.

Timmo writes:
Quine has argued that no individual sentence has a distinctive verification or falsification condition, except relative to a mass of background theory against which observation takes place. All experimental observations are theory-laden. If we were studying an astronomical phenomenon, we would be working with observations made with telescopes (of one sort or another). However, in using such telescopes (and interpreting what we see through them) we assume an understanding, if only an intuitive one, of optics.


Timmo gets one detail wrong: when astronomers use telescopes, they assume not an intuitive but a very rigorous and scientific understanding of optics. But this scientific understanding is still theory-laden.

Theory-loading is a decisive objection to Logical Positivism's notion of Empiricism (which I will refer to from now on as Axiomatic Foundationalism[1]). If we are going to deduce theoretical knowledge from statements of experience, those statements--like mathematical axioms--have to be very simple and unequivocal. However, the meanings of our statements of experience are complicated and equivocal; just posing a question requires all sorts of assumptions and premises which are not justified by experience. Quine shows that Axiomatic Foundationalist Empiricism just can't get off the ground.

Quine's objection is important to Popper's notion of falsificationism (which I'll refer to from now on as Evidentiary Foundationalism[1]). It destroys Popper's First Mistake[2], the claim that it is necessary for individual hypotheses to be falsifiable. It would really be terrific if individual statements were falsifiable: We could then go through our theories line by line, and not only decide on the meaning of each statement individually, but also be able to test (attempt to falsify) each statement individually. If we could conduct science so efficiently, we'd have all the secrets of the universe nailed down by teatime Thursday. Sadly, Quine demolishes any notion of such marvelous efficiency.

But Quine's objection is not nearly so decisive or destructive for Evidentiary Foundationalism. The premises of Evidentiary Foundationalism are explicitly made up; they have no special epistemic status. Quine casts doubt on premises that are already assumed to be doubtful. What matters to Evidentiary Foundationalism is not that the theorems be free of theory-loading, but that the theory-loading be precise and rigorous. And the theory-loading can be rigorous: since we are free to choose our premises arbitrarily, we can choose premises with the simple character of mathematical axioms, from which we can construct rigorous deductions.

Similarly, Evidentiary Foundationalism does not depend on a precise understanding of the meaning of our natural language statements of perception. All it needs to do is predict assent or dissent to those statements. "Yes" and "no" are not theory-laden; even Quine allows us this much.

Timmo also notes:
Falsification can only tell you what not to believe, but not what is true. Even a well-tested theory that has not yet been falsified may turn out to be wrong. It leaves the question: what makes it justified for you to believe it?


We can never absolutely confirm a theory. But we can measure with good precision how well a theory fits the evidence that has been evaluated. And we can even, to some extent, measure how well a theory has actually been tested.

This first measurement is not so complicated[3]. We look at those theorems in a theory which correspond to statements of experience, and we compare the range of values the theorem permits with the range of logically possible values we might experience. For instance, if a theory entails "yes" where the logically possible experiences are "yes" and "no", then the theory is 50% permissive. If the theory predicts, "the voltmeter will read 16 V +/- 1 V," and the voltmeter can logically read anywhere from 0 to 100 volts, then the theory permits 3 possible readings to the nearest volt (15, 16, or 17) out of roughly 100, and is therefore about 3% permissive. Again, we are not worried about the details of how the theorems of the theory match up to statements of experience; we are simply concerned with being able to correlate the responses to the predictions.
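
As a minimal sketch (assuming outcomes can be counted as discrete readings, which is a simplification of mine), the arithmetic is just a ratio:

    # Permissivity: the fraction of logically possible outcomes the theory permits.
    # Counting voltmeter readings to the nearest volt is an illustrative assumption.

    def permissivity(permitted_outcomes, possible_outcomes):
        return permitted_outcomes / possible_outcomes

    print(permissivity(1, 2))    # a bare "yes" out of {yes, no}: 0.5, i.e. 50% permissive
    print(permissivity(3, 100))  # 15, 16, or 17 V out of roughly 100 readings: 0.03, i.e. 3% permissive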

We then perform a number of experiments. Every time our theory agrees with experiment, we multiply the running product by the theory's permissivity. Ten straight "yes" answers yield 0.5^10 ≈ 0.001; ten straight 15-17 V readings yield 0.03^10 ≈ 0.000000000000001. Subtract this number from 1.0 and you get how well the theory fits those ten facts.[4]
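
A minimal sketch of this (over)simplified bookkeeping, with the function name being mine:

    # Fit: 1 minus the theory's permissivity raised to the number of agreeing
    # observations (skipping the real statistics, as footnote 4 admits).

    def fit(permissivity, n_observations):
        return 1.0 - permissivity ** n_observations

    print(fit(0.5, 10))   # ten straight "yes" answers: about 0.999
    print(fit(0.03, 10))  # ten straight 15-17 V readings: about 0.999999999999999
    print(fit(1.0, 10))   # a theory that permits every outcome never fits, however many observations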

Note the key part that falsificationism plays in figuring out how well a theory positively fits the facts: A theory that predicts "yes or no", or one that predicts "50 V +/- 50 V" (i.e. an unfalsifiable theory), has a permissivity of 100%. No matter how many experiments we perform, 1 - 1^n = 0: The theory fails to fit the facts in just the same sense that, while I might don William Howard Taft's overcoat, it could not justly be said that it "fit" me--or that I fit it.

As to what such a measure of "fitness" says about the truth of a theory, we'll have to venture into the murky swamps of metaphysics, which will be the subject of a future essay--an essay which will also address Timmo's other, more metaphysical concerns, as well as the query about "parsimony".


[1] Taken literally, "Axiomatic Foundationalism" is redundant. Both terms literally mean, "accepted a priori as true." I want to differentiate the direction of deduction. Axiomatic Foundationalism means treating the foundation (the a priori true statements) like mathematical axioms, such as the axioms (premises) of Peano's Arithmetic in part 2. Evidentiary Foundationalism, on the other hand, means treating the foundation like derived theorems, and hypothesizing the axioms from which we could deduce those theorems.

[2] I apologize in advance if I'm unfairly maligning Popper. I'm a lousy, lazy, and utterly slack scholar. Still, I've seen enough people, far better scholars than me, consider individual falsification crucial to Popper's epistemology that I feel justified in attributing the mistake to Popper himself.

[3] The second measurement, power analysis, measures how "thoroughly" a theory has been tested.

[4] I'm vastly oversimplifying here, skipping over volumes of statistical theory. I offer what I think is (given a suitably charitable interpretation) the essential "flavor" without any gross inaccuracy.

2 comments:

  1. Barefoot Bum,

    My comments are not terribly prompt, but I hope they won't be overlooked on that account!

    First, you seem to be offering a theory about how certain we can be in an experimental result. You write:

    Ten straight "yes" answers yields 0.5^10 = 0.001; ten straight 15-17 V readings yields .03^10 = 0.000000000000001. Subtract this number from 1.0 and you get how well the theory fits those ten facts.

    Though you write that you are oversimplifying matters, I should say that this isn't even correct so far as it goes. Many experiments are performed (such as difficult ones in particle accelerators) only a few times, but these results are considered very certain. How confident one ought to be in an experimental result is not simply, or primarily, a function of how many times the experiment has been performed.

    Also, I should like to dispute your interpretation of Quine on "natural language statements of perception". These are not the cut-and-dry statements that you seem to suggest they are. Hypotheses are sometimes completely refuted by experience, but this is only possible because the scientists involved are holding other assumptions fixed, assumptions which might be contestable.

    Quine also disputes the notion that the meanings of scientific hypotheses -- or any other proposition -- really are fixed. He imagines a field linguist investigating an unknown native language and attempting to prepare a native-to-English translation manual. The evidence available to the linguist (the behavior of the natives) under-determines what the appropriate translation is. Many different rival manuals can be constructed from the behavior of the natives. Quine thinks this underdetermination is really very radical: even given all of the physical facts of the universe, there is no way to settle questions about the correct translation. As a result of this indeterminacy of translation, Quine is skeptical about the existence of meanings.

    This includes even reports of sensory experience. When we discuss a sensory experience, we are not simply describing what is 'given' to us, but instead couching it in terms of a conceptual scheme whose character is under-determined by experience. Indeed, in "Two Dogmas of Empiricism", Quine describes our talk of physical objects as a convenient myth for organizing our thoughts about sensory experiences. Such a myth is only valuable so long as it is pragmatic for us to maintain it.

    For Quine, every belief, even the simplest, most everyday one, is part of a great web of belief. No place in the web has special normative privilege over the others: any belief may be maintained in the face of new experience given suitable adjustments throughout the web.

    As always, I enjoy reading your blog and look forward to your venture into the murky swamps.

  2. Your reply to my latest post was more prompt than my reply to your comment. And as you can see, you are definitely not being overlooked. As always, I eagerly await all your comments.

    I should say that this [statistical interpretation] isn't even correct so far as it goes.

    What I'm leaving out here (and for which I beg a charitable interpretation) is the underlying statistical theory that computes the probability of seeing the experimental results by chance. Cases where a single experiment (or very few experiments) is seen as decisive are just those cases where the result is highly unlikely by chance.
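
    To illustrate with the fit measure from the post (the numbers here are made up): a result that would be wildly improbable by chance corresponds to a tiny permissivity, so a single observation already yields a very good fit, while a single yes/no observation tells us little.

        # Illustrative numbers only: fit after one observation is 1 - permissivity.
        print(1.0 - 1e-9)  # ~0.999999999: one observation of a highly improbable result
        print(1.0 - 0.5)   # 0.5: one observation of a bare yes/no prediction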

    Even so, there is an ambiguity in the meaning of "an experiment", between something written up in a scientific paper and a single one of the data points actually collected. Even a "single" experiment (such as a particle collider experiment) will consist of scores, if not hundreds, thousands, or more, of individual data points. It is these individual data points that I'm referring to as experiments. Perhaps a better word would be "observation".

    "[N]atural language statements of perception"... are not the cut-and-dry statements that you seem to suggest they are.

    I don't suggest these statements are at all cut-and-dry; quite the contrary: I agree with Quine. Our assent or dissent is, however, cut-and-dry.

    Hypotheses are sometimes completely refuted by experience, but this is only possible because the scientists involved are holding other assumptions fixed, assumptions which might be contestable.

    This is precisely correct. It is the whole theory which is subjected to testing; the choice of which hypothesis to abandon or alter is not determined by the outcome of any experiment. I'll address the metaphysical consequences of this interpretation in my next post.

    Quine also disputes the notion that the meanings of scientific hypotheses -- or any other proposition -- really are fixed.

    I assume you're referring to Quine's comments on radical translation in Word and Object, especially in chapter 2 [I'm a poor scholar; if there's a better version of this argument than in W&O chapter 2, please let me know]. This is an interesting argument, but has some serious metaphysical failings (echoed later in Plantinga's argument against Evolution). Again, I'll address the metaphysical dimension in my next post.

    Indeed, in "Two Dogmas of Empiricism", Quine describes our talk of physical objects as a convenient myth for organizing our thoughts about sensory experiences. Such a myth is only valuable so long as it is pragmatic for us to maintain it.

    And indeed, the heart of where I differ with Quine is his labeling our talk of physical objects as a myth, which implies that our notion of reality is false, or at least not truth-apt.

    For Quine, every belief, even the simplest, most everyday one, is part of a great web of belief. No place in the web has special normative privilege over the others...

    I'll examine such radical anti-foundational coherentism in a future post, if not the next.

