[Faith-talk] Why bother with truth?

Poppa Bear via Faith-talk faith-talk at nfbnet.org
Thu May 22 18:03:59 UTC 2014


Well, maybe some of you have noticed, I have been in my Faith folder for much of the morning, and in reading some stuff, it seems that God would be leading me through some of the things that have come up around here once again. Here is an article that gives us a basic philosophical, and I believe God-centered, guideline for understanding the ways in which we can pursue truth and knowledge in our lives as we strive to become more discerning and wiser about truth, fallacies, and lies.

Why Bother With Truth?

Published in Areopagus Journal 2/2 (April 2002): 13-18.

Posted with permission by Areopagus Journal.

How can we know the world around us? How can we know God? How can we know anything at all? These are some of the questions of epistemology, the study of theories of knowledge.

Epistemology has two main goals. First, we want to find as much truth as possible. And second, we want to avoid as much falsehood as possible. These two goals stand in tension with each other. I can easily acquire very large amounts of truth. If I were totally gullible, I’d believe just about everything I hear. That would give me the largest number of true beliefs possible. But the problem is that along with all the true beliefs I’d acquire, I’d also obtain many false beliefs. So I’d have some needles of truth hidden in a very large haystack of error. That wouldn’t help me much.

Similarly, I could easily avoid as much error as possible. If I were completely skeptical, I’d disbelieve everything. That would safeguard me against every falsehood. But the problem is that I’d miss out on all truth whatsoever—and some truth might be very important. So that wouldn’t help me much either.

No one urges us to believe absolutely everything. But some very important and influential thinkers do advise us to believe nothing (or very little)—or at least they recommend that we believe only when an idea is incredibly well supported. This is skepticism. Skepticism puts most of its energies into avoiding error, and very little effort into finding truth. So how can we develop an understanding of epistemology that goes beyond skepticism? How can we balance our desire for truth with our need to avoid error?

 

Truth and Knowledge

 

It’s critical to distinguish truth and knowledge. Too many people equate these two concepts, with chaotic results. But truth and knowledge are different concepts. Put simply, true affirmations are those that correspond to reality. So truth is a characteristic of statements that properly describe aspects of the real world. This is called the correspondence view of truth.

The correspondence view of truth isn’t a method for testing truth claims or discovering knowledge. It’s a definition of what we mean when we say that a statement “is true.” According to the correspondence view, what makes a statement true is reality itself. A statement like “This car is red” is true simply if the car in question actually is red. Truth doesn’t depend on anyone knowing the truth. So, for instance, even if no one’s around to discover that it’s 115° at 2:00 p.m. on August 15, 1977, in the middle of Death Valley, it’s still true that it’s 115° out in that desert. The statement, “It’s 115° at 2:00 p.m. on August 15, 1977, in the middle of Death Valley,” is true even if no one thinks about it. Truth is independent of human minds.

The word knowledge denotes a person’s proper understanding of the true nature of reality. This proper grasping of reality can be knowledge by acquaintance. In this sense, we know what the color blue looks like. An accurate perception of reality can also take the form of knowledge of true statements that describe that reality. Both of these are important. Knowing a friend is more akin to knowing by acquaintance, and it’s more important than just knowing about a friend. But knowing true statements is also important. In fact, the two kinds of knowing are related, because knowing by acquaintance entails the truth of descriptive statements. If I know a friend named Greg, then I know many true propositions, including “Greg exists” and “I count Greg as a friend.”

For a belief to count as knowledge for a person, it must meet three conditions. First, knowledge must be true. We don’t just mean that someone thinks the idea is true. We mean that the idea is true. Members of the Flat Earth Society (believe it or not, there is such a thing!) think that the earth is flat. Do we count their belief as knowledge? Of course not! They believe the earth is flat, but their belief is false and hence can’t count as knowledge. Genuine knowledge is true.

Second, knowledge must be believed. We must believe a claim (that is, we have to hold a belief as true) in order to know it. Of course, believing something isn’t enough to make it true, and not believing it doesn’t make it false. But without believing, a true idea isn’t knowledge for us. Suppose it’s true that one of my great-great-grandfathers was a Confederate Army lieutenant whose troops played a key role at the Battle of Fredericksburg. Now suppose I don’t know this fact and don’t have any particular beliefs about the lieutenant. In this case, it’s obviously true that my great-great-grandfather was this lieutenant, but it would be very odd to say that I know this about my great-great-grandfather. In fact, I probably have very few beliefs about my great-great-grandfathers. I can know generic things: eight persons who lived sometime in the last 250 years are my great-great-grandfathers. They were males; they fathered my great-grandparents; and none of them ever watched TV or received an e-mail. But since I don’t believe anything individually about any of them, I can’t be said to know anything distinctive about them as individuals. We must believe something to know it.

Third, knowledge requires some other fact that legitimates the knower’s holding that belief. The belief must arise out of this legitimating fact; it must be grounded in this “something else.” Now we’re being vague because the exact nature of this legitimating fact is very hotly debated. But the importance of this legitimating fact is that it separates genuine knowledge from true beliefs that are held purely by chance. Obviously, we shouldn’t consider a true belief as knowledge if that belief was the result of a wild guess. Say I win the lottery by guessing the winning numbers. Sure, I hoped that the winning numbers would be the first five digits of my Social Security number, but it’s wrong-headed to say that I knew that they would be the winning numbers! In sum, by the word knowledge, we mean a true belief held by a person for an appropriate reason—that is, grounded in a legitimating “something else.”

 

Forming and Testing Beliefs

 

If knowledge is true belief plus some legitimating fact, then how should we set the standards for assessing these legitimating facts? The 17th-century philosopher René Descartes concentrated on this very problem. His philosophy set the stage for modern discussions of knowledge. Descartes’ approach posited very high—too high—standards for that “something else,” that legitimating fact that distinguishes merely true belief from genuine knowledge.

 

Methodism Vs. Particularism

In order to weed out false beliefs and gain genuine knowledge, Descartes required that all candidates for genuine knowledge must arise from a method. Correct method (for Descartes, the geometric method) is the key to finding true knowledge. This approach is called methodism. Methodism, in this discussion, isn’t the religious denomination. Rather, it’s an epistemic theory that stipulates this: we know any particular true belief if and only if we arrive at or produce that knowledge by following a correct method.

Here’s a specific example. Suppose someone asks me whether I know the statement, “My coffee cup is blue.” (Let’s call this statement p.) Methodism requires that before I can truly know p, I must follow a proper method by which I know p. So to know any particular truth, methodism says I must follow a proper epistemic method.

Although Descartes’ methodism may seem like a promising way to ground knowledge, it’s fundamentally flawed. Methodism requires that before I can know anything, I must have prior knowledge of the method by which to know that thing. But then how do I know that method itself? My coming to know what method to use would itself require following a prior method. This quickly leads to what’s called an infinite regress. Every time I try to answer the problem, the problem keeps appearing. I start moving back a chain of questions. But every time I move back to a prior link in the chain, the problem repeatedly emerges. It’s like asking, “What explains Michael’s existence?” If I say, “His parents,” I just raise again the very question I hoped to answer: “What explains his parents’ existence?” “Their parents?” Ultimately, given the methodist approach, there’s no way to end this infinite series of questions. In the end, if methodism were true, I’d have to know something (the right method) before I could know anything. There’s no way out of this double bind.

But there’s another approach to finding the legitimating fact that separates true belief from knowledge. It’s called particularism. Particularism starts by assuming that it’s right to know particular things directly (that is, without following a method) since we find that we already know many particular things. In certain conditions, we directly and properly form true beliefs. And we form these beliefs through a variety of means. We see a tree or hear a train. We compute things. We infer conclusions from things we see or hear. We learn from experts. We read the Bible. Each of these processes generally leads to true beliefs. And so it’s legitimate, particularism says, to count particular beliefs like these as knowledge. We shouldn’t be required to step back and first prove that, say, our vision is perfect, before we rightly know something we see. That would lead us back to the methodist trap (since we’d have to prove the method that we use to prove our vision is perfect). So it’s better just to assume that our properly formed beliefs are innocent until proven guilty. With these particular beliefs in hand as examples, we can begin to understand what knowledge is—and gradually to increase the number of things we know.

 

Testing Individual Beliefs

But difficulties arise when we run into contrary evidence. Let’s say that, just by looking at it, I form the belief that a particular stick is straight. I have no reason to doubt this because my eyesight’s generally very good. Then I put the stick in water, and suddenly I form the belief that it’s bent. Again, my eyesight’s pretty good. But my mind tells me that the stick can’t be both straight and bent. So which of my two beliefs is true? Or let’s say my wife helps me pick out a tie that looks gray to me. I protest: “It’s too drab.” But she assures me that the tie is a nice shade of rose. Should I trust her judgment?

It’s when this sort of thing happens that testing procedures become important. This is where we follow methods. We have procedures to help us figure out which of the conflicting reports from our normally reliable belief-forming processes is actually correct. The conflict between beliefs produced by these normally reliable indicators leads us to question whether what we think we saw could really be so. I remember something in my high school physics class about light refracting when it passes through water, and this accounts for the bent appearance. Or I remember that I’m color-blind in reds and greens, and this explains why the rose-colored tie looks gray to me. So what do we do about conflicting facts? We go to procedures to help us sort them out. (This is the correct insight that methodism takes too far.) Should we just give up and concede skepticism? Hardly.

What are the procedures or strategies for evaluating competing beliefs? First, our beliefs should be rational. At a minimum, this means that our beliefs shouldn’t contradict one another. This is coherence, a negative test. Say I believe both that “I’m the world’s leading microbiologist” and that “I don’t know much about microbiology.” These beliefs are obviously incompatible, and so holding both beliefs at the same time is irrational. One of the two (or maybe both) must go. Coherence is necessary, but it doesn’t guarantee truth. Incoherence is a significant red flag: it guarantees that some beliefs are false. We should pursue strategies in order to avoid holding incoherent beliefs.

Second, our beliefs should fit with the evidence. If a belief doesn’t fit with data we know to be true, we should give up that belief. Take the claim, “I’m the sixteenth president of the U.S.” This belief conflicts with many well-established facts: “The sixteenth U.S. president’s name was Abraham Lincoln”; “My name is David Clark”; “Abraham Lincoln is dead”; “I’m alive”; and so on. So I’m really not the sixteenth U.S. president.

Generally, we look for beliefs that fit the evidence. But notice something very important. We don’t stipulate a rule: “Every belief must be proved by evidence before it counts as knowledge.” Among other problems, that rule would land us back in methodism. The problem with making this rule into an absolute requirement for knowledge is that the rule itself can’t be proved by evidence. No evidence could ever prove that “Every belief must be proved by evidence before it counts as knowledge.” So we do look for evidence to help us, but only when it’s appropriate.

 

Testing Large-Scale Models

So far we’ve been talking about particular beliefs. But we also seek knowledge about large networks of truth claims. A large-scale scientific theory, for example, is a complex set of interlocking claims, all connected in a large network. Large-scale models encompass many different kinds of things, including scientific, historical, and even religious convictions.

Large-scale models compete with each other to see which one does the best job of explaining all (or most of) the known facts. Thus, for instance, the heliocentric (sun-centered) model of our solar system competed with the geocentric (earth-centered) model. Though this isn’t well known, both the heliocentric and geocentric models explained the available physical facts equally well for centuries. Physical observations didn’t finally confirm the heliocentric model until more than 200 years after Galileo’s controversies. Thus, the heliocentric model didn’t compete with the facts. Rather, it competed with and finally defeated the geocentric model of the solar system by doing a better job of explaining the most facts. This is one way that large-scale models gain support—by outdoing their rivals at explaining the data.

Here’s another example. When National Transportation Safety Board (NTSB) investigators are trying to explain a plane crash, they look for evidence. They know what to look for because they’ve explained other crashes by finding telltale facts that guided them to large-scale explanations. The telltale facts are clues that unlock patterns of interpretation and lead to strongly supported explanations. The NTSB puts all the data together and concludes, say, that the plane crashed because a turbine blade in one of the engines shattered. The power of this explanation to incorporate all the relevant data—like the loud explosion passengers heard and the sudden loss of airspeed reported on the cockpit data recorder—is a major reason we hold that the large-scale theory is a properly supported, interlocking set of true beliefs. The individual facts are themselves grounded in experience (such as the sound of the explosion, the report of the plane’s reduced airspeed, and the shattered pieces of the blade). The large-scale theory incorporates and explains these and many other facts.

Complex explanatory models can form ongoing programs of research and investigation. They not only explain what we know already; they can also guide us to what we don’t yet know. Take, for example, the discovery of Neptune. Uranus didn’t orbit the sun as the large-scale models suggested it should. But when scientists imagined that another planet was exerting gravitational force on Uranus, then its orbit suddenly made sense. So scientists began looking for this other planet, and sure enough, they found Neptune. This is similar to “superstring theory,” which developed when theorists used mathematics to explain their observations. The mathematical calculations worked out beautifully when scientists assumed the existence of things they called superstrings. The calculations are powerful in that they explain a number of related issues. So researchers posit that superstrings exist even though they can’t observe them. Research programs that guide researchers to new discoveries are progressive. This helps confirm their connection with the real world.

But testing large-scale constellations of beliefs isn’t simple. In fact, it’s sometimes impossible. Particular events, like why a particular ship went down in a perfectly calm sea, may never in fact be explained. The problem might be that certain key pieces of evidence are stuck too far down on the sea floor. This means we could explain the event in principle, but can’t in fact. That is, there’s no logical reason why we can’t explain this event, but there’s a practical barrier to our understanding. So in this case, we should remain agnostic rather than claim to know what we really can’t know—at least until we develop a new submersible vessel that can get down to the wreck and find the key evidence. The truth about some complex processes might just remain hidden.

Testing models is even more complex because it requires making judgments of several different kinds. What are the facts to be explained? (Sometimes two competing models will explain different ranges of data, and there’s no way to step outside the two models to know which range of apparent facts is really most relevant.) What are the criteria by which we decide which explanation is best? (Sometimes two explanations will excel at different criteria—one model might be simpler while the other is more helpful in guiding us to new discoveries.) So our procedures aren’t straightforward and linear. But reasonable judgments are still possible. When the NTSB investigators find a cracked turbine blade, we know we shouldn’t blame the pilots for the crash (and maybe we should blame the jet engine manufacturer). Gathering knowledge isn’t always easy, but it’s amazing how much we can learn through carefully using all our strategies in a coordinated way.

 

Knowledge and the Intellectual Virtues

 

Thus far, we’ve been discussing some of the key elements of a proper understanding of knowledge, including belief formation and testing. We’ve argued that knowledge requires true belief plus some account of that belief—something that legitimates the belief. But thus far we’ve been quite coy about what this account is. It’s time—indeed, past time—to repair that deficiency.

What is this feature that, when added to true belief, constitutes knowledge? Here scholars disagree—in fact, there are few things about which epistemologists disagree more! Thankfully, it’s not our purpose to address all the academic squabbles. Rather, we’ll offer an account of knowledge that we find persuasive. It focuses on the relationship between knowledge and the intellectual virtues.

What are intellectual virtues? Virtues are qualities of excellence possessed by a person. Intellectual virtues share some characteristics with moral virtues. In fact, many acts that are virtuous in a moral context are also virtuous in an intellectual context. Examples of intellectual virtues are honesty and courage. Being intellectually honest means making a fair appraisal of the evidence at hand, dedicating effort to reach valid conclusions, admitting personal biases that affect beliefs, and seeking to override or reduce those biases. In an intellectual context, courage involves, among other things, being willing to take a minority position when the evidence points in that direction. It also means investigating personally held beliefs with rigor.

An intellectual virtue, therefore, is a characteristic of a person who acts in a praiseworthy manner in the process of forming beliefs. But an epistemic virtue isn’t simply an instance of intellectual skill. For example, think about the ability to see sharply. This is a skill that some lucky people have from birth. This ability isn’t developed over time. (In fact, eyesight falters over time.) So it’s not particularly virtuous. Virtue relates more to what a person does with abilities or skills like incredibly sharp vision.

Further, the intellectual virtues don’t just happen naturally. Rather, they arise from habits. Like good habits (such as exercising and eating healthfully) and bad habits (like biting fingernails and gossiping), the intellectual virtues are the sorts of things that become more and more a part of us the more we practice them. Similarly, the more we practice their opposites, like intellectual dishonesty, the more difficult it becomes to respond to any given situation in an intellectually virtuous way.

Intellectual virtues influence, and are influenced by, the motivations of the one employing them. A person must come to believe something out of proper intentions. Say that a student named John hears a teacher talking about a classmate whom John dislikes. “He is nice,” the teacher says. Because of his ill will toward the student, John hears, “He has lice,” and he jumps on this bit of negative information. Even if it’s true that the student has lice, does John’s belief count as knowledge? No. Even though he believes it, it’s true, and it’s grounded in a normally reliable belief-forming process (John has good hearing), from a virtue perspective John’s belief doesn’t count as knowledge, since this belief arose in an intellectually non-virtuous way. John’s belief was shaped by his malicious attitude toward the fellow student. Given all these points, we define knowledge in this way: Knowledge is true belief that is reached or acquired through an act of virtue.

The key insight of virtue epistemology is that knowledge isn’t just an issue of whether evidence exists for a specific belief at a particular time, but an issue of how a person goes about gathering evidence. So whether or not a particular belief is properly grounded for me has to do with how I formed the belief. Did I form this belief in accord with the intellectual virtues, reflecting praiseworthy habits of belief formation and testing acquired over time? Or did I form this belief in a manner that reflected slipshod handling of the evidence or haphazard reasoning processes?

 

Conclusion

 

We began by talking about our need to gain truth and avoid error. Skeptics are fixated on avoiding error. Their concern is the adequacy of a person’s evidence. To avoid falsehood, skeptics set a very high standard for admissible evidence. For some skeptics, the standard is so high that every belief becomes doubtful.

We agree that avoiding falsehood is vital. And given our virtue-oriented epistemology, the notion of evidence is important. But more important is whether we rightly handle the evidence we have! An unscrupulous person can twist evidence to support the position he holds. But if we’re intellectually virtuous, we’ll operate differently. We’ll treat evidence honestly, overcome our biases toward our own culture’s preconceived notions, and refuse to misuse evidence to gain power or to pretend that our own pet beliefs are superior.

So is knowledge possible? Even though people have many false beliefs, yes! The existence of junk car yards doesn’t count as evidence against the existence of new cars. Similarly, the existence of intellectually non-virtuous people doesn’t show that intellectually virtuous people fail completely in their quest for genuine knowledge. In sum, due to human limits, some things are beyond knowing. But if we exercise the intellectual virtues, we can achieve genuine knowledge about important things. Skepticism wins some skirmishes along the way, but it doesn’t win the war!

 

James Beilby is an adjunct professor at Bethel College and Theological
Seminary, St. Paul, MN.

 

David K. Clark is professor of theology and Christian thought at Bethel
Theological Seminary, St. Paul, MN.

 

Most of the content of this article was previously published in Why Bother with Truth? Arriving at Knowledge in a Skeptical Society, © James Beilby and David K. Clark (Norcross, GA: Ravi Zacharias International Ministries, 2000).

 


© 2003 Ravi Zacharias International Ministries. All Rights Reserved.



