Tuesday, November 15, 2011

 

A Word to the Wise


Hee!

I think numerous people telling him how bad he is at philosophy is having an effect on Jerry Coyne. At the end of his latest screed about free will, he says:

And of course I had no choice about writing this post, nor you in whether you agree with me. . . .
Of course, as we Coyne collectors have come to expect, he rather ruins the effect in the paragraph before:

I conclude that philosophers should abandon the term “free will” and use some less freighted term. How about “the appearance of having made a decision”?
The relevant part of the entry in the Merriam-Webster Online Dictionary:

con·clude verb \kən-ˈklüd\

transitive verb

3 a: to reach as a logically necessary end by reasoning : infer on the basis of evidence [concluded that her argument was sound]

...b: to make a decision about : decide [concluded he would wait a little longer]
Of course, Coyne must have meant to say that he has only the appearance of concluding. Certainly Coyne cannot “reason” unless he has the choice to accept or reject arguments on the basis of logic. Heck, he can't even recognize what is logical without the choice to accept good arguments and reject bad ones. Nor can he infer anything based on evidence unless he also has the choice to accept relevant and valid evidence and reject irrelevant and invalid “evidence.”

Now, of course, Coyne may be right that he had no choice but to use the word “conclude,” even though it is not compatible with his premise.

But then you kind of have to wonder why we have dictionaries at all.

Comments:
I don't think this is quite fair. After all, one might well say that a thermostat has a choice about whether to turn the furnace on or not.

The fact that the choice depends on the temperature is not relevant, since we make choices based on external data all the time (should I fight or flee? depends if it's a chipmunk or cougar).

The fact that the choice is deterministic is not really relevant, either. We could rewire the thermostat so it has access to a random number generator; then the decision would not be deterministically based on the temperature.

I think "free will" is often defined so incoherently that it is hard to say for certain whether we have it or not.
 
I should add, philosophers need to do a much better job defining it. And if those philosophers aren't building on what we know from computer science and neuroscience, they're probably not going to get the right answer.
 
I don't think this is quite fair.

I know. I've said before that I don't blame Coyne for any confusion he has about "free will" because it is a difficult topic ... and that goes for philosophers as well.

I just get amused by how he ... and everyone else who denies its existence ... can't help but talk in terms that assume it exists.

Of course, I'd also be amused by a thermostat that, after recognizing its random number generator, went on talking about how rational and scientific it is.

Not that I have any choice but to be amused.
 
I haven't gotten through all of it, but the NYT section "the Stone" has an interesting examination of neuroscience and free will by Eddy Nahmias, "an associate professor at Georgia State University in the department of philosophy and the Neuroscience Institute."
The pull quote from the article states that "Many neuroscientists are employing a flawed notion of free will."
http://opinionator.blogs.nytimes.com/2011/11/13/is-neuroscience-the-death-of-free-will/
 
Oh, and I should have followed your link sooner - apparently that's what Coyne is ranting about.
 
So if belief in the supernatural - or belief in belief - is predetermined, isn't ridiculing believers kind of like making fun of a thermostat for turning on the air conditioning?

If it's OK to demand believers give up their beliefs, is it OK to demand that homosexuals and lesbians conform to the predominant sexual behaviours that their reproductive organs equip them for?

If not, why is it OK to advocate for an equal voice for homosexuality in society but work to marginalize believers who try to make their religion not be in any serious conflict with science (or society, for that matter - like, for instance, churches that celebrate same-sex marriages)?
 
Ah, but TB, their opposition to belief in the supernatural is also predetermined, so we can't blame them. But then again, our blaming them is also predetermined, so they can't really fault us...

This is fun!
 
True, but if we're able to be aware that we're predetermined, how can we make a judgement on how adequately others are able to strive to overcome their genetic predetermination?
 
I think "free will" is often defined so incoherently that it is hard to say for certain whether we have it or not.

Everyone seriously writing in the field considers it a difficult and intractable matter. That's why Coyne's response is such a joke: he is in fact certain that we don't have free will, not out of any serious consideration of the problem, but as a way to shore up determinism without any heavy lifting. Hilarity ensues.

I should add, philosophers need to do a much better job defining it.

It might be helpful if you specified which particular definitions you are unsatisfied with.
 
I just get amused by how he ... and everyone else who denies its existence ... can't help but talk in terms that assume it exists.

Doesn't seem paradoxical to me. For example, we make choices, but choices don't seem to define free will. I think a large element of our naive understanding of free will includes lack of predictability, and clearly we are rather complicated systems whose behavior we are unable to predict.

It might be helpful if you specified which particular definitions you are unsatisfied with.

All of them. That is, I've never seen a good definition. Feel free to point me to one.
 
... we make choices, but choices don't seem to define free will.

But "choice" was exactly what Coyne used to frame his position. Nor do I see a distinction between "decision" and "choice," the other term Coyne used to express his position.

Now, I'm not arguing how "free will" should be defined or if any such thing exists. I know enough about the subject to know not to venture an opinion, much less issue proclamations for all to see the way Coyne does.

I'll only say that, from the "inside," it certainly seems I have at least some, no doubt highly constrained, ability to make the kind of choices/decisions that Coyne rejects. But, if I were convinced otherwise, I could see no point in talking about reason or science.
 
Isn't the distinction Coyne was making between "choice" and "conscious decision to make a choice"? My guess is that most of our choices are actually not based on any kind of logical reasoning and are unconscious.
 
If a choice is unconscious, how in any meaningful sense is it still a "choice"?

Wouldn't we call that something more like 'instinct' or 'conditioning'?

-- pew sitter
 
It all depends: do you think computers that behave differently in different circumstances are "making choices"? I do. So I see no need for choice to be conscious.

We speak about "mate choice", for example, but I doubt most people consciously weigh the alternatives about whom to marry.
 
All of them.

Sarah Palin, is that you?
 
Sarah Palin, is that you?

Hey, look, if you can't produce a definition that you think captures the notion of free will, why not just admit that, instead of resorting to content-free insults?

Here are some of the definitions I don't like:
- ability to choose between alternatives
- one's choices not predetermined
- requirement that one is responsible for one's action
- ability to do otherwise
 
"My guess is that most of our choices are actually not based on any kind of logical reasoning and are unconscious."

And

"We speak about "mate choice", for example, but I doubt most people consciously weight the alternatives about who to marry"

I wouldn't dispute that many people don't seem to exercise free will, but I don't believe that necessarily proves anything about free will. It may have something interesting to say about whether it's easier and more comfortable to conform to a mean in our society.

And I get the feeling that kind of thought influences what you don't like about definitions:

"Here are some the definitions I don't like:
- ability to choose between alternatives
- one's choices not predetermined
- requirement that one is responsible for one's action
- ability to do otherwise"

So, one way to connect the dots (and I'm not trying to put words in your mouth) would be that maybe the results of those choices are somehow not remarkable enough to differentiate them from a deterministic result?

If so, does that mean free will is an illusion, or does that mean we've gotten far better at knowing what range of choices, or opportunities, there are for an individual, so we are not surprised by any result?

And if that's a possibility, then why does a limited set of opportunities - however large - mean that a person can't exercise free will in their choices within that set?

In other words, why does having a limited set of opportunities that result in a predictable set of outcomes have any bearing on whether one is exercising free will within that set?
 
There was a wee little bit of content in that snarky remark, Jeffrey.

The point, spelled out, is that you don't name-check any actual philosophers, or any reigning theories of philosophy, thus reducing the credibility of your claim that you have exhaustively surveyed the field and found it wanting. (Even if you were not actually bullshitting us, as Sarah Palin did with Katie Couric).

If you want to say philosophy is tosh, then say so, but don't pretend to come at the problem from a position of erudition and reason if you are going to be too lazy to support your own knee-jerk conclusions with actual examples.

Having said that, thanks for following up with a few examples.

The question you raise in dismissing these definitions is one of moral accountability. Even if you are willing to let others off the hook for their transgressions (Jerry Sandusky, for example) since they "had no choice," I wonder if you are also able to apply this line of thinking to your own actions. Do you never experience regret that you made the wrong decision, hurt someone, missed an opportunity to do good, or just have more fun? Do you not hold yourself to various standards of excellence and competency? I have no doubt that you do.

This is what questions about free will usually come down to, despite all the distractions over whether it is an actual ontological substance or function, compatible with determinism, etc. etc. If a thermostat comes to us with a moral quandary, then we can include it in the conversation. Until then, probably best to stick to actual claims of free will.
 
reducing the credibility of your claim that you have exhaustively surveyed the field and found it wanting.

A pathetic caricature of what I said. I said nothing about "exhaustively" surveying anything. I sincerely hope you're not teaching philosophy anywhere, as this kind of uncharitable behavior will make your students despise you, and not teach them much.

If you want to say philosophy is tosh, then say so

Much of philosophy is tosh. In particular, any discussion of "free will" that doesn't specify in detail what is meant by the term, and doesn't rely, in part, on computer science and neuroscience, is likely to be of little value. In my opinion, of course.

Do you never experience regret that you made the wrong decision, hurt someone, missed an opportunity to do good, or just have more fun?

It's this kind of answer that makes me suspect that much philosophy is tosh. Can you really not think of any other explanation for such feelings, other than your incoherently-defined "free will"?
 
So, one way to connect the dots (and I'm not trying to put words in your mouth) would be that maybe the results of those choices are somehow not remarkable enough to differentiate them from a deterministic result?

Thanks for being charitable, unlike another poster in this thread.

No, that's not it at all. A simple computer program with access to a random number generator that makes decisions about anything (e.g., is the face in front of me male or female? Should I plug myself in and recharge at this outlet or the next one?) can also be said to "choose between alternatives" and not have its choices "predetermined" and have an "ability to do otherwise". Yet such a program doesn't seem to capture what we mean by free will.
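Here's a toy sketch, in Python, of the kind of program I mean (the recharging scenario and the 0.1 battery threshold are just invented props):

```python
import random

def choose_outlet(battery_level, outlets):
    """Toy 'agent' deciding where to recharge.

    When nothing forces its hand, it consults a random number
    generator, so its choice is not predetermined and it 'could
    have done otherwise'.
    """
    if battery_level < 0.1:
        return outlets[0]          # urgent: take the nearest outlet
    return random.choice(outlets)  # otherwise any outlet will do

# The same inputs can yield different choices on different runs.
picks = {choose_outlet(0.8, ["this outlet", "the next one"]) for _ in range(1000)}
print(sorted(picks))
```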

If you want to say that such a program has free will, that's fine with me, but then it's not really the big conundrum that philosophers have always claimed, is it? If you say it doesn't have free will, then you have to explain more precisely what free will means to you.
 
"Yet such a program doesn't seem to capture what we mean by free will. ."

For instance: Is the program aware that it's making or made a "decision" or "choice?"

So, are we using those words out of convenience, and because of that, are we confusing what we mean by choice or decision in the context of a computer program versus the context of a human being?

Even though we can mimic the appearance of choice and decisions, do those words really apply to hardware that - for instance - can't consciously reject attempts to program or re-program it?

"If you want to say that such a program has free will, that's fine with me, but then it's not really the big conundrum that philosophers have always claimed, is it? If you say it doesn't have free will, then you have to explain more precisely what free will means to you."

Why not a third choice: I haven't adequately articulated what I think is free will, but I know what you're describing doesn't work?

Don't scientists question other people's research results and conclusions without themselves proposing an alternative? After all, isn't being able to rule something out a common and useful practice?
 
Thanks for being charitable, unlike another poster in this thread.

I think you might have something there. I could have been more charitable to you, and should have been. Furthermore I regret that I did not find a way to criticize your position with a little more good will and fellow feeling. Part of the reason for this is that human beings (all too predictably) tend to react poorly when they aren't dealt with warmly, and I knew this, but I chose to be righteous rather than conciliatory.

I should have exercised my choice to opt for kindness instead of rancor, even though I was at first angry at your macho, know-nothing comments (and after all we can't change how we feel, just how we behave), because I care about the consequences of my actions, and I care about what kind of world I live in, and how I contribute to it positively or negatively, and because other people simply may not have as great a capacity to be as reflective as I. And so it's the least I can do to make an effort to live up to my own ideals from time to time.

Now, I've left out, above, those terribly imprecise demon words "free will" so that we can focus on the simple human experience of making choices of whatever moral weight. Are you prepared to argue that moral choices don't actually exist, that they are just an illusion projected upon the theater of the mind? And if so, on what grounds do you propose to evaluate your own actions and those of your loved ones and not-so-loved ones?

If none of us can do other than what we do, then why criticize anyone for anything at all? If for example, out of thoughtlessness, you are harsh with your child or spouse, who then in their hurt and anguish goes off and drinks a bottle of sherry or kicks the dog, on what grounds can you say to yourself that you will try to be more thoughtful and empathetic in the future? If what we call choice is just the illusion of decision making, then wouldn't you have to resort to the raw hope that whatever deterministic forces make you "you" will result in better behavior in the future, since "you" can't do anything about it, since that would involve that sinfully imprecise concept of "free will"?
 
Jeffrey Shallit,

Okay, so for you, a choice doesn't have to be conscious; as I read you, you consider a distinct response among different possible responses to be equivalent to choice (you brought up the example of computers behaving differently in different circumstances).

Do you think there is any such thing as a conscious choice?

Or, for that matter, even any such thing as consciousness?


-- pew sitter
 
Is the program aware ...

Well, see, this is yet another example of why I have trouble with much of the philosophy of free will that I've read. What, precisely, does "aware" mean in this context?

If you say it means that an agent that is "aware" must be able to "reflect on" or "explain" its own actions, then there's no problem doing this with a computer program. After all, we can create a program with access to its own code, so that if I say, "Why did you decide the last input was a male face rather than female face?" the program can respond with "Because the sum of entries on lines 23, 47, and 101 was greater than 0.5".
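To make that concrete, here's a toy sketch (the feature names, the weights, and the 0.5 threshold are all invented for illustration):

```python
def classify_face(features, weights, threshold=0.5):
    """Toy linear classifier that can also 'explain' its own decision."""
    score = sum(weights[name] * value for name, value in features.items())
    label = "male" if score > threshold else "female"
    # Introspection: the program reports the basis of its own decision.
    why = (f"decided '{label}' because the weighted sum {score:.2f} "
           f"{'exceeded' if score > threshold else 'did not exceed'} {threshold}")
    return label, why

weights = {"brow_ridge": 0.4, "jaw_width": 0.3}  # invented weights
label, why = classify_face({"brow_ridge": 1.0, "jaw_width": 0.8}, weights)
print(why)
```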

So under this particular interpretation, I see no reason why a computer program couldn't be aware and hence have free will.

I'm perfectly happy to entertain other definitions of "aware". I just haven't seen any that aren't either incoherent, trivial, or unable to distinguish between people and computers as above.

can't consciously reject attempts to program or re-program it?


Come on, this is too easy. It's trivial to make a robot that will try to evade you if you get too near it. I'd call that "rejecting attempts to program or reprogram it". As for "consciously", it's yet another word that I don't know a good definition for. For me, "conscious" could be a synonym for "aware" (see above) or it could mean something entirely different.

I haven't adequately articulated what I think is free will, but I know what you're describing doesn't work?

But I think it's the job of science and philosophy (to the extent that philosophy has any role at all in this game) to try to make these definitions as precise as possible. Chemistry, for example, has abandoned vague notions like the "principle of combustibility" for more firmly-grounded notions like "oxidation". I'd like to encourage philosophers of mind to do the same thing. And to some extent, they are (viz. Churchland, Dennett).
 
Or, for that matter, even any such thing as consciousness?

Well, the trouble again is that "consciousness" has so many dimensions that it's hard to say.

One dimension is "self-awareness", which for me means something like "having a mental model of the world that includes yourself". That's much closer to a firm definition, and so I have no problem saying that a computer (or dolphin, or chimp - see work of Gordon Gallup) could be conscious in this sense. In fact, under this definition, we can even quantify consciousness, by measuring how good that mental model is (say, for example, how accurate its predictions are).
 
Are you prepared to argue that moral choices don't actually exist, that they are just an illusion projected upon the theater of the mind?

I hope you don't mind, but first I want to understand more basic things before we move on to the complicated world of ethical reasoning and deontology. I want to understand exactly what you mean by "choice" and "free will". If you think I'm too stupid, or arrogant, to discuss it, that's fine with me, but I don't think it would be fruitful to proceed without getting a firmer understanding of those more basic concepts.

then wouldn't you have to resort to the raw hope that whatever deterministic forces make you "you" will result in better behavior in the future, since "you" can't do anything about it, since that would involve that sinfully imprecise concept of "free will"?

A computer program can evaluate past history and failed predictions and change its future actions based on that. Does it have free will if it does so? So, despite the sneering, you haven't really addressed the points I am making at all.
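A toy sketch of such a program, assuming nothing fancier than a running estimate nudged toward each observed error:

```python
class Forecaster:
    """Toy agent that evaluates its own failed predictions and adjusts.

    Each observed error nudges its estimate toward what actually
    happened, so its future behavior depends on its past mistakes.
    """
    def __init__(self, estimate=0.0, rate=0.5):
        self.estimate = estimate   # current prediction
        self.rate = rate           # how strongly errors are corrected
        self.errors = []           # record of past failed predictions

    def predict(self):
        return self.estimate

    def observe(self, actual):
        error = actual - self.estimate
        self.errors.append(error)
        self.estimate += self.rate * error  # change future actions

f = Forecaster()
for actual in [10, 10, 10, 10]:
    f.observe(actual)
print(f.predict())  # has moved most of the way from 0 toward 10
```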

It seems to me that a good course in the theory of computation (including, of course, a study of computability and randomized complexity classes) might fruitfully be added to the neurophilosophy curriculum.
 
Jeffrey Shallit,

To me, the interesting thing about human processing is that it allows for insights and intuitions, great leaps that involve sometimes radical reconceptions of the model of self.

My father was involved in AI, but retired; he designed the kinds of robots you describe that could take evasive action. (Our cats loved this, once they got over their fear of it.)

But he said one thing that was a challenge, at least at the time he was working: the idea of the sudden intuition or transformative insight. In human beings, as I said, this can radically reorient one's idea of oneself.

I'm curious to know, since you seem to be in the field, are we there yet with any forms of AI?

-- pew sitter
 
P.S.

I also think an interesting human capacity when it comes to mental models is when people want to change themselves.

What does that mean? What does it say about awareness, consciousness, processing, whatever, that I can have a mental model of myself that is accurate in the way you speak about (that is, that using my mental model of myself I can predict what I would probably do in a given situation); and yet I can have another model in mind of the kind of person I want to be, but am not yet, and I can even strive to change my behaviors and my thinking to grow more toward what I am currently not?

Again, I don't know if we can say that's true of other forms without having access to subjective experience, but it seems to me that capacity to envision ourselves differently and sometimes even change to grow more toward that vision, is worth thinking about when it comes to choices and consciousness and so forth -- reflective self-awareness and the ability to change course because of that; the ability to envision alternative models and then act to change oneself.

Just some thoughts.

-- pew sitter
 
"Well, see, this is yet another example of why I have trouble with much of the philosophy of free will that I've read."

Good point. Perhaps you should respond to Chris with information about what you've read. You do seem to be espousing the principles of eliminative materialism without exhibiting knowledge of other disciplines.

In other words, you're engaging in philosophy even as you denigrate it.

It's fine, here, to not know a lot about philosophy. I certainly don't, but I listen and learn and ask questions. I think you're getting close to a point, here, where your objections are based more on ignorance of whether those objections have already been addressed. Bordering on hand waving.

As to your use of the words "choice," "decision," and now "awareness," I agree that precise definitions are very important. And the way I think you're using them isn't very precise. In fact, I think it adds to your confusion rather than clarity. Specifically, what you're describing are things that mimic human qualities - these programs don't control whether to engage in those activities.

You might argue for complexity, but the mimicking is still there. Even if sometime in the future someone creates a robot that might chuck its studies in anger, storm out of the house and hang out at the streetcorner, we'll know empirically that combination of mimicked behaviours was placed there. Is the robot acting autonomously or within the acceptable outcomes of a sophisticated program? Would you be ready to give that robot the same full rights as a human being or still consider it property?

That's an important distinction - when a human is compelled by brain injury or neurosis to engage in behaviours outside of their "control," it's recognized that there's something wrong with that person. On the other hand, that's what's expected and celebrated by all your examples.

So I understand what you're describing, but your use of words is imprecise. And I'm glad you didn't really dispute my point about disproving other people's theories without suggesting an alternative. That frees me up to ask this question:

Can you articulate the results that would falsify your position?
 
I see where you are going with this, Jeffrey, but the analogy can only lead to two places if we actually take it seriously. Either (1) we literally stop holding people responsible for their actions, because "choice" is the same in a person as a thermostat that can't "help" but do what it's designed to, or (2) we grant moral agency to computers and machines, so that a piece of software that "evaluates" its former actions and runs a piece of code that causes harm to others is "personally" responsible for its decision.

The first option is pathological, but perhaps more importantly it is in direct conflict with the most fundamental aspects of modern, liberal civilization. It would be impossible to hold almost any of the ideals we hold today (and I trust you and I hold a great many in common) if human beings were not held morally culpable for their conduct.

The second option seems so ludicrous that I won't even bother with it, except to be momentarily amused by the notion of sending thermostats to bed early without their supper, or scolding tax software for making an erroneous calculation.

I will, since you asked, define "making a choice" to a slightly greater degree of precision. I am speaking of it as recognizing that one can do or not do certain things, and thoughtfully (or thoughtlessly) projecting (imagining, modeling, etc.) the outcome of each option as grounds for whether or not to do that thing.

Obviously a lot hinges on the word "recognize" here. Obviously thermostats cannot "recognize" anything, nor can they simulate potential future experiences imaginatively. Whether or not powerful computers can do so I leave to the side for a moment, pausing only to notice that so far no computer can actually communicate the experience of such a recognition or speculation, so the evidence that anyone but sentient, language-using creatures can make moral choices is, at present, non-existent.

That opens up a few dozen cans of worms, no doubt. I think the only real important part for the purposes of this conversation is understanding the stakes of flattening the difference between human "choice" and machine "choice." None of the rest really need be bothered with. Intellectual honesty would require that claiming that "choice" is not real mean that we stop acting as though it is, otherwise we're just playing word games--which was (IMO) the point of John's OP.
 
"we grant moral agency to computers and machines, so that a piece of software that "evaluates" its former actions and runs a piece of code that causes harm to others is "personally" responsible for is decision. "

That's kind of where I was going Chris. My biggest concern is how can I know that a computer or machine is truly autonomous? Given the complexity that serious autonomy would involve (and I am not denying that kind of complexity would be possible), how can I trust - for instance - that this thing wouldn't have a mechanism for being secretly controlled without it being aware of that control?

Could it walk into a voting booth with every intention of voting for A, be directed by some hidden command and vote for B, and then leave the voting booth convinced that it voted for A?

Would we have a right to check its systems to confirm its vote if we've granted it the right to private vote?

In this sense, it seems wrong to extend language such as "choice," "decision," and "awareness" without some serious qualifiers such as "mimic" or "modeling."
 
One dimension is "self-awareness", which for me means something like "having a mental model of the world that includes yourself".

There is no self. There is only an objective world independent of what you perceive. How is that possible? No, you can't know. But there must be knowledge without a knower. Oh yeah, scientists must know that. Keep the faith my friend.
 
Intellectual honesty would require that claiming that "choice" is not real mean that we stop acting as though it is, otherwise we're just playing word games--which was (IMO) the point of John's OP.

Yep. I don't deny the possibility that Jeffrey ... and even Coyne, who has not displayed half the thought (if I can use that term) about this that Jeffrey has ... is right. But then we must abandon the idea (again, if I can use the term) that there is something called "reason" or "science" and replace them with, perhaps, "processing" through programs and "hardware" of unknown and unknowable accuracy.
 
A computer program can evaluate past history and failed predictions and change its future actions based on that.

But if we are going to use the example of what we can/conceptually could program a computer to do, couldn't we equally as well program it to evaluate its history badly or even outright falsely and never "know" it is doing so? Then it could never know whether it and/or the other computers (both those that agree with its assessment and those that don't) are right. It wouldn't even have the ability to determine whether its past actions had been, in fact, adjusted to its even older history.

Nihilism (as to knowledge) is interesting to contemplate, but it's a damn sight harder to live.
 
@John
Given a set of inputs (the incident at hand, your memories, the book you are reading, what people have told you, etc.) - you (your brain, your mind, your soul, your conscience, your will) "choose".
That's incoherent.

Either you will always arrive at the same conclusion (for the exact same input) or there is an element of randomness involved.
If you "choose" then on what basis did you make that choice and why would you choose differently if that basis didn't change?
 
That's incoherent.

Your decision/choice to believe that it's incoherent, based on your set of inputs (the incident at hand, your memories, the book you are reading, what people have told you etc), may be totally wrong and, therefore, itself, incoherent. There is no "reason" to believe that those factors will result in correct answers.

Again, I am not arguing that Jeffrey, Coyne or, now, you are wrong. Just that, if your "beliefs" are right, then it is futile to talk about "reason" and "science" and pretend that your deterministic results are "better" than other people's deterministic results. You're in the exact same boat as the theists.
 
the idea of the sudden intuition or transformative insight

If you could define more precisely what you mean by these terms, I might be able to answer.
 
And the way I think you're using them isn't very precise. In fact, i think it adds to your confusion rather than clarity.

Then feel free to provide a better definition of "choice", "free will", "self-awareness", "consciousness", etc., or point me to a place where these better definitions can be found.

I've asked this several times, but all I seem to get in response is sneering and protestations about how stupid and/or ignorant I am.
 
Either (1) we literally stop holding people responsible for their actions, because "choice" is the same in a person as a thermostat that can't "help" but do what it's designed to, or (2) we grant moral agency to computers and machines, so that a piece of software that "evaluates" its former actions and runs a piece of code that causes harm to others is "personally" responsible for its decision.

I think it's silly to suggest these are the only two alternatives. Elsewhere I have proposed (using what is known about computational complexity) that it might be possible to have one's cake and eat it too.

But even if I accept your dichotomy as representing the only two possibilities, then so what? If (1) is ruled out by what we have discovered about neuroscience, then don't we just have to deal with it, instead of pretending it's not the case?
 
But if we are going to use the example of what we can/conceptually could program a computer to do, couldn't we equally well program it to evaluate its history badly or even outright falsely and never "know" it is doing so? Then it could never know whether it and/or the other computers (both those that agree with its assessment and those that don't) are right. It wouldn't even have the ability to determine whether its past actions had, in fact, been adjusted to its even older history.

Yes, and so what? What you're describing is exactly how a lot of people behave. Whole volumes have been written on mistakes of cognition and how it is nevertheless possible to make good judgments under uncertainty.

In computational complexity we often speak about "amplifying" the probability of correctness of randomized algorithms. That's how I see science as a social process: it doesn't guarantee correctness with certainty, but it does amplify the probability that our conclusions will be correct.
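For what it's worth, the amplification trick is easy to sketch. Here is a toy illustration (the `noisy_test` function is entirely hypothetical, standing in for any randomized algorithm that answers correctly with probability, say, 0.75): repeat the algorithm independently and take the majority vote, and a standard Chernoff-bound argument shows the error probability shrinks exponentially in the number of trials.

```python
import random
from collections import Counter

def noisy_test(x):
    """Toy randomized 'algorithm': returns the right answer to the
    question 'is x > 0?' with probability 0.75, and the wrong answer
    otherwise."""
    correct = x > 0
    return correct if random.random() < 0.75 else not correct

def amplified(x, trials=101):
    """Run the noisy test independently many times and take the
    majority vote; the error probability drops exponentially in
    the number of trials."""
    votes = Counter(noisy_test(x) for _ in range(trials))
    return votes.most_common(1)[0][0]

# A single run is wrong about 25% of the time; the amplified
# version is wrong with probability on the order of 10^-6.
print(amplified(5))   # True, with overwhelming probability
```

The analogy to science as a social process is that many independent, individually fallible checks on a claim play the role of the repeated trials.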

So I don't think recognizing the possibility of the incoherence of free will negates the usefulness of rationality or science.
 
Can you articulate the results that would falsify your position?

My position, such as it is expressed in the comments here, is that I don't find much of the philosophical debate on free will very enlightening, because it seems that many philosophers are largely mired in pre-scientific reasoning, instead of trying to understand the neurological basis for the concepts they define so vaguely. This is not true of all philosophers by any means (Churchland, Dennett).

My position would be falsified if philosophers came up with definitions of "free will", "choice", "consciousness", "self-awareness" that were agreed upon by the vast majority of neuroscientists, and these definitions were then scientifically fruitful. Based on the history, though, this seems very unlikely.

I am happy to make the following prediction: we will eventually understand to what extent we have "free will", what "consciousness" is, and so forth, but in the end our understanding will be much closer to Leibniz's "pumps, pistons, gears, and levers" than what we see today in the Stanford Encyclopedia of Philosophy.
 
pausing only to notice that so far no computer can actually communicate the experience of such a recognition or speculation

Either you do not know very much about computation, or you have different meanings for "experience", "recognition", or "speculation" than I do. Expert systems do this routinely: they make decisions; they can speculate on possibilities; and they can explain the reasons for their decisions.

the evidence that anyone but sentient, language-using creatures can make moral choices is, at present, non-existent.

Read the work of Frans de Waal. Your claim is overblown and without foundation.
 
Yes, and so what? What you're describing is exactly how a lot of people behave.

I hope you don't think that escaped me. ;-)

The question is: how do you "know" that the people who do that are not me and you instead of "them"?

Whole volumes have been written on mistakes of cognition and how it is nevertheless possible to make good judgments under uncertainty.

By people who deterministically had to write what they did with no way to check whether their results were "true" rather than just the results of bad "programming" or "hardware."

In computational complexity we often speak about "amplifying" the probability of correctness of randomized algorithms.

And how do you decide whether your "decision" about what amplifies the probability that our conclusions will be correct is itself correct? How do you know that your deterministic programming has not misled you about that?

So I don't think recognizing the possibility of the incoherence of free will negates the usefulness of rationality or science.

But, of course, you were deterministically destined to say that, without any way to know that what you call "rationality" or "science" have any relationship to "reality."
 
Jeffrey Shallit,

The example my father used was Einstein's coming up with the theory of relativity. In more general terms, I would think of it as paradigm shifts or insights or imaginative forays that reorient the entire mental model, whether it's of oneself or a field of knowledge.

-- pew sitter
 
"Then feel free to provide a better definition of "choice", "free will", "self-awareness", "consciousness", etc., or point me to a place where these better definitions can be found."

But you yourself (if there is a self that is you) pointed to the Stanford Encyclopedia of Philosophy, where it's clear there is a rich history of debate and not one clear definition.

And in the page you linked, it even describes how some are engaging in neuroscience. It seems like your biggest objection is that they haven't settled the debate and you think you have.

Except, even many theologians believe there is a physical aspect to our selves that science can discover, so they're not disputing your findings or even bothered by them. It's the interpretation that's the issue.

I think your use of "choice," "decision" and "self-awareness" is imprecise. You're using those terms to describe systems that model what humans do, but that are put into place and set forth mechanically. And most importantly, those systems have limits to their models that you yourself impose - consciously or unconsciously.

As I brought up with my complex robot with the right to vote, the issue of whether these systems can really be self-aware and autonomous will always be there. After Philip Morris, Exelon, BP, Countrywide and countless other abuses, it is not irrational to mistrust claims about any product.

You yourself displayed the flaw in your argument with your reply about the evading robot. If someone tried to corral me and mechanically assault my brain and thought processes against my will, I would defend myself with force up to and including lethal force if necessary. And I would be justified.

But that's not a "choice" you instilled in your own robot, which leads me to conclude not even you think they're anything more than a product and not a free agent. Because if your product killed a human being in the context you described, you would be held responsible and not the product.

So why aren't you being more precise with your language? You are creating programs that model "choice," mimic "decisions" and exhibit the illusion of "awareness."

It is interesting and important work, and I follow it with hopes that it can help our society in the same way that sophisticated climate models have helped us understand our changing world. At the same time, though, the weather models are not the weather.
 
@john
There is no "reason" to believe that those factors will result in correct answers.
Sure.
What you or anyone else needs to do is come up with a plausible hypothesis of how "free will/choice" is supposed to work (not a philosophical question, but a process/mechanics one). You have to concede that both "random" and "deterministic" models are plausible even if unproven.

I'm willing to concede the existence of a "soul", a "will", a "mind", a "conscience" for this argument (they too will have to rely on the same inputs).

then it is futile to talk about "reason" and "science" and pretend that your deterministic results are "better" than other people's deterministic results.
I didn't claim it is deterministic. The claim is merely this: I can understand what the deterministic folks are saying. I can understand why decisions may have a random component. I don't understand how free will/choice is supposed to work (not what it is), so go ahead and suggest one plausible mechanism that makes some sort of sense (which, if provided as an input, changes stuff for me).
 
Jeffrey,

I hope you don't think that anyone on this thread (I think I can safely speak for all of us) believes in a free will that defeats determinism and causation. That is pretty much a view held only by theists and New Agers these days. Determinism is a given.

This means that all of us here are also "having our cake and eating it too": acceding to the causal description of the world presented by science, and yet acting in the world as though we were responsible for our actions. Even someone like Dawkins--who suggests that if we were consistent we would not hold people responsible for their actions, but should rather say (of a child murderer, no less) "this unit has a faulty component"--does not actually make the case that we should *really* act as if this were our state of affairs. And in your blog post, you announce that you, too, are a compatibilist. Sounds like we are all playing on the same team.

The disagreement would seem to be restricted to the right to use certain language, and the wisdom of exercising that right. As John mentions above, there are higher order commitments to make good on than just making sure our language is always consistent with that of modern biology. We are committed, as he notes, to reason, which requires at least the illusion of a capacity to exercise analytic, symbolic and speculative thought.

Another way to put this is that we are committed to the metaphysical view that human beings are autonomous free agents, capable of separating the actual from the potential (I haven't seen de Waal suggest that other primates do this, but haven't read every word of his; perhaps you have a citation?) The legacy of Locke, Hobbes, Kant, Descartes and Rousseau is what enabled the scientific project in the first place. We can abandon this metaphysical commitment (I certainly have problems with some of the particulars), but we can't do it wholesale without undermining the foundation of science, which needs to at least *pretend* that human beings can choose some small subset of their actions; to say, I will do this and not that, because I am interested in the results.

We haven't stopped talking about love, thank Vishnu, just because we know its chemical basis. We haven't stopped talking about justice or fairness just because we understand how and why they arose as important concepts among primate communities. And we haven't stopped talking about truth even though we admit it tends to reflect our own illusory "middle-world" picture of things. (Dawkins again: "[Reality is] whatever [an organism's] brain needs it to be in order to assist its survival." All fine and good, unless we take it so much to heart that we dig ourselves into an epistemological hole we can't get ourselves out of.)

You don't have to find any of this "enlightening," of course. But if you want to engage in rational conversation, without tricks or pranks, you need to take logic and induction seriously. Unless you want to propose we restructure society to reflect the illusory nature of reason, justice, selfhood, choice, love, and any number of other foundational concepts, then you are forced to accept that neurochemistry, to use your example, has limited application, however important in its own way.

More importantly, though, you can't point out that everyone else is living a charade, when you are participating in the very same game, to the very same degree. We're all interested in reconciling competing descriptions of the world. It's not easy, and progress happens slowly. If you are too impatient to live uncomfortably in this intermediate stage, then maybe philosophy is just not for you.
 
Either you do not know very much about computation, or you have different meanings for "experience", "recognition", or "speculation" than I do. Expert systems do this routinely: they make decisions; they can speculate on possibilities; and they can explain the reasons for their decisions.

I remain agnostic on whether we can ever teach computers to "think." I won't rule it out. But this is not what I claimed: I said no computer has ever reported its experience, and in fact there's no credible reason to believe any computer system, expert or otherwise, has had an experience. Let's start there.
 
What you or anyone else needs to do is come up with a plausible hypothesis of how "free will/choice" is supposed to work (not a philosophical question - but a process/mechanics one).

Uh … no, I don't. Once again, I'm not arguing that Jeffrey or Coyne are wrong. I'm saying they ought to face up to the “logical” and “rational” consequences of their position … since nothing like “logic” or “reason” exists under it.

I can understand what the deterministic folks are saying .

So can I. I just want them to understand what they are saying.

... suggest one plausible mechanism that makes some sort of sense (which if provided as an input changes stuff for me).

That we don't know enough to answer the question now? … like Newton didn't know enough to deal with relativity and Darwin didn't know enough to deal with genetics and … well, you get the idea. I was making fun of Coyne's suggestion that we know enough now to make confident assertions AND his failure to understand the consequences.
 
That we don't know enough to answer the question now?
Again - sure.
The point is that people who seem to believe in some sort of free will/choice have not suggested any sort of plausible mechanism (which Coyne has - even if he may be wrong). And they have the same degree of confidence that Coyne does, no? In fact they have enough confidence to assume it exists so that they can move on to other problems like the one of omniscience :).

I was making fun of Coyne's suggestion that we know enough now to make confident assertions
I look forward to the time you make fun of people who respond to normal religious problems with God had to grant humans "free will".
 
I look forward to the time you make fun of people who respond to normal religious problems with God had to grant humans "free will".

I will ... and have ... when they indulge in self-contradiction.
 
I'm finding this all very amusing, with determinists clinging to what they believe is an illusion rather than embracing their beliefs and their implications.

I especially like Mr Shallit's Humpty Dumpty impression, using "choice" to mean an inevitable outcome of a given set of inputs. Like a rock I hold up and release chooses to fall, I suppose.

While I can understand the difficulty of rooting purpose (without which Dawkins' "faulty" is meaningless) and choice out of their language, I would think that intellectual honesty on the part of determinists would force them to make a serious effort to do so.
 
Mike from Ottawa
Intellectual honesty would demand you figure out how non-determinism/free will/choice (which is non-random) is supposed to work, or abandon your certainty till you do.
 
(without which Dawkins' "faulty" is meaningless)

Good catch, Mike.
 
I hope Jeffrey is able to return to the conversation - it's been an interesting exercise.
 
Working on a project involving the Dover trial transcripts while watching football, and came across this interesting exchange:


Q. Do you know what -- how would you define mind, m-i-n-d?

A. Mind? Mind is the capacity to experience, to ask questions about one's experience, and then to criticize the ideas that we come up with to explain our experience.

Q. Is mind a function of intelligence?

A. Well, there are different ways of understanding mind. You can understand it as a process or you can understand it as a concrete reality from which mental processes emerge.

Q. Is there a real distinction between the two that you just defined as far as being a part of mind?

A. Well, mind as a process unfolds in cognitional acts such as being attentive, being intelligent, being critical, and being responsible. Mind as the foundation of that, we call it the desire to know or you could call it the intellect.

Q. Both of those would require intelligence, though, the processing and the desire to know?

A. In order to explain their existence, you mean, the existence of mind?

Q. No, what mind is, the definition of mind.

A. They would entail what I would call intelligence, yes.

Q. Is mind a part of nature?

A. Yes, it is.
 
So why aren't you being more precise with your language?

Well, I think I've actually been more precise than others. I am precise to the extent that my knowledge about neuroscience (quite limited) and physics (also limited) permit.

You're using those terms to describe systems that model what humans do, but that are put into place and set forth mechanically.

No, not necessarily, as I already observed. We can make systems that rely on true randomness, such as particle decay.
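To make the point concrete, here is a toy sketch (entirely hypothetical, not anyone's actual design) of the "rewired thermostat" mentioned earlier: its action depends on the temperature plus a random input, so its behavior is not a deterministic function of temperature alone.

```python
import random

def thermostat(temperature, setpoint=20.0, jitter=0.5):
    """Toy 'rewired thermostat': turns the furnace on when the
    effective reading falls below the setpoint, but a random
    perturbation of the reading means the action near the threshold
    is not a deterministic function of the temperature alone."""
    return temperature + random.uniform(-jitter, jitter) < setpoint

# Far from the setpoint the outcome is effectively fixed; near it,
# repeated calls at the same temperature can differ.
print(thermostat(10.0))  # True: furnace on
print(thermostat(30.0))  # False: furnace off
```

In a real device the random input could come from a physical source such as particle decay rather than a pseudorandom generator; the structure of the decision is the same either way.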

which leads me to conclude not even you think they're anything more than a product and not a free agent

Sorry, I don't know what you mean by "free agent". Can you define it?
 
" ... as I already observed. We can make systems that rely on true randomness, such as particle decay."

Which doesn't save your idea of the thermostat as decision-maker. It merely means the thermostat's action (the terms you should be using where you've been using the free-will based terms "choice" and "decision") is deterministically based on the temperature and the random number input. The thermostat no more makes a "decision" or "choice" than if the sole input is temperature.

You determinists just refuse to embrace your belief and instead insist on the inconsistency of clinging to a set of ideas you profess to believe are illusions.
 
"Sorry, I don't know what you mean by "free agent". Can you define it?"

Within the confines of the morality play I proposed? Sure!

The robot is a product that killed. There was no real choice in the matter, its programming caused it to select an option and act upon it. We may despise the product and destroy it, but we also understand that it was not responsible for its own actions so we do not assign moral responsibility for the killing to it. We understand robots don't kill people, people kill people and somewhere out there someone is making killer robots.

So we find the person who created the robot and the programming, the free agent: the one to whom we can assign moral responsibility for the action the robot took. This person chose the path that you did not; he or she created the option in the robot's programming for a certain level of force that the programmer should have known could result in death.

We prosecute that person and convict based on the evidence. In an enlightened society, we would not guarantee but provide for the possibility of reform, for the free agent to recognize their action was morally wrong, to feel and express remorse and so earn forgiveness and parole.

Unfortunately, our prison system seems to reflect a deterministic/fundamentalist view that people are irredeemable, that they cannot escape from their base nature. Call it determinism or call it evil, it seems to result in the same thing.

I appreciate that you choose to engage with me, but I'm also interested in reading your answers to the other commenters here who have a far better grasp of the philosophy in question than I have.
 
Sorry, I don't know what you mean by "free agent". Can you define it?

Is reality that which can be defined? Neither determinism nor randomness has anything to do with free will, which by necessity must be uncaused and irreducible. Free will is a concept related to one's philosophy of mind, not matter.
 