Elliot Temple and Corentin Biteau Discussion

from JustinCEO Topic - #73 by CorentinBiteau

I didn’t know you think there are two different types of suffering. Do you know of any literature that discusses this issue? Did you read it somewhere? Is it something you’ve thought much about? Did you just think it was common sense?

I thought the standard animal activist position is that animals have the same kind of suffering as humans but perhaps to a lesser degree. Saying they don’t have human-type suffering, just something else, is a pretty different claim which merits a great deal of elaboration on what the something else is and why/how it matters. It also runs into issues because some of your arguments involve human/animal comparisons but it’s unclear what the point of those is if you aren’t claiming animals have the same kind of suffering as humans.

I was answering this:

My current perspective is that animals don’t have the ability to think, so they don’t have the ability to “suffer” in the way that humans conceive of suffering.

There is a kind of suffering associated with projecting into the future or the past. For instance, replaying bad events from your memory, or having anxiety about events that have not happened yet (like being scared that a catastrophe will unfold). Fearing your own death is a typical example.

This is related to actual events that cause suffering, but different in that it is about replaying them in your mind.

Animals cannot experience the kind of suffering I mentioned above - they cannot have anxiety about a catastrophe that will happen many months from now.

However, for more immediate suffering, like starving to death or having a broken leg, the suffering would be roughly of the same kind. There are variables, of course - humans can experience additional anxiety by thinking of their own death in advance. On the other hand, they can also lower their pain if they are good meditators.

But direct, immediate suffering (being in pain, being very hungry) would be of the same kind, which is why animal activists focus mostly on that.

Anyway, I don’t feel debating this is the most important thing; we have other stuff to resolve before that.

What I really want to learn about is internal states of mind. That’s the core point required to link behaviour and internal feelings.

So, what is your answer to what I said here?

Anyway, you still didn’t answer that.
I asked twice, in addition to the first time. And I asked many more times about internal states of mind. I do not feel this is respectful. If you do not answer this time, I’ll be forced to consider that you do not have an answer to this point (meaning you do not consider your argument worth defending).

I am sorry that things have come to this, and I know you will take this pretty poorly, but I really shouldn’t have to ask so many times about a core point of your argument.

I’m not Elliot, but I’m interested in this discussion and I’m having trouble following it.

The way you do your quotes, you aren’t giving sources and they don’t link back to the source, so it’s hard to tell what the quotes are from. I did find it with a text search. Just so you know, if you highlight the text you want to quote and click “Quote”, it will paste it into your post along with an attribution and a link back to where it is from.

I don’t think that’s possible for humans or animals. As I discussed previously, pain is not automatically, directly or instantaneously suffering. Pain can only become suffering via a mental interpretation.

Why do I think animals don’t do such mental interpretations? Because they don’t have ideas, don’t have opinions, don’t have creative thought or intelligence, can’t create knowledge about interpretations of things. Interpretation involves knowledge creation – an interpretation is knowledge about what something is or means (information adapted to the purpose of understanding or evaluating something). (That’s for one type of interpretation. The word “interpretation” has multiple meanings. E.g., in humans, there is information processing done on visual data from the eye before it reaches the mind. That could be called “interpretation”, but it’s not the same thing as an idea-based interpretation within the mind; it’s just an algorithm using pre-existing knowledge.)

I thought that by discussing types of suffering, I was discussing internal states of mind (your preferred topic).

Your linked post has multiple questions and lacks the words “internal”, “state” and “mind”. Being more specific would be helpful. I’ll attempt to answer it:

Elliot Temple and Corentin Biteau Discussion - #38 by CorentinBiteau

Personally, I don’t really understand why knowledge creation is required for thinking that something is bad. When my expectation is “I don’t want to starve to death”, I haven’t created any knowledge here. It’s innate.

The “nature/nurture” debate has a lot of existing literature. Are you familiar with it? You seem to be advocating the “nature” side of the debate, but I’m unclear on how/why you came to that conclusion, or why you think it’s necessary to bring that debate into our discussion. My position is more in line with “nurture”. I also regard this as a different topic than internal states of mind.

Why do you think that, for instance, me being fearful of dying is something that requires knowledge creation? I know I can override that, to some extent, but this seems like the default position, no?

I don’t think human infants fear dying or even understand what dying is. They have to learn about dying later.


You should also give sources before all your quotes. For quotes on other pages, give author and link. For quotes on the same page, it’s easier to highlight the text and use the quote button. Sources for quotes can be unnecessary when multiple consecutive quotes have the same source. Without quote sources, it’s hard to read and understand posts (especially for other people, or people reading at a later time, but even for me, despite familiarity with the discussion, it was still extra work to try to figure out the missing information).

Oh, ok, I was referring to this:

For the topic of internal states of mind, I asked this a long time ago:

I still feel like Elliot didn’t give an answer to this question - which is very important: if we don’t know how subjective experience arises, how can we say that it cannot arise in animals? I find this worrying given how long the debate has gone on. I asked more here:

(the last quote was 40 messages ago)

Oh, and here’s another point for which I’d like an answer:

As I said, can’t animals have “default” opinions on what’s good and bad arising from evolutionary pressure? And that they mostly cannot change? This wouldn’t require knowledge creation, but they’d still be in the situation of “wanting something and getting something else” - which, as you said, is suffering.

Oh, I certainly don’t want to go into a “nature/nurture” debate. However, aren’t there things that humans do not like in a kind of universal way? Like not wanting to starve to death? I’d expect this not to be cultural in that case.

This is true - fear of death requires anticipation, so that was a bad example.
How about not wanting to feel hungry? Not wanting to have broken limbs? Not wanting to be thirsty, or to be sick? I mean this in the immediate sense, i.e. children do not like it when they are hungry.

Hunger and pain both provide sensations, which people fairly consistently form negative mental/psychological interpretations of.

Our bodies give us clues (information sent to the brain) about what is good or bad. For some issues, like hunger, ~all people get fairly clear and strong clues, so ~everyone gets the hint and forms ideas that are in line with the clues rather than contradictory to the clues. It’s the same with e.g. taste and smell – we form opinions about which tastes and smells we like, but these aren’t random or arbitrary, they are guided by what information is sent from our nose and mouth to our brain, which is (indirectly) controlled by our genes.

Animals, I claim, get the clues (information being sent to the brain) which then feed into behavior algorithms and that’s it. Animals simply don’t have psychology. They have no ideas about what the world is like or what they want. You (correctly) see purpose in their actions because their actions are guided by knowledge, but that doesn’t make them conscious.

Instead of evolution programming an animal to want something, which it then figures out how to get, evolution just directly programs the behavioral algorithm for getting it and skips the wanting. If evolution programmed in the wanting, that wouldn’t work without intelligence, because intelligence is required to create knowledge about how to accomplish a goal (if you don’t already have a built-in algorithm to accomplish that goal).

Imagine taking a self-driving car where all its behaviors are 100% determined by algorithm, and then also programming in wants or goals without adding the slightest bit of behavioral flexibility. That wouldn’t make sense. What are the wants or goals for if the behavior can’t be changed to better achieve them? But to adapt behavior to better achieve the goals requires intelligence. Or it requires flexible algorithms where the “goal” is programmed into the algorithm rather than actually being a psychological state.
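
Here’s a minimal sketch of what I mean, in code (a hypothetical toy controller I made up for illustration, not anything from real self-driving software):

    # Hypothetical toy example: a controller whose behavior is 100% determined
    # by fixed rules. A "goal" label is attached, but no part of the code
    # consults it to adapt behavior - that adaptation is what would require
    # intelligence.

    class FixedController:
        def __init__(self):
            self.goal = "reach the destination safely"  # stored, never used

        def act(self, obstacle_ahead: bool, light_is_red: bool) -> str:
            # These rules fully determine behavior; self.goal plays no role.
            if obstacle_ahead or light_is_red:
                return "brake"
            return "accelerate"

    car = FixedController()
    print(car.act(obstacle_ahead=False, light_is_red=True))  # prints "brake"

Deleting the goal line changes nothing about what the car does, which is the sense in which the “goal” isn’t doing any psychological work.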

You’re thinking of animals in terms of having psychological goals and values, and then using their limited intelligence to pursue those goals. But this model doesn’t work if they have no intelligence, merely algorithms. If they have no intelligence, they can’t do problem solving, come up with creative ways to achieve goals, any of that. Having a goal wouldn’t be useful because they can’t apply any intelligence to achieving the goal. They can’t intelligently choose between different options for what actions to take. Also, goals, values, etc. are types of ideas, and ideas are the primary data type involved in intelligence.

(This is all premised on, as discussed earlier, animals entirely lacking intelligence, which is what I consider the key issue.)

Yes.

Feelings are mental interpretations – ideas – within an intelligence, which are partly interpretations of underlying physiological states (e.g. levels of different chemicals or information being sent to the brain from e.g. pain sensors) and partly just mental concepts with no direct relation to your physical body (e.g. you can be sad because you heard on the phone that your grandma died, and the things you’re sad about are ideas in your head, but the sadness is not really related to or caused by the sound waves and how your ears work). Also, people often create underlying physiological states to go with their emotions – e.g. get sad about grandma’s death first, then release chemicals (and then interpret those chemicals as sadness so it reinforces the sadness).

We can interpret animal “feelings” because of:

  1. Shared evolutionary history.
  2. Experience with that species (we know dogs pretty well).
  3. Making things up; being wrong. E.g. some people might think they interpret emotions from fish or snakes, or something even more unlike us such as an insect, and just be wrong. Similarly, people often make up a bunch of personality stuff for their dogs or infants which is incorrect.

Our ancestors were animals. Our intelligence is built on top of that ancestry, which still has a lot of the same underlying physical stuff like taste buds, pain sensors, etc. Some of our feelings are interpretations of those physical things. And our visible behavioral reactions to those things are sometimes not changed that much from when we were animals. Why change it? Like theoretically you could have a culture that indoctrinates everyone into making a frowning expression when happy and a smiling expression when sad, but why? Tons of work and no reward. And if you don’t impose it on a large percentage of the population in a short time period, it’ll be really confusing. Instead, things like smiles and interpretations of smiles are self-reinforcing. If a communication is successful, you want to keep communicating the same way in the future.

I think people overestimate how much they can read expressions, body language, voice tones, etc. They get more wrong than they realize when reading humans or animals. But they often do get the basics right.

There are other factors, e.g. the release of certain hormones when wounded might make you shake without the shaking being a behavior controlled or chosen by your intelligence. Or it might affect your vocal cords and cause a change in voice tone so that when your mind commands saying some words the tone comes out differently in a way the mind didn’t intend. Or you might visibly bleed without deciding to do that. So we give off information to observers which isn’t controlled by our minds, so that’s going to have overlap with information we observe from animals. People also form habits related to these things, at young ages, which are hard to control or change, though actors and conmen may get very good at faking some of this stuff.

How do you know that?

I’m not asking whether it’s possible that animals have no ideas about what the world is like.
I’m asking how you obtained the information that “animals have no ideas about what the world is like or what they want”.

Not sure I understand. Even simple programs want something. Current AI systems with neural networks (not at the level of animals yet) have goals - even though they don’t have goals like “not dying” yet. No internal feelings, of course, but they’re based on algorithms. Maybe we have different definitions of “wanting”?

Not sure how any of that relates to “having internal states of mind”.
We could imagine humans having intelligence (just the brain processing stuff and making choices intelligently) but no internal states of mind.
Why do you think that internal states of mind are not just an “add-on” to the already existing programming?

The goal is “staying alive” and “reproducing”. Isn’t that enough? Can’t the built-in algorithm accomplish that goal?

No current software wants anything. This is required for everything else I’m saying. It is designed to accomplish goals. That’s totally different. Speaking about it “wanting” something is a metaphor because it contains knowledge (information adapted to a purpose) which looks kind of like wanting something.

A hammer doesn’t want to hammer in nails. It’s designed for that purpose by a person who (probably) wants nails to be hammered in. Software is the same. I guess you imagine we program wants into it. But we don’t. We just program in instructions. Add these numbers. Check this condition, then do this. At no point using techniques like those does one program in understanding or wanting, but one does get the appearance of those things because the software acts apparently purposefully.
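
As a minimal sketch (a hypothetical thermostat program I made up, not any particular real software), here is software built from exactly those kinds of instructions, which looks as if it “wants” the room at 20 degrees:

    # Hypothetical toy example: a thermostat built only from instructions like
    # "subtract some numbers" and "check this condition, then do this". It acts
    # as if it "wants" the room at 20 degrees, but nothing here is a want - the
    # purpose belongs to the designer, expressed as instructions.

    TARGET = 20.0  # degrees

    def control(temperature: float) -> str:
        error = TARGET - temperature   # subtract some numbers
        if error > 0.5:                # check this condition...
            return "heat on"           # ...then do this
        if error < -0.5:
            return "heat off"
        return "hold"

    for reading in [17.0, 19.8, 22.3]:
        print(reading, "->", control(reading))

The appearance of purpose here comes entirely from the knowledge the designer put into the instructions, not from anything the program wants.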

Plants don’t “want” sunlight but they do grow towards the sun, face leaves towards the sun, etc. The plant behaves in various ways as if it wants sunlight, but it doesn’t have wants, it’s just designed to accomplish certain things.

Does water “want” to flow downhill? Not literally. It just follows the laws of physics.

The thing where you disagree or don’t understand is the stuff I was trying to talk about the most: intelligence, epistemology and knowledge. We need to get those issues clearer. I thought you were trying to differentiate animals from current software, but apparently you’re not, regarding wants. I am trying to compare animals to current software and differentiate that from humans.

The key issues here are differentiating appearance of design and purposeful behavior from intelligence, wants, goals, having a purpose, etc. The appearance of such things comes from knowledge (information adapted to a purpose) which is created by evolution of e.g. genes or ideas. When something acts like it wants something or appears to want something, that means knowledge is present. You then have to consider the source and nature of that knowledge.

Does that help?

That is a “goal” of genetic evolution, not of the animal (it is a purpose for which evolution adapts the animal). Just like self-driving car programmers have goals but the cars don’t. (But it’s also different because genetic evolution doesn’t actually have goals in the human sense either.)

Ok. So let’s say there is a difference in the degree of “wanting” something. Animals do not create knowledge - it comes from evolution. But humans can create knowledge.

Still not clear how any of this leads to internal states of mind or not, though.

Can’t it be both at the same time? It’s the goal of the gene agenda, yes, but the goal of the animal too, i.e. not dying. Well, more precisely, genetic evolution favors individuals whose goal is not dying. That does seem to lead to built-in preferences either way.


Oh, by the way, I noted something during the debate. When I make several points, you answer the easiest ones but leave out the more challenging ones. That’s curious. Is it possible to answer the following points?

That is a very uncharitable accusation. If you dislike me and consider me irrational, I can stop spending energy trying to help you… If you have serious arguments that I’m doing something wrong, you could make them in detail and directly instead of throwing out such a harsh accusation as a “by the way” and as something “curious”. I do not appreciate indirect, passive-aggressive, unargued social attacks on my rationality.

The way you’re treating me here is mean and harmful. I understand that it’s a common bad habit people pick up, so if you want to improve yourself to treat human beings better, that is something I could sympathize with instead of being offended by. But I don’t wish to simply put up with thinly veiled insults.

You are right, that was unreasonable of me.

I shouldn’t have done that.

I think I have some frustration left from the time I asked repeatedly about a specific point and got no answer (I made the list in message 65).
So I vented it this way - even though that was uncalled for and does not improve anything.

I apologize.

You bring up too many things at once. I put effort into trying to deal with this productively and choosing which topics to address. It’s OK if you don’t mind having half of them read but not replied to. But if you really want replies to specific things, you should drastically cut down on saying anything else.

Do you think calculators “want” to figure out the correct answers to math problems?

No.

Understood. It’s a good thing that this has been made explicit.

Then, if I have to be focused, my question is: “What makes internal states of mind emerge or not? How do we go from atoms to internal feelings?”

I will explain why I ask this question.
A key question, in this debate about suffering, is: what should we compare animals to? My position is that we should compare animals to humans (if their behavior looks like suffering), while yours is that we should compare animals to robots (if their behavior can be explained by programming).

Your position makes sense if we take behavior and intelligence as being the cause of internal feelings.
My position is rather that we do not really know whether or how internal feelings arise. Maybe it’s related to intelligence, maybe it’s linked to how the brains of living beings work, maybe it’s something entirely different we cannot observe, maybe even linked to spirituality. How would we know? We can’t predict that internal feelings will emerge from the laws of physics; they seem to emerge from something entirely different. We can’t predict the emergence of internal feelings from programming knowledge either; we can only make hypotheses.
In such a case, the safest bet, for me, would be to draw a comparison with the only species that we know for sure has internal feelings: us.

This is why I’m asking: what do you think is the actual process by which internal feelings arise?
You can say that you don’t know. Or that you have a theory but you aren’t 100% sure. Or that you are 100% sure, in which case picture me interested.
Ok, so how would you answer this question?

Do you think plants “want” to grow towards the sun, or that Venus flytraps “want” to close around prey?

You may have to define “internal states of mind”. I think you mean something along the lines of psychological states, but clarifying details like whether you think any current software or plants have it might be relevant (I think they don’t unless you define this stuff unusually).

My answer is probably “(general) intelligence (which is the point at which a system has ideas)”.

I consider my view non-refuted and to have no known viable alternatives, but it’s fallible and less well understood than some other topics like, say, whether dogs have teeth.

I don’t think they do. But to be fair, if they did, I’d have no way of knowing, so…

Yes, I am talking about psychological states - the fact that, for instance, I feel my body touching the keyboard right now (and that it’s very hard to describe how or why). Having a subjective experience of things.

I do not think that computer programs have it. But then again, I absolutely cannot predict at what point a computer program could one day have subjective feelings, or even if this will ever happen. I could guess but that would be speculation.

Ok, general intelligence it is, then.
How do you go from “atoms reorganizing in the brain” (a lot of it, enough for intelligence) to “having an idea of what the taste of a strawberry feels like”?
How do neurons go from whatever form they take to react to the chemicals present in a strawberry, to the subjective feeling of what a strawberry tastes like?

If you just don’t know much about how this stuff works, why be confident that some specific activism is a really great idea?

The relevant thing about the atoms is that they’re a computer. Then, looking at the software, it can have an idea of what a strawberry tastes like when it implements ideas (which is what general intelligences do – ideas are a name for the replicators they evolve).


The atoms are a computer, yes. If they are arranged in a specific way, they can “handle” something like a strawberry: have an arrangement related to its form, an arrangement related to its … All of this can be predicted if we were to look at the laws of physics.

What I can’t predict using the laws of physics is the fact that I perceive things from a subjective point of view. That I see something from the inside (some call it qualia, I think?). That I am experiencing something right now. That I feel well, or bad, or a mix of both.

What I’m asking is: can we predict when a computer goes from just handling data (with bytes or neurons) to having a mind that can feel like it is experiencing something?

I don’t see how computers going from “handling sensory data” to “handling abstract information like ideas” would lead to the completely different state of perceiving things from a subjective point of view.

If I don’t know much about this stuff (and I feel like nobody really knows; there are explanations but it’s mostly speculation), then I’ll just draw an analogy with the closest thing for which we have data.

It turns out that for the questions of “having internal states of mind” and “suffering”, the closest thing we have at hand is humans. And I see that when it comes to suffering, animals have behaviors similar to humans’. Plants don’t.

So I stick with the analogy. I may be wrong. That’s possible. I may also be wrong in believing other humans on Earth when they say they are suffering. But as animals display traits very similar to ours when it comes to suffering, being excited, being sad, and so on, I think the safest bet I can make is going with that analogy.

I feel like any argument beyond that is interesting but runs into the problem that we can’t verify it.

If your argument were “animals have robot-like behavior determined by genetic programming”, then your case could be debated, because we can verify that.

But your argument is “animals have robot-like subjective perceptions (a.k.a. none)”, which is pretty much impossible to verify (at least I don’t know how). And I’m suspicious of stuff I can’t verify. I know that if I said things I can’t verify, derived from the best logical arguments around, I’d probably be wrong, because the world is really, really complicated.
It would be great if I were wrong, though! I’d have no need to worry about harming another being when buying food.

Trying to speak in your terms: What is your credence that cows can suffer?

I consider the subjective perceptions idea to be bad philosophy which confuses people more than it enlightens. I don’t think discussing it is necessary to reaching conclusions on topics like intelligence or animal suffering – suffering depends on stuff like wants/values/preferences/goals (ideas), not on perception or subjectivity. But as a brief approximation, subjective perceptions (and emotions and feelings) are ideas.

Right now, between 80% and 90%. I haven’t yet seen reasons to put the estimate far from my credence that humans can suffer - there are too many similar behaviours between the two.

I don’t really like the term “ideas” here. It refers directly to a very human concept. I don’t feel that describing emotions and feelings as ideas works well. It seems to me that emotions have a “non-mental” portion, especially as they are directly related to physical sensations.

If you don’t like the term “subjective perception”, then you might prefer the term “sentience”, which is the one most commonly used by people interested in animal welfare.

I don’t think discussing it is necessary to reaching conclusions on topics like intelligence or animal suffering – suffering depends on stuff like wants/values/preferences/goals (ideas), not on perception or subjectivity.

Ok, I was under the impression that you thought animals had no sentience, no subjective feelings that could allow them to feel suffering. But that’s not your position, if I understand correctly?

Your position, then, is rather that animals can be sentient, but they need wants/values/preferences/goals in order to suffer. Meaning that if animals had preferences, goals or wants, they would be able to suffer. Is that right?