Elliot Temple and Corentin Biteau Discussion

Maybe they would have heard it from Descartes?

Or maybe from Programming the Universe: Are Animals Robots? (1999) which just came up on a web search. Based on skimming, this article isn’t very good, but it does bring up this issue right in the title. And it doesn’t treat it as an important new idea.

Could the lack of discussion of these issues in the animal welfare literature you found be due to bias? It seems like that literature is agenda-driven rather than trying to brainstorm and then analyze all the possibilities.

Today, many smart people believe that humans and animals both have Turing-complete computers for brains, and intelligence is a matter of software not souls. I’m not aware of any alternatives that can stand up to criticism in debate today. Do you agree or disagree that brains are literal computers and the fundamental human/dog differences, if they exist, are in software?

In this modern context, it makes sense to consider what software animals run, and how it differs (or is the same as) human software or Mac software. So to me it’s very bizarre and problematic if none of the animal advocates have tried to consider such things. That seems like they rushed into activism before thinking the topic through to figure out if they were actually right, and then got all their followers to do that too.

Moreover (still my personal opinion), maybe people don’t spend a lot of time on the topic of differentiating animals from non-AGI robots because they don’t feel like the comparison is warranted ?

If animal welfare advocates think an argument that defends the meat industry is incorrect, that isn’t a very good reason to not spend any time trying to refute it.

Animals seem to have a large number of specific traits that would differentiate them from robots: they have nociceptors, they tend to avoid negative stimuli less when under painkillers, they are evolutionarily much closer to us (who can suffer) than to robots, some species can cry, play, squeal, mourn for days…

I’ve heard all this stuff before, repeatedly. I don’t think you’ve given it a lot of thought, which is one of the reasons that someone (who agrees with you) should think it through more and write their thoughts down. You’re basically saying animals are different than robots because they have sensors and do complex behaviors. But robots can be and are designed to include sensors and do complex behaviors. E.g. it’s easy to make a robot that makes a squealing sound.

This 2021 book looks interesting. People study animals and figure out things like how complex traits evolved. It’s widely accepted by scientists that some complex animal behavior is innate not learned. That raises questions for animal rights activists like which animal behaviors are innate and which are learned, and what that means.

If a bird migrating, a salmon returning to its spawning grounds, or a koala eating leaves isn’t due to intelligence, we should investigate what else isn’t intelligent either.

This book apparently does that. I may buy it. From the marketing:

All computer programs are algorithms. Eric Cassell wonderfully describes the clever algorithms in many animals. Where did these embedded computer programs come from? Specified complexity, irreducible complexity and the Cambrian explosion are inexplicable from a Darwinian viewpoint. In this book, Cassell masterfully adds animal algorithms to the list. [says ROBERT J. MARKS II, PHD]

Evolution created clever algorithms that account for at least some of the complex behaviors of animals. I doubt you’ll actually deny that.

This is a 2021 book. Is this stuff new? Some parts may be, but the main ideas are not. E.g. here’s a 1970 book:

In the Animal Behavior section, it talks about issues like “Innate Behavior Patterns” and “Innate Recognition”. It describes observations by researchers who paid close attention to animals and figured out some of the algorithms they follow which, when examined closely and tested, do not seem to be intelligent. E.g. a squirrel will do a digging action and try to bury a nut even if there is no dirt. Its algorithm doesn’t check for the presence of dirt and it doesn’t use its eyes to check if the nut is actually being buried.
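
To put that in programming terms, here’s a rough sketch (my own toy illustration, not a claim about the squirrel’s actual code) of what such a fixed action pattern looks like: one trigger condition starts an unconditional sequence of part-actions, and nothing checks whether the steps are accomplishing anything.

```python
# Toy sketch of a triggered fixed action pattern (illustration only).
# One trigger starts the whole sequence; no step checks its preconditions
# or verifies its results.

def bury_nut(holding_nut: bool, soil_present: bool) -> list[str]:
    actions = []
    if holding_nut:  # the trigger condition
        actions.append("scrape soil")       # runs even if soil_present is False
        actions.append("deposit nut")       # runs even if there is no hole
        actions.append("tamp with muzzle")  # runs even if the nut rolls away
        actions.append("cover hole")
        actions.append("press down soil")
    # soil_present is never consulted, and there is no "is the nut buried?" check
    return actions

print(bury_nut(holding_nut=True, soil_present=False))
# prints the same five part-actions as when soil is present
```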

I would expect reasonable animal advocates (or at least some of the intellectual leaders) to be familiar with research about what animals are like, etc.

Once stimulated, whole cycles of action can proceed by themselves. In the squirrel, food storing consists of the following part-actions: scraping away soil, depositing the nut, tamping it down with the muzzle, covering it over, and pressing down the soil. A squirrel reared indoors will still perform these actions in full, even in the absence of soil. It carries the nut into a corner, where it starts to dig, deposits the nut in the (nonexistent) hole, rams it home with its muzzle (even though it merely rolls away in the process), covers up the imaginary hole, and presses down the nonexistent soil. And the squirrel still does all these things even when scrupulous care has been taken to ensure that it has never set eyes on a nut before or been given an opportunity to dig or conceal objects.

Since we knew stuff like this in 1970 or earlier, and the idea that animals aren’t intelligent like us and lack moral value was widespread for millennia, it seems like something people trying to change millennia of traditional disrespect for animals ought to look into and address.

EDIT: Amazon says the book was published in 1970 but Goodreads says 1972. I don’t know which is accurate. I think it was published in German at an earlier date.

Based on his Wikipedia page, the German version is probably this:

  • 1968: Wir Menschen. Das Geheimnis unseres Verhaltens (We humans. The secret of our behaviour)

Oh wait I skimmed the book info too much.

Eric Cassell surveys recent evidence and concludes that the difficulty remains, and indeed, is a far more potent challenge to evolutionary theory than Darwin imagined.

I saw this after an Amazon review alerted me to the author having some Intelligent Design type views.

So basically, he knows some animal algorithms but does not know how evolution could have created them; and since he also thinks these things are algorithmic, not intelligent, he concludes that it was God who did the programming.

Well, the book sounds interesting anyway but that is not what I was hoping for…

There are scientists who figure out things like how the bombardier beetle could have evolved in small steps (that’s a common example from anti-evolution people). I was hoping this book was the same kind of thing but applied to animal algorithms rather than physical features. But instead, I guess it explains and documents animal algorithms, but then has no idea how evolution could program those algorithms.

Maybe they would have heard it from Descartes?

Have you read my comment entirely? At the end it specifically mentions Descartes and his view of animals as automata.
This specific argument doesn’t seem to be ignored. It’s featured at the top of this article about animal sentience.
I’m worried that you didn’t read my comment entirely and jumped to a “this is because of bias” argument.

Today, many smart people believe that humans and animals both have Turing-complete computers for brains, and intelligence is a matter of software not souls. I’m not aware of any alternatives that can stand up to criticism in debate today. Do you agree or disagree that brains are literal computers and the fundamental human/dog differences, if they exist, are in software?

I can understand the brains-as-computers argument, yes. It makes sense.
It’s also true that there are some behaviours in animals that work like algorithms (and for me too - like when I jump out of the way when something’s charging at me).

However, let’s be clear here. We don’t know how subjective feelings arise. We can explain how parts of the brain reorganize hormones when we’re feeling sad. But we can’t explain the feeling of sadness from the positions of atoms in my brain - it’s just a totally different realm of experience.

To me, this is a strong limit to the computer analogy. If we don’t really know how subjective feelings arise (whether in computers or humans), then we can’t draw a conclusion for animals. We can speculate (like thinking there’s no sentience in animals but when a computer is at the AGI-level there is some). But in the end we don’t really know.

Plus, there is stuff that I don’t know how to explain without sentience. The fact that animals are less prone to avoid negative stimuli when under painkillers is not just a “complex behaviour”. It’s something that doesn’t make sense if animals can’t feel pain.

I would also like to have an answer to the following question: Why would evolution favour things like, say, elephants mourning their dead, if they had no ability to feel grief and sadness in the first place?

Yes I know. I thought your two points contradicted each other. You asked where they would have heard of it, then also mentioned Descartes bringing the idea up. You said it’s OK it was ignored because it’s unknown, then said maybe it’s not ignored (implying it’s not unknown).

The paper you gave, Searching for Animal Sentience: A Systematic Review of the Scientific Literature (in the journal Animals), says

“Animals are like robots: they cannot reason or feel pain” (Descartes, 1596–1650).

So people ought to have heard of the idea that animals are like robots.

The paper does not, however, contain the string “compu”. So it seems like they didn’t take into account a modern understanding of what robots are and how robots work. They didn’t attempt to steelman Descartes’ argument by updating it with modern knowledge. That seems bad and raises the question of whether any other books or papers do better at addressing these issues.

Similarly there is the question of whether any animal advocates have read research which carefully studies animals, like squirrels, and then written commentary about it, discussing issues like whether they grant the squirrel nut-burying algorithm is not intelligent, and whether they know of any other documented squirrel behavior they think is intelligent, etc. There’s a lot to engage with from that one example. It shows a way squirrels seem very inhuman.

You have not studied your jumping out of the way behavior nor the similar behavior of anyone else. You have not figured out the part-actions involved, the trigger conditions, or any specific algorithm. In other words, your claim doesn’t look anything like the claim about the squirrel nut-burying algorithm. You do not really know whether your behavior is algorithmic or not regarding jumping out of the way. You shouldn’t assume that it is. It would be perfectly reasonable to bring it up as a candidate idea for consideration (though it’d be better to start with some simpler and easier examples first), but not as a finished conclusion that you have accepted or believe I should accept.

To me, this is a strong limit to the computer analogy. If we don’t really know how subjective feelings arise (whether in computers or humans), then we can’t draw a conclusion for animals.

My position is that brains are literally computers. It’s not an analogy. E.g. a squirrel has a turing-complete computer for a brain. Do you agree or disagree with that? If you agree, then it’s not an analogy. If you disagree, what do you think a squirrel brain is instead of a computer?

Regarding subjective feelings, you’re bringing up too many things at once. We need to take some of the easier issues (like the painkillers point) and get them settled before trying to tackle more advanced stuff.

Plus, there is stuff that I don’t know how to explain without sentience. The fact that animals are less prone to avoid negative stimuli when under painkillers is not just a “complex behaviour”. It’s something that doesn’t make sense if animals can’t feel pain.

Roombas behave differently if you cover up their camera. Painkillers, like camera coverings, interfere with input data provided to the CPU.

Nerves are a type of sensor. They provide information. They send signals to the brain. If you use drugs to reduce the information provided, then the CPU never gets that information and its behavior algorithms don’t take that information into account.

I agree that animals receive information from their nerves. The issue is your interpretation that it’s “pain”. It’s like a robot with some sensors that detect various types of damage, which you then call “pain” sensors, but that label doesn’t tell you whether or not the robot “feels pain” in the human sense. Maybe its behavior algorithms just take into account sensory information. Maybe there are “if” statements that depend on data from sensors.
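
To make that concrete, here’s a minimal sketch (my own toy example, not a model of any particular animal) in which a damage signal feeds an if statement, and suppressing that signal (like a painkiller, or a covered camera) changes the behavior:

```python
# Toy sketch (illustration only): behavior selection from a damage sensor.
# Suppressing the sensor input changes the algorithm's output; nothing
# resembling a feeling appears anywhere in the code.

def choose_action(damage_signal: float, painkiller_active: bool) -> str:
    if painkiller_active:
        damage_signal = 0.0  # the drug reduces the input before the "CPU" sees it
    if damage_signal > 0.5:
        return "withdraw and avoid"  # avoidance behavior
    return "keep foraging"           # default behavior

print(choose_action(damage_signal=0.9, painkiller_active=False))  # withdraw and avoid
print(choose_action(damage_signal=0.9, painkiller_active=True))   # keep foraging
```

Reduced avoidance under painkillers is exactly what this kind of code produces, so that observation doesn’t by itself distinguish between “feels pain” and “runs an algorithm with a damage sensor”.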

I would also like to have an answer to the following question: Why would evolution favour things like, say, elephants mourning their dead, if they had no ability to feel grief and sadness in the first place?

There are lots of evolutionary behaviors with non-obvious selection pressures involved. I don’t want to attempt to get into the details of a specific behavior before dealing with some more preliminary issues like what sensors are and how robots can have damage-sensors.

Animal and Human Behavior (quotes go through chapter 5)

American environmentalists, in particular, take the view that practically all human behavior is acquired, and that man can therefore be molded at will by upbringing and education. Animal behavior is likewise attributed almost entirely to learning processes. The European schools of behavioral research, particularly those of Lorenz and the Dutch zoologist Tinbergen, have arrived at quite different conclusions. Exhaustive experiments have shown that many elements in the behavior of animals, even of the most closely related higher mammals, are fixed by heredity. The results of this research suggest that human behavior is probably more predetermined than we realize.

This is a German author writing 50 years ago. He says that American environmentalists may not know it, but European researchers know that:

Exhaustive experiments have shown that many elements in the behavior of animals, even of the most closely related higher mammals, are fixed by heredity.

I have reasons to disagree with part of his claims about predetermined human behavior and don’t plan to discuss that part. Some of what he says about it is interesting and worthy of analysis, but it’s complicated and somewhat off-topic.

A duckling, for instance, has a whole repertory of actions available and ready for use as soon as it leaves the egg. It can already walk and swim excellently, it already dabbles in mud with its beak and cleans its plumage in a characteristic manner.

It has even been possible to prove, in the case of worms, crickets, bees, and fish, that formulas for the control of movement conform to the Mendelian laws of heredity. If parents which differ in their innate movements – the student of behavior calls them hereditary coordinations – are crossed, all their offspring display either the behavior of one parent or mixed behavior, whereas in the second generation the specific motor characteristics of both grandparents recur.

Controversy with American psychologists was particularly instrumental in stimulating research on this subject. Its special difficulty lies in the fact that many movements which are innate or genuine hereditary coordinations in themselves cannot be performed at birth because the control structure has yet to mature. This may create the impression that a creature has acquired a particular motor pattern by learning, whereas careful experimentation shows that its behavior is probably innate but took time to mature. Grohmann, for example, reared doves, some normally and the rest in cages too cramped to allow them to move their wings. As soon as the normally reared birds could fly well, he released the others. It turned out that the latter could fly with equal ease. This clearly showed that flying, an extremely difficult form of locomotion, does not have to be acquired by these birds and is at their disposal complete, like their organs. Their control structure matures somewhat later, however. The American researchers Carmichael and Fromme carried out a similar experiment with tadpoles. They reared one group normally, the rest under permanent anesthesia so that they did not move and therefore could not learn. When the anesthetic was discontinued, the drugged tadpoles proved to be able to swim almost as well as the others.

That’s interesting science. Similarly:

Grasshopper larvae, by contrast, describe typical “music-making” motions with their hind legs at an early stage but fail to produce any sound because in this case their “instruments” are not yet fully developed.

They do the behavior for the music-making motion before they have the right hind legs for it to work. Evolution programmed them with an algorithm that doesn’t check for mature hind legs before trying to use them.

Anyway, if you want to know what animals are like, you have to pay careful scientific attention, not use anecdotes about pet dogs. I don’t know if researchers have ever studied elephant “mourning”. A PBS article about it isn’t the same kind of thing as drugging tadpoles or keeping birds in very cramped cages in order to test heredity. Similarly, attaching an emotionally-loaded word like “pain” or “mourning” to something is not scientific research about how it actually works.

The toad reacts just as unselectively at mating time when faced with the task of finding a mate. The male leaps indiscriminately at any moving body and embraces it. Should the object of its attentions be another male toad, the latter emits a rapid series of cries, whereupon the former releases its hold. The mating-minded toad sooner or later encounters a female, whose spawn it fertilizes, but it has no innate “image” of a prospective mate. Waggle your finger in front of a male toad and it will mount and embrace it in exactly the same manner.

In order to discover what characteristics go to form a key stimulus, the ethologist uses what he calls a dummy, or decoy. This consists of the simplest possible reconstruction of the appropriate stimulus situation. Judicious alteration of a dummy or the addition of further characteristics enables one to ascertain what the IRM under examination responds to. In young blackbirds, food begging can be stimulated by a dummy consisting of two circular disks of black cardboard, one large and one small. The young birds construe the larger disk as their parent’s body and the smaller one – at which they point their gaping beaks – as the head. In the male fence lizard the blue stripe on the edge of its belly arouses fighting behavior in other males. Females have no such marking and are not attacked, but paint the blue pattern on a female and she will be attacked at once. Paint out the stripe on a male and it will be courted instead of attacked. A bunch of red feathers is enough to arouse fighting behavior in a male robin. Thus the word “mechanism” does possess justification here. The hereditarily fixed nerve structure responsible for recognition reacts like an automaton – in this sense, mechanically.

How little such reactions are associated with intelligence was shown by experiments with turkeys. To the turkey hen, the characteristic cheeping of turkey chicks is the key stimulus which arouses brood-tending behavior. Conceal a loudspeaker which emits this cheeping sound inside a stuffed polecat – one of the turkey’s natural foes – and the turkey hen will take it protectively under her wing. Deprive the turkey hen of her hearing, on the other hand, and she will kill her own young because the appropriate key stimulus fails to reach her IRM.

Lots of good research has been done on what animals are like. Figuring out minimal decoys or dummies to trigger behaviors, like two black circles, is good research. I have not seen any animal advocates discuss this stuff and what it means, though. I’m not saying such discussion doesn’t exist but I don’t know where to find it. Animal advocates ought to be keenly interested in animal research and understanding animals.

Another discovery was that dummies can often be devised which surpass the efficacy of natural key stimuli. Koehler and Zagarus found that a ringed plover will abandon its own eggs in favor of one four times as large, even though it has no hope of hatching it. The cuckoo, as everyone knows, lays its eggs in other birds’ nests, where its young are actually given preferential treatment by the unfortunate foster parents. This is attributable to the young cuckoo’s wider throat, which acts as a stronger feeding release. Tinbergen and his associates established that the male brown butterfly prefers black female dummies to those of natural coloring. And for another species of butterfly, the silver washed fritillary, a rotating cylinder adorned with brown stripes running lengthwise, holds an even stronger sexual attraction than the sight of a female of its own kind. The ethologist refers in such cases to supernormal dummies.

It’s nice to see people testing stuff. A lot of the “science” in the media today is politicized crap.

female rats are in such a strong retrieving mood (“retrieving” is the term applied to the instinctive act of salvaging young which crawl out of the nest) for some days after giving birth that they frequently use their own tail or one of their hind legs as a surrogate. They pick up their tail, carry it into the nest, and deposit it there; or they grip one of their hind legs and hobble back with it on three legs as if it were a baby rat.

The book has many other examples of what animals are like when observed more closely than typical people do.

it is lengthening hours of daylight that put the male stickleback into a procreative mood. The responsible “member” in its parliament of instincts starts to wield influence and causes it to be assailed by a definite restlessness. As yet, the fish neither dons its mating garb nor exhibits any courtship or aggressive behavior. Sticklebacks migrate in shoals from their deep winter quarters to warmer, shallower waters. Once there, every male seeks a weed-stocked spot and establishes its territory. Only then does it put on mating dress and become receptive to other stimuli. If sticklebacks are captured during migration and placed in a basin which contains no plant life of any kind, they remain in a shoal and do not change color, simply because none of the males can mark out a territory of its own. Plant some weed in one corner, on the other hand, and one of the males will soon detach itself from the rest, take up station there, establish its territory, change color, and become procreatively inclined. In this case, therefore, the growth of procreative inclination is brought about by two factors of an external nature: first, lengthening hours of daylight; and second, the discovery of plants which lend themselves to the establishment of a territory (and nest building).

It’s so robotic and inhuman. Anyone who studies animals should be familiar with some stuff like this which has been known for a long time. E.g. “the stickleback has been studied extensively since the 1940s” source. Three researchers brought up in the book were born near 1905 and one was born in 1860.

One example of an internal influence is the operation of hormones. It has been ascertained that when the female collared turtledove sights a displaying male, its ovaries release progesterone into the blood. The effect of this hormone is to arouse a disposition to brood somewhere between five and seven days later. Lehrmann, who experimented with eighty pairs of these doves, injected them with progesterone seven days before bringing the males and females together. When he offered them eggs at the same time as he brought them together, the pairs immediately embarked on brood-tending activities, which they would not normally have done. This was yet another instance of the ease with which instinctive behavior can be distorted and diverted from its natural course-in other words, of its rigidly mechanical nature. In this case, inclination was induced by a hormone. Introduce this into the bloodstream prematurely, and the instinctive member gains ascendancy correspondingly early.

The “rigidly mechanical nature” of animal instincts is a reasonably well known and old idea. Which is why when any animal appears to display some kind of intelligence, that is interesting news to people – their default belief is that animals lack intelligence.

(The connection between suffering and intelligence is a separate matter which we could discuss later.)

Anyway I think animal welfare advocates should use evidence of this quality – which is available for many species – not stuff that’s noticeably worse. Maybe they do somewhere but I haven’t seen it yet.

Experiments with a cuttlefish proved that its memory retained an impression for 27 days. In the case of a trout, memory survived for 150 days, of a rat for fifteen months, and of a carp for as long as twenty months.

They’ve done experiments regarding animal data storage too. Apparently time limits are common. Distributed, redundant data storage also exists:

These small worms were trained to perform a certain task (they are capable of such an achievement) and then cut in half. The regenerative capacity of the planarian is such that the forepart grows a new tail and the hind part a new head. Ensuing experiments seemed to show that both new individuals – the one with the regenerated head included – could accomplish the task in question.

Further experiments indicated the existence of two forms of memory, short term and long term. That totally different phenomena are involved became clear from experiments with cuttlefish, in which the two faculties are located in different areas of the brain. In the case of goldfish, it was possible to prove that their short-term memory changes into long-term memory within an hour, and that the latter definitely depends upon the formation of protein.

Neat.

Chaffinches, for example, have a song with an innately fixed length and number of syllables, but its characteristic division into three strophes must be learned by imitating adult members of the species. If young chaffinches reared in isolation are played recordings of other species of birds, they will accept their song as a model, but only if it resembles that of the chaffinch in tonal quality and strophic form. If they are played various songs including that of their own species, they will recognize the latter and give it preference as a model. In this instance, as in numerous others, the ability to learn is not entirely flexible but innately slanted in one particular direction. The creature has a prescribed curriculum, as it were-in other words, an innate knowledge of what it should learn.

Do you understand how robots could be programmed to do that kind of “learning” without having actual intelligence?
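
For instance, here’s a toy sketch (my own illustration, not the actual chaffinch mechanism) of innately slanted “learning”: the program only accepts a model song that fits a built-in template, and prefers its own species’ song when one is heard.

```python
# Toy sketch (illustration only) of innately constrained "learning":
# a built-in template filters which heard songs are acceptable models,
# and the own-species song is preferred when available.

INNATE_TEMPLATE = {"tonal_quality": "chaffinch-like", "strophes": 3}

def pick_model_song(heard_songs):
    acceptable = [
        s for s in heard_songs
        if s["tonal_quality"] == INNATE_TEMPLATE["tonal_quality"]
        and s["strophes"] == INNATE_TEMPLATE["strophes"]
    ]
    if not acceptable:
        return None  # nothing matching the innate template was heard
    own_species = [s for s in acceptable if s["species"] == "chaffinch"]
    return own_species[0] if own_species else acceptable[0]

heard = [
    {"species": "canary", "tonal_quality": "canary-like", "strophes": 2},
    {"species": "tree pipit", "tonal_quality": "chaffinch-like", "strophes": 3},
    {"species": "chaffinch", "tonal_quality": "chaffinch-like", "strophes": 3},
]
print(pick_model_song(heard))  # picks the chaffinch song over the tree pipit song
```

The “prescribed curriculum” is just the template; nothing in the code understands songs.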

Ok, I didn’t express myself clearly. I saw two different points here:

  • The “animals are automatons” argument by Descartes, which has been around for a long time and has of course been under much scrutiny by people interested in animal welfare. I don’t think this one has been ignored.
  • Your version of the argument, which is quite different in the sense that it mentions brains as Turing-complete machines and stuff like AGIs. That’s the one I said is much more obscure, and so there’s been much less literature about it.

My position is that brains are literally computers. It’s not an analogy. E.g. a squirrel has a turing-complete computer for a brain. Do you agree or disagree with that? If you agree, then it’s not an analogy. If you disagree, what do you think a squirrel brain is instead of a computer?

Ok, I agree with that.

However, there’s nothing we know about the basics of how computers work (including my brain) that can explain how subjective states of mind arise (like “I’m feeling sad”). In the same way, there’s nothing we know about atoms that can explain how a particular arrangement of molecules can translate into me feeling sad. We can make a few correlations, but how that happens is really uncertain.

So I don’t really see how we can draw conclusions about sentience from that.

By the way, this makes me wonder: Why do you think a non-AGI robot does not have a subjective state of mind? What would make you think that this is possible?
I personally don’t think current robots are sentient, but as I’m asking the question I’m suddenly not sure why I think that.

Nerves are a type of sensor. They provide information. They send signals to the brain. If you use drugs to reduce the information provided, then the CPU never gets that information and its behavior algorithms don’t take that information into account.

Oh, ok. I understand better what you’re saying. So your take is that animals still receive negative stimuli from nociceptors, or hormones, but do not have internal feelings about it. So painkillers would still make a difference. The information is processed but with no subjective state of mind.

The list of automatic behaviours was also interesting; I knew a few of them. And of course animals have many, many automatic mechanisms. There are a lot of shortcuts in there. I don’t doubt that, some day, you could program a robot that, on the outside, looks like it’s acting like an animal.


But even if many of these behaviours are automatic, this doesn’t really tell us whether animals have internal states of mind or not. This very old saying by Bentham is pretty much the take of people interested in animal welfare, who don’t see intelligence as the most important criterion: “The question is not, Can they reason? nor, Can they talk? but, Can they suffer?”

There’s another point I want to ask about: if I understand you correctly, animals display outward behaviours linked to emotions that we associate with positive or negative wellbeing (fear, being excited, stress, anxiety, boredom, being in pain), but they don’t feel these emotions as they have no internal state of mind. Does that mean that in evolutionary history, feeling emotions only appeared in the last hundreds of thousands of years, when (relatively modern) humans arrived?

That’s another topic and we can’t talk about so many topics at once.

My position on suffering, qualia, subjective states, etc., is related to my position on intelligence, so I want to talk about intelligence and learning first. We can talk about the downstream stuff if we agree on my premises.

Here’s an attempt to organize this more.

Animal Welfare Tree (ET, CB).pdf (45.0 KB)

This (as with the lack of consideration of modern computers and AI algorithms) makes it sound to me like your position, and the position of the thought leaders you’ve learned from, is not thought through adequately.

Eliezer Yudkowsky wrote:

on my view, the Singer side explicitly starts by trying to twist people’s brains up internally, and at some point we should all maybe have a conversation about that.

And from his own linked essay:

It is dangerous to believe, said the Watcher, that you get extra virtue points the more that you let your altruistic part hammer down the selfish part.

I saw this today. I didn’t know that EY also had issues with altruism and with how EA is good at getting people’s thinking twisted up in harmful ways.

Ok, humans show a high level of intelligence, and they are able to learn completely new stuff (like, say, new political systems), and to do completely different stuff than animals. Their behaviours are much less automatic. I can agree with these premises.

What I want to know is how very high intelligence relates to suffering, or to having an internal state of mind. This is the important stuff. I know there are premises in your reasoning that you have to lay out, but is it possible to present them while still making links to suffering in your overall reasoning?

“High level of intelligence” is a distinction of degree. It treats intelligence in terms of amounts.

I think “learn completely new stuff (like, say, new political systems)” is intended as a binary distinction (can do X as opposed to cannot do X).

Do you see and agree with the difference?

Yes.

Can you get to the point?

this message makes me think you’re kind of annoyed at elliot, and that you want him to have a conversation more normally. like instantly get into writing each other arguments that each of you come up with on the spot.

I think a problem elliot has with normal discussion/arguments is that they almost never actually conclude well. like it’s very rare for someone to change their mind or learn something. like if you were to look at 100 gun control or abortion arguments, how many people do you think would have changed their minds? i’d think like maybe 1 or less.

so assuming that normal discussions usually go really badly, then maybe doing things unusually, doing them differently, is good. maybe instead of getting into the arguments immediately, they need to have the same framework or something before they can productively discuss with each other.

Do you think elliot is having a normal kind of discussion with you, or is what he’s doing not normal? And do you think normal discussions usually end well, or poorly?

Personally, what I want is learning things and understanding how the world works.

I get that most conversations are usually unproductive, I agree with that. Because the goal of people is generally to defend their identity and the conclusion they already reached.

My goal instead is to understand how the world works, so I tend to change my mind much more frequently (I have in mind several topics where I changed a lot compared to my initial point of view, like on religion). If I don’t, I at least try to understand why the other person came to reach their conclusions. So the conversations I have are usually less frustrating than for most, as I often learn some stuff.

So I want to get to the gist of it, getting to the important ideas fast.
The thing is, there is a very important point in this conversation, namely that “you need at least a human level of intelligence to have internal states of mind, and suffering and pleasure”. It’s the core of Elliot’s argument, and if true would be extremely important.

What frustrates me a little here is that we haven’t addressed that point yet. I want to learn more about it, and asked several times where internal states of mind come from, but we’re still not there.
I understand better Elliot’s position, yes. But I still do not have the most important argument for why his position is the right one.
I can understand the rationale behind trying a different approach than usual, but not if it’s dragging the discussion down.

That is not my position. I don’t think we can move on, based on you granting my premises, unless you know what they are (and you probably also need to find them at least kinda plausible).

I don’t think animals have a different level of intelligence than humans. I think there is a binary distinction where animals are not intelligent. But when I talked about this distinction you got bored(?) or something.

PS Curiosity – Animal Welfare Overview

It is absolutely unacceptable at this forum to put non-quotes in quotation marks like that. A reasonable reader might believe that I wrote the quoted text, but I did not. A reasonable reader should never be misled to think something is a quote which is not an accurate quote.

No misquoting is clearly stated in the forum rules, plus you agreed to it at the start of this conversation back on the EA forum. This is an official moderator warning.

(On a related note, we also agreed not to edit posts, but I think minor edits can be fine here because, unlike at EA, this forum has a publicly visible editing history feature.)

https://discuss.criticalfallibilism.com/faq

Do Not Misquote

Never misquote anyone. Due to risk of a typo, use copy/paste or the quote button instead of manually typing in quotes (unless it’s from paper or an audio recording, in which case you must state that it’s typed out by hand). Do not present a summary or paraphrase as a quote. There is zero lenience on this. A misquote is never close enough. This rule is enforced in an extremely picky, exacting, pedantic way. Expect moderator action if you violate this rule. You have been warned.

This, btw, is what having an actual norm against misquoting looks like, and thinking it actually matters, unlike EA. Also I like the reasonable reader wording and will edit it into the rules.

It is absolutely unacceptable at this forum to put non-quotes in quotation marks like that.

Oh, sorry about that. I didn’t think it could be interpreted that way. I was using those quotation marks to go for a “here’s more or less what I understood” sentence, and didn’t think it could be misinterpreted because it has the same format as a quote. You’re right to point that out; I’ll avoid that in the future.

PS Curiosity – Animal Welfare Overview

Ah, good, that was what I was looking for: an overview of what you think on the topic. Thank you for that. This allows me to better understand your premises and the definitions you use.

I agree with you on several points: that humans have general intelligence, and knowledge creation; that the behavior of animals is largely automatic depending on what’s in their genes; and about the different types of algorithms.

These large-scale, widespread problems causing human suffering seem more important than animal suffering. Even if you hate how factory farms treat animals, you should probably care more that a lot of humans live in terrible conditions including lacking their freedom in major ways.

The problems you point out are very important, indeed, and I’d like to see them solved too. However, I think an important element we should take into account here is scale (and tractability).
How many people do you think are suffering from these problems? How many animals do you think are suffering from factory farming? I’d be curious to see your estimates.

Also, as is common with causes, activists tend to be biased about their issue. Many people who care about the (alleged) suffering of animals do not care much about the suffering of human children, and vice versa.

Do you have a source for that claim? Because that sounds false. Most of the people I know who care about animals also care a lot about humans. I personally donated quite a lot to charities like the Against Malaria Foundation, and know several people who did the same. Do you think that if you asked people interested in animal welfare whether they care about child suffering, they’d say no?

Suffering involves wanting something and getting something else. Reality violates what you want. E.g. you feel pain that you don’t want to feel. Or you taste a food that you don’t want to taste. […] Suffering involves something happening and you interpreting it negatively. That’s another way to look at wanting something (that you would interpret positively or neutrally) but getting something else (that you interpret negatively).

I absolutely agree with that. It’s a very good way of defining suffering, actually.

Animals can’t interpret like this. They can’t create opinions of what is good and bad. This kind of thinking involves knowledge creation.

This is a very important claim at the core of your argument, since it’s what links your position on intelligence and suffering. I think we should try to talk about that first (I will answer about Appearances, Activism and Uncertainty in another response).

Personally, I don’t really understand why knowledge creation is required for thinking that something is bad. When my expectation is “I don’t want to starve to death”, I haven’t created any knowledge here. It’s innate.

Starving to death does indeed violate what I want, but I don’t see why forming my expectations on the topic required knowledge creation. It’s likely that my genes pushed me toward this default position in an automatic way.

I understand that there are specific situations where a few people might agree to feel pain (“Sometimes humans like pain”, as you said), or even to starve to death if there is a strong enough reason (high enough social reward, strong belief system, wanting to support the next generation). But that’s a way of overriding a very powerful innate instinct. The “knowledge creation” part is, I think, in creating beliefs strong enough to override innate instincts, not the opposite.

I would tend to think that animals have default expectations on what’s good and bad arising from evolutionary pressure (they want food because they can’t survive without it; they don’t want to lose a leg or starve because they’d die), and that they mostly cannot change. This way, they’d still be in the situation of “wanting something and getting something else” - which, as you said, is suffering.

Why do you think that, for instance, me being fearful of dying is something that requires knowledge creation? I know I can override that, to some extent, but this seems like the default position, no?

Do you know of any literature which has something good to add to this discussion, like a nuanced view on suffering or on innate ideas or nature/nurture debate? I’d particularly like to read an equivalent overview essay from animal advocates (that doesn’t just skip over all the issues I think are important and focus on other topics like e.g. factory farms or how healthy being vegan allegedly is).


One way to approach this is via free will. Free will requires being able to think multiple different things. You have to be able to create other knowledge in order to have a choice between ideas. This explanation might not work well for you; I can just drop it if you don’t like it.

A different approach is to look at evidence. We have evidence that animals exist, that they evolved, that they have computation and algorithms, that they have long term memory, and that they have sensors for damage, light, sound, etc. There’s research on what algorithms animals run and how they work.

Is there any evidence which you think contradicts my position?

Like “animals just follow algorithms” could be seen as the simplest explanation unless there’s something that contradicts it. If it accounts for all our evidence, and there’s no argument that refutes it (rules it out), then saying evolution programmed in wants and algorithmic behaviors both, instead of just algorithmic behaviors, would be an unnecessary extra complication.

If an animal is designed to get enough food, and its actions follow that design, that doesn’t mean it wants food anymore than chess software wants to win chess games or a self-driving car wants to avoid crashing. Algorithms shouldn’t be anthropomorphized. An animal can be seen as acting as if it wanted something, but so can chess software. It looks like it acts as if it wanted something because it does a lot of the same actions that a human who wanted it would do. But that doesn’t mean it actually wants it rather than just acting on knowledge which is adapted to getting that thing. When humans want X, they use their minds to create knowledge of how to get X, which they then act on. But a squirrel has inborn knowledge of behavioral algorithms to get and store nuts, which controls it, which is different.
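
As a toy illustration of that difference (my own example, not a real chess engine): a move picker that just maximizes a number will reliably grab material and look like it “wants” to win, but there’s no wanting anywhere in it.

```python
# Toy sketch (illustration only): a one-move "engine" that picks whichever
# move scores highest. It acts as if it wants material; it's only maximizing.

def pick_move(scored_moves):
    # scored_moves maps a move description to the material it would gain
    return max(scored_moves, key=scored_moves.get)

print(pick_move({"capture queen": 9, "capture pawn": 1, "quiet move": 0}))
# prints "capture queen"
```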

If there’s some evidence that poses a problem for the algorithms-only view, then we’d have a reason to consider that maybe animals want things and brainstorm other explanations for the evidence. But I don’t know of any evidence like that. (You could also, instead, potentially make some sort of philosophical argument that doesn’t use evidence.)

A different approach is to look at evidence. We have evidence that animals exist, that they evolved, that they have computation and algorithms, that they have long term memory, and that they have sensors for damage, light, sound, etc. There’s research on what algorithms animals run and how they work.

I agree with that.

Algorithms shouldn’t be anthropomorphized. An animal can be seen as acting as if it wanted something, but so can chess software. It looks like it acts as if it wanted something because it does a lot of the same actions that a human who wanted it would do. But that doesn’t mean it actually wants it rather than just acting on knowledge which is adapted to getting that thing.

I still don’t see how this links with having (or not having) an internal state of mind that allows you to feel suffering. You can make an algorithm that copies an animal’s external behavior, yes.

But I still don’t see why having a “designed” behavior tells us anything about whether you can have internal states of mind or not. The issue I have with your explanation is that you cannot predict the arising of something like “having subjective feelings” based on general rules about algorithms (whether they are Turing complete or not). It just seems to be an entirely different topic.

So I don’t really see why algorithms should be the basis of our reasoning here.

I am personally unable to say how internal states of mind arise (from brains or computers) - if you have solid data on that, I’d really like to know. My default expectation is that something in the brain can allow for that - but it’s really not clear what and how.

Do you know of any literature which has something good to add to this discussion, like a nuanced view on suffering or on innate ideas or nature/nurture debate? I’d particularly like to read an equivalent overview essay from animal advocates

I’m not sure I have what you wish for. Most of what I read by animal advocates is more about describing the bad stuff that happens to animals, and describing their behavior to show that they respond strongly to negative or positive stimuli.

There are some recent works that try to assess whether insects can feel pain, based on neural and behavioural evidence. And this pretty complete work (which I haven’t had time to check fully yet).

But I haven’t seen literature that addresses your argument, with algorithms and everything (as I said, it’s pretty uncommon). There is stuff on the Descartes automaton argument, but since it’s a weaker version of your argument, it’s not that valuable for you.

I think that people interested in animal welfare often have some version of this line of thinking in mind:

  • These are people who often have pets, and really like them.
  • They see them play, be excited, sometimes feel sad or anxious, cry when their leg is broken, etc.
  • All of this is stuff that we can see in humans too, and we don’t doubt for a second that humans feel something.

So it sounds pretty straightforward to these people that animals feel emotions. The literature on nociceptors and behavioural avoidance of harm tends to confirm this. After all, we share a lot of evolutionary history with animals, so it would make sense that, by default, they share this ability with us.

Of course, people can be “fooled” by appearances. This happened for several topics in science. But I think you will agree that this position seems reasonable - at least as a default position - unless some rock-solid evidence of the opposite is put forward.

The issue with your argument is that, from the outside, it reads as something like “we can make a mindless copy of your dog that has the same behaviour, which means that your dog is mindless”.

Of course it’s not exactly your position. But this is an analogy that doesn’t really feel that strong, especially as it counters a position that comes directly from personal experience. If I said “we can make a mindless copy of your brother, so he is mindless”, you probably wouldn’t agree with me, and you’d be right.

What do you think about this?