Elliot Temple and Corentin Biteau Discussion

Do you agree that my unfeeling algorithms theory is compatible with all our evidence?

If so, I guess you think your algorithms-plus-feelings-too theory is also compatible with all the evidence, so in terms of evidence it’s a tie.

If so, what is your answer to the Occam’s Razor issue of unnecessarily tacking on feelings when we can explain all the evidence without that extra hypothesis?

Do you agree that my unfeeling algorithms theory is compatible with all our evidence?

I just don’t see where the “unfeeling” part comes from. You only talked about behavior - not how it feels from the inside. I don’t see how you can draw conclusions about feelings based on theories about algorithms that only address behavior.

If so, I guess you think your algorithms-plus-feelings-too theory is also compatible with all the evidence, so in terms of evidence it’s a tie. If so, what is your answer to the Occam’s Razor issue of unnecessarily tacking on feelings when we can explain all the evidence without that extra hypothesis?

If we were to land on an alien world where there are some moving rocks that seem to avoid damage, then I’d agree that it would be tough knowing whether they had feelings or not.

However, I think we are in a different situation here, since we have a comparison point: us. We know that we have internal states of mind, and we know that we have a lot of common evolutionary history with animals.

For behavior, algorithms can give us useful insights and there are similarities, so doing the comparison is fair.
However, when it comes to internal states of mind, algorithm theory is of little use, since we don’t really know how feelings arise. So if we were to do a comparison, I think it would be more suited to compare with humans, since we are the closest thing to animals that can provide information on the topic of “having feelings”.

In such a case, we have two different hypotheses:

  • Animals can have internal states of mind because, like humans, they present traits we associate with having internal states of mind: being excited, taking care of their young, playing, not wanting to be hurt, crying, etc.
  • Animals do not have internal states of mind because this is something unique to humans that comes with knowledge creation, despite presenting the appearance of emotions and feelings.

I think Occam’s razor would support the first one.

Do you agree that my unfeeling algorithms theory is compatible with all our evidence? E.g., elephant “mourning” observations don’t contradict it.

Also, do you agree that all software and robots we’ve created (so far) have no feelings, opinions, preferences, values or suffering?

Do you agree that my unfeeling algorithms theory is compatible with all our evidence? E.g., elephant “mourning” observations don’t contradict it.

I’d say that this theory is possible.
Of course, this doesn’t tell us much about whether it is likely.

From what you presented, we can also make a theory about humans being general knowledge-creating algorithms that are unfeeling as well.
Of course, such a theory isn’t true, but we only know that because we have first-hand experience that this is not the case - not because there is something in the algorithm theory that requires humans to have subjective feelings.

Also, do you agree that all software and robots we’ve created (so far) have no feelings, opinions, preferences, values or suffering?

My instinctive response would be that they have no feelings, because as a programmer I really don’t see how running software could lead to having internal states of mind.
Then again, if I were to work on building an AGI or an animal or human brain from scratch, I wouldn’t have any idea either of when and how subjective feelings would appear.

OK. So I think you believe my theory is not decisively refuted by the evidence, but is disfavored in various ways by the evidence. Is that accurate?

If so, I guess you think elephant “mourning” is an example that makes my theory less likely without flat-out contradicting it. Is that correct?

I don’t think we can make an equivalent theory like that. Humans do things like talk about and act on values. Human values have external effects on the world. You could say people are lying, but that is a different type of theory than thinking that a self-driving car doesn’t actually “want” to avoid crashing or want anything. It isn’t lying and we can explain its behavior without reference to values, whereas explaining human behavior without reference to values would actually make the explanations worse. (At least we can explain the car without reference to its own values – talking about the values of the designers and coders would be useful to the explanation.)

I agree with this. I don’t think present-day programming techniques create feelings, wants, etc. Let’s just assume this as a premise for now, since you find it plausible and I believe it. We can question it later if you want to.

I think all animal behavior can be explained basically in terms of present-day programming techniques, while human behavior can’t be. Does that make sense as a position? I don’t mean that animal software, created by evolution, uses the exact same programming techniques we use today. I mean there’s no great mystery to it, and nothing where we’re like “I have no idea how you could program that”. Sometimes the details are tricky and evolution may have optimized better, but we already know how to create similar things. Whereas with human intelligence, we have basically no idea how to code that. What do you think?

BTW, CB, feel free not to reply to other people on the forum unless you want to. (I’m skeptical that that discussion with JustinCEO is going to be productive.)

OK. So I think you believe my theory is not decisively refuted by the evidence, but is disfavored in various ways by the evidence. Is that accurate?

If so, I guess you think elephant “mourning” is an example that makes my theory less likely without flat-out contradicting it. Is that correct?

Yeah, that seems accurate. Your theory would be more convincing if there wasn’t, say, stuff like that in animals (just stumbled upon it - not a scientific study, exaggerated title, I know).

Of course, one thing that also makes your theory hard to directly contradict is that it’s absolutely unverifiable and unfalsifiable. It takes standard animal behavior and just adds that, on the inside, animals don’t feel a thing (exactly at the spot where we can’t verify).

I don’t think we can make an equivalent theory like that. Humans do things like talk about and act on values.

Ok - not sure how this requires having internal states of mind, though, since we can probably imagine an AGI that generates values and creates knowledge without a feeling mind behind it.

This is indeed a worse explanation, I agree. But I mention that because I feel kind of the same way with your explanation about animals (saying they don’t really feel being in pain when they show external signs we associate with being in pain).

I think all animal behavior can be explained basically in terms of present-day programming techniques, while human behavior can’t be. Does that make sense as a position?

Ok, I guess we kind of can.

Of course, current robots are nowhere near the level of, say, a cow. They are still very specialized, so keeping track of several very different goals at once, like a cow does, is very hard. My cousin works with cows all day, and he tells me that when they hear the sound of his motorcycle, they come to the fence all excited to see him. We can maybe copy that one day if we are to specifically copy the behavior of a cow. But programming something that leads to such an emergent behavior without specifically planning for it seems much, much harder.

But I can imagine making a copy that behaves like a cow one day.

Even though I must add that I have no idea how to copy this behaviour:

Veterinarian Holly Cheever documented a story from her early days of work on dairy farms about a cow who had lost many previous babies before giving birth to twins.

Knowing that her farmer would expect only one baby, she brought one of the twins back to the farmer and kept the other one hidden in a field. When the farmer noticed her reduced milk production, he tracked her down, found the other baby and took him or her away too.

That sounds like what you may call knowledge creation.

If it were, then my position (cows lack intelligence in a binary way) would be false. I think this is an example of how my position is falsifiable. You can give evidence/examples like this and I have to have some kind of answer; my position doesn’t enable just ignoring all evidence as irrelevant.

I’m trying to give an explanation which accounts for all the data and is a good explanation, not just technically possible. I don’t think my claims are disfavored by any evidence. (I actually don’t think evidence favors or disfavors ideas as a matter of degree, rather than doing the binary job of refuting or non-refuting. But I think we may be able to discuss this using terms from your epistemology and make progress without having to debate epistemology. So I’ll try to do that.)

I’m not trying to make a claim like there’s an invisible God who doesn’t perform any miracles, and therefore makes no difference to anything, and is therefore irrefutable by evidence. Or like solipsism, which is also irrefutable by evidence. But those things are bad claims which are refutable by arguments.

So, taking one thing at a time, let’s start with elephant mourning example, although I’d have the same three questions about various other issues too.

Maladaptive?

Do you think the behavior is evolutionarily maladaptive? In other words, is it something that would evolve due to selection pressure, or is it an action contrary to evolutionary selection?

Maladaptive behaviors are a sign it’s not just (non-intelligent) algorithms, because evolution wouldn’t program maladaptive algorithms. Humans do maladaptive things like choose to be child free or go sky diving.

Evolution does program behaviors that are ineffective when taken out of context, such as squirrels trying to bury nuts with no dirt. And it also programs behaviors which sometimes fail but are beneficial on average.

Intelligent?

Do you think this behavior shows intelligence (knowledge creation)? I’m guessing for elephant mourning you’ll say no.

High quality evidence?

Researchers have carefully observed animals in the wild and documented what they see in detail. Researchers have also experimented on animals in labs and at farms. This has been done for many species and been published. This creates much better data than e.g. anecdotes from pet owners or journalists writing stories they believe will get attention. Nature documentaries are better than journalism or anecdotes, but still pretty bad compared to scientific research.

Is there high quality evidence about elephant mourning? If not, I would prefer to start with only examples that have high quality evidence. I would consider lower quality examples only after you judge that I’ve successfully addressed all the high quality evidence and you agree that there’s no high quality research on any animal behaviors that disfavors my position.

I’d also prefer to begin with cases where researchers think they understand what’s going on pretty well before dealing with cases where the researchers think it’s mysterious. The squirrel nut-burying behavior is a good example of something that has been researched, and there are many more like that.

After high quality evidence from research, there are some other things I think are better than anecdotes or journalism, like nature documentaries. Better than that can be uncontroversial, widespread examples (e.g. dogs wag tails). But the problem with dog tail wagging is, there’s probably a lot of detail about it that a researcher or expert could figure out, that we (ET & CB) don’t know. Having an expert who explains what’s actually going on, in addition to the raw evidence, is really helpful. Experts could know things about when and why dogs wag their tails, how vigorously they do it, what purposes it has, how much it varies by dog breed or situation, etc.

There are TikTok videos where someone (who sounds like an expert) interprets dog behaviors. Commonly the goal is to point out that the dog is uncomfortable in a popular video, and that you actually shouldn’t treat dogs that way. If we run out of high quality research, I’d like to turn to this kind of thing next, where an apparent expert says stuff that seems informative. I think there’s a clear, positive difference between this kind of commentary and how most people look at and analyze animals. Example: TikTok - Make Your Day

But to reiterate, I think we should start with animal behavior examples that have good research. Does that seem fair? (I haven’t looked up what kind of research on elephants exists. If you think there’s adequate research, you can bring it up. If you don’t know of any, then I don’t think the example should be very convincing to you.)

Okay, if we are to take an animal behavior that might be interesting, can we take the example I just mentioned about the cow that decided to sacrifice one child in order to conceal the other one?

The information comes from a veterinarian. It’s not a topic researched extensively because it’s a single occurrence. But it’s straightforward: the reason why the cow did it is pretty clear (to protect one child, knowing the other would be taken away), and it involves deliberately concealing something in response to events that happened years before. It’s hard to think of other reasons things could have happened this way.

And the simplest answer is that she does not like having her child taken away.

If you prefer longer papers, there is more literature on the fact that the separation of cow and calf causes psychological stress.

I don’t think my claims are disfavored by any evidence.

Would this be a relevant example?

It’s much harder to evaluate animal behavior if it isn’t observed repeatedly to understand the patterns better. Being able to make some kind of change to the situation and see what happens is also important. Also, having details about the situation written down gives me a lot more to go on. When it’s a story that’s pre-interpreted by an observer, and only the facts relevant to that interpretation are stated, then many replies I might make will sound like making up fantasy stories, because the details that matter to me are unspecified, so I have to try to make them up plausibly. I’d rather have more data available instead of guessing.

A note on behavioral responses to brief cow-calf separation and reunion in cattle (Bos indicus) looks like it involves a scientific mindset and repeatable observations so I’ll reply to that one.

Vocalizations/calf/hour rose from 0.9 ± 0.12 in the CO group to 10.5 ± 1.5 in FC and 9.3 ± 0.72 in NC calves (1033% and 1166% increase, respectively)

That looks like a large effect. Also, I think ignoring information that cows communicate is bad. If the cows are vocalizing more, that could indicate something is going wrong, which could lead to them being less healthy. If you think of the cow as a valuable robot that is outputting an error message, you shouldn’t ignore the error message…

There are some common design patterns for how animal algorithms work which are re-used for many animals and many behaviors, which have research about them. Understanding those patterns helps make animals look more like robots that only do a limited number of things and less like people. I think one of those design patterns helps explain what’s going on with the infant separation stuff. I’ll find some quotes about this and post more later.

By the way, I want to point out that you still haven’t explained how having some automatic behaviors relates to having internal states of mind or not. More precisely, you haven’t explained how internal states of mind arise. I feel like it’s the crucial point to your reasoning.

I have asked time and time again about that, and still do not feel like you gave me an answer.

You made another point at some moment that suffering required knowledge creation, to which I answered and you haven’t answered back - so I assume you have dropped this argument.

So I will ask once more: how do internal states of mind arise?
If you don’t know, just say it.

Feelings/etc come from a large, discontinuous jump in features: intelligence. The prior large jump, Turing completeness, doesn’t do it. A software jump is required, not just the hardware jump. I don’t think it’s a good idea to switch the discussion to universality, breakpoints and discontinuities currently. It’s a big, abstract topic.

I did also explain it another way before. Opinions require flexible interpretations which require creativity. It’s the generic, general-purpose ability to create ideas (intelligence) that enables creating ideas about feelings, emotions, preferences, values, wants, etc.

Trying to focus on a limited number of things at once is not dropping my claims. I don’t silently drop my claims. If I changed my mind and thought one of my claims was wrong and should be dropped, I would say that.

Animals have multiple behavior modes, which we might call instincts or appetencies. Fight, flight, mating, nurturing young, being on alert for predators, eating, resting and hunting are examples.

Creatures in an appetitive condition become less receptive to key stimuli which elicit other behavior patterns. A hunting minded animal must be presented with far stronger sexual stimuli before it makes the transition to mating behavior, and vice versa. As soon as the most pressing impulse has been satisfied, however, the animal regains its normal receptivity to other stimuli.

Lorenz speaks of a “parliament of instincts” – an extremely graphic metaphor. Just as members of a legislative body compete to submit their proposals and put them into effect, so instincts jostle for a chance to take the floor and issue their coordinated commands. They wait to take charge of the body and control it. If no such opportunity presents itself, heightened excitation results: The instinctive act in question becomes easier to elicit and can even take place at random and without special incentive.

Lorenz compares this process of mounting excitation with a liquid gradually rising inside a vessel until it finally overflows. Tinbergen and von Holst also refer to the internal damming of energy specific to action until it eventually spills over. Another graphic idea used to illustrate this process was borrowed from physiology, which speaks of a lowering or raising of stimulus thresholds. The higher a threshold, the harder it is to cross, and the same metaphor has been applied to stimuli which elicit instinctive actions or reflexes. A rise in the stimulus threshold signifies that correspondingly stronger stimuli are required to elicit the appropriate reaction. A fall entails that a relatively small stimulus can cross the threshold and eliminate the block.

The specific mechanisms that increase the influence of a particular instinct have been studied with some success. E.g.:

it is lengthening hours of daylight that put the male stickleback into a procreative mood

and

One example of an internal influence is the operation of hormones. It has been ascertained that when the female collared turtledove sights a displaying male, its ovaries release progesterone into the blood. The effect of this hormone is to arouse a disposition to brood somewhere between five and seven days later. Lehrmann, who experimented with eighty pairs of these doves, injected them with progesterone seven days before bringing the males and females together. When he offered them eggs at the same time as he brought them together, the pairs immediately embarked on brood-tending activities, which they would not normally have done. This was yet another instance of the ease with which instinctive behavior can be distorted and diverted from its natural course-in other words, of its rigidly mechanical nature. In this case, inclination was induced by a hormone. Introduce this into the bloodstream prematurely, and the instinctive member gains ascendancy correspondingly early.

The build up of hormones is one of the mechanisms controlling which behavior pattern an animal does. The more the meter (hormones or whatever) builds up to favor a particular instinct, the more some parameters are relaxed for doing it, e.g.:

Surrogates, or substitutes, are also employed as a means of working off appetencies. This can be seen in creatures which live in communities and among which the mutual grooming of fur and skin forms part of the innate behavior repertory. Keep such creatures in isolation and they will lack the opportunity to carry out these actions. Hence, they will often invite their keeper to let himself be groomed by them. Again, female rats are in such a strong retrieving mood (“retrieving” is the term applied to the instinctive act of salvaging young which crawl out of the nest) for some days after giving birth that they frequently use their own tail or one of their hind legs as a surrogate. They pick up their tail, carry it into the nest, and deposit it there; or they grip one of their hind legs and hobble back with it on three legs as if it were a baby rat.

When a female rat drags its baby back into the nest, that lowers the brood-tending meter (gives it a weaker voice in parliament, or lowers the amount of liquid buildup in the other metaphor). If there are no babies to drag back, the meter gets unusually high. This results in being less discriminating about what looks like a baby. This is a reasonable feature: if what the rat is doing isn’t working, it should relax the constraints on the behavior. It’s also something that can easily be coded without intelligence. Like if you have a visual analyzer that scores objects near the nest for how much they look like a baby rat, you can just lower the threshold for action from a 0.6 score to a 0.4 score. (Presumably it’d be gradually lowered as the meter for doing the brood-tending behavior goes up, rather than lowered in one big jump.)
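The threshold-relaxation idea can be sketched in a few lines of code. This is only an illustration of the mechanism described above; the function names are made up, the 0.6 and 0.4 scores come from the text, and the linear interpolation between them is an assumption:

```python
# Hypothetical sketch: as the brood-tending meter builds up, a lower
# "baby-likeness" score suffices to trigger retrieving behavior.
# Names and the linear formula are illustrative only.

def action_threshold(meter, base=0.6, floor=0.4):
    # meter is assumed to run from 0.0 (instinct satisfied)
    # to 1.0 (maximally built up); the threshold drops linearly
    return base - (base - floor) * meter

def should_retrieve(baby_likeness_score, meter):
    return baby_likeness_score >= action_threshold(meter)

# A tail might score around 0.5: ignored while the meter is low,
# but "retrieved" once the meter has built up enough.
assert not should_retrieve(0.5, meter=0.0)
assert should_retrieve(0.5, meter=1.0)
```

No intelligence appears anywhere in this sketch; the whole behavior change is one comparison against a sliding number.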

With this kind of mental model of what’s going on, it’s unsurprising for separated cows and calves to fail to do some instinctual actions, have the meters for those instincts rise, and then do less discriminating actions like:

Bucket-fed calves offer a similar example. They develop the habit of sucking other calves or the rings on their stall chains.

Or, failing to find substitute actions, a “something is wrong” type meter will go up, and the cow will do behaviors related to that pattern of actions, such as making vocalizations that communicate something negative to other cows (particularly the mother, who might hear the calf and come – in the ancestral environment, calves being lost would be a common source of various soon-after-birth instinctual actions not being done enough, and calling out would help fix that problem).

This kind of mechanism falls within present-day programming abilities. It’s basically just keeping a floating point variable for each major behavioral pattern or mode, and doing behaviors for whichever one has the highest number currently (it’s probably more complicated but that’s a reasonable approximation). Then you need to code some triggers for the variables going up or down, such as hormone levels going up or down, giving birth, or daylight hours lengthening or shortening. Some of them should go up over time and go down when the behavior is done.
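As a rough illustration, the “parliament of instincts” mechanism can be sketched with exactly that kind of floating point bookkeeping. The mode names, rates, and numbers here are invented for the example:

```python
# Minimal sketch: one float per behavior mode, act on the highest,
# reset it when the behavior is performed. All values are made up.

def tick(meters, rise_rates):
    # Each instinct's meter drifts upward over time at its own rate
    # (standing in for hormone buildup, daylight changes, etc.).
    for mode, rate in rise_rates.items():
        meters[mode] += rate

def perform(meters):
    # Act on whichever instinct currently has the loudest voice,
    # then zero its meter because the behavior has been done.
    mode = max(meters, key=meters.get)
    meters[mode] = 0.0
    return mode

meters = {"eating": 0.2, "resting": 0.1, "brood_tending": 0.0}
rise_rates = {"eating": 0.05, "resting": 0.02, "brood_tending": 0.1}
tick(meters, rise_rates)
assert perform(meters) == "eating"  # eating's meter (0.25) is highest
```

Real animal control is surely more complicated than this loop, but nothing in it requires anything beyond ordinary programming.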

Von Holst and von Saint-Paul acquired even deeper insights into this problem by cerebral stimulation in chickens. Having introduced electrodes into the brain by surgery, they were able to probe individual centers and stimulate them artificially by means of weak electric shocks. This enabled them to tell which form of behavior was controlling the current nerve structure. The tiny electrodes caused the birds no pain or discomfort. When they recovered from the anesthetic, they were completely unaware both of the electrodes and of the gossamer-fine wires emerging from their heads.

The results of these experiments in stimulation were very informative and are recorded in instructional films. One of these shows a cockerel seated contentedly on a laboratory table. Stimulation of a particular part of the brain ensues. At once, the cockerel stands up and starts to peck at the tabletop. There is nothing there to pick up, but as soon as the relevant spot is stimulated, the cockerel pecks like an automaton. Stimulation of another spot results in the cockerel’s remaining seated but looking around. When the voltage is increased (approximately from .1 volt to .3 volt), the bird stands up and starts to cluck. A further increase in voltage and it walks about and evacuates. Yet another increase and it turns around, squats down, and points its beak in a certain direction. Finally, at about .9 volt, it takes off, emitting a series of cries. In this case, gradually increased stimulation elicited various hereditary coordinations in regular succession. In fact, the cockerel exhibited all the behavior patterns with which it would normally have greeted a potential invader of its territory. When von Holst stimulated the calm, seated cockerel with .9 volt right off, the intermediate phases disappeared and the bird flew off screeching. If he stimulated the same points several times, the result was exhaustion and a raising of the stimulus threshold such as can be observed under natural conditions.

Von Holst and von Saint-Paul were able to induce almost any hereditary coordination in poultry by artificial means. It turned out that many hereditary coordinations can be activated from a variety of points in the brain. One example was clucking and another walking. These hereditary coordinations occupy a very subordinate position within the hierarchy of instinctive movements and come into operation in the course of various behavior patterns. For instance, clucking is a concomitant of brood-tending behavior, as well as of the activated urge to flee. Again, the hen walks, or takes steps, not only when in quest of food but also during aggressive action and copulation. Lorenz christened these very simple hereditary coordinations “tool activities” because they are useful for various purposes. It is evident that the circuits in the brain are so disposed that various instincts make use of these basic movements, each within the context of its particular motor flow.

Poultry have a fixed number of actions, like clucking, turning their head or taking a step. These are hard-coded in their brains at particular locations. They do them in fixed sequences as instinct meters rise. Animals are often extremely inflexible and inhuman with following fixed sequences, showing no understanding of what they are doing or its purpose, and just acting like an algorithmic automaton. E.g.:

Sticklebacks migrate in shoals from their deep winter quarters to warmer, shallower waters. Once there, every male seeks a weed-stocked spot and establishes its territory. Only then does it put on mating dress and become receptive to other stimuli. If sticklebacks are captured during migration and placed in a basin which contains no plant life of any kind, they remain in a shoal and do not change color, simply because none of the males can mark out a territory of its own. Plant some weed in one corner, on the other hand, and one of the males will soon detach itself from the rest, take up station there, establish its territory, change color, and become procreatively inclined. In this case, therefore, the growth of procreative inclination is brought about by two factors of an external nature: first, lengthening hours of daylight; and second, the discovery of plants which lend themselves to the establishment of a territory (and nest building).

The big picture or meta point (which I don’t really want to debate more at the moment but think is worth keeping in mind) is that I think you should know more about this stuff than I do before being an activist who is seeking major change. I am not an activist about animals, nor a specialist in the issue. I’m not seeking to change things. I think anyone who wants to be an activist or specialist, or make large changes to society regarding animals, should be significantly more thoroughly informed about these issues than I am.

I think the leaders of the animal activism stuff draw in poorly-informed people on purpose and use them (most activists know significantly less than CB, and I don’t think CB knows nearly enough to invest a bunch of time or money, or to be confident that the changes he proposes would make the world better rather than worse). The leadership doesn’t encourage people to learn a ton before taking action; they encourage taking actions like donating, now. That is bad when you have a controversial cause related to making large changes to society, even if it turns out that you aren’t wrong.

The chicken example was very interesting. I didn’t know that you could trigger these behaviors directly by inserting electrodes into the brain. That’s funny. The other examples were interesting, too.

Ok, let’s accept the premise that the behavior of animals is mostly automatic, and that we can copy it by programming. (Still not sure about the example of the cow that had twins, but I’ll leave that aside; the working hypothesis of automatic behavior is good enough.)

Now, what does that really change?
I pointed out several times that what is of interest here is not external behavior, but internal states of mind. Can we focus on that?

I don’t think CB knows nearly enough to invest a bunch of time or money, or to be confident that the changes he proposes would make the world better rather than worse

Well, as I said before, I am assuming that a dog wants to avoid pain and suffering because it presents many of the same traits humans present when they want to avoid suffering. I don’t think that’s unreasonable.
As for “doing more research”, I tend to do that a lot, actually. But basically the only argument I’ve come across that says animals can’t suffer (and that was actually backed by some serious reasoning going beyond the simple automaton stuff Descartes talked about) is yours. So I don’t really know where else I would have done more research. And I still don’t feel like you have given the really strong argument I was waiting for, since you mostly avoid explaining how internal states of mind arise, but I hope we can get to that topic if you answer what follows.

Feelings/etc come from a large, discontinuous jump in features: intelligence. […] I did also explain it another way before. Opinions require flexible interpretations which require creativity. It’s the generic, general-purpose ability to create ideas (intelligence) that enables creating ideas about feelings, emotions, preferences, values, wants, etc.

Oh, that. I did answer that before. It was here.

Can you respond to what I said, then?

You claimed that there is evidence that disfavors my theory. I responded to a piece of evidence. Do you now agree that it (the cow-calf separation research) does not disfavor my theory?

Ok, let’s say this doesn’t disfavor your theory - at least no more than cows screaming when they are hurt.

Where does this lead us?

Well, hold on. It sounds like you believe that animals making certain sounds while injured disfavors my theory. And you brought that up on purpose rather than dropping the matter in order to grant my premise and discuss based on it.

And previously, you thought my theory being disfavored by evidence was important. You brought that up and wanted me to engage with it. And you had many examples. I began to engage. But now you’re trying to drop that topic without seeming to change your mind. Why? I’m trying to keep things moderately organized but it’s hard.

The thing I think makes your theory less likely is that animals show behaviors that we associate with having feelings in humans (crying, screaming, being excited, etc.).

I think this disfavors your theory compared to a world where animals cared about being hurt about as much as a Roomba does.

For all of these behaviors, you say “it can be explained as being automatic without invoking internal feelings” (not a quote). Which is something I can understand. I agree with you that most animal behaviour is automatic.

I just think this explanation would be more likely if humans didn’t show some of these behaviors (screaming when their legs are hurt) that we associate with suffering.

That doesn’t falsify your theory - it’s possible we are fooled. It just means that you have to present an alternative explanation of how internal feelings arise in humans but not in animals.

This is why I’m dropping the matter of automatic behaviors and accepting your premise. (And I don’t really want to go back and discuss that point unless we solve the next one.)

What I really want to learn about is internal states of mind. That’s the core point that is required to link behaviour and internal feelings.

So, what is your answer to what I said here?