Curiosity – Animal Welfare Overview

These large-scale, widespread problems causing human suffering seem more important than animal suffering. Even if you hate how factory farms treat animals, you should probably care more that a lot of humans live in terrible conditions including lacking their freedom in major ways.

Why do they seem more important than animal suffering? You might be biased on this issue because you think animals aren’t morally relevant (other than in their value to humans). An unbiased reader might not feel the same way. If factory farmed animals can suffer in a similar capacity to humans, then they are probably all going through significant suffering. People who take that possibility seriously might not agree with you about how they should feel about this issue.

I think this section would make more sense later in the overview after people have had a chance to understand your explanation.

I think that human suffering and (potential) animal suffering are mainly separate issues, and you won’t often have to pick one or the other to care more about. You can replace your meat with beans and lentils without changing your position on any other issues related to human suffering. Leading by example would probably be the best form of “activism” for most regular people who don’t have large platforms to share ideas. Learning some new recipes shouldn’t take away significant time from someone’s work helping people who are lacking freedom.

Should we treat trees and 3-week-old human embryos partially or entirely like humans just in case they can suffer?

Cultures around the world have long-standing traditions relating animal suffering to morality. These traditions were created by generations of people following their intuition that causing unnecessary suffering to animals was wrong.

There are significantly fewer (if any) traditions relating tree suffering to morality. Very few people had intuitions that tree suffering was an important moral issue. Treating a tree like a rock doesn’t involve discarding any long-standing traditions. Treating animals like rocks/trees would involve discarding a long-standing tradition. That is a radical attitude which we should be cautious with. Your explanation could be wrong and the intuition of generations of people could be right.

I agree with you that humans are universal knowledge creators and use evolution in their minds to create new knowledge. I agree that farm animals cannot think creatively, form preferences, or do abstract thinking. And I agree that all those things are related to human suffering.

Suffering is an idea (or set of ideas) that we have after we’ve done our interpretation of a situation. Humans necessarily use evolution in our minds to reach the idea that we are suffering, because our thinking relies on evolution. But once we have the idea that our preference has been violated, we don’t need to keep doing evolution in our minds. The idea that our preference has been violated is suffering. Humans can have many different preferences, and many different degrees to which they can be violated, which is why suffering is a spectrum.

We know that humans can have ideas (like preferences) and that they can be stored in our brains. We know our ideas come from our ability to think creatively, using evolution to come up with new ideas not contained in our genes. But we don’t know how those ideas are stored in our brains. And we don’t know how our brain is able to criticize, compare and combine ideas. We know there is a jump to universality involved, but we don’t know how it works.

A jump to universality implies that a human brain can have any possible idea which could be had, in the same way that a universal writing system can represent every possible word that could be added to its language. But non-universal writing systems can still represent some finite number of words. Could non-universal “thinking” allow for a finite set of ideas?

For example, a cow could have a predetermined set of 10 different potential ideas. The idea(s) configured in the brain at any given time could be determined by fixed/learning software and wouldn’t require creative thought. Cows wouldn’t be able to choose which idea to think, come up with new ideas, or discard old ideas. That would be consistent with there being no cow philosophers or scientists.
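This hypothetical can be sketched in code. Everything below is made up for illustration: the idea names, the rules, and the sensory inputs are all assumptions, not a model of real cow neurology. The point is only to show what fixed, non-creative selection among a predetermined set of ideas could look like.

```python
# Hypothetical sketch: a non-universal "mind" with a fixed, predetermined
# set of ideas. The active idea is selected by fixed rules mapping sensory
# inputs to one of the preset ideas. Nothing here can create a new idea.

# The predetermined repertoire (names are purely illustrative).
IDEAS = [
    "hunger", "satiation", "thirst", "fear", "calm",
    "discomfort", "comfort", "herd-seeking", "rest", "alertness",
]

# Fixed (non-creative) selection rules: each rule maps an observed
# condition to an index into the preset idea list.
RULES = [
    (lambda s: s["stomach"] < 0.3, 0),    # low stomach fill -> hunger
    (lambda s: s["predator_nearby"], 3),  # predator sensed -> fear
    (lambda s: s["alone"], 7),            # separated from herd -> herd-seeking
    (lambda s: s["tired"], 8),            # fatigue -> rest
]

def active_idea(sensory_state):
    """Return the currently configured idea. The rules are fixed at
    'design time' (by evolution, in the analogy); the system cannot
    invent an eleventh idea, discard an idea, or modify the rules."""
    for condition, idea_index in RULES:
        if condition(sensory_state):
            return IDEAS[idea_index]
    return IDEAS[4]  # default: calm

state = {"stomach": 0.2, "predator_nearby": False, "alone": False, "tired": False}
print(active_idea(state))  # prints: hunger
```

The system never chooses which idea to think; the mapping from inputs to ideas does all the work, which matches the claim that no creative thought is required.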

A (probably bad) analogy I thought of is that being a human is like playing a video game. You can control the character and interact with the game world in a meaningful way. Being a cow (with 10 preprogrammed ideas), on the other hand, could be like watching a movie. You see it and experience it (to the extent that you could experience things with only 10 ideas) but you have no control or ability to interact.

Cow brains and human brains are similar in their hardware, even though cow software doesn’t allow for creative thought. Humans experience ideas through a specific hardware configuration of their brain. Since cows have similar hardware it should be physically possible for their brain to also be configured in such a way that they experience an idea. Since they can’t think creatively they would have to rely on another mechanism to properly configure their brain.

This is a relevant difference between cows and trees that you ignored. Cows have a potential mechanism by which they could experience ideas (their mammal brain) whereas trees do not. There are no known explanations for how a tree could have an idea.

Cows are also different from self-driving cars in that the cars were designed and programmed by humans. We have explanations for how all the different elements of the hardware and software they contain work. We designed them with specific purposes like staying between road surface markings or stopping for pedestrians. It seems unlikely we would have accidentally given them the software to experience ideas by chance, without realizing it.

Cows on the other hand were designed by evolution, not humans. We have little understanding of how their hardware and software work together. We don’t know how a brain stores an idea. We don’t know why our genetics give us the ability to think creatively. We don’t know if genetics could configure a brain to experience a finite number of ideas. There are many more unknowns when dealing with cows than when dealing with trees or self-driving cars.

Now, if I was designing a “cow-bot” I certainly wouldn’t try to program it using a set of predetermined ideas or experiences. That seems needlessly complex and unrealistic. I don’t know how brains store ideas (or do anything with ideas) so porting that software to my own cow-bot would be impossible. But when you don’t know how things work they can often seem needlessly complex. The idea of programming a cow-bot using predetermined emotions might seem elegantly simple if I could understand the software for how our ideas work in the first place.

I agree that it’s possible cows might be like self-driving cars. I’ve even tried to explain it to others and convince them that it’s true (or at least possibly true). But I’m not sure I can provide a decisive argument. And my intuition is still conflicted. I think I would feel really bad torturing and abusing a farm animal.

You’re welcome to be unsure, but I have studied stuff, debated and reached conclusions.

Studying and debating does not provide us with any kind of certainty that your ideas are correct. I am definitely unsure that initiating violence against billions of potentially sentient beings is a moral choice. Especially when the alternative is buying slightly different ingredients at the grocery store and learning some new recipes.

Also, we’ve been eating animals for thousands of years. It’s an old part of human life, not a risky new invention.

We’ve been smoking and drinking for thousands of years too, that doesn’t mean those behaviours are healthy and risk-free. Consuming animal products might have been increasing our risk of cancer for thousands of years without anyone noticing.

“In 2015, based on data from 800 studies, IARC classified processed meat as a human carcinogen (Group 1), meaning that there is enough evidence to conclude that it can cause cancer in humans. The evidence for red meat was less definitive, so IARC classified it as a probable carcinogen (Group 2A).”

It’s also relevant to issues like whether or not we should urgently try to push everyone to be vegan, which I think would be a harmful mistake.

Why do you think it would be a harmful mistake? Why is it better to grow plants and feed them to animals (which is usually inefficient) rather than just growing plants and eating them ourselves? Are you familiar with the idea of trophic levels?
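As a rough illustration of the trophic-levels point, here is the arithmetic, assuming the commonly cited figure of roughly 10% energy transfer between trophic levels (the real efficiency varies widely by species and farming system; the numbers are illustrative only):

```python
# Rough trophic-level arithmetic (illustrative only; the ~10% transfer
# efficiency is a commonly cited rule of thumb, not an exact constant).
TRANSFER_EFFICIENCY = 0.10

plant_calories = 10_000  # calories of plant matter grown

# Eating the plants directly: all the calories are available as food.
direct = plant_calories

# Feeding the plants to livestock first: roughly 90% of the energy is
# lost to the animal's metabolism, movement, inedible tissue, etc.
via_animals = plant_calories * TRANSFER_EFFICIENCY

print(direct)       # prints: 10000
print(via_animals)  # prints: 1000.0
```

On these assumptions, routing plant calories through animals yields about a tenth of the food energy of eating the plants directly, which is the inefficiency the question refers to.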

Also, based on my personal research I found that a whole-food, plant-based diet was a healthy diet for a typical human. But that’s a topic for another post specific to nutrition.

When you say “push everyone to be vegan” which definition of veganism are you using?

Veganism was invented in 1944 with the founding of the Vegan Society. The word vegan was coined from the beginning and end of the word vegetarian. Veganism was meant to mark the “beginning and end” of vegetarianism.

While the definition has changed over time, I think a key element is avoiding cruelty to animals “as far as is possible and practicable”. That’s highly subjective but it allows for interesting scenarios to still be vegan. For example, someone in a 3rd world country might eat animal products once a week because they have no other source of calories. If it was not practicable for them to stop eating animals once a week (because they would die) I would still consider them vegan.

If you read this post about ChatGPT, but imagine he’s talking about animals instead of ChatGPT, it has relevance. Some of what he says about the current non-intelligent software applies to animals too.

That’s not a coincidence. ChatGPT has knowledge in it but doesn’t create knowledge, just like animals. And people get really confused about the difference. They see knowledge and attribute the knowledge to the thing with the knowledge, instead of realizing it’s all coming from some other source (like genetic evolution or intelligent design by programmers which actually works by evolution of ideas).

Why does thinking that animals aren’t morally relevant count as more of a “bias” than thinking they are morally relevant? Both are specific ideas and you could call either of them “bias”. So doesn’t this all just mean “someone who has different ideas to Elliot might not feel the same way”?

All ideas could potentially be wrong, there’s always something new to learn. Choosing an idea because it’s a tradition without responding to the actual content of the idea is an argument from authority. Can you explain what is wrong with Elliot’s argument?

What do you think would provide certainty?

Isn’t this contradicting your position I quoted earlier on using tradition to make decisions?

https://www.reddit.com/r/Damnthatsinteresting/comments/1b4mni7/the_reason_you_should_avoid_the_water_in_australia/kt0aqqv/ (my bold)

He’s Barefoot Bushman, a famous crocodile&reptile campaigner n conservationist. He’s got some very persuasive videos on why we should not kill crocodiles on sight nor be afraid of them as they [crocodiles] operate like a computer program and [are] very readable / consistent in their behavior as long as we know how their brain program works

“He” is the person in the video being discussed, who lures a large crocodile out of the water with a wooden stick in Australia (apparently at something like a zoo, not in the wild).

Interesting information about beaver algorithms. Stories like this help show that animal behaviors which appear intelligent might not be intelligent at all (animals may be nothing like a dumber human with partial human-type intelligence). You shouldn’t just assume animals (or “AI” software that performs tasks well like playing chess well or giving written responses to questions) have (even partial) intelligence just because you see some success at something complex and there’s clearly knowledge involved.

I have not looked up the original research to verify this is a true story.

That is so neat. The idea that it’s triggered by the sound of the water is so cool.
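The story describes a fixed trigger, which could be caricatured in code. This is purely an illustration of the stimulus-response idea, not a model of actual beaver neurology:

```python
# Caricature of the reported beaver 'algorithm': dam-building is triggered
# by the *sound* of running water, not by any understanding of water flow.

def beaver_step(hears_running_water: bool) -> str:
    # The trigger is the sound itself, so a speaker playing a recording
    # of running water on dry land would fire the same behavior.
    if hears_running_water:
        return "pile sticks and mud at the sound source"
    return "idle"

print(beaver_step(True))
print(beaver_step(False))
```

A rule this simple can still produce behavior that looks impressively purposeful, which is the point of the earlier paragraph about knowledge without intelligence.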

This reminds me of the chapter in Surely You’re Joking, Mr. Feynman where he does a bunch of experiments with ants. (I can’t remember which one, I think it has ‘ants’ in the title, but I don’t have the book on me now. I’ll find it later and post some quotes here.) I found Feynman’s curiosity so impressive. He seemed like someone eager to find problems and learn about them.

I think most people wouldn’t consider this kind of ‘home’ experimentation science, but I think that this is what it’s all about.

He wrote a book about it:

The Internet Archive appears to normally have it but unfortunately they’re currently under attack by hackers.

There’s very little information about the book in online reviews.

Here’s some article about his beaver research:

The main points of the story look likely true. I’ll add the book to my reading list and maybe get to it eventually.

From the TikTok comments:

This makes me ponder the ability of AI. If we are controlled a lot by instinct (preprogramming) and make decisions based on environmental inputs, aren’t we just advanced AI?

I think a lot of people, when they find out that maybe animals (and/or AI) aren’t intelligent (I’m guessing he figured that out), start questioning whether humans are actually intelligent. That’s a good thing to consider. But humans have some clear differences from animals, such as writing philosophy and poetry books, and doing science. I think maybe this guy and some other people are failing to think of those differences or to see them as really important. One way to state it is that humans are capable of a lot of things that we don’t observe any animals doing and that we currently don’t know how to get software to do well.

I love the fact animals, and humans, instinctively know how to do things without being taught. Its really interesting & fascinating. Its just been hardwired in the brain through evolution.

I saw a special where a forest ranger used sounds to get beaver to build [dams] in certain place to help control dams from damaging other areas

They were probably real confused when the sound didn’t stop :laughing: