Elliot Temple and Corentin Biteau Discussion

Would that fit your definition of activism that isn’t unreasonable?

They directly target big companies in order to push them away from the worst practices in the field, like cages and killing chickens by boiling them. This doesn’t solve the problem entirely, of course, but these reforms improve the lives of many animals. Only a few people in big companies are targeted.

Most people are against factory farming (at least in the polls I’ve seen in France and the US). This doesn’t mean they act on it, but at the very least they don’t like seeing pictures of how animals are raised and would prefer it not to be a thing.

So this doesn’t oppose the majority of the population (unlike the “go vegan” message, which I would say is ineffective and often tribalist). It attracts much broader consensus, because there is often public support for companies changing. There is also a good track record of obtaining results here: many companies did change, and rather quickly.

Plus, the number of animals affected per dollar is huge: at least 100x more than the number of humans I could affect by donating to a charity. These large numbers compensate for the 10% uncertainty I have and for the possibility that animals suffer less than humans.
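To spell out the expected-value reasoning (a rough sketch; treating the 10% as a probability to discount by is my framing, not a precise figure): whatever probability p you assign to animals suffering, the comparison is 100 × p animals helped per dollar versus 1 human. Even at p = 0.1 that is 100 × 0.1 = 10, so the expected impact is roughly ten times the human baseline, before any further discount for animals possibly suffering less.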

The charities that do this used to focus on promoting veganism, noticed that it didn’t really work, and changed their approach to something more effective.

Does that fit your criteria?


As for the rest of the discussion, I have trouble seeing what I could show you that would lead you to be 99% certain that animals suffer.
(Well, personally I’d probably be vegetarian even if I were only 20% certain, given the very, very large number of beings affected: the average US consumer kills about 20 land animals, including 19 chickens, so even at 20% this would leave an expected 4 suffering animals.)
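(To make the formula explicit: expected suffering animals = p × N, where p is the credence that animals suffer and N the number of animals killed; here 0.20 × 20 = 4.)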

The problem I have is that I don’t see how your position could be disproved, or even verified. You explain away every behavioural observation associated with suffering, even pain, as something like “it’s just software”. But if we are not to use these observations, I don’t really see anything we could use as actual data, because we can’t get inside the animal’s head to verify.

I’m not sure I could prove to another person that I feel suffering and joy and am sentient. I’m pretty certain it’s possible to construct a philosophical argument that would explain away all of my feelings. So demonstrating that others have feelings is pretty much impossible.

I’m also not certain about the solidity of your causal chain. I still don’t see why wants or preferences would require knowledge creation. It’s pretty easy to imagine preferences and wants that exist by default - ones that cannot be changed because they were selected for by evolution.

The analogy you are making ended up being closer to “we can make a mindless copy of your dog, so your dog is mindless” than I thought. Of course, it’s not a very good argument, but it’s surprisingly hard to argue against - very hard to disprove, even though it feels like a syllogism. There’s a lot of analogizing to robots, but I still feel a comparison with humans would be more warranted on the topic of sentience and suffering.

I also feel that philosophical reasoning can be interesting for explaining things that are in our minds and that we can verify, but it has little predictive power. If there is a property specific to living brains that leads to suffering but is not present in robots, how would you know? We don’t even know how sentience arises or at what point - so how can we say we know animals can’t be sentient?

So, I have trouble seeing how to make progress here.