Elliot's Microblogging

Are microwaves and cellphones actually safe? On the theme of not trusting the experts, I did 5 minutes of research with Google Scholar.

Initial conclusions:

Microwaves are safe. The research was done in the 70s or earlier. Looks competent. The radiation is dangerous but the shielding works.

Cellphone research is shoddy and a bunch is industry-funded and biased. We could add cheap shielding to reduce radiation by multiple orders of magnitude, but I think we weren’t doing that as of 2009, presumably to keep phones slightly thinner and lighter (or maybe to keep a better external shape?). So cellphones might be a brain cancer risk. ugh.

This fits with a general theme I’ve noticed: science got worse. Random papers from the 70s are better than now. Publish or perish is evil and some things have gotten more corrupt. Science is having an Eternal September problem and being flooded with mediocre people who are then pressured to publish tons of stuff. Also maybe it’s been bought more by big corporate interests. It was a gradual transition, but a very rough guideline is that science after 1995 is worse. Also, stuff that’s too old is problematic because we knew less in the past (this varies a lot by field). For a lot of topics you have to read stuff from WWII or later to find work that’s still good today, not obsolete.

BTW a lot of meta studies are really low quality – pretty brainless efforts to search some keywords in some databases of papers, then categorize the papers with simple metrics, instead of thinking about explanations, concepts, arguments, what is refuted or not, etc. And this crap is much more common today b/c computers enable it and publish or perish incentivizes it. This is ironic because the “hierarchy of evidence” stuff advocated by some rationality type people says meta studies are the best evidence (see “What is the Hierarchy of Evidence?” on Research Square). Reviews are useful when they summarize ideas and talk in terms of concepts and explanations, but I often find just looking at some of the actual research, not meta stuff, is most useful.

A couple thoughts about my initial impressions of science papers/issues:

  1. My initial impressions are positive, negative or neutral. I do all three. I’m not just always pro-science or always pretty near neutral at first until I know more.
  2. I’m good at skimming and have a lot of experience with it.
  3. Sometimes I look at stuff in more depth and I often reach the same conclusion (and if I didn’t, then I’d think about what misled me, how to do better next time, etc.)
  4. I searched academic papers instead of popular media articles. I think that works better for a lot of topics.

A video (from a mainstream perspective, not someone challenging the status quo) says the scientific research shows depression is not caused by serotonin imbalance, and we’ve known that for a while. Tons of scientists don’t believe the serotonin-depression hypothesis but the public does (and the drug companies want the public to believe it). This implies that prescribing SSRIs (selective serotonin reuptake inhibitors) as anti-depressants is problematic…

Related, do you know what’s a less extreme, dangerous, risky thing to try than brain-affecting drugs like SSRIs? The lion diet. As a diet, it sounds very extreme. And it is. But it’s mild compared to SSRIs, so people should seriously consider trying it before trying SSRIs.

The lion diet is basically just red meat, water and salt. It’s an extreme elimination diet which tries very hard to eliminate any food that might be contributing to your health problems. It’s not meant to be permanent. The idea is that, after things improve, you can reintroduce foods one by one and see which are OK for you.

I think that “industry-funded” should be irrelevant to your judgement about a study, because it’s non-decisive.

Good people in industry would be interested in knowing whether or not their product is safe, and would therefore be interested in funding (good, unbiased) studies to investigate that.

E.g. You could say that Hank Rearden researched Rearden metal in an “industry-funded” study.

Industry-funded is not decisive alone, but it is relevant context which can be used in arguments and explanations. For example, one might find it implausible that a study’s authors made so many mistakes, on the basis that science isn’t that bad (in general), and then search for an explanation – which industry funding could help provide.

Part of the current context is widespread badness in large companies and zero companies resembling Rearden’s. That makes references to industry (or mega corporations or other familiar terms) understandable without specifying a qualifier.

atrioc on YT mentioned basically that mark cuban isn’t an impressive businessman. i looked it up.

Cuban suckered Yahoo into super massively overpaying for broadcast.com during the .com bubble. the acquisition failed within 3 years. Cuban did not provide value to his biz partner (yahoo) but became a billionaire from that failure.

but also i found more:

cuban wasn’t the founder. he bought 2% for 10k, then insisted his 2% was non-dilutable, fought with the founder for months, created a bunch of bad blood, interfered with getting other investors, and bullied the founder into giving up.

the founder agreed to keep only 10% of his company. cuban had maybe 50%, idk exactly but it couldn’t be much more than that. so cuban had 5x more than the founder, maybe less.

later when it was sold to yahoo, cuban had over 20x more than the founder. how? by continuing to fight over dilution rules instead of doing dilution evenly. the founder said in the interview that he would have had to bring in outside investment to keep his share at 10% and he went down to under 1% by not finding investors.
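Here’s a toy sketch in Python of that mechanism. All the numbers are made up – this is not the actual broadcast.com cap table, just an illustration of uneven dilution, where one holder’s percentage is protected and everyone else absorbs the new issuance:

```python
# Toy illustration of uneven dilution. Made-up numbers, not the real cap table.

def dilute(stakes, protected, new_fraction):
    """Issue new equity equal to new_fraction of the post-issuance company.

    Protected holders keep their exact percentage; everyone else (including
    earlier outside investors) absorbs all of the dilution.
    """
    protected_total = sum(v for k, v in stakes.items() if k in protected)
    unprotected_total = sum(v for k, v in stakes.items() if k not in protected)
    # Unprotected stakes shrink so protected holders + new investors still sum to 100%.
    scale = (1.0 - protected_total - new_fraction) / unprotected_total
    out = {k: (v if k in protected else v * scale) for k, v in stakes.items()}
    out["new investors"] = out.get("new investors", 0.0) + new_fraction
    return out

stakes = {"founder": 0.10, "protected partner": 0.50, "others": 0.40}
for _ in range(3):  # three funding rounds, each selling 15% of the company
    stakes = dilute(stakes, protected={"protected partner"}, new_fraction=0.15)
    print({k: round(v, 3) for k, v in stakes.items()})
# The protected partner stays at 50% the whole time; the founder drops from 10% to ~3.4%.
```

In this toy example the gap grows from 5x to roughly 15x after three rounds; with even dilution both stakes would shrink by the same factor and the gap would stay 5x.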

the founder was so bullied by Cuban, and such a gentle soul, that he just kinda didn’t care that much and wanted to get distance instead of fight. part of the deal they made, where cuban leveraged his 2% stake into taking over the company, involved the founder not having to report to Cuban, not having Cuban as his direct boss. also Cuban started denying the guy was even one of the founders at all, let alone the original founder who started the company 5 years before Cuban got involved. Cuban repeatedly under-communicated, didn’t negotiate contract terms, then insisted stuff meant whatever was good for him without consent or agreement.

so many of the famous businessmen today are so awful.

i found the interview after noticing the very unequal dilution based on the numbers on wikipedia. after a couple searches, i was unable to find anything on the internet directly addressing the dilution issue and actually talking about it as something important. like no one seems to have ever written an article about it. the linked interview was the best info i found.


It’s hard to come up with good guarantees for philosophy consulting or tutoring because people’s progress depends more on them than on me, and they may be dishonest or irrational. I do have a 30 day satisfaction guarantee for digital products.

Tutors in general teach to tests, which is easier than trying to help people actually understand stuff, and they still generally don’t offer guarantees.

Many therapists, psychiatrists and business consultants offer no guarantees and have pretty low success rates, but get hired anyway.

I can guarantee time spent (this is implied with hourly billing btw) or words written or hours of video recorded. But those aren’t the results that clients want. That’s kinda like a programmer making a guarantee about lines of code written instead of about the code working well or, better, the client getting a business outcome that they want (e.g. higher sales).

GOAL: My goal with this post is to react to something Elliot shared and talked about. I would love responses to the questions below but I personally got value just from thinking and writing a bit here.

Why is it not plausible to make some conditional guarantee based on the buyer’s demonstrable learning effort with the product? Something like if you do X, Y, and Z you should at least have result C or better. I’m guessing you have thought of a chain of reasoning with this idea and the problems that come up. What level of engagement is needed to make some measurable/objective progress for a typical person or even a significantly below average person?

Related to this, I was wondering if you would ever want to develop a sort of Khan Academy for Critical Fallibilism? (if that is even possible). What level of resources would you need to consider trying to make something like that?

I know you have talked about not wanting to write a book because it will be so little understood. What software-based learning platforms have you seen that appear to have potential? You have shared some articles by one of the contributors to the Quantum Country website before. Do you think something like that has potential?

Actually only one article that I can remember.

Do you have a specific example that you think would work?

I make articles and videos because I think those are good formats. I don’t think something else is better. Most philosophy stuff isn’t very visual. Books are fine too. The writing on the CF site is book length already.

META-ISSUE: I don’t think this post is a good response but I think it’s not far off the best that I can do for a casual response with some degree of thoughtfulness. Also, not sure how well, or how much better, I could do if I was working on this response like it was my job.

I don’t have an example of a guarantee that I think would work. Philosophy seems so open-ended that it’s hard to think of a meaningful guarantee. The point of Stark’s article seemed to be that people can try making extremely limited guarantees that are somewhat meaningful.

I guess I just felt like there should be some level at which something could be guaranteed. Like, if someone memorized your top 50 articles and could give close to verbatim answers to the top 100 FAQs about CF, then that would indicate that they at least understand the basics of CF. That’s not a guarantee though because it’s not specific.

Part of the problem is that I don’t know what someone should be capable of at various levels of philosophy knowledge. With programming or auto-mechanics, there are certain things you should be able to make happen at certain knowledge levels. I’m not sure what the levels are with philosophy or what new things one should be able to make happen at those levels.

Trying to think of guarantees kept leading me to try and come up with tests/diagnostics of philosophy knowledge. That’s what made me start to think of learning platforms for Critical Fallibilism. I have found tests with answer keys to be pretty good for learning. Step-by-step solutions are even better. Immediate interactive response to input seems even better than that. I think you, too, have talked about some of that being good.

In addition, I guess I was wondering if there was maybe a negative guarantee, like if you do these 50 things and have not achieved X, Y, or Z, then you are not making progress. Maybe in those cases someone should focus on conventional self-improvement, work, family, friends, and other normal stuff for the next 2 years before trying again or something like that?

Philosophy is about creativity a lot, which clashes with guarantees, worksheets, tests, answer keys, etc. Some skills which build towards philosophy, like making paragraph trees, are less creative, so there’s more consistency and agreement for the answers that different people come up with.

Related to Elliot Shares Links (2022) - #219 by Elliot

I have not researched what’s true, but the basic story claimed here is:

Musk pays Jared Birchall between 1 and 3 million dollars per year. In return, Birchall actually gets stuff done. Musk doesn’t know how to do things; he cheaply hired someone who does.

In other words, Birchall acts as Dagny Taggart to Musk’s James Taggart role. If so, he’s massively underpaid and also shouldn’t do it at all.

But who knows. Maybe Birchall is a shitty leech himself who just socializes with people and then gets underlings to do the real work but takes the credit. I do generally question the competence of whoever is on Musk’s team since the overall result with the Twitter acquisition, and various other projects, has been so bad. But Taggart Transcontinental got a bunch of bad results that weren’t Dagny’s fault. An underpaid competent person could help explain what’s going on with Musk getting so rich despite being so awful.


GOAL: React to post and try questioning and articulating intuitions as mentioned in this article: Intuition and Rationality

I went through a bunch of thoughts before figuring out what to start writing. I guess one reaction/intuition I have is that there doesn’t seem to be anyone in the world, who is operating at scale, with the skill and morality of the heroes in Atlas Shrugged. I don’t know what to think about this since the actual world seems to be much better off than the world represented in AS but the best big businessmen in the real world seem worse.

My reading of the article is that Birchall is good at managing Musk’s personality and basically does what Musk says without innovative initiative. The two main indications I got for that are that Birchall has a reputation for being good at managing wealthy clients’ money and that he was willing to dig up dirt on the British cave diver. The article didn’t say Birchall was a good investor who was able to generate above average returns. The implication was more like, Birchall knows how to talk with wealthy clients and make them comfortable. Digging up dirt on a cave diver is probably not a very productive activity and probably not something Hank Rearden would do.

My best guess about Musk’s career success is that it is largely due to having gotten lots of unpaid media advertising for being savvy about environmental issues and to being good at “innovation theater”. I don’t think the media cared about his underlying flaws for a long time but cares more about them now that he has more power. It’s kinda like how Trump got lots of free media, because he boosted ratings, but then it backfired when he was serious and successful in trying to become president. With Musk, it was more a combination of fitting the narrative and later being good for clicks/ratings/etc.

Both of Musk’s major companies are highly intermingled with government, including state contracts, subsidies, and special privileges. So maybe his extreme wealth doesn’t need that much explaining after taking into account his marketing advantages (including being good at marketing himself) and his relationship with the government.

There are a couple things in Musk’s career that make him appear like more of a hero. One thing is that he apparently did risk all ~$180 million of his net worth to bail out his companies when they were not doing well. Maybe the story is highly embellished and not as big a bet on doing something great as it sounds. I don’t know why someone who seems more like Orren Boyle than Hank Rearden would risk the money at all though. Maybe he wanted more fame, power, and status than ~$180 million could offer. The other thing that makes him seem good is that he has led, at least officially led, some successful projects with SpaceX and Tesla. Even if these companies should just be counted as part of the government, they have been somewhat innovative departments in the government. But maybe he is doing way less than someone more competent could do with the same resources.

META-COMMENT: I don’t think this went too great. I couldn’t figure out how to question my intuitions explicitly. Instead, I just free-wrote a bunch of ideas related to some intuitions. I guess I should try being more explicit about which intuition I’m trying to learn about. I don’t know which intuitions are which though, so it’s a bit hard to figure out which one I’m working on.

The basic method I suggested is to consider hypothetical scenarios/questions and see what opinions your intuitions have for those other scenarios.

So e.g. what if Musk didn’t build a factory in China and had no CCP connections, but everything else is the same. Does that change your intuition somehow?

What if SpaceX had actually started a Mars colony and was clearly making a ton of innovative progress?

What if Musk had little media coverage? Or primarily negative coverage?

You can take the real scenario, change a variable, and check how you feel about it. This can take some time and effort because you have to reimagine the world a different way enough to gain intuitions about the new world. If you just briefly mention a description of how the scenario is different, your intuitions probably won’t grasp it enough to react properly (when it’s complex stuff like this – it’s easier with simpler stuff). You’ll have to think it through more to get better intuitive reactions. Visualize the new thing, imagine it, tell stories about it, explore it, etc. A world with a Mars colony that’s going well would be pretty different and would take some thought and imagination to come to terms with and understand well enough to react to.

This is a complex scenario, so one thing you can look for is “when I change X, I don’t have a strong reaction” and “when I change Y, my intuitions react strongly” which helps you figure out which variables matter to your intuitions. Sometimes you’ll have strong reactions before even thinking the change through a lot.

I read all the posts on my forum. Some people probably consciously know that. But their subconscious doesn’t know it very well. If I “like” a post, they feel less ignored or alone than if I read it without clicking “like”. Even if they consciously know that I read it in both scenarios (clicking “like” or not).

I don’t like upvotes as a popularity contest. And upvotes from people you don’t know or don’t respect don’t really indicate anything but popularity (within a subculture of the forum/subreddit/etc users). And downvotes are worse than upvotes – disliking merits explanation more than liking does. Negative popularity contest type things are worse than positive ones. People liking some celebrities is way better than ostracizing some heretics.

Upvotes from people you know can indicate more specific things, with some useful meaning, like “I read this one” (matters more from people who aren’t reading all posts – but still commonly matters to people’s subconscious regardless) or “I didn’t see a major problem”.

Although I still dislike popularity contests and unexplained/unargued disliking, and I think they’re mostly used in bad ways, my opinion of “likes” as a minimal communication method has gone up as I’ve generally become less chatty with people. On the other hand, I think ~everyone but me could use more practice explaining (in written words) why they like stuff.

I like this post. I think it’s an interesting insight that I can’t remember explicitly thinking about before. I also find the idea and the wording of it humorous in a way. It makes me chuckle when I re-read it. The idea kind of reminds me of movie scenes where there is an intimidating guy standing around and somebody says “he doesn’t talk much, does he?”.

On the Effective Altruism forum, I don’t know if anyone reads all the posts. There are more posts. I think some are ignored. Even if a few people read a post, they may not read it very thoughtfully or engage with the ideas. Getting a few upvotes from unknown people, and no good replies, isn’t very satisfying there either. Especially when you think you said something important that merits a reply, so whoever is upvoting it actually disagrees with that…

They should have some friendly people who run the community, welcome new members, and try to ensure that either no one is being ignored or else people who aren’t getting much attention are given some kind of reason (and some reasonably concrete ways to fix it). Being ignored for ambiguous reasons by people who don’t actually admit to ignoring you (and might deny it if you brought it up) sucks.

But they have no welcoming committee and no organized effort to divide up labor between some regular, active participants so that everything gets some attention.

Like I can read the whole CF forum. I’ve read more active forums in full before. They have less than 10x the activity that I could keep up with. So if they had ten active users to help make it a good community and assigned them all a 10th of the posts, then they’d be able to have every post get read. But they don’t try to do things like that. I don’t think they’re interested. I don’t really know why not besides their general hostility to being organized or methodical.

When I first started writing about using methods, being more organized, etc., I thought it was fairly in line with common sense. I thought it was useful but not especially innovative. But the more people react highly negatively to it, the more it seems like maybe I’m saying something unusual and important. But I still can’t get any kind of clear objections, just unexplained but strong resistance.

DD was wrong about conspiracy theories (~6 part series linked there). I haven’t reread it and don’t know exactly what he got wrong. But e.g. the health authorities knew about and covered up covid vaccine problems. They killed a ton of people delaying the vaccine so they could do safety trials, the trials turned up problems, and then they lied to the public about the results of the trials.

Whistleblowers in general are uncommon and treated poorly – people do awful things all over the place and it’s hard to speak out. It’s also hard to speak out as a victim in general (see e.g. Amber Heard). Some victims get a lot of attention and support but most don’t. People don’t like complainers and don’t want society to be bad.

Example: there was a reddit thread where some kind of lights had caused a fire and the thread tried to warn people about them. Tons of people jumped in to blame user error, including buying a cheap, shoddy brand. One of the people doing that actually had the exact same standard, reputable brand as the person who had a house fire. Instead of thinking “my thing is unsafe” they want to blame bad luck or the victim so that their life still seems safe and adequate. It’s like the reaction “I’ve never had a problem with the police, so you must have been doing something wrong.” They don’t want to believe that they too could have a problem with the police despite not doing anything wrong, because then they’d feel less safe and happy.

A quote about the definition of a conspiracy theory from the first DD article:

https://fallibleliving.com/stwtr/pdfs/166%20Conspiracy%20Theories%20–%201-%20The%20Basics.pdf

A conspiracy theory is an explanation of observed events in current affairs and history … which

  - alleges that those events were planned and caused in secret by powerful (or allegedly powerful) conspirators, who thereby…
  - benefit at the expense of others, and who therefore…
  - lie, and suppress evidence, about their secret actions, and…
  - lie about the motives for their public actions.

Conspiracy theories are widely regarded as characteristic of irrational modes of thinking. The very term ‘conspiracy theory’ is usually reserved for irrational explanations meeting the above criteria. For conspiracies do happen. Criminal conspiracies are proved every day in courts. Political conspiracies are discovered from time to time. If we can rationally explain a bank robbery as being the consequence of a conspiracy, why not a war? Or the world economic system? What distinguishes a conspiracy theory (irrational, by definition) from a sane opinion that a particular group of people worked in secret to bring about certain observed events for their own immoral purposes?

In the rest of the series DD makes arguments against theories fitting his definition of a conspiracy theory.

DD’s conspiracy theory series takes a particular kind of theory about political lying and criticises it. I don’t think that his usage matches the way that term is commonly used. Just about any theory about political lying or manipulation might be called a conspiracy theory. For example, this article uses the term conspiracy theory to describe the claim that some poll watchers were blocked from observing ballot counting:

The vaccine lying Elliot described also doesn’t fit DD’s pattern for a conspiracy theory. What would be needed to discuss issues like the vaccine example rationally is a wider discussion of theories about political lying. So DD’s discussion isn’t very useful even if he was right about all the arguments he made about the definition he came up with.