Elliot's Microblogging

atrioc on YT mentioned basically that mark cuban isn’t an impressive businessman. i looked it up.

Cuban suckered Yahoo into super massively overpaying for broadcast.com during the .com bubble. the acquisition failed within 3 years. Cuban did not provide value to his biz partner (yahoo) but became a billionaire from that failure.

but also i found more:

cuban wasn’t the founder. he bought 2% for 10k, then insisted his 2% was non-dilutable, fought with the founder for months, created a bunch of bad blood, interfered with getting other investors, and bullied the founder into giving up.

the founder agreed to keep only 10% of his company. cuban had maybe 50%, idk exactly but it couldn’t be much more than that. so cuban had 5x more than the founder, maybe less.

later when it was sold to yahoo, cuban had over 20x more than the founder. how? by continuing to fight over dilution rules instead of doing dilution evenly. the founder said in the interview that he would have had to bring in outside investment to keep his share at 10% and he went down to under 1% by not finding investors.
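As a rough sketch (the 50% stake and the 20%-per-round issuance are made-up illustrative numbers, not actual cap-table data), here's how a non-dilutable stake pulls away from a normally diluting one:

```python
# Made-up illustration: a contractually non-dilutable stake vs. a
# normal stake across successive share issuances.
founder = 0.10  # founder's stake after the initial deal
cuban = 0.50    # rough guess at Cuban's stake; exact figure unknown

# Suppose each round issues new shares equal to 20% of the company.
# Normal holders shrink by a factor of 1.2 per round, while the
# non-dilutable stake gets topped up to keep its percentage.
for _ in range(8):
    founder /= 1.2  # diluted every round; cuban stays at 0.50

print(round(founder, 4))       # → 0.0233 (about 2.3%)
print(round(cuban / founder))  # → 21, i.e. "over 20x"
```

More rounds would push the founder under 1%, as described in the interview, while the protected stake never budges.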

the founder was so bullied by Cuban, and such a gentle soul, that he just kinda didn’t care that much and wanted to get distance instead of fight. part of the deal they made, where cuban leveraged his 2% stake into taking over the company, involved the founder not having to report to Cuban, not having Cuban as his direct boss. also Cuban started denying the guy was even one of the founders at all, let alone the original founder 5 years ahead of Cuban. Cuban repeatedly under-communicated, didn’t negotiate contract terms, then insisted stuff meant whatever was good for him without consent or agreement.

so many of the famous businessmen today are so awful.

i found the interview after noticing the very unequal dilution based on the numbers on wikipedia. after a couple searches, i was unable to find anything on the internet directly addressing the dilution issue and actually talking about it as something important. like no one seems to have ever written an article about it. the linked interview was the best info i found.


It’s hard to come up with good guarantees for philosophy consulting or tutoring because people’s progress depends more on them than on me, and they may be dishonest or irrational. I do have a 30 day satisfaction guarantee for digital products.

Tutors in general teach to tests, which is easier than trying to help people actually understand stuff, and they still generally don’t offer guarantees.

Many therapists, psychiatrists and business consultants offer no guarantees and have pretty low success rates, but get hired anyway.

I can guarantee time spent (this is implied with hourly billing btw) or words written or hours of video recorded. But those aren’t the results that clients want. That’s kinda like a programmer making a guarantee about lines of code written instead of about the code working well or, better, the client getting a business outcome that they want (e.g. higher sales).

GOAL: My goal with this post is to react to something Elliot shared and talked about. I would love responses to the questions below but I personally got value just from thinking and writing a bit here.

Why is it not plausible to make some conditional guarantee based on the buyer’s demonstrable learning effort with the product? Something like if you do X, Y, and Z you should at least have result C or better. I’m guessing you have thought of a chain of reasoning with this idea and the problems that come up. What level of engagement is needed to make some measurable/objective progress for a typical person or even a significantly below average person?

Related to this, I was wondering if you would ever want to develop a sort of Khan Academy for Critical Fallibilism? (if that is even possible). What level of resources would you need to consider trying to make something like that?

I know you have talked about not wanting to write a book because it will be so little understood. What software-based learning platforms have you seen that appear to have potential? You have shared some articles by one of the contributors to the quantum country website before. Do you think something like that has potential?

Actually only one article that I can remember.

Do you have a specific example that you think would work?

I make articles and videos because I think those are good formats. I don’t think something else is better. Most philosophy stuff isn’t very visual. Books are fine too. The writing on the CF site is book length already.

META-ISSUE: I don’t think this post is a good response but I think it’s not far off the best that I can do for a casual response with some degree of thoughtfulness. Also, not sure how well, or how much better, I could do if I was working on this response like it was my job.

I don’t have an example of a guarantee that I think would work. Philosophy seems so open-ended that it’s hard to think of a meaningful guarantee. The point of Stark’s article seemed to be that people can try making extremely limited guarantees that are somewhat meaningful.

I guess I just felt like there should be some level at which something could be guaranteed. Like, if someone memorized your top 50 articles and could give close to verbatim answers to the top 100 FAQs about CF, then that would indicate that they at least understand the basics of CF. That’s not a guarantee though because it’s not specific.

Part of the problem is that I don’t know what someone should be capable of at various levels of philosophy knowledge. With programming or auto-mechanics, there are certain things you should be able to make happen at certain knowledge levels. I’m not sure what the levels are with philosophy or what new things one should be able to make happen at those levels.

Trying to think of guarantees kept leading me to try and come up with tests/diagnostics of philosophy knowledge. That’s what made me start to think of learning platforms for Critical Fallibilism. I have found tests with answer keys to be pretty good for learning. Step-by-step solutions are even better. Immediate interactive response to input seems even better than that. I think you, too, have talked about some of that being good.

In addition, I guess I was wondering if there was maybe a negative guarantee, like if you do these 50 things and have not achieved X, Y, or Z, then you are not making progress. Maybe in those cases someone should focus on conventional self-improvement, work, family, friends, and other normal stuff for the next 2 years before trying again or something like that?

Philosophy is about creativity a lot, which clashes with guarantees, worksheets, tests, answer keys, etc. Some skills which build towards philosophy, like making paragraph trees, are less creative, so there’s more consistency and agreement for the answers that different people come up with.

Related to Elliot Shares Links (2022) - #219 by Elliot

I have not researched what’s true, but the basic story claimed here is:

Musk pays Jared Birchall between 1 and 3 million dollars per year. In return, Birchall actually gets stuff done. Musk doesn’t know how to do things; he cheaply hired someone who does.

In other words, Birchall acts as Dagny Taggart to Musk’s James Taggart role. If so, he’s massively underpaid and also shouldn’t do it at all.

But who knows. Maybe Birchall is a shitty leech himself who just socializes with people and then gets underlings to do the real work but takes the credit. I do generally question the competence of whoever is on Musk’s team since the overall result with the Twitter acquisition, and various other projects, has been so bad. But Taggart Transcontinental got a bunch of bad results that weren’t Dagny’s fault. An underpaid competent person could help explain what’s going on with Musk getting so rich despite being so awful.


GOAL: React to post and try questioning and articulating intuitions as mentioned in this article: Intuition and Rationality

I went through a bunch of thoughts before figuring out what to start writing. I guess one reaction/intuition I have is that there doesn’t seem to be anyone in the world, who is operating at scale, with the skill and morality of the heroes in Atlas Shrugged. I don’t know what to think about this since the actual world seems to be much better off than the world represented in AS but the best big businessmen in the real world seem worse.

My reading of the article is that Birchall is good at managing Musk’s personality and basically does what Musk says without innovative initiative. The two main indications I got for that are that Birchall has a reputation for being good at managing wealthy clients’ money and that he was willing to dig up dirt on the British cave diver. The article didn’t say Birchall was a good investor who was able to generate above average returns. The implication was more like, Birchall knows how to talk with wealthy clients and make them comfortable. Digging up dirt on a cave diver is probably not a very productive activity and probably not something Hank Rearden would do.

My best guess about Musk’s career success is that it is largely due to having gotten lots of unpaid media advertising for being savvy about environmental issues and by being good at “innovation theater”. I don’t think the media cared about his underlying flaws for a long time but cares more about them now that he has more power. It’s kinda like how Trump got lots of free media, because he boosted ratings, but then it backfired when he seriously and successfully ran for president. With Musk, it was more a combination of fitting the narrative and later being good for clicks/ratings/etc.

Both of Musk’s major companies are highly intermingled with government, including state contracts, subsidies, and special privileges. So, maybe his extreme wealth doesn’t need that much explaining after taking into account his marketing advantages, which include being good at marketing, and his relationship with the government.

There are a couple things in Musk’s career that make him appear like more of a hero. One thing is that he apparently did risk all ~$180 million of his net worth to bail out his companies when they were not doing well. Maybe the story is highly embellished and not as big a bet on doing something great as it sounds. I don’t know why someone who seems more like Orren Boyle than Hank Rearden would risk the money at all though. Maybe he wanted more fame, power, and status than ~$180 million could offer. The other thing that makes him seem good is that he has led, at least officially led, some successful projects with SpaceX and Tesla. Even if these companies should just be counted as part of the government, they have been somewhat innovative departments in the government. But maybe he is doing way less than someone more competent could do with the same resources.

META-COMMENT: I don’t think this went too great. I couldn’t figure out how to question my intuitions explicitly. Instead, I just free-wrote a bunch of ideas related to some intuitions. I guess I should try being more explicit about which intuition I’m trying to learn about. I don’t know which intuitions are which though so it’s a bit hard to figure out which one I’m working on.

The basic method I suggested is to consider hypothetical scenarios/questions and see what opinions your intuitions have for those other scenarios.

So e.g. what if Musk didn’t build a factory in China and had no CCP connections, but everything else is the same. Does that change your intuition somehow?

What if SpaceX had actually started a Mars colony and was clearly making a ton of innovative progress?

What if Musk had little media coverage? Or primarily negative coverage?

You can take the real scenario, change a variable, and check how you feel about it. This can take some time and effort because you have to reimagine the world a different way enough to gain intuitions about the new world. If you just briefly mention a description of how the scenario is different, your intuitions probably won’t grasp it enough to react properly (when it’s complex stuff like this – it’s easier with simpler stuff). You’ll have to think it through more to get better intuitive reactions. Visualize the new thing, imagine it, tell stories about it, explore it, etc. A world with a Mars colony that’s going well would be pretty different and would take some thought and imagination to come to terms with and understand well enough to react to.

This is a complex scenario, so one thing you can look for is “when I change X, I don’t have a strong reaction” and “when I change Y, my intuitions react strongly” which helps you figure out which variables matter to your intuitions. Sometimes you’ll have strong reactions before even thinking the change through a lot.

I read all the posts on my forum. Some people probably consciously know that. But their subconscious doesn’t know it very well. If I “like” a post, they feel less ignored or alone than if I read it without clicking “like”. Even if they consciously know that I read it in both scenarios (clicking “like” or not).

I don’t like upvotes as a popularity contest. And upvotes from people you don’t know or don’t respect don’t really indicate anything but popularity (within a subculture of the forum/subreddit/etc users). And downvotes are worse than upvotes – disliking merits explanation more than liking does. Negative popularity contest type things are worse than positive ones. People liking some celebrities is way better than ostracizing some heretics.

Upvotes from people you know can indicate more specific things, with some useful meaning, like “I read this one” (matters more from people who aren’t reading all posts – but still commonly matters to people’s subconscious regardless) or “I didn’t see a major problem”.

Although I still dislike popularity contests and unexplained/unargued disliking, and I think they’re mostly used in bad ways, my opinion of “likes” as a minimal communication method has gone up as I’ve generally become less chatty with people. On the other hand, I think ~everyone but me could use more practice explaining (in written words) why they like stuff.

I like this post. I think it’s an interesting insight that I can’t remember explicitly thinking about before. I also find the idea and the wording of it humorous in a way. It makes me chuckle when I re-read it. The idea kind of reminds me of movie scenes where there is an intimidating guy standing around and somebody says “he doesn’t talk much, does he?”.

On the Effective Altruism forum, I don’t know if anyone reads all the posts. There are more posts. I think some are ignored. Even if a few people read a post, they may not read it very thoughtfully or engage with the ideas. Getting a few upvotes from unknown people, and no good replies, isn’t very satisfying there either. Especially when you think you said something important that merits a reply, so whoever is upvoting it actually disagrees with that…

They should have some friendly people who run the community, welcome new members, try to ensure that either no one is being ignored or else people who aren’t getting much attention are given some kind of reason (and some reasonably concrete ways to fix it). Being ignored for ambiguous reasons by people who don’t actually admit to ignoring you (and might deny it if you brought it up) sucks.

But they have no welcoming committee and no organized effort to divide up labor between some regular, active participants so that everything gets some attention.

Like I can read the whole CF forum. I’ve read more active forums in full before. They have less than 10x the activity that I could keep up with. So if they had ten active users to help make it a good community and assigned them all a 10th of the posts, then they’d be able to have every post get read. But they don’t try to do things like that. I don’t think they’re interested. I don’t really know why not besides their general hostility to being organized or methodical.
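That split could be as simple as a round-robin assignment (the usernames and counts here are hypothetical):

```python
# Sketch: divide a forum's posts among volunteer readers round-robin
# so that every post gets at least one reader.
readers = [f"user{i}" for i in range(1, 11)]  # 10 hypothetical volunteers
posts = [f"post{i}" for i in range(1, 101)]   # e.g. 100 posts in a period

assignments = {r: [] for r in readers}
for i, post in enumerate(posts):
    assignments[readers[i % len(readers)]].append(post)

# Each reader ends up with an equal tenth of the posts.
print(len(assignments["user1"]))  # → 10
```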

When I first started writing about using methods, being more organized, etc., I thought it was fairly in line with common sense. I thought it was useful but not especially innovative. But the more people react highly negatively to it, the more it seems like maybe I’m saying something unusual and important. But I still can’t get any kind of clear objections, just unexplained but strong resistance.

DD was wrong about conspiracy theories (~6 part series linked there). I haven’t reread it and don’t know exactly what he got wrong. But e.g. the health authorities knew about and covered up covid vaccine problems. They killed a ton of people delaying the vaccine so they could do safety trials, the trials turned up problems, and then they lied to the public about the results of the trials.

Whistleblowers in general are uncommon and treated poorly – people do awful things all over the place and it’s hard to speak out. It’s also hard to speak out as a victim in general (see e.g. Amber Heard). Some victims get a lot of attention and support but most don’t. People don’t like complainers and don’t want society to be bad.

Example: there was a reddit thread where some kind of lights had caused a fire, and the poster tried to warn people about them. Tons of people jumped to blame user error, including buying a cheap, shoddy brand. One of the people doing that actually had the exact same standard, reputable brand as the person who had the house fire. Instead of thinking “my thing is unsafe”, they want to blame bad luck or the victim so that their life still seems safe and adequate. It’s like the reaction “i’ve never had a problem with the police, so you must have been doing something wrong.” They don’t want to believe that they too could have a problem with the police despite not doing anything wrong, because then they’d feel less safe and happy.

A quote about the definition of a conspiracy theory from the first DD article:


A conspiracy theory is
an explanation of observed events in current affairs and history … which
alleges that those events were planned and caused in secret by powerful (or allegedly powerful) conspirators, who thereby…
benefit at the expense of others, and who therefore…
lie, and suppress evidence, about their secret actions, and…
lie about the motives for their public actions.

Conspiracy theories are widely regarded as characteristic of irrational modes of thinking. The very term ‘conspiracy theory’ is usually reserved for irrational explanations meeting the above criteria. For conspiracies do happen. Criminal conspiracies are proved every day in courts. Political conspiracies are discovered from time to time. If we can rationally explain a bank robbery as being the consequence of a conspiracy, why not a war? Or the world economic system? What distinguishes a conspiracy theory (irrational, by definition) from a sane opinion that a particular group of people worked in secret to bring about certain observed events for their own immoral purposes?

In the rest of the series DD makes arguments against theories fitting his definition of a conspiracy theory.

DD’s conspiracy theory series takes a particular kind of theory about political lying and criticises it. I don’t think that his usage matches the way that term is commonly used. Just about any theory about political lying or manipulation might be called a conspiracy theory. For example, this article uses the term conspiracy theory to describe the claim that some poll watchers were blocked from observing ballot counting:

The vaccine lying Elliot described also doesn’t fit DD’s pattern for a conspiracy theory. What would be needed to discuss issues like the vaccine example rationally is a wider discussion of theories about political lying. So DD’s discussion isn’t very useful even if he was right about all the arguments he made about the definition he came up with.

The COVID case is weird because they both lied about it extensively (and censored information about it) but also put it in public writing. For example:

Moderna (mRNA-1273) Vaccine

This was a phase 1, dose-escalation, open-label clinical trial to assess the safety and efficacy of mRNA-1273 (Moderna) vaccine in 45 healthy adults of 18 to 55 age. The test vaccine was administered in two doses at the gap of 28 days in the dose of 25 μg, 100 μg, and 250 μg. There were 15 participants in each dose group. The safety endpoint was the occurrence of any adverse event after seven days of each dose [7].

After both vaccinations, the common solicited systemic adverse reactions were of mild to moderate intensity (included headache, chills, fatigue, myalgia, and pain at the site of injection). Local adverse events were of mild to moderate intensity and the most commonly reported local reaction was pain at the site of injection. The systemic adverse reaction was in 5 of 15 (33%) in 25 μg, 10 out of 15 (67%) in 100 μg, and 8 of 15 (53%) in the 250 μg dose group. All the systemic reactions were mild and were common after the second vaccination with 7 of 13 (54%) in the 25 μg group, 15 of 15 (100%) in the 100 μg group, and 14 of 14 (100%) in 250 μg dose group. There was no incidence of fever in any participant after the first vaccination but after the second vaccination 6 out of 15 (40%) in 100 μg and 8 of 14 (57%) in the 100 μg dose group. One participant had a fever of 39.6 °C which was graded as a severe adverse reaction. One participant was withdrawn from the study due to the occurrence of transient urticarial after the first dose in the 25 μg dose group. There was no serious adverse reaction reported during this clinical trial [7].

I don’t know why the summary says that someone had a fever that was graded as severe, but then says no serious adverse reactions were reported. I think that says something about peer review. It also says something about the study it’s summarizing, which says:

No serious adverse events were noted

But also says:

one participant in the 25-μg group was withdrawn because of an unsolicited adverse event, transient urticaria, judged to be related to the first vaccination.


Solicited systemic adverse events were more common after the second vaccination and occurred in 7 of 13 participants (54%) in the 25-μg group, all 15 in the 100-μg group, and all 14 in the 250-μg group, with 3 of those participants (21%) reporting one or more severe events.

If at least 3 people reported at least one severe event, and also someone got rash/itch from the first vaccination bad enough to withdraw them from the study, then how were there no serious adverse events?

Anyway, the study being summarized in my first quote was published July 2020. It found a 33% systemic adverse reaction rate in the best group (25 μg, which is the dose they later used in the public vaccination campaign). They published that negative information but then basically didn’t analyze it and put a positive spin on things. Then the media told people the vaccine is scientifically studied and proven safe, and people believed the adverse reaction rate was much lower than 33%. (This was a small sample size. I think actual vaccine safety is better than a 33% adverse reaction rate but worse than the media has told the public.)
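For reference, re-deriving the quoted percentages from the raw participant counts in the summary above (counts copied from the quote; the dose labels and variable names are mine):

```python
# Recompute systemic adverse-reaction rates from the raw counts
# quoted in the trial summary: (events, participants) per dose group.
def pct(events, n):
    return round(100 * events / n)  # whole-percent rate

first_dose = {"25ug": (5, 15), "100ug": (10, 15), "250ug": (8, 15)}
second_dose = {"25ug": (7, 13), "100ug": (15, 15), "250ug": (14, 14)}

for dose, (events, n) in first_dose.items():
    print(dose, pct(events, n))   # 25ug 33, 100ug 67, 250ug 53
for dose, (events, n) in second_dose.items():
    print(dose, pct(events, n))   # 25ug 54, 100ug 100, 250ug 100
```

These match the quoted figures, including the 33% rate in the 25 μg group that became the public dose.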

The results section contains numbers that look concerning to me, but then the conclusion section just says:

no trial-limiting safety concerns were identified. These findings support further development of this vaccine.

Then the media said ~“super safe; don’t even hesitate”.

I thought that if they had found important safety concerns during the vaccine trials, that would have been reported to the public, but I was wrong.

The scientists reported their data matter of factly (at least sometimes; I don’t know what was left unpublished or falsified). Unlike their data, their analysis appears agenda-driven, and one of their main tactics is to simply not analyze some concerning data. When I look at the data/numbers reported, it looks concerning to me. But they just publish those numbers and then say everything is fine with no real analysis of the concerns. (BTW another tactic, used elsewhere, is to decide that adverse events in study participants were not vaccine related, without explanation. Another tactic is to limit data collection, e.g. only counting and recording adverse events within 7 days of a vaccination. Another tactic is to categorize some adverse reactions as “unsolicited” by only actually asking study participants about some problems, apparently not including rash/itch in this study.) They also wrote:

Those studies showed that solicited systemic adverse events tended to be more frequent and more severe with higher doses and after the second vaccination.

They knew that two vaccinations was riskier than one, but told the public it’s fine and safe. They knew that having the two doses closer together was riskier, but they told people to get them close together.

One way DD’s analysis is wrong, IIRC (still haven’t reread), is he thinks if you do evil stuff and a bunch of people know, then some will do whistle blowing. There are some problems with that. First, whistle blowing is discouraged and punished in a lot of ways (kind of like being a victim, or otherwise complaining and wanting anything to change). Second, lots of people did try to whistle blow about the COVID vaccines, but a lot of their information was suppressed by Google, Facebook, Twitter and a few other platforms with a huge amount of control over communication. Third, most people trying to blow the whistle on the COVID vaccine didn’t know what they were talking about, so there was a big signal/noise ratio problem.


I don’t think Zapier can exclude gaming videos from being auto-posted here. I don’t think it’s a big deal if some get posted so I’ll just ignore it for now.

Zapier is the automation tool that is auto-posting CF videos, curi videos, and CF articles at this forum.

I commented on a substack:

I view a lot of the advice as more bland, generic and not useful rather than actively bad (which seems more like a terminology difference than a substantive disagreement). People are often incentivized to write it because they can get attention and rewards for giving advice, because they’re famous and popular enough. Or because e.g. giving advice for other CEOs may convince investors that they know how to be a CEO. So I think the main reason they write bland and generic advice is because they don’t have any special insight to share, but they want to write advice anyway. Similarly, a lot of mediocre books exist because people have reasons that they want to be an author other than having something novel and important to say.

Reading headlines like this (Meta and others are doing similar atm), I remember that Goldratt was against firing people and thought companies should plan ahead to avoid outcomes like this. I view it as indicating a planning and leadership failure by any company doing it, which also indicates they disrespect their workers. It partly means executives are reacting to current trends instead of planning ahead and having effective contingencies, which is really bad for a leader of a giant company – they should have actual vision and leadership abilities instead of being trend followers.

I think a lot of people dislike layoffs but don’t see them as bad or wrong. Like it sucks for the workers but the company or decision makers aren’t actually doing something wrong based on their own perspective/incentives. But I think it means the people in charge are bad at their jobs. So if you’re firing 10k ppl, the CEO should be first on the list.

CAVEATS: I’m not claiming all of a company’s reasons for wanting to fire people are good or even lawful. I am not claiming that wanting to do salary cuts or not pay for retraining would always or often be good business practice. I am not disputing that some other forms of planning / management could have avoided some of the situations in which employees were laid off. I am only claiming that in my experience layoffs were not specifically about capacity planning like was typically claimed. I don’t know anything special about the current wave of tech layoffs and how they compare to my prior experience.

My limited experience with layoffs is they are mostly not about the firm’s productive capacity or unforeseen/unplanned-for circumstances but are instead a more socially acceptable substitute for individual firing. Meaning, the employees laid off are mostly employees the company actually wanted to fire individually in the last year(s) but didn’t or couldn’t due to various social and regulatory concerns:

  • Mediocre performers, problem causers, and cultural misfits who avoid doing anything explicitly bad enough to justify firing
  • Employees whose salary + benefit cost has grown beyond their capacity or willingness to produce (commonly presents as age discrimination)
  • Employees with skills the company no longer needs who lack skills the company does need and it’d cost more to retrain them than to hire someone new with needed skills (also often presents as age discrimination)

Some reasons why:

  • Laws and lawsuits about unfair termination, discrimination, and the like heavily discourage a company from firing for anything other than really explicit and blatant bad behavior OR overcapacity. Lacking the former, the latter is comparatively easy to claim.
  • A layoff provides a face-saving benefit to the employee that outright firing does not. They can say (and probably think) they just got caught up in a business downturn, it wasn’t a problem with them/their performance.
  • A layoff is less confrontational - the manager/business gets to say it’s nothing personal (even if it is).
  • You can (and often do) hire new employees at lower salary than existing employees. However, culture heavily discourages outright salary cuts of existing employees. So if someone’s productivity is below their cost (for whatever reason) you can’t just cut their salary so it remains profitable to keep them on. You can just not give someone a raise, but in a low inflation environment this doesn’t mean much. Now that we’ve shifted to a higher inflation environment it may help some in terms of allowing a real reduction in existing employee wages.
  • Likewise, there’s a heavy cultural pressure against paying an existing employee a lower wage while they’re retraining, or charging them for the costs of retraining. Whereas new employees have typically paid for their own training up to that point.

It was my experience that right after a layoff was done, the company would go back to hiring. Sometimes they’d keep hiring even during the layoff with the justification of skill mix. That tells me they didn’t really have too many employees. My impression was they wouldn’t have needed to do mass layoffs if they could have just fired each of the people they wanted to fire as soon as they wanted to fire them, or do salary cuts, or pay low/no wages while employees retrained & charge them for retraining.