• The Power Of Thinking Irrationally

    I am intrigued by the idea that there may be more powerful ways to think, and that by thinking about thinking itself we can upgrade our thought processes. But I am also intrigued by things that go against conventional wisdom. So I was very curious recently to see Tyler Cowen’s criticism of the rationality community:

    I would approve of them much more if they called themselves the irrationality community. Because it is just another kind of religion. A different set of ethoses. And there’s nothing wrong with that, but the notion that this is, like, the true, objective vantage point I find highly objectionable.

    The idea of an irrationality community fascinates me. Who could possibly support irrationality? Is irrationality good for something? Could there be irrationality enthusiasts, eagerly swapping techniques for the most effective sort of irrationality?

    What exactly is irrationality? From Wikipedia:

    Irrationality is cognition, thinking, talking or acting without inclusion of rationality. It is more specifically described as an action or opinion given through inadequate use of reason, or through emotional distress or cognitive deficiency. The term is used, usually pejoratively, to describe thinking and actions that are, or appear to be, less useful, or more illogical than other more rational alternatives.

    Actions that are backed by inadequate reasoning. Perhaps even actions taken without evidence, based purely on emotion. It sounds bad at first, but actually I think it is very valuable to act this way sometimes.

    I don’t think rationality is bad per se. It’s more like, there are several modes of thinking. Sometimes it’s better to think in “rational mode”, and sometimes it’s better to think in “irrational mode”. Depending on the situation, you might want to switch from irrational to rational, like an MMA fighter switching from boxing to jiu-jitsu. (It’s similar to Thinking Fast And Slow, but I don’t necessarily think the irrational mode needs to be faster.)

    So I mentioned to someone that I thought rationality was overrated, and they remarked that they were surprised that I, a “mathy” person, would be the sort to dismiss rationality. But I think math is a great example of where you want this dual, sometimes-rational-sometimes-irrational thinking. Sometimes you work very rationally on a math problem - you see x + 2 = 5, you want to solve for x, you recall an algorithm for this, you execute the steps one by one, the end. But on a tricky math problem, you might actually spend most of your time without reasons for what you do. You can just stew around in more of an “irrational brainstorming” mode, where you don’t have reasons or evidence, just loose, fuzzy, emotional, heuristicky thinking, until you seize on what you think is a solution. And then you can toggle into “rational mode” to check the solution for validity - but you still spent the vast majority of your time in “irrational mode”.

    Let me go through an example. Here’s a math puzzle that someone asked me out of the blue while we were walking around once: can you pick five lattice points and connect each pair with a line segment so that no other lattice point lies on any of those segments?

    If you don’t know what a lattice point is, it’s just the points (x, y) where x and y are both integers. So the lattice points look like this:

    And a solution to the problem would look like this…

    …except that the line segment marked with an X contains another lattice point. So in fact this isn’t a solution, and if you keep poking around by trial and error you will find it quite difficult to find a solution.

    (If you want to solve this problem without me really getting spoilery, do it now without reading any further.)

    This sort of problem doesn’t require any advanced math, but it doesn’t map to any sort of math problem you drilled on in high school. Some people will just read the problem description, and then nothing pops into their head. Or they try trial and error a few times, fail, and then don’t know what else to do. They will just stare blankly at the problem and not know what to think about next.

    When this happens to you, rational thinking is worthless. If you don’t have any evidence to start with, you can’t start drawing rational conclusions. So when you find yourself totally stuck, thinking no thoughts at all, that is your mental cue to switch into irrational mode. You’re too far away to grapple - use boxing instead of jiu-jitsu.

    To think irrationally about this problem, just don’t worry about whether your logical connections make sense. Feel around for any emotions you have about the problem and treat them as axioms. Think of other things this reminds you of. If A implies B, and you know B is true, imagine for a second that A must be true too. Let yourself use some logical fallacies. Just see if they lead you somewhere interesting.

    At this point, I would come to irrational conclusions like:

    • I tried to do it several times and could not. Therefore it is impossible.

    • Since the graph connecting 5 points is nonplanar, and these lattice points are in the plane, it also cannot be embedded into lattice points.

    • The number 5 is very ugly so it causes the math to fail.

    • Lattice points are made up of squares, and the square’s favorite number is 4, so 4 can work but not 5.

    • You can jam four things in there but there just isn’t enough room to jam five things in there.

    These conclusions are not based on evidence, they are not based on logical arguments, they are not really logically correct, they are tainted with all sorts of emotions and biases, and at least one is just totally wrong. But they are useful because they are maybe correct, more likely than 0% chance correct, and they give you sparks to continue. And they are not just useful inside one person’s head - if you have multiple math-problem-solvers brainstorming, it’s useful to share these half-thoughts with each other. You have to trust your collaborators to be fairly intelligent. But when you are in a group you trust, it can really help you to accept some irrational conclusions. And that principle goes beyond solving math problems.

    Anyway, some of these irrational conclusions can be the seed of a rational proof. Perhaps the “not enough room to jam it in there” reminds you of the pigeonhole principle and the answer comes to you in a flash. Or perhaps the five-versus-four and squares-are-beautiful aspects lead you to think about a very simple way to solve it for four points, and ponder deeply why this particular solution can’t be extended to five points:

    Why can’t this be extended to five points? This example is simple enough that you can try extensions in your head and label each new point by which of these four original points it conflicts with. You will get lattice points labeled like:

    A B A B A B
    C D C D C D
    A B A B A B
    C D C D C D
    A B A B A B
    

    You can’t have more than one A in your five points, and actually that is true even if you didn’t start with the simple square, if you think about it.
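
    In fact you can sanity-check this whole argument with a few lines of brute force. Here is a quick Python sketch (my own throwaway code, not anything from the original puzzle). The labels A, B, C, D are really the four parity classes (x mod 2, y mod 2); among any five lattice points, two must share a class by the pigeonhole principle, and the midpoint of those two is itself a lattice point sitting on the segment between them:

      from itertools import combinations
      from math import gcd
      import random

      def has_interior_lattice_point(p, q):
          # The open segment between distinct lattice points p and q contains
          # another lattice point iff gcd(|dx|, |dy|) > 1. In particular, if p
          # and q share a parity class, dx and dy are both even, so their
          # midpoint is a lattice point on the segment.
          return gcd(abs(p[0] - q[0]), abs(p[1] - q[1])) > 1

      random.seed(0)
      for _ in range(10000):
          points = {(random.randint(-20, 20), random.randint(-20, 20))
                    for _ in range(5)}
          if len(points) < 5:
              continue  # skip draws that produced duplicate points
          # Five points, four parity classes: some segment always fails.
          assert any(has_interior_lattice_point(p, q)
                     for p, q in combinations(points, 2))

    No five-point configuration survives, which matches the A-B-C-D labeling above.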

    (I’m not quite sure how much my blog audience would like me to spell out the math here, but perhaps I’ll leave it at this.)

    I suspect that most people who are trying hard to get better at math, or at similar skills like programming problem-solving, are not actually struggling with the “rational” part, of rigorously proving that something works. They are struggling with the “irrational” part: how to make progress when they can’t yet draw rational conclusions. So don’t feel like it’s dirty or inappropriate. Thinking irrationally can be another useful tool in your toolbox. Embrace it, and let me know what irrational techniques work for you.

  • Silicon Valley's Manifest Destiny

    Silicon Valley is famous for having things with nonsensical names. It’s not just the startups, it’s also the place names. For example, “Mountain View”. Here’s a view from a field right next to Google’s main campus. It seems… pretty flat. How do you arrive in this place, look around, and think, I know, I’ll call this “Mountain View”?

    So for a long time I assumed “Silicon Valley” didn’t mean anything. It doesn’t feel like a valley, it feels like a flat area that’s next to a bay. This is the sort of trivia that I ignore for a decade, and then one day in a fit of random curiosity look it up on Wikipedia, and lo and behold it is actually named after a valley:

    Silicon Valley is a nickname for the southern portion of the San Francisco Bay Area, in the northern part of the U.S. state of California. The “valley” in its name refers to the Santa Clara Valley in Santa Clara County, which includes the city of San Jose and surrounding cities and towns, where the region has been traditionally centered.

    This naturally leads to the question of what counts as the Santa Clara Valley. Wikipedia again:

    The valley is bounded by the Santa Cruz Mountains on the southwest, which separate Santa Clara Valley from the Pacific Ocean, and by the Diablo Range on the northeast.

    Here’s a diagram:

    [image: silicon-valley]

    The Santa Cruz Mountains, on the left, are the same mountains you can’t quite view from Mountain View.

    Originally the “valley” referred to the area from San Jose southwards. Its earlier, agricultural nickname was the “Valley of Heart’s Delight”, because until the 1960s it was the largest fruit-producing region in the world. Then that all got displaced by tech companies, which makes the name “Apple” seem a bit less friendly and a bit more passive-aggressive.

    Nowadays the area considered Silicon Valley has expanded to include the stretch from Palo Alto to San Francisco. But in a sense it’s still a valley between the Santa Cruz Mountains and the Diablo Range. It’s just a really big valley so you can’t necessarily see its valley-ness while you are in it.

    So Silicon Valley is sprawling. Where will it stop? My theory is that Silicon Valley will inevitably expand to fill all of the space between these mountain ranges, like a modern version of Manifest Destiny. Imagine Oakland, San Leandro, Fremont, and Gilroy all steadily invaded by an army of techies.

    Why Silicon Valley’s Manifest Destiny Is to Fill Up The Physical Valley

    1. Rent. This is the obvious one: everywhere from San Francisco to San Jose is getting more and more expensive.

    2. Nominative determinism. The word “valley” is part of the phrase “Silicon Valley”. Therefore, mystical grammatical fate will drive them together. There is a certain magic to saying, yes, we’re located in Silicon Valley. I think you can say that in good faith if your company is located in San Leandro or Gilroy. If someone complains, point to this picture of the mountain ranges.

    3. Software is eating the world. There are still lots of non-tech-dominated industries. If we keep eating those industries a la Uber and Airbnb, and Silicon Valley keeps having most of the eaters, we will have more and more massive companies in Silicon Valley and need more space to put them.

    4. Self-driving cars. It takes two hours to commute from Gilroy to Menlo Park. If self-driving cars make a two-hour commute something that isn’t too bad, all of a sudden Gilroy is a much nicer place to live.

    I think this last reason is underrated. Imagine a world where your car is a great place to work. Sure, you can have a two-hour commute. Just hop in your car at 8, get to work at 10, leave work at 4, get home at 6 - and hey, that’s a 10-hour workday, because your car is just a one-person office on wheels. You can schedule your in-person meetings between 10 and 4 so you don’t have that “remote office” feeling of not actually sitting next to your coworkers. So why bother living closer than two hours from the office?

    Recently I have read a number of interesting analyses theorizing about the good investment opportunities if self-driving cars work out. Perhaps a simple answer is “Gilroy real estate”.

  • Why You Can't Say

    Recently on a tip from Ivan Kirigin I reread this now-ancient Paul Graham article, What You Can’t Say. Like the idea of Straussian reading, the essay is looking for secret truths which are currently inappropriate to share publicly.

    It’s tantalizing to think we believe things that people in the future will find ridiculous. What would someone coming back to visit us in a time machine have to be careful not to say? That’s what I want to study here. But I want to do more than just shock everyone with the heresy du jour. I want to find general recipes for discovering what you can’t say, in any era.

    At first I was going to dig in, follow the instructions in this essay, perhaps try to get meta and turn them on the essay itself, and find some secret truths. But there was just too much to bite off at once and I ended up gnawing on a tangent.

    Specifically, the part that really sparked some thought for me was this hypothesis on the source of taboos:

    To launch a taboo, a group has to be poised halfway between weakness and power. A confident group doesn’t need taboos to protect it. It’s not considered improper to make disparaging remarks about Americans, or the English. And yet a group has to be powerful enough to enforce a taboo.

    I suspect the biggest source of moral taboos will turn out to be power struggles in which one side only barely has the upper hand. That’s where you’ll find a group powerful enough to enforce taboos, but weak enough to need them.

    I’m not totally convinced that most moral taboos come from power struggles. My personal suspicion is that the best explanation of the source of moral taboos comes from the theory of social constructionism:

    Human beings rationalize their experience by creating models of the social world and share and reify these models through language.

    Basically, some truths you believe because they are inherently logical. If at least one of Alice, Bob and Eve has a wrench in their pocket, but Alice and Bob have no pockets, then Eve has the wrench. Some truths you believe because there is empirical evidence. The Earth is round because I saw it on the SpaceX video. But some truths you believe just because other people believe them.

    Instinctively you might think, oh ho, that is one of the 147 types of bad arguments. But in practice there are a zillion things you believe not because of logic, but because other people told you they were true.

    • You shouldn’t eat mud
    • Red lights mean stop
    • Human life is a precious thing

    Lots of useful and totally true facts about the world are socially constructed.

    This can lead to a taboo situation, though. A small group believes X. A much larger group believes Not X. Since both of these truths are socially constructed, there’s no baseline reality. There’s no way to have an intelligent debate. So the larger group turns into an angry internet mob and shouts down the smaller group.

    I believe the topic of startup advice is particularly vulnerable to this phenomenon. You cannot deduce the principles of running a startup from first principles. One effective mechanism is to learn what worked for successful startup founders - to seek out their socially constructed truth. But that’s a pretty small group. The set of all people in the tech industry, in particular, is a much larger group. Sometimes these groups hold opposing socially constructed truths.

    Here’s an example: whiteboard interviews. Are whiteboard interviews a good strategy for interviewing people, or a bad strategy for interviewing people?

    Go read mainstream tech news or social media and you will conclude that whiteboard interviews are terrible. From the first few search results for “whiteboard interviews”:

    The mainstream conclusion is clearly that whiteboard interviews are a bad idea. But on the other hand:

    [image: big-five]

    The most successful companies all use whiteboard interviews. It’s not just the top big companies, it’s the top late stage startups, top early stage startups, the top tier at every point. Nevertheless, the median tech internet is opposed. What’s happening here?

    I think the fundamental discord comes from the nature of interviewing. For example, Google accepts under 1% of job applicants, and yet Google has 70,000 employees. If my simplistic math holds up - 70,000 hires at under a 1% acceptance rate implies over 7 million applicants - they have rejected over 7 million people.

    Personally, when I apply for a job, I treat it like an axiom that I deserve that job. When I get rejected I usually conclude the company is either morons or evil - or perhaps, if the interviewers were very kind, I will be charitable and just conclude that it’s a flawed process. So yeah, I can see how there would be 7 million people out there convinced that Google interviews are a flawed process. They know they are correct because their friends mostly agree with them. But I don’t think Larry is kicking himself wishing they had never adopted the whiteboard interview.

    What is really a shame, though, is that I once had a conversation with a startup CTO that went like this:

    Me: So how’s recruiting?

    CTO: Going great, not having any trouble finding software engineers.

    Me: Wow, I don’t hear that often.

    CTO: Yeah, once we raised we needed 8 people and we hired them in a month.

    Me: Double wow. How did your tiny team even do enough interviews to hire 8 people in a month?

    CTO: Oh, we only had to interview 9 people.

    Me: Uh oh. How did you interview them?

    CTO: Well as everyone knows, whiteboard interviews are terrible, so we just kind of chatted about their past experience.

    Me: Oh no.

    CTO: I have a really good feeling about this.

    So, be careful about taking the median startup advice. You might end up with the median outcome.

  • Higher! Higher!

    I read a decent amount of fiction and I also read a decent amount of nonfiction. But nowadays I really read a lot of kids’ books.

    At first, reading to children seems simple: you get some kids’ books and you read them to your kid, and that’s all there is to it. The medical establishment strongly recommends it, and reading to children also seems like one of the key ways that parents pass on their advantages to their children, so read books to your kids. All of that makes reading to kids seem like a “yes or no” thing: if you read to your kids you are a good parent, and if you do not you are a bad parent. So at first I just thought, “Okay, I’ll do it!”

    But after a while I started thinking, what am I trying to achieve with this reading experience? What should I be focusing on? What should I be trying to get the kid to do? And especially, what sort of book is a good one to read to kids?

    Rather than lay out some abstract principles, I am going to claim that the single best book for the zero-to-two-year-old range is Higher! Higher!.

    [image: higher-higher-cover]

    At this point you might be thinking, is this guy seriously writing a book review that is orders of magnitude longer than the book itself? The answer is yes.

    But let me explain why this book is a good one. I have different tactics to suggest for the different parts, and I will quote literally all the words in the book as we go along.

    Rising Action

    As an adult, you might find the first nine pages somewhat repetitive.

    1. The cover shows a girl, probably named Tia, on a swing, saying:

      Higher! Higher!

    2. The title page shows the girl’s father pushing the girl, and it repeats the title of the book:

      Higher! Higher!

    3. On the first “real page” of the book, the father has already pushed the girl dangerously high, with the swing ropes above horizontal, and yet the girl requests:

      Higher! Higher!

    4. The girl is now swinging higher. She is approximately the same height as a giraffe, and saying:

      Higher! Higher!

    5. The girl is now swinging higher. She is approximately the same height as a building. A balloon, dog, cat, mountains, and a game of checkers are visible. She requests:

      Higher! Higher!

    6. The girl is now swinging higher. She is approximately the same height as the previously-visible mountain. There is a distant airplane. She is saying:

      Higher! Higher!

    7. The girl is now swinging higher. She is hanging out with two airplanes, and continues to request:

      Higher! Higher!

    8. The girl is now swinging higher. She is in space, maybe low earth orbit. There’s a rocket in the distance, and she says:

      Higher! Higher!

    9. The girl is now swinging higher. She is right next to the rocket, which naturally has a monkey inside. The girl demands:

      Higher! Higher!

    Okay, so you probably detect a pattern here. The first eighteen words are all the word “Higher”. This might drive you nuts at first. But I think it is a good thing.

    The point of reading books to a kid is not really to get them used to the process of listening to a book being read. It’s to get them excited about reading books themselves. And I do not think there is an easier book for a small child to read than this book. If you can only remember one word in your whole brain, as long as it’s the word “Higher”, you’re going to be able to follow along with a significant chunk of this book. Even read entire pages. Which is pretty exciting if you’ve never done that before, in your life.

    Another piece of good design is the page-to-page continuity. For adults it is taken for granted that when you turn the page of the book, the next page is supposed to represent the same story, but just a little bit more in the future. For little kids, that isn’t necessarily obvious. But when you see a distant airplane, the girl requests “Higher”, and then the airplane is close up, that mapping becomes a bit more clear. As the reader you can either engage with this, talking a bit about the stuff in the picture before you turn the page, or skip through it, depending on your mood.

    A final aspect of this first phase of the book that I really like is that the main character is a girl, and the plot includes many “traditionally boy” things, like airplanes, traveling to outer space, and a rocket.

    Climax

    Now comes the part to really stress your toddler’s reading skills. Different words. The climax is spread over three pages.

    1. The girl finally swings as high as she can swing, and sees… an alien kid who is also on a swing.

      Girl: Hi! Alien: Hi!

    2. The girl and alien exchange a high five.

      High five!

    3. They bid a fond farewell.

      Girl: Bye! Alien: Bye!

    The neat thing about this exchange is that each of these utterances has an associated hand gesture. I find it is easier to teach little kids words when they come with an associated “thing to do”. So you can wave hi, do a high five with your kid, and wave bye.

    High fives are particularly underrated. At some point you can explain where the “five” comes from and get them into the math of having five fingers a little bit. Children seem to love it. It’s hard to believe the high five was only invented in the late 70’s.

    [image: higher-higher-high-five]

    An interesting question is whether this alien is male or female. Or “other”. I like to ask my kids what they think.

    Denouement

    Three mercifully wordless pages follow. The girl swings back down into the atmosphere, towards the playground, and is caught by her father. On the very last page, the girl turns to her father and has a last request:

    Again!

    Of course, your child is quite likely to interpret this “Again!” as a reminder, that they should also turn around and ask you plaintively, “Again?”

    Such a clever engagement hack by the author. Like the Netflix widget that just quietly nudges you to watch the next video in the series. But this time, it’s a good thing, right? Because reading to your kid is good for them and you want to get them excited to read another book?

    So there you have it. Higher! Higher! is a strong book for small children. The key is that you can do more than just read it to them. You can get them engaged with the words, and nudge them bit by bit into reading it themselves, even if they have never done that with any book before.

  • Why Uber Culture

    Usually journalists writing about Silicon Valley have a hard time getting deep enough into the subject to keep me interested. But I really liked The Everything Store by Brad Stone, which covered the rise of Amazon. So I was excited when I saw that Brad wrote another book about the tech scene, The Upstarts, which focuses on Uber and Airbnb.

    I feel like I already heard a lot of the “Airbnb legend” via the YCombinator mafia in one way or another, but most of the Uber stuff was new to me. So as I read it, I was trying to do this Straussian reading thing and think, what secrets about Uber are hidden within this book?

    This book has a wealth of anecdotes - let’s investigate some of them.

    Anecdote 1: Fighting San Francisco

    First off, four months after Uber launched, back when Ryan Graves was the CEO, the California and San Francisco governments together tried to shut it down:

    Four months after the launch, when Graves was at a board meeting at First Round Capital with Travis Kalanick and Garrett Camp, four government enforcement officers walked into the tiny UberCab office. Two were from the California Public Utilities Commission, which regulated limousines and town cars, and two were from the San Francisco Municipal Transportation Agency, which regulated taxis. The plainclothes officers flashed badges, and then one of them held up a clipboard with a cease-and-desist letter and a large, glossy head shot of a smiling Ryan Graves. Waving the photograph around the room, he demanded: “Do you know this man?”

    Presumably most companies respond to a cease-and-desist from the government by ceasing and desisting. But this reminded Travis Kalanick of his time with Scour, and so his reaction was:

    “For me that was the moment where I was like, for whatever reason, I knew this was the right battle to fight.”

    “The great thing is I’ve seen this before,” he said. “I thought, Oh, man, I have a playbook for this. Let’s do this thing. When that happened, it felt like a homecoming.”

    After a stressful legal battle, Uber got to keep operating its service without any changes. Meanwhile, competitors like Taxi Magic and Cabulous kept obeying the old rules.

    Anecdote 2: Fighting Washington, D.C.

    In Washington, D.C. the taxi industry was powerful and a city councilwoman proposed a law that would prevent Uber from lowering prices to compete with taxis:

    The regulations, added to a broader transportation bill, would give Uber legal sanction to operate. But they also added a price floor, which required Uber to charge several times the rate of a taxicab.

    Uber’s lobbyists wanted to compromise; Uber’s leadership wanted to fight.

    He supplied the phone numbers, e-mail addresses, and Twitter handles of all twelve members of the DC city council and urged his customers to make their voices heard. The next day he posted a public letter to the council members, writing ominously, “Why would you so clearly put a special interest ahead of the interests of those who elected you? The nation’s eyes are watching to see what DC’s elected officials stand for.” Mary Cheh was taken aback by the ferocity of the response. Within twenty-four hours, the council members received fifty thousand e-mails and thirty-seven thousand Tweets with the hashtag #UberDCLove.

    DC ended up allowing Uber with no restrictions. Internally, Uber codified this strategy as a principle called “Travis’s Law”:

    “Our product is so superior to the status quo that if we give people the opportunity to see it or try it, in any place in the world where government has to be at least somewhat responsive to the people, they will demand it and defend its right to exist.”

    Uber doesn’t actually publicly promote this law. But they worked with Brad Stone, they gave him a lot of access, and Brad’s calling it “Travis’s Law” in this book. So it has to be what Travis and a big chunk of Uber really believes, right? And it doesn’t sound that bad on the face of it, does it?

    The one part of this law which feels a bit roundabout is the phrase, “any place in the world where government has to be at least somewhat responsive to the people”. What does that mean, exactly? Does that just mean “a democracy”? It seems like it is trying to be a bit more expansive and include not just democracies but also societies that are half democratic or even “somewhat” democratic.

    So flip that around. If you believe in Travis’s Law, and you notice that Government X does not allow Uber, what do you conclude - that Government X must not even be “somewhat” a democracy?

    Anecdote 3: Following The Rules

    I kind of knew about Uber fighting the government a lot. What I didn’t realize before reading this book is that there were times when Uber took the strategy of “not fighting the government”.

    In particular, in 2012 Uber required every driver to be commercially licensed. At the time, that was a clear legal requirement, and even Uber didn’t want to fight that law. When Jason Calacanis asked Travis if he would consider using unlicensed drivers:

    Kalanick fervently believed that services using unlicensed drivers were against the law — and would get shut down. “It would be illegal,” he said on the podcast This Week in Startups. “Unless the driver had what’s called a TCP license in California and was insured.”

    “You don’t want to get into that kind of business?” asked host Jason Calacanis, the Uber angel investor.

    “The bottom line is that we try to go into a city and we try to be totally, legitimately legal,” Kalanick replied.

    At the time, Lyft was a small competitor, but they were willing to break that law. A year later, in 2013, the California government gave up their fight against Lyft, and Uber realized they had made a key mistake.

    Travis Kalanick had watched, waited, and even quietly agitated for Lyft to be shut down. Instead, they spread, undercutting Uber’s prices. Now that their approach had been sanctioned, Kalanick had no choice but to drop his opposition and join them. In January 2013, Uber signed the same consent decree with the CPUC and turned UberX into a ridesharing service in California, inviting nearly anyone with a driver’s license and proof of insurance, not just professional drivers, to open his or her car to paying riders.

    Et Cetera

    There’s a lot more in this book and it’s really a fascinating story. I could have written a section for Fighting New York City and Fighting London. And half the book is about Airbnb! So if you’re interested in this sort of thing I really recommend buying The Upstarts.

    To me, this book explains Uber’s culture. At every turn, they were rewarded for breaking the law, and punished for obeying the law. A culture that can survive that Pavlovian conditioning seems dangerous. And yet a hundred years of Taxi Magic competing with Cabulous may have just led to nothing. Perhaps Uber is not the startup we need, but the startup we deserve.

  • Music in Ancient Greece

    As part of my quest to understand Straussian reading, I have been reading The Republic, written by Plato some 2,400 years ago. Allegedly this is a good book to read deeply, because Plato had a lot of thoughts on society that he was trying to sneak past the censors. Which makes sense, because the book is all about Socrates discussing how he thinks the government should be set up, and Socrates got executed for… something. Apparently the precise rationale for why Socrates was executed has been lost in the mists of time. Or was just immediately deleted by those censors.

    Anyway, if I were Plato, I’d be pretty worried about saying something inappropriate in a book too. I would not be surprised if Plato were veiling some of his true beliefs here. I’ve been reading through it slowly, thinking a whole lot about it as I go, trying to get all esoteric, and it does seem more interesting than when I read it lazily back in high school.

    One thing that surprised me is how much talk about music there is in The Republic. If I were writing about the ideal form of government nowadays, I would take it for granted that music was pretty irrelevant. The Constitution doesn’t talk at all about what sort of music leads to the ideal form of government.

    But the Socratic theory of good government is much more focused on, first, figuring out what sort of people make good leaders, and then figuring out how our society can educate those qualities into people. Socrates brings up music here immediately:

    “What is the education? Isn’t it difficult to find a better one than that discovered over a great expanse of time? It is, of course, gymnastic for bodies and music for the soul.”

    “Yes, it is.”

    “Won’t we begin educating in music before gymnastic?”

    “Of course.”

    “You include speeches in music, don’t you?”, I said.

    “I do.”

    “Do speeches have a double form, the one true, the other false?”

    “Yes.”

    “Must they be educated in both, but first in the false?”

    “I don’t understand how you mean that,” he said.

    “Don’t you understand,” I said, “that first we tell tales to children? And surely they are, as a whole, false, though there are true things in them too.”

    If you are like me, then at first this passage just seems like total nonsense. The key is that the word “music” used to mean “any activity inspired by the Muses”. Music, poetry, art, literature, myths, all of those things were conflated together, at least in the word itself and in the way Socrates talks about it here.

    But it seems like in practice they would be conflated, too. It makes sense if you think about it - in a world with few books, you can’t really have literature as we know it today. You can, however, have epic poems like the Odyssey. But when you’re putting together an epic poem, the music might be just as important as the words.

    Or consider a world where lawsuits were settled not so much by a judge as by a jury of 501 or more people, and where, instead of a lawyer, you would commonly hire a speechwriter who would draw parallels to epic poems to make your case, because those epic poems were the main shared moral values of your society. In that world, there might be less of a difference between “studying law” and “studying music” than you might think.

    Just imagine if you had to be a great musician, to be a great lawyer.

    Nowadays it seems like music is basically just for entertainment. I wonder if we have lost something.

    It’s a little crazy, but I can think of one case where music really helped in my education. The song Fifty Nifty United States. I memorized that thing in third grade and to this day I can still use it to reel off the names of all fifty states. My kids find this quite entertaining, because to them the names of the states are just nonsense words and they’re impressed by how much nonsense I can speak in one fell swoop.

    What if there were an epic poem that taught you calculus? Just memorize this one epic poem, and if you forget how calculus works, no problem, just sing the song to yourself until you get to the part about integrating by parts.

    Might work better than drill and kill.

  • Socially Irresponsible Investors

    It is sort of a shame how capitalism is clearly so powerful in the world. We would like the world to be a good place where virtues like honesty, fairness, humility, and justice are rewarded, and yet capitalism isn’t fundamentally based on any of those things; it’s based on chasing after money. It can feel weird to believe in lots of virtuous things and then invest your money in a way that ignores all of that. So maybe there is some compromise where you overall try to invest your money intelligently, but somehow “slant” your investments towards things that are good for the world?

    In general the name for this is socially responsible investing.

    Socially responsible investors encourage corporate practices that promote environmental stewardship, consumer protection, human rights, and diversity. Some avoid businesses involved in alcohol, tobacco, fast food, gambling, pornography, weapons, contraception/abortifacients/abortion, fossil fuel production, and/or the military. The areas of concern recognized by the SRI practitioners are sometimes summarized under the heading of ESG issues: environment, social justice, and corporate governance.

    This is probably a good thing, according to the Occamian logic of, trying to do good is generally good. You might make a little less money, since you’re less focused on profit, but it’s worth it because you make the world a better place. Financial markets are weird, though. I wonder if there is more to this sort of investing strategy.

    In particular, I theorize the existence of the opposite type of investor: the socially irresponsible investor. The socially irresponsible investor actively seeks out investments that are making the world a worse place. They invest in companies that harm the environment, disregard consumers, break the law, ignore social justice. A socially irresponsible investor wouldn’t proudly issue press releases about their great strategy, so you wouldn’t naturally hear about this strategy, even if it were super-popular. So don’t disregard this theory just because you have never heard of such a thing.

    Why would someone do this? Well, let’s try to model it. Say there are three types of investor: the responsible investor, who is willing to make a bit less money in order to invest in companies that help the world; the neutral investor, who just focuses on which investments seem like good investments; and the irresponsible investor, who invests in precisely those evil companies that the responsible investor avoids.

    Who makes the most money? Well, the whole point of responsible investing is that you’re willing to make a bit less money by having priorities other than making money. So you might expect the responsible investor to make a bit less money than the neutral investor. But the neutral investor’s strategy is basically an average between the responsible investor and the irresponsible investor. So… the irresponsible investor must be making the most money of all of them?
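
    Here’s that averaging argument as a toy model, with entirely made-up numbers, just to make the step concrete:

      # Toy model (hypothetical returns, not real data): the market splits
      # into "virtuous" and "evil" assets. The responsible investor holds
      # only the virtuous half, the irresponsible investor only the evil
      # half, and the neutral investor holds the whole market, which is
      # an average of the two.
      r_virtuous = 0.06   # assumed return of the virtuous half
      r_evil = 0.10       # assumed return of the evil half

      r_responsible = r_virtuous
      r_irresponsible = r_evil
      r_neutral = (r_virtuous + r_evil) / 2

      # If the responsible investor makes less than the neutral one, the
      # averaging forces the irresponsible investor to make more:
      assert r_responsible < r_neutral < r_irresponsible

    The whole conclusion hangs on the neutral portfolio really being an average of the other two; if that assumption fails, so does the argument.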

    For what it’s worth, I wasn’t the first person to think of this. For example, there was the Vice Fund, which publicly tried to make this sort of evil strategy work.

    Vice Fund, a mutual fund started 14 months ago by Mutuals.com, a Dallas investment company, is profiting nicely from what some would consider the wickedest corners of the legitimate economy: alcohol, arms, gambling and tobacco. So far this year, Vice Fund has returned 17.2% to investors, beating both the S&P 500 (15.2%) and the Dow Jones industrial average (13.2%) by a few points.

    In fact, all four vice-ridden sectors have outperformed the overall American market during the past five years. “No matter what the economy’s state or how interest rates move, people keep drinking, smoking and gambling,” says Dan Ahrens, a portfolio manager at the self-described “socially irresponsible” fund.

    It seems dangerous for the best moneymaking strategy to involve being evil. People who care a lot about money would come in, start looking around, realize the best strategy was evil, and evil behaviors would spread.

    But honestly, I don’t think this model is accurate for the general stock market, because I don’t believe that any of these investors are doing better than monkeys throwing darts. Like more and more people nowadays, I am dubious of active stock-picking and personally aim for a pretty conservative investment strategy with lots of index funds.

    For venture capital investing, though, I do believe that there are different types of investors and that their decisions are different than random guesses. There are certainly a lot of groups who are trying to be socially responsible investors in venture capital, focusing on women, underrepresented minorities, or some more-general notion of doing good:

    • The NYT says that Nancy Pfund of DBL Investors has “quietly built a reputation as the go-to venture capitalist for companies looking to make a social impact.”

    • The Women’s Venture Capital Fund “capitalizes on the expanding pipeline of women entrepreneurs leading gender diverse teams.”

    • City Light Capital “aims to generate strong returns while making a social impact.”

    • The Catalyst Fund was “established to invest in technology companies founded by underrepresented ethnic minority entrepreneurs.”

    • Solstice Capital “is committed to investing 50% of its capital in socially responsible companies that are currently not well served by the venture capital community.”

    Well, that’s cool. So… are there socially irresponsible venture investors, who seek out the opposite sort of investment? Worse, are the irresponsible investors making more money?

    I was going to put together a list with some bullet points, but I really don’t want to call people out and name names, especially when I know so little about the details here and this is more an idle abstract theory than concrete proof. The convenient thing about venture capital, though, is that success is really defined by hitting a few winners, rather than by the performance of your median investment. So you can investigate yourself: look through the billion-dollar club, see how many of the most successful tech startups are famous for breaking the law, being run by social injustice warriors, helping the war machine, or “multiple of the above”, and then see who invested in multiple of them, and come to your own conclusions.

    Of course, I’m sure nobody would ever write a blog post talking about how their socially irresponsible investing strategy was the key to their success.

  • Military Artificial Intelligence

    There has been some discussion recently about artificial intelligence in military applications. In particular, Eric Schmidt posed an interesting theory:

    I interviewed Eric Schmidt of Google fame, who has been leading a civilian panel of technologists looking at how the Pentagon can better innovate. He said something I hadn’t heard before, which is that artificial intelligence helps the defense better than the offense. This is because AI always learns, and so constantly monitors patterns of incoming threats. This made me think that the next big war will be more like World War I (when the defense dominated) than World War II (when the offense did).

    To me, something doesn’t quite sit right about this. It feels like this is reasoning by analogy instead of reasoning from first principles. It seems like World-War-I-style defensive warfare happens when there is a fundamental advantage from sitting in one physical location, like in medieval times before artillery got powerful enough to defeat physical walls, rather than when patterns of incoming threats are easy to monitor.

    Disclaimer: my expertise is not military. I did listen to Dan Carlin’s podcast on World War I though. Best 20-hour podcast ever.

    So it’s not really fair for me to just snipe at Eric Schmidt. I should offer a theory of, what will artificial intelligence do to the military? What sort of military operation will it make more powerful?

    I think the first question to ask is whether “offense and defense” is the right way to break down future wars. In World War I and World War II, you had an offense and a defense. Nowadays, the sides are more likely to be “the state” versus “the terrorists”. If you’re interested in this shift, a bunch of military types seem to refer to this as the rise of fourth-generation warfare.

    Fourth-generation warfare (4GW) is conflict characterized by a blurring of the lines between war and politics, combatants and civilians.

    The term was first used in 1989 by a team of United States analysts, including paleoconservative William S. Lind, to describe warfare’s return to a decentralized form. In terms of generational modern warfare, the fourth generation signifies the nation states’ loss of their near-monopoly on combat forces, returning to modes of conflict common in pre-modern times.

    The first bits of AI in war are already happening, with the rise of drone warfare. So far, drone warfare is a big advantage for “the state”. While ISIS is using more and more drones, for now there’s still a massive AI advantage that goes to the side which can deploy more capital.

    It’s not clear whether this trend will continue, though. If drones get cheaper and cheaper, we could end up in a world where the state and the terrorists both have access to drones of similar quality. What would warfare look like if the terrorists had just as many Predator drones as the government, because the parts cost $100 and you can make them in any back alley with a 3D printer? What if the drone were the size of a paper airplane, and you could give it a few pictures of any person and have it seek out that target? If assassinations were cheap for both sides, the forces of chaos seem like they would rule the day. So at that point it seems like AI would be an advantage for the terrorists. It’s hard for me to imagine how any society could live under those conditions, really. What would society resort to in that world?

    It seems like a recipe for totalitarian crackdown. Make 3D printers illegal, record video on every street corner, record video inside every house, track every object everywhere, it’s the only way to stay safe. If we have to go that far, then it seems like AI will be an advantage to the state. It’s the only way to make administration of the totalitarian state practical. Not really a pleasant world though. Hopefully something is wrong about my projection here.

    Besides drones, I can imagine military AI becoming relevant for cybersecurity. This one is a bit more far out - we are a lot better at the AI you need for robotics than at the AI you need to hack into a computer system. So would AI be good for the black hats or for the white hats? I can imagine an AI that’s really good at finding flaws in a computer system, but I can also imagine that same AI scanning your defenses like a souped-up Valgrind, checking for the existence of any flaws. I guess I could see this going either way.

    So overall, the military status quo is pretty good. The world has not devolved into warfare and chaos, and it seems nice to keep it that way. Unfortunately, it seems to me that military AI is quite likely to take things in the wrong direction. I don’t think there’s a practical way to get the world to not adopt a new military technology; the same arms-race mechanic applies now as it did in 1914. I wish I had a way to end this post on a positive note, but maybe I should just leave the reader hanging with a vague sense of unease. Enjoy!

  • Straussian Reading

    For a while I have noticed Tyler Cowen using the term “Straussian reading”. I like reading, so I was intrigued by the idea of a way to read that is more powerful than the normal way, and I started learning about the concept of Straussian reading.

    The idea is that when you read something, there are sometimes two layers of meaning. There is the “exoteric” meaning: the most obvious one, the meaning that a normal reading would reveal. After you understand the exoteric meaning, if you keep thinking about it, you may be able to understand a deeper “esoteric” meaning. This initially seems like some sort of Da Vinci Code nonsense, but hang with me for a moment.

    The best introduction to Straussian reading I have found so far is Philosophy Between The Lines: The Lost History of Esoteric Writing. Here’s a passage that does a great job describing the concept:

    Imagine you have received a letter in the mail from your beloved, from whom you have been separated for many long months. (An old-fashioned tale, where there are still beloveds—and letters.) You fear that her feelings toward you may have suffered some alteration. As you hold her letter in your unsteady hands, you are instantly in the place that makes one a good reader. You are responsive to her every word. You are exquisitely alive to every shade and nuance of what she has said—and not said.

    “Dearest John.” You know that she always uses “dearest” in letters to you, so the word here means nothing in particular; but her “with love” ending is the weakest of the three variations that she typically uses. The letter is quite cheerful, describing in detail all the things she has been doing. One of them reminds her of something the two of you once did together. “That was a lot of fun,” she exclaims. “Fun”—a resolutely friendly word, not a romantic one. You find yourself weighing every word in a relative scale: it represents not only itself but the negation of every other word that might have been used in its place. Somewhere buried in the middle of the letter, thrown in with an offhandedness that seems too studied, she briefly answers the question you asked her: yes, as it turns out, she has run into Bill Smith—your main rival for her affection. Then it’s back to chatty and cheerful descriptions until the end.

    It is clear to you what the letter means. She is letting you down easy, preparing an eventual break. The message is partly in what she has said—the Bill Smith remark, and that lukewarm ending—but primarily in what she has not said. The letter is full of her activities, but not a word of her feelings. There is no moment of intimacy. It is engaging and cheerful but cold. And her cheerfulness is the coldest thing: how could she be so happy if she were missing you? Which points to the most crucial fact: she has said not one word about missing you. That silence fairly screams in your ear.

    Just imagine knowing (for example) Plato so well, that you could read The Republic in this way, pondering unusual word choices and thinking deeply about what was not said.

    Why would someone write in this way, with hidden esoteric meaning, rather than just saying what they mean? In this example, your beloved feels that you can’t handle the raw truth. Fear of persecution is another common rationale for esoteric writing. Socrates was executed for his beliefs, so do you really think Plato would just write down everything he honestly believed? The theory is that writing esoterically lets Plato hint at his deeper thoughts, like suspicion of the whole Greek religious system, while escaping punishment for his beliefs.

    If Straussian reading were only something that applied to the ancient Greeks, I would lose interest in the concept. As a Silicon Valley techie, I am going to try to read “startup advice literature” in this way. It is fairly common for successful people in the startup scene to be attacked for their beliefs in one way or another. Yet their unpopular beliefs may be contributing to their success. So wouldn’t it be useful to know what those unpopular beliefs are?

    For example, the Peter Thiel interview question:

    Tell me something that’s true that very few people agree with you on.

    Thiel himself evades answering this. I suspect that he has more secret beliefs that are even more controversial than the controversial beliefs he has already been publicly criticized for. And presumably dozens or hundreds of successful entrepreneurs have been interviewed by Peter Thiel and asked this question. What were their answers? I think people are embarrassed or afraid to share the good answers publicly. So I’m tempted to reread Zero to One and try to step up my Straussian reading game.

  • Book Review of 'The Girl With All The Gifts'

    I’m torn about how to review fiction. I personally hate spoilers. And The Girl With All The Gifts is particularly nice if you have no idea what sort of book you’re reading. So if you’re inclined to trust me and just buy a book and read it without knowing what it’s about, just do it. Amazon says it’s similar to Wool and The Martian so if you liked those, give this a try.

    So, shallow spoiler alert begins here.

    The Girl With All The Gifts is a zombie book. I would say it is the second-best zombie book I have read, after World War Z. It has a really fun beginning if you don’t realize that it’s a book about zombies, but that’s a hard experience to create if you’re reading this paragraph. Sorry. I had the perfect setup of buying this for my Kindle, then forgetting about it for a month or so, then not remembering what the book was but seeing it unread on my Kindle and reading it. Magnificent! I recommend that methodology if you can swing it.

    It is not the sort of zombie book that is grisly and scary like an action movie. Instead it is about a small girl who is a zombie but who is also intelligent enough to understand her situation and to be cute and threatening at the same time. The zombie apocalypse has never had so much empathy for the zombies’ side before.

    There are a lot of parts of the book which are pretty interesting, but they almost all are enhanced by surprising the reader and I don’t want to reveal too much. If what I’ve written so far doesn’t convince you to read the book, I don’t see how I’ll be able to do it, so for the sake of spoilers I’ll stop here.

  • Book Review of 'Sapiens: A Brief History of Humankind'

    “History” can mean a lot of things. The traditional history book reminds me of history class in high school. There’s some period of American or European history and you are encouraged to memorize a list of naval battles. Sapiens: A Brief History of Humankind by Yuval Harari is not a traditional history book.

    The thing about history is that there is so much of it. Even in a single year. Think of all the things that happened in 2014. Roughly that amount of stuff probably also happened in 1492; we have just decided that we only care about a portion of it. So a normal history ends up being a history of politics. A history of kings, nations, wars, and conquest.

    Instead, this book steps back and takes a broader view. What are the three most important events in human history? Harari picks three revolutions: the Cognitive, Agricultural, and Scientific revolutions.

    The Cognitive Revolution is when human beings went from being just like other animals to being unique. The key was the ability to spread culture - communication that helped humans coordinate better and learn faster than DNA-based evolution allowed. We usually take it for granted that humans dominate animals but it wasn’t always the case, and it’s interesting to think of verbal communication as a technology of war that let us defeat animals that previously were our predators.

    One theme of this book is confronting painful truths. For example, we know from the fossil record that there used to be other species similar to humans, like the Neanderthals. What happened to them? Was Homo Sapiens the eventual victor because of our ability to genocidally destroy the other human species? We will never know for sure, but it seems like yes.

    The Agricultural Revolution was when we started farming. This book makes the case, also made in Guns, Germs, and Steel, that the invention of farming was actually pretty terrible for the average human. Our lives were worse as farmers. It’s just that farming could feed far more people than hunting and gathering, so the farming way of life won out.

    I don’t agree with this argument, because I don’t think individual human happiness is the right metric to aim for. Personally I think the world became a better place with ten million humans than with one million, primarily because it’s pretty cool that those extra nine million get to live. But I can understand how people would disagree here.

    Sapiens is similar to Guns, Germs, and Steel in many ways. If you liked one, there’s a good chance you’d like the other. They both look at a broad stretch of human history, disregard the “standard historical important stuff”, and ask what the really important factors were that led to this outcome. Overall, though, Sapiens is much more willing to come to controversial conclusions. I found it a bit suspicious that Guns, Germs, and Steel came only to conclusions that social science academics would agree with politically.

    The best example comes in the discussion of the Scientific Revolution, which Harari dates from about 1500 to the present. The main events in world history during this period are basically Europeans violently taking over the world, which raises the obvious question: why Europeans?

    Framing all of human history as these three revolutions makes the parallels between them obvious. It seems pretty likely that Homo Sapiens became the only Homo species by killing off the other ones. There is even more evidence in the agricultural era that farming societies were much stronger militarily and frequently wiped out non-agricultural societies. And then a similar thing happened again in the Scientific era, where European cultures gained military dominance over the rest of the world, and while they did not actually slaughter everyone, they at least spread their culture everywhere. Nowadays every patch of ground is part of a nation-state and all of the leaders wear suits and spend money. So each of these revolutions was not just a revolution in everyday life, but also a revolution in military technology that let the new order violently displace the old.

    So why Europeans? Harari basically concludes that there is no fundamental reason; European culture just happened to get scientific first. More than science, it’s the idea of “progress” - the belief that society can be improved by discovering new technology, new lands, new everything.

    Another controversial theme of this book is drawing parallels between different human ideas. For example, Harari classifies many things as “fictions that help societies despite being biologically false”. All religions, ideologies like communism and capitalism, principles like human rights, concepts like nations, everyday conventions like the use of money and the concept of ownership. How much difference is there between believing in a sun god and believing in free markets? How much difference is there between believing your country was destined to rule the world and believing your species was destined to rule the world? Why do we believe what we believe?

    Overall I found pondering these questions to be a lot of fun and I would recommend this book to anyone who likes thinking about what it means to be human.

  • Book Review of 'Rationality: From AI to Zombies' parts 3-6

    I finished Rationality: From AI to Zombies so I thought I should finish my book review as well. For my comments on the first part of this book see here.

    I found myself becoming more fascinated with this book as I read it, thinking “I don’t quite agree with this book, but the subject matter is interesting, the author starts off with axioms like my own, and I can’t put my finger precisely on why I don’t agree, so I am compelled to keep thinking about it.”

    Since the first part of my book review I have changed my mind on whether this book overrates rationality. As long as you define rationality as “making the correct decisions in every circumstance” you can’t really overrate rationality. The real question is whether the Bayesian method described in this book is actually rational. That I think the author overrates.

    This book goes into three areas and tries to apply a hyper-Bayesian methodology to get a rational approach for each of them: quantum mechanics, evolutionary psychology, and the author’s personal life.

    The discussion of quantum mechanics is focused on whether the “many worlds” interpretation of quantum mechanics is correct, as opposed to the “wavefunction collapse” theory. The author’s stance is that not only is the “many worlds” theory correct, but it is so clearly correct that the fact that many people don’t agree with “many worlds” shows that they are insufficiently rational.

    The evolutionary psychology discussion is similar. Yudkowsky claims that scientists are constantly hoping for evolution to favor morality, which leads to a bias in favor of more pleasant interpretations. The underlying claim is that for unsettled areas of science, there is still one rational interpretation that is superior, and if scientists disagree on how to interpret findings on the frontier of science, it is because they are not rational enough.

    The author frequently cites Aumann’s agreement theorem - that two perfectly rational people with common priors cannot agree to disagree. Therefore, in practice, two rationalists should not disagree; it seems like the thing for them to do is to argue incessantly, and if they cannot come to an agreement, each should conclude that the other is not rational enough. That feels a bit wrong.

    The last section of the book discusses more personal issues and the trouble with growing this rationalist movement. Yudkowsky mentions he has trouble “getting things done” and he also has trouble getting groups of people to work together. To me, both of these seem like problems with what I would colloquially describe as “overthinking it”.

    To the Yudkowsky-rationalist mind, there is no such thing as “overthinking a problem”. You keep thinking, you get to a more intelligent solution. The problem is, mental energy is a limited resource. If you feel obliged to analyze every statement and action to perfection, you’re going to end up more exhausted than if you let yourself make quick decisions, preferring to be 80% correct immediately rather than 90% correct slowly. No wonder the author laments that he has trouble working as much as he intends to.

    Sometimes your goal really does take precedence over rationally rethinking all of your premises. If you really rethink everything you do until it’s 100% correct, you will constantly be stalled and frustrated in making progress. Eliezer writes, for example, that he was quite discouraged when, while trying to raise donations from a group of rationalists, he found the general consensus was that donating money was irrational. Is it actually rational for rationalists to donate money? Yudkowsky hesitates to attack that question, probably out of fear of concluding that it is indeed not rational and thus rationalizing himself out of a job.

    I’m not saying that no nonprofit should collect donations. I just think the money-raisers should not constantly angst about whether it is really the most rational thing to collect money. They should not expect 100% of the followers to think alike and be willing to donate. They should just practically see what methods work for raising money, and which don’t, and use the methods that work rather than assuming that arguing about rationalism is the way to solve every problem.

    The core paradox at the heart of Eliezer-style rationalism is that, when you define “rational” as using the best strategy available, once you add any additional principle to your philosophy of rationality, it is inevitable that in some situations, disregarding that principle will be the most effective strategy. Yudkowsky loves Bayesianism because in a limited number of situations it does provide a perfect analysis of what to do. But beyond that limited set of simple situations, it does not seem that a Bayesian approach to a problem is actually the most effective way of solving it. So why try to be more Bayesian in your life?

    I have a more technical criticism here too. Even in a situation where you are just focused on decisions and you have a clear set of input variables, a Bayesian model may very well not be the most effective. For example, let’s say you have a large enough number of inputs n that you can process all of them, which is O(n), but you can’t process all the pairs of them, which is O(n²). You have some boolean output you are trying to decide on, and many of the variables may be correlated. Logistic regression is probably a better fit here than naive Bayes, because you’ll end up capturing much of the input correlation implicitly if not explicitly.
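
    To make that concrete, here is a minimal sketch of the kind of situation I mean. This is my own toy construction using scikit-learn, not anything from the book: two correlated inputs where one looks useless on its own, but subtracting it cancels the noise in the other. Naive Bayes assumes the inputs are independent, so it can never learn that cancellation; logistic regression can.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    n = 10000
    signal = rng.normal(size=n)
    noise = rng.normal(size=n)
    # x1 carries the signal plus noise; x2 is that same noise by itself.
    # Individually x2 is uninformative, but x1 - x2 recovers the signal.
    X = np.column_stack([signal + noise, noise])
    y = signal > 0

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    for model in (GaussianNB(), LogisticRegression(max_iter=1000)):
      accuracy = model.fit(X_train, y_train).score(X_test, y_test)
      print(type(model).__name__, round(accuracy, 3))
    # Naive Bayes can only use x1 on its own and lands around 75% accuracy.
    # Logistic regression learns opposite weights on the two inputs,
    # implicitly capturing the correlation, and scores close to 100%.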

    When you apply this to a practical situation, you end up with a system that locally appears to violate Bayesian statistics. You will have an input where, statistically, when X happens, Y happens 60% of the time. But your gut tells you that when X happens, it’s actually an indicator of not-Y. Maybe your gut is doing logistic regression on a large number of hidden variables and coming to a more successful strategy than your local Bayesian analysis. Should you really cheat on a test when your probability estimate tells you it’s worth it? Or should you listen to your gut telling you to be ethical? Even if you can’t verbalize all the reasons encapsulated in that gut instinct, it doesn’t mean that rejecting it will lead to better outcomes. Even if you mathematically analyze every visible variable, it still doesn’t mean that rejecting your gut instinct will lead to better outcomes.

    I do want my criticism to be falsifiable. So, what would convince me would be seeing that adopting this rationalist philosophy actually does lead to better outcomes at some practical endeavor. This does not yet appear to be happening.

    All of that said, the book is quite compelling and contains many arguments that make me rethink some of my own basic principles. It is worth reflecting on your own decisionmaking processes, even if you don’t agree with the hyper-Bayesian methodology advocated here.

    If you want a quick hit of this book without reading 1800 pages of it, try this essay on the twelve virtues of rationality.

    This book also left me curious about the author’s theories on artificial intelligence. How would one build a “friendly AI”? What would it look like to get 10% of the way there? My suspicion is that working on “how to make AI friendly” will indeed be a very valuable thing to do, but you can’t really make much progress unless you have some basic architecture of how any AI would be built, and it doesn’t really seem like humanity is there yet. We need the equivalent of the Von Neumann architecture - what parts will lead to a whole that can do humanlike things. Learning functions from vector spaces to booleans is neat but it’s like we’ve only built a CPU and we haven’t figured out that we’ll also need some permanent storage, some I/O devices, and a way to enter programs.

    This book also left me thinking about cryonics. In passing the author claims that signing up for cryonics is such a good decision that everyone should do it. I do not have a good counterargument, yet I have not signed up for cryonics. The pro-cryonics argument might be the most compelling practical part of this book; I wish Eliezer had spent as much time on that as he did on quantum mechanics and evolution.

    One last note - this book reminded me a lot of A New Kind Of Science. Both have quite complex and deep thoughts which diverge a lot from the mainstream. Both discuss how a hypermathematical approach could cause a paradigm shift in a different field. Both are convinced their work is revolutionary but the concrete evidence is not enough to convince the world of it. Both are insanely long in a way that discourages normal people from reading them.

    I would like to see more books like this.

  • Book Review of 'Rationality: From AI to Zombies' parts 1-2

    I read a lot of books, so I thought it would be fun to do some book reviews.

    As I make this decision I am in the middle of reading Rationality: From AI to Zombies which is just a monstrously long and complicated book. Six parts. 1800 pages. So this is just a review of parts 1-2 which account for about the first third of the book.

    This book is a collection of blog posts. To enjoy this sort of book you need to be able to enjoy a dense collection of nonlinearly organized thoughts. If you have never found a blog that intrigued you and just read all of its past posts in a sitting, this book may not be for you.

    Enjoying Infinite Jest with its aggressive footnoting and endnoting is also a sign that this might be your sort of book.

    If you do enjoy this sort of book, and you happen to be a manager, you might also enjoy Managing Humans.

    This book is by Eliezer Yudkowsky who does several curiously nonstandard things like work on AI and write Harry Potter fanfiction.

    The first two parts of this book are about rationality.

    I came into this book thinking that rationality was rather overrated. I took a game theory class in college that turned me off. In particular I was disappointed by the game-theory definition of rationality which did not seem like it was always the right thing to do.

    Here’s an example. We called this the “Microsoft game”, after the olden days in which dominant industry players frequently crushed newcomers just by copying their products. The way it works is, one player is Microsoft. There are thousands of other players. At each time step, one of them is chosen randomly to become a startup and challenge Microsoft.

    The startup has two options:

    • The “conservative option”. The startup gets $1M, Microsoft loses $0.
    • The “aggressive option”. The startup gets $10M, Microsoft loses $1B.

    The trick is, if the startup chooses the aggressive option, Microsoft then has the option to retaliate. If Microsoft retaliates, Microsoft loses an extra $1, and the startup loses all of its $10M winnings.

    That’s all. The question is, what is the rational way for each side to play this game?

    According to standard game theory and the theory of dominant strategies, there is no reason for Microsoft to ever retaliate, because the outcome for Microsoft is always just a dollar better when not retaliating.

    The next step of standard game theory is to repeatedly eliminate dominated strategies. So if you eliminate the ability to ever retaliate, then it’s clear that all startups should choose the aggressive option. Therefore according to standard game theory the “rational” way to play this game is for startups to always be aggressive and for Microsoft to never retaliate.

    In practice, this seems silly. Microsoft should obviously retaliate all the time, and in a world where Microsoft will obviously retaliate all the time, it’s clearly best for every startup to choose the conservative option.
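
    To make the payoffs concrete, here is a toy simulation - my own sketch, not anything from the class - comparing the totals under the two pure strategies: Microsoft never retaliates, in which case every startup plays aggressive, versus Microsoft always retaliates, in which case every startup anticipates the retaliation and plays conservative.

    ROUNDS = 1000

    def total_payoffs(microsoft_always_retaliates):
      # Returns (microsoft_total, startup_total) in dollars over all rounds.
      microsoft, startups = 0, 0
      for _ in range(ROUNDS):
        if microsoft_always_retaliates:
          # Startups know retaliation would wipe out the $10M,
          # so they take the guaranteed $1M instead.
          startups += 1_000_000
        else:
          # With no threat of retaliation, aggressive dominates.
          startups += 10_000_000
          microsoft -= 1_000_000_000
      return microsoft, startups

    for policy in (False, True):
      microsoft, startups = total_payoffs(policy)
      print(f"always retaliate={policy}: Microsoft {microsoft:,}, startups {startups:,}")
    # Never retaliating costs Microsoft a billion dollars per round; a
    # credible threat of retaliation costs it nothing.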

    I ended up arguing with my game theory professor and converging on a state where we agreed that the “rational” thing to do according to game theory was one thing, and the intelligent thing to do in reality was something else. Which left me thinking that rationality was overrated.

    The author makes a good case in this book that this is just misusing the term “rationality”. The right way to think of rationality is that if a logical argument indicates that a particular course of action is the wrong thing to do, then it should not be described as “rational”. So I now think this is just a case where game theory needs to fix a glitch.

    This book contains many, many examples of ways in which you can trick yourself into believing an illogical argument, or leave a mental conclusion in place that really should be overturned on closer inspection. In particular, I think I have a tendency to underrate the likelihood that complex plans will fail for some unpredictable reason, compared to simple plans. I reflected on this tendency while reading this book and found it quite a fun experience.

    I still think the author overrates rationality. Rationality is a great root philosophy when you need to determine whether a statement is true or false, or when you need to pick between a small number of clearly delineated choices. But often you need to take action when the space of possibilities is quite vast, or make a decision where your data is all so fuzzy and poorly categorized that reductionism offers little practical help. I suspect there are cases where human biases are actually really good for you, despite being irrational.

    So far I would recommend this book to anyone who can stand it.

  • Hello World

    It feels like a good time to start blogging again.

    For a while I had a blog going on lacker.info, but unfortunately I was using some “Blogspot for your domain” product that was deprecated over the subsequent decade.

    This time, I’m betting that plain old HTML will be a format for the ages. Also .info is so lame now! So lacker.io is where it’s at.

    I’m using Jekyll to generate that HTML because I liked it when we used it for the original Parse blog, although the current Parse blog is just Wordpress. Also Jekyll is nice for wordsLikeThis.

    # Also it has nice built-in syntax highlighting.
    def is_that_really_necessary(blogger):
      if blogger == "lacker":
        return True
      raise NotImplementedError("TODO: implement logic here")

    And so it begins. I’m really curious what people would want me to blog about, so let me know.