• Straussian Reading

    For a while I have noticed Tyler Cowen using the term “Straussian reading”. I like reading, so I was intrigued by the idea of a way to read that is more powerful than the normal way, and started learning about the concept of Straussian reading.

    The idea is that when you read something, there are sometimes two layers of meaning. There is the “exoteric” meaning - the most obvious one, the meaning that a normal reading would reveal. After you understand the exoteric meaning, if you keep thinking about it, you may be able to understand a deeper “esoteric” meaning. This initially seems like some sort of Da Vinci Code nonsense, but hang with me for a moment.

    The best introduction to Straussian reading I have found so far is Philosophy Between The Lines: The Lost History of Esoteric Writing. Here’s a passage that does a great job describing the concept:

    Imagine you have received a letter in the mail from your beloved, from whom you have been separated for many long months. (An old-fashioned tale, where there are still beloveds—and letters.) You fear that her feelings toward you may have suffered some alteration. As you hold her letter in your unsteady hands, you are instantly in the place that makes one a good reader. You are responsive to her every word. You are exquisitely alive to every shade and nuance of what she has said—and not said.

    “Dearest John.” You know that she always uses “dearest” in letters to you, so the word here means nothing in particular; but her “with love” ending is the weakest of the three variations that she typically uses. The letter is quite cheerful, describing in detail all the things she has been doing. One of them reminds her of something the two of you once did together. “That was a lot of fun,” she exclaims. “Fun”—a resolutely friendly word, not a romantic one. You find yourself weighing every word in a relative scale: it represents not only itself but the negation of every other word that might have been used in its place. Somewhere buried in the middle of the letter, thrown in with an offhandedness that seems too studied, she briefly answers the question you asked her: yes, as it turns out, she has run into Bill Smith—your main rival for her affection. Then it’s back to chatty and cheerful descriptions until the end.

    It is clear to you what the letter means. She is letting you down easy, preparing an eventual break. The message is partly in what she has said—the Bill Smith remark, and that lukewarm ending—but primarily in what she has not said. The letter is full of her activities, but not a word of her feelings. There is no moment of intimacy. It is engaging and cheerful but cold. And her cheerfulness is the coldest thing: how could she be so happy if she were missing you? Which points to the most crucial fact: she has said not one word about missing you. That silence fairly screams in your ear.

    Just imagine knowing (for example) Plato so well, that you could read The Republic in this way, pondering unusual word choices and thinking deeply about what was not said.

    Why would someone write in this way, with hidden esoteric meaning, rather than just saying what they mean? In this example, your beloved feels that you can’t handle the raw truth. Fear of persecution is another common rationale for esoteric writing. Socrates was executed for his beliefs, so do you really think Plato would just write down everything he honestly believed? The theory is that writing esoterically lets Plato hint at his deeper thoughts, like suspicion of the whole Greek religious system, while escaping punishment for his beliefs.

    If Straussian reading were only something that applied to the ancient Greeks, I would lose interest in the concept. As a Silicon Valley techie, I am going to try to read “startup advice literature” in this way. It is fairly common for successful people in the startup scene to be attacked for their beliefs in one way or another. Yet their unpopular beliefs may be contributing to their success. So wouldn’t it be useful to know what those unpopular beliefs are?

    For example, the Peter Thiel interview question:

    Tell me something that’s true that very few people agree with you on.

    Thiel himself evades answering this. I suspect that he has more secret beliefs that are even more controversial than the controversial beliefs he has already been publicly criticized for. And presumably dozens or hundreds of successful entrepreneurs have been interviewed by Peter Thiel and asked this question. What were their answers? I think people are embarrassed or afraid to share the good answers publicly. So I’m tempted to reread Zero to One and try to step up my Straussian reading game.

  • Book Review of 'The Girl With All The Gifts'

    I’m torn about how to review fiction. I personally hate spoilers. And The Girl With All The Gifts is particularly nice if you have no idea what sort of book you’re reading. So if you’re inclined to trust me and just buy a book and read it without knowing what it’s about, just do it. Amazon says it’s similar to Wool and The Martian so if you liked those, give this a try.

    So, shallow spoiler alert begins here.

    The Girl With All The Gifts is a zombie book. I would say it is the second-best zombie book I have read, after World War Z. It has a really fun beginning if you don’t realize that it’s a book about zombies, but that’s a hard experience to create if you’re reading this paragraph. Sorry. I had the perfect setup of buying this for my Kindle, then forgetting about it for a month or so, then not remembering what the book was but seeing it unread on my Kindle and reading it. Magnificent! I recommend that methodology if you can swing it.

    It is not the sort of zombie book that is grisly and scary like an action movie. Instead it is about a small girl who is a zombie but who is also intelligent enough to understand her situation and to be cute and threatening at the same time. The zombie-apocalypse genre has never had so much empathy for the zombies’ side.

    There are a lot of parts of the book that are pretty interesting, but almost all of them depend on surprising the reader, and I don’t want to reveal too much. If what I’ve written so far doesn’t convince you to read the book, I don’t see how I’ll be able to do it, so for the sake of avoiding spoilers I’ll stop here.

  • Book Review of 'Sapiens: A Brief History of Humankind'

    “History” can mean a lot of things. The traditional history book reminds me of history class in high school. There’s some period of American or European history and you are encouraged to memorize a list of naval battles. Sapiens: A Brief History of Humankind by Yuval Harari is not a traditional history book.

    The thing about history is that there is so much of it. Even in a single year. Think of all the things that happened in 2014. Roughly that much stuff probably also happened in 1492; we have just decided that we only care about a portion of it. So a normal history ends up being a history of politics. A history of kings, nations, wars, and conquest.

    Instead, this book steps back and takes a broader view. What are the three most important events in human history? Harari picks three revolutions: the Cognitive, Agricultural, and Scientific revolutions.

    The Cognitive Revolution is when human beings went from being just like other animals to being unique. The key was the ability to spread culture - communication that helped humans coordinate better and learn faster than DNA-based evolution allowed. We usually take it for granted that humans dominate animals but it wasn’t always the case, and it’s interesting to think of verbal communication as a technology of war that let us defeat animals that previously were our predators.

    One theme of this book is confronting painful truths. For example, we know from the fossil record that there used to be different species similar to humans, like Neanderthals. What happened to them? Was Homo Sapiens the eventual victor because of our ability to genocidally destroy the other human species? We will never know for sure but it seems like yes.

    The Agricultural Revolution was when we started farming. This book makes the case also made in Guns, Germs, and Steel that the invention of farming was actually pretty terrible for the average human. Our lives were worse as farmers. It’s just that farming could feed far more people than hunting and gathering, so the farming way of life won out.

    I don’t agree with this argument, because I don’t think individual human happiness is the right metric to aim for. Personally I think the world became a better place with ten million humans than with one million, primarily because it’s pretty cool that those extra nine million get to live. But I can understand how people would disagree here.

    Sapiens is similar to Guns, Germs, and Steel in many ways. If you liked one, there’s a good chance you’d like the other. They both look at a broad stretch of human history, disregard the “standard historical important stuff”, and ask what the really important factors were that led to this outcome. Overall, Sapiens is much more willing to come to controversial conclusions. I found it a bit suspicious that Guns, Germs, and Steel only came to conclusions that social science academics would agree with politically.

    The best example comes in the discussion of the Scientific Revolution, which Harari dates from about 1500 to the present. The main events in world history during this period are basically Europeans violently taking over the world, which raises the obvious question: why Europeans?

    Framing all of human history as these three revolutions reveals obvious parallels between them. It seems pretty likely that Homo Sapiens became the only Homo species by killing off the other ones. There is even more evidence from the agricultural era that farming societies were much stronger militarily and frequently wiped out non-agricultural societies. And then a similar thing happened again in the Scientific era, where European cultures gained military dominance over the rest of the world, and while they did not actually slaughter everyone, they at least spread their culture everywhere. Nowadays every patch of ground is part of a nation-state and all of the leaders wear suits and spend money. So each of these revolutions was not just a revolution in everyday life, but also a revolution in military technology that let the new order violently displace the old.

    So why Europeans? Harari basically concludes that there’s no fundamental reason; Europe was just the culture that happened to get science first. More than science, it’s the idea of “progress” - that society can be improved by discovering new technology, new lands, new everything.

    Another controversial theme of this book is drawing parallels between different human ideas. For example, Harari classifies many things as “fictions that help societies despite being biologically false”. All religions, ideologies like communism and capitalism, principles like human rights, concepts like nations, everyday conventions like the use of money and the concept of ownership. How much difference is there between believing in a sun god and believing in free markets? How much difference is there between believing your country was destined to rule the world and believing your species was destined to rule the world? Why do we believe what we believe?

    Overall I found pondering these questions to be a lot of fun and I would recommend this book to anyone who likes thinking about what it means to be human.

  • Book Review of 'Rationality: From AI to Zombies' parts 3-6

    I finished Rationality: From AI to Zombies so I thought I should finish my book review as well. For my comments on the first part of this book see here.

    I found myself becoming more fascinated with this book as I read it, thinking “I don’t quite agree with this book, but the subject matter is interesting, the author starts off with axioms like my own, and I can’t put my finger precisely on why I don’t agree, so I am compelled to keep thinking about it.”

    Since the first part of my book review I have changed my mind on whether this book overrates rationality. As long as you define rationality as “making the correct decisions in every circumstance” you can’t really overrate rationality. The real question is whether the Bayesian method described in this book is actually rational. That I think the author overrates.

    This book goes into three areas and tries to apply a hyper-Bayesian methodology to get a rational approach to each of them: quantum mechanics, evolutionary psychology, and the author’s personal life.

    The discussion of quantum mechanics is focused on whether the “many worlds” interpretation of quantum mechanics is correct, as opposed to the “wavefunction collapse” theory. The author’s stance is that not only is the “many worlds” theory correct, but it is so clearly correct that the fact that many people don’t agree with “many worlds” shows that they are insufficiently rational.

    The evolutionary psychology discussion is similar. Yudkowsky claims that scientists are constantly hoping for evolution to favor morality, which leads to a bias in favor of more pleasant interpretations. The underlying claim is that for unsettled areas of science, there is still one rational interpretation that is superior, and if scientists disagree on how to interpret findings on the frontier of science, it is because they are not rational enough.

    The author frequently cites Aumann’s agreement theorem - that two perfectly rational people who share a common prior and common knowledge of each other’s beliefs cannot agree to disagree. Therefore in practice two rationalists should not disagree. It seems like the thing for them to do is to argue incessantly, and if they cannot come to an agreement then each should conclude that the other is not rational enough. That feels a bit wrong.
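
    For reference, here is roughly the formal statement - my paraphrase of Aumann’s 1976 result, not Yudkowsky’s wording:

    ```latex
    % Aumann (1976), roughly stated - my paraphrase, not the book's wording.
    % Two Bayesian agents share a common prior P and hold different
    % information \mathcal{I}_1, \mathcal{I}_2. If their posteriors for an
    % event E are common knowledge between them, the posteriors must be equal:
    q_1 = P(E \mid \mathcal{I}_1), \quad q_2 = P(E \mid \mathcal{I}_2),
    \quad q_1, q_2 \text{ common knowledge} \;\Longrightarrow\; q_1 = q_2.
    ```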

    The last section of the book discusses more personal issues and the trouble with growing this rationalist movement. Yudkowsky mentions he has trouble “getting things done” and he also has trouble getting groups of people to work together. To me, both of these seem like problems with what I would colloquially describe as “overthinking it”.

    To the Yudkowsky-rationalist mind, there is no such thing as “overthinking a problem”. The more you keep thinking, the more intelligent a solution you get to. The problem is, mental energy is a limited resource. If you feel obliged to analyze every statement and action to perfection, then you’re going to end up more exhausted than if you let yourself make quick decisions and prefer to be 80% correct immediately rather than 90% correct slowly. No wonder the author laments that he has trouble working as much as he intends to.

    Sometimes your goal really does take precedence over rationally rethinking all of your premises. If you really rethink everything you do until it’s 100% correct, you will constantly be stalled and frustrated in making progress. Eliezer writes, for example, that while trying to raise donations he was quite discouraged to find that the general consensus among a group of rationalists was that it was irrational to donate money. Is it actually rational for rationalists to donate money? Yudkowsky hesitates to attack that question, probably out of fear of concluding that it is indeed not rational and thus rationalizing himself out of a job.

    I’m not saying that no nonprofit should collect donations. I just think the money-raisers should not constantly angst about whether it is really the most rational thing to collect money. They should not expect 100% of the followers to think alike and be willing to donate. They should just practically see what methods work for raising money, and which don’t, and use the methods that work rather than assuming that arguing about rationalism is the way to solve every problem.

    The core paradox at the heart of Eliezer-style rationalism is that, when you define “rational” as using the best strategy available, once you add any additional principle to your philosophy of rationality, it is inevitable that in some situations disregarding that principle will be the most effective move. Yudkowsky loves Bayesianism because in a limited number of situations it does provide a perfect analysis of what to do. But beyond that limited set of simple situations, it does not seem that a Bayesian approach to a problem is actually the most effective way of solving it. So why try to be more Bayesian in your life?

    I have a more technical criticism here too. Even in a situation where you are just focused on decisions and you have a clear set of input variables, a Bayesian model may very well not be the most effective. For example, say the number of inputs n is large enough that you can process all of them, which is O(n), but you can’t process all the pairs of them, which is O(n²). You have some boolean output you are trying to decide on, and many of the input variables may be correlated. Logistic regression is probably a better fit here than naive Bayes, because you’ll end up capturing much of the input correlation implicitly if not explicitly.
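
    Here’s a minimal sketch of that comparison - my own illustration, not from the book - using scikit-learn and a synthetic dataset where the features are made heavily correlated by appending noisy copies of the same columns:

    ```python
    # My own illustration, not from the book: naive Bayes assumes the
    # features are independent, so correlated copies of a feature
    # double-count its evidence, while logistic regression learns weights
    # that compensate for the correlation.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=5000, n_features=20,
                               n_informative=5, random_state=0)
    # Append four noisy copies of every column to induce heavy correlation.
    rng = np.random.default_rng(0)
    X = np.hstack([X] + [X + 0.1 * rng.standard_normal(X.shape)
                         for _ in range(4)])
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    nb = GaussianNB().fit(X_train, y_train)
    lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("naive Bayes accuracy:        ", nb.score(X_test, y_test))
    print("logistic regression accuracy:", lr.score(X_test, y_test))
    ```

    On most runs logistic regression comes out ahead here, because naive Bayes treats each redundant copy of a feature as fresh independent evidence.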

    When you apply this to a practical situation, you end up with a system that locally appears to violate Bayesian statistics. You will have an input where, statistically, when X happens, Y happens 60% of the time. But your gut tells you that when X happens, it’s actually an indicator of not-Y. Maybe your gut is doing logistic regression on a large number of hidden variables and coming to a more successful strategy than your local Bayesian analysis is. Should you really cheat on a test when your probability estimate tells you it’s worth it? Or should you listen to your gut telling you to be ethical? Even if you can’t verbalize all the reasons encapsulated in this gut instinct, it doesn’t mean that rejecting it will lead to better outcomes. Even if you mathematically analyze every visible variable, it still doesn’t mean that rejecting your gut instinct will lead to better outcomes.

    I do want my criticism to be falsifiable. So, what would convince me would be seeing that adopting this rationalist philosophy actually does lead to better outcomes at some practical endeavor. This does not yet appear to be happening.

    All of that said, the book is quite compelling and contains many arguments that make me rethink some of my own basic principles. It is worth reflecting on your own decision-making processes, even if you don’t agree with the hyper-Bayesian methodology advocated here.

    If you want a quick hit of this book without reading 1800 pages of it, try this essay on the twelve virtues of rationality.

    This book also left me curious about the author’s theories on artificial intelligence. How would one build a “friendly AI”? What would it look like to get 10% of the way there? My suspicion is that working on “how to make AI friendly” will indeed be a very valuable thing to do, but you can’t really make much progress unless you have some basic architecture of how any AI would be built, and it doesn’t really seem like humanity is there yet. We need the equivalent of the Von Neumann architecture - what parts will lead to a whole that can do humanlike things. Learning functions from vector spaces to booleans is neat but it’s like we’ve only built a CPU and we haven’t figured out that we’ll also need some permanent storage, some I/O devices, and a way to enter programs.

    This book also left me thinking about cryonics. In passing the author claims that signing up for cryonics is such a good decision that everyone should do it. I do not have a good counterargument, yet I have not signed up for cryonics. The pro-cryonics argument might be the most compelling practical part of this book; I wish Eliezer had spent as much time on that as he did on quantum mechanics and evolution.

    One last note - this book reminded me a lot of A New Kind Of Science. Both have quite complex and deep thoughts which diverge a lot from the mainstream. Both discuss how a hypermathematical approach could cause a paradigm shift in a different field. Both are convinced their work is revolutionary but the concrete evidence is not enough to convince the world of it. Both are insanely long in a way that discourages normal people from reading them.

    I would like to see more books like this.

  • Book Review of 'Rationality: From AI to Zombies' parts 1-2

    I read a lot of books, so I thought it would be fun to do some book reviews.

    As I make this decision I am in the middle of reading Rationality: From AI to Zombies which is just a monstrously long and complicated book. Six parts. 1800 pages. So this is just a review of parts 1-2 which account for about the first third of the book.

    This book is a collection of blog posts. To enjoy this sort of book you need to be able to enjoy a dense collection of nonlinearly organized thoughts. If you have never found a blog that intrigued you and just read all of its past posts in a sitting, this book may not be for you.

    Enjoying Infinite Jest with its aggressive footnoting and endnoting is also a sign that this might be your sort of book.

    If you do enjoy this sort of book, and you happen to be a manager, you might also enjoy Managing Humans.

    This book is by Eliezer Yudkowsky who does several curiously nonstandard things like work on AI and write Harry Potter fanfiction.

    The first two parts of this book are about rationality.

    I came into this book thinking that rationality was rather overrated. I took a game theory class in college that turned me off. In particular I was disappointed by the game-theory definition of rationality which did not seem like it was always the right thing to do.

    Here’s an example. We called this the “Microsoft game”, after the olden days in which dominant industry players frequently crushed newcomers just by copying their products. The way it works is, one player is Microsoft. There are thousands of other players. At each time step, one of them is chosen randomly to become a startup and challenge Microsoft.

    The startup has two options:

    • The “conservative option”. The startup gets $1M, Microsoft loses $0.
    • The “aggressive option”. The startup gets $10M, Microsoft loses $1B.

    The trick is, if the startup chooses the aggressive option, Microsoft then has the option to retaliate. If Microsoft retaliates, Microsoft loses an extra $1, and the startup loses all of its $10M winnings.

    That’s all. The question is, what is the rational way for each side to play this game?

    According to standard game theory and the theory of dominant strategies, there is no reason for Microsoft to ever retaliate, because the outcome for Microsoft is always just a dollar better when not retaliating.

    The next step of standard game theory is to repeatedly eliminate dominated strategies. So if you eliminate the ability to ever retaliate, then it’s clear that all startups should choose the aggressive option. Therefore according to standard game theory the “rational” way to play this game is for startups to always be aggressive and for Microsoft to never retaliate.

    In practice, this seems silly. Microsoft should obviously retaliate every time, and in a world where Microsoft obviously retaliates every time, it’s clear that all startups should choose the conservative option.
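
    Here’s a toy sketch of the game as described above - my own code, not from the class or the book - showing both the backward-induction answer and what happens if Microsoft could commit to retaliating:

    ```python
    # Toy model of the "Microsoft game" described above (my own sketch).
    # Payoffs are (startup, microsoft), in dollars.
    def payoff(startup_aggressive, microsoft_retaliates):
        if not startup_aggressive:
            return (1_000_000, 0)            # conservative option
        if microsoft_retaliates:
            return (0, -1_000_000_001)       # startup forfeits $10M; MS pays $1 extra
        return (10_000_000, -1_000_000_000)  # aggressive option, unpunished

    # Backward induction: after aggression, Microsoft compares -$1,000,000,001
    # (retaliate) with -$1,000,000,000 (don't), so it never retaliates...
    ms_best = max([True, False], key=lambda r: payoff(True, r)[1])          # False
    # ...and a startup that knows this compares $10M with $1M, so it goes
    # aggressive.
    startup_best = max([True, False], key=lambda a: payoff(a, ms_best)[0])  # True
    print(startup_best, ms_best)  # True False - the textbook "rational" play

    # But if Microsoft could credibly commit to always retaliating, a startup
    # would compare $1M (conservative) with $0 (aggressive, then punished)
    # and choose the conservative option - far better for Microsoft.
    ```

    The dollar Microsoft saves by never retaliating costs it a billion per aggressive startup, which is exactly why the textbook answer feels silly.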

    I ended up arguing with my game theory professor and converging on a state where we agreed that the “rational” thing to do according to game theory was one thing, and the intelligent thing to do in reality was something else. Which left me thinking that rationality was overrated.

    The author makes a good case in this book that this is just misusing the term “rationality”. The right way to think of rationality is that if a logical argument indicates that a particular course of action is the wrong thing to do, then it should not be described as “rational”. So I now think this is just a case where game theory needs to fix a glitch.

    This book contains many, many examples of ways in which you can trick yourself into believing an illogical argument, or leave a mental conclusion in place that really should be overturned on closer inspection. In particular, I think I have a tendency to underrate the likelihood that complex plans will fail for some unpredictable reason, compared to simple plans. I reflected on this tendency while reading this book and found it quite a fun experience.

    I still think the author overrates rationality. Rationality is a great root philosophy when you need to determine whether a statement is true or false, or when you need to pick between a small number of clearly delineated choices. But often you need to take action when the space of possibilities is quite vast, or make a decision where your data is all so fuzzy and poorly categorized that reductionism offers little practical help. I suspect there are cases where human biases are actually really good for you, despite being irrational.

    So far I would recommend this book to anyone who can stand it.


...