-
Book Review of 'Sapiens: A Brief History of Humankind'
“History” can mean a lot of things. The traditional history book reminds me of history class in high school. There’s some period of American or European history and you are encouraged to memorize a list of naval battles. Sapiens: A Brief History of Humankind by Yuval Harari is not a traditional history book.
The thing about history is that there is so much of it. Even in a single year. Think of all the things that happened in 2014; roughly that much stuff probably happened in 1492 too. We have just decided that we only care about a portion of it. So a normal history ends up being a history of politics: a history of kings, nations, wars, and conquest.
Instead, this book steps back and takes a broader view. What are the three most important events in human history? Harari picks three revolutions: the Cognitive, Agricultural, and Scientific revolutions.
The Cognitive Revolution is when human beings went from being just like other animals to being unique. The key was the ability to spread culture - communication that helped humans coordinate better and learn faster than DNA-based evolution allowed. We usually take it for granted that humans dominate animals but it wasn’t always the case, and it’s interesting to think of verbal communication as a technology of war that let us defeat animals that previously were our predators.
One theme of this book is confronting painful truths. For example, we know from the fossil record that there used to be different species similar to humans, like Neanderthals. What happened to them? Was Homo Sapiens the eventual victor because of our ability to genocidally destroy the other human species? We will never know for sure but it seems like yes.
The Agricultural Revolution was when we started farming. This book makes the case also made in Guns, Germs, and Steel that the invention of farming was actually pretty terrible for the average human. Our lives were worse as farmers. It’s just that farming could feed far more people than hunting and gathering, so the farming way of life won out.
I don’t agree with this argument, because I don’t think individual human happiness is the right metric to aim for. Personally I think the world became a better place with ten million humans than with one million, primarily because it’s pretty cool that those extra nine million get to live. But I can understand how people would disagree here.
Sapiens is similar to Guns, Germs, and Steel in many ways. If you liked one, there’s a good chance you’d like the other. They both look at a broad stretch of human history, disregard the “standard historical important stuff”, and ask what the truly important factors were that led to this outcome. But Sapiens is much more willing to come to controversial conclusions. I found it a bit suspicious that Guns, Germs, and Steel only came to conclusions that social science academics would agree with politically.
The best example comes in the discussion of the Scientific Revolution, which Harari dates from about 1500 to the present. The main events in world history during this period are basically Europeans violently taking over the world, and there’s the obvious question: why Europeans?
When you frame all of human history as these three revolutions, obvious parallels emerge between them. It seems pretty likely that Homo Sapiens became the only Homo species by killing off the other ones. There is even more evidence in the agricultural era that farming societies were much stronger militarily and frequently wiped out non-agricultural societies. And then a similar thing happened again in the Scientific era, where European cultures gained military dominance over the rest of the world, and while they did not actually slaughter everyone, they at least spread their culture everywhere. Nowadays every patch of ground is part of a nation-state and all of the leaders wear suits and spend money. So each of these revolutions was not just a revolution in everyday life, but also a revolution in military technology that let the new order violently displace the old.
So why Europeans? Harari basically concludes that there’s no fundamental reason; it was just the culture that happened to get scientific first. And more than science itself, it’s the idea of “progress” - the idea that society can be improved by discovering new technology, new lands, new things of all sorts.
Another controversial theme of this book is drawing parallels between different human ideas. For example, Harari classifies many things as “fictions that help societies despite being biologically false”. All religions, ideologies like communism and capitalism, principles like human rights, concepts like nations, everyday conventions like the use of money and the concept of ownership. How much difference is there between believing in a sun god and believing in free markets? How much difference is there between believing your country was destined to rule the world and believing your species was destined to rule the world? Why do we believe what we believe?
Overall I found pondering these questions to be a lot of fun and I would recommend this book to anyone who likes thinking about what it means to be human.
-
Book Review of 'Rationality: From AI to Zombies' parts 3-6
I finished Rationality: From AI to Zombies so I thought I should finish my book review as well. For my comments on the first part of this book see here.
I found myself becoming more fascinated with this book as I read it, thinking “I don’t quite agree with this book, but the subject matter is interesting, the author starts off with axioms like my own, and I can’t put my finger precisely on why I don’t agree, so I am compelled to keep thinking about it.”
Since the first part of my book review I have changed my mind on whether this book overrates rationality. As long as you define rationality as “making the correct decisions in every circumstance” you can’t really overrate rationality. The real question is whether the Bayesian method described in this book is actually rational. That I think the author overrates.
This book goes into three areas and tries to apply a hyper-Bayesian methodology to derive a rational approach to each of them: quantum mechanics, evolutionary psychology, and the author’s personal life.
The discussion of quantum mechanics is focused on whether the “many worlds” interpretation of quantum mechanics is correct, as opposed to the “wave function collapse” theory. The author’s stance is that not only is the “many worlds” theory correct, but it is so clearly correct that the fact that many people don’t accept it shows that they are insufficiently rational.
The evolutionary psychology discussion is similar. Yudkowsky claims that scientists are constantly hoping for evolution to favor morality, which leads to a bias in favor of more pleasant interpretations. The underlying claim is that for unsettled areas of science, there is still one rational interpretation that is superior, and if scientists disagree on how to interpret findings on the frontier of science, it is because they are not rational enough.
The author frequently cites Aumann’s agreement theorem - roughly, that two perfectly rational people with a common prior, whose beliefs are common knowledge to each other, cannot agree to disagree. Therefore, in practice, two rationalists should never disagree. It seems like the thing for them to do is to argue incessantly, and if they cannot come to an agreement, then each should conclude that the other is not rational enough. That feels a bit wrong.
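For the record, the formal statement goes roughly like this (my paraphrase of Aumann’s 1976 result, not the book’s wording); arguing incessantly is, in effect, the procedure that makes the posteriors common knowledge.

```latex
% Aumann's agreement theorem, paraphrased. Assumptions: two Bayesian agents
% share a common prior P, and each conditions on private information I_1, I_2.
\text{If the posteriors } q_1 = P(E \mid \mathcal{I}_1) \text{ and }
q_2 = P(E \mid \mathcal{I}_2) \text{ are common knowledge, then } q_1 = q_2.
```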
The last section of the book discusses more personal issues and the trouble with growing this rationalist movement. Yudkowsky mentions he has trouble “getting things done” and he also has trouble getting groups of people to work together. To me, both of these seem like problems with what I would colloquially describe as “overthinking it”.
To the Yudkowsky-rationalist mind, there is no such thing as “overthinking a problem”: the longer you keep thinking, the more intelligent a solution you reach. The problem is, mental energy is a limited resource. If you feel obliged to analyze every statement and action to perfection, you’re going to end up more exhausted than if you let yourself make quick decisions, preferring to be 80% correct immediately rather than 90% correct slowly. No wonder the author laments that he has trouble working as much as he intends to.
Sometimes your goal really does take precedence over rationally rethinking all of your premises. If you rethink everything you do until it’s 100% correct, you will be constantly stalled and frustrated in making progress. Eliezer writes, for example, that while trying to raise donations from a group of rationalists, he was quite discouraged that the general consensus was that donating money is irrational. Is it actually rational for rationalists to donate money? Yudkowsky hesitates to attack that question, probably out of fear of concluding that it is indeed not rational, and thus rationalizing himself out of a job.
I’m not saying that no nonprofit should collect donations. I just think the money-raisers should not constantly angst about whether it is really the most rational thing to collect money. They should not expect 100% of the followers to think alike and be willing to donate. They should just practically see what methods work for raising money, and which don’t, and use the methods that work rather than assuming that arguing about rationalism is the way to solve every problem.
The core paradox at the heart of Eliezer-style rationalism is this: if you define “rational” as using the best strategy available, then once you add any additional principle to your philosophy of rationality, it is inevitable that in some situations, disregarding that principle will be the most effective course. Yudkowsky loves Bayesianism because in a limited number of situations it does provide a perfect analysis of what to do. But beyond those simple situations, it does not seem that a Bayesian approach to a problem is actually the most effective way of solving it. So why try to be more Bayesian in your life?
I have a more technical criticism here too. Even in a situation where you are just focused on decisions and you have a clear set of input variables, a Bayesian model may very well not be the most effective. For example, let’s say you have a number of inputs n large enough that you can process all of them, which is O(n), but you can’t process all the pairs of them, which is O(n²). You have some boolean output you are trying to decide on, and many of the variables may be correlated. Logistic regression is probably a better fit here than naive Bayes, because you’ll end up capturing much of the input correlation implicitly if not explicitly.
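To make that concrete, here’s a minimal sketch of my own (not from the book), assuming scikit-learn and synthetic data where every feature is a noisy copy of one hidden signal, so the features are heavily correlated:

```python
# A toy comparison of naive Bayes vs. logistic regression on correlated
# inputs. All details here are illustrative assumptions, not from the book.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_samples, n_features = 5000, 20

# Each feature is a noisy copy of one hidden signal, so all 20 features
# carry largely redundant evidence about the label.
signal = rng.normal(size=(n_samples, 1))
X = signal + 0.5 * rng.normal(size=(n_samples, n_features))
y = (signal[:, 0] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Naive Bayes assumes the features are independent given the label, so it
# effectively counts the same evidence twenty times over.
nb = GaussianNB().fit(X_train, y_train)

# Logistic regression fits all the weights jointly, so redundant features
# get discounted.
lr = LogisticRegression().fit(X_train, y_train)

for name, model in [("naive Bayes", nb), ("logistic regression", lr)]:
    confidence = model.predict_proba(X_test).max(axis=1).mean()
    print(f"{name}: accuracy {model.score(X_test, y_test):.3f}, "
          f"mean confidence {confidence:.3f}")
```

Both models classify well in this setup; the difference shows up in the probabilities, where naive Bayes pushes its estimates further toward 0 and 1 because it double-counts the shared signal - exactly the kind of local overconfidence the next paragraph is about.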
When you apply this to a practical situation, you end up with a system that locally appears to violate Bayesian statistics. You might have an input where, statistically, when X happens, Y happens 60% of the time. But your gut tells you that when X happens, it’s actually an indicator of not-Y. Maybe your gut is doing logistic regression on a large number of hidden variables and coming to a more successful strategy than your local Bayesian analysis is. Should you really cheat on a test when your probability estimate tells you it’s worth it? Or should you listen to your gut telling you to be ethical? Even if you can’t verbalize all the reasons encapsulated in a gut instinct, it doesn’t mean that rejecting it will lead to better outcomes. Even if you mathematically analyze every visible variable, it still doesn’t mean that rejecting your gut instinct will lead to better outcomes.
I do want my criticism to be falsifiable. So, what would convince me would be seeing that adopting this rationalist philosophy actually does lead to better outcomes at some practical endeavor. This does not yet appear to be happening.
All of that said, the book is quite compelling and contains many arguments that made me rethink some of my own basic principles. It is worth reflecting on your own decision-making processes, even if you don’t agree with the hyper-Bayesian methodology advocated here.
If you want a quick hit of this book without reading 1800 pages of it, try this essay on the twelve virtues of rationality.
This book also left me curious about the author’s theories on artificial intelligence. How would one build a “friendly AI”? What would it look like to get 10% of the way there? My suspicion is that working on “how to make AI friendly” will indeed be a very valuable thing to do, but you can’t really make much progress unless you have some basic architecture of how any AI would be built, and it doesn’t really seem like humanity is there yet. We need the equivalent of the Von Neumann architecture - what parts will lead to a whole that can do humanlike things. Learning functions from vector spaces to booleans is neat but it’s like we’ve only built a CPU and we haven’t figured out that we’ll also need some permanent storage, some I/O devices, and a way to enter programs.
This book also left me thinking about cryonics. In passing the author claims that signing up for cryonics is such a good decision that everyone should do it. I do not have a good counterargument, yet I have not signed up for cryonics. The pro-cryonics argument might be the most compelling practical part of this book; I wish Eliezer had spent as much time on that as he did on quantum mechanics and evolution.
One last note - this book reminded me a lot of A New Kind of Science. Both have quite complex and deep thoughts which diverge a lot from the mainstream. Both discuss how a hypermathematical approach could cause a paradigm shift in a different field. Both authors are convinced their work is revolutionary, but the concrete evidence is not enough to convince the world of it. Both books are insanely long in a way that discourages normal people from reading them.
I would like to see more books like this.
-
Book Review of 'Rationality: From AI to Zombies' parts 1-2
I read a lot of books, so I thought it would be fun to do some book reviews.
As I make this decision I am in the middle of reading Rationality: From AI to Zombies which is just a monstrously long and complicated book. Six parts. 1800 pages. So this is just a review of parts 1-2 which account for about the first third of the book.
This book is a collection of blog posts. To enjoy this sort of book you need to be able to enjoy a dense collection of nonlinearly organized thoughts. If you have never found a blog that intrigued you and just read all of its past posts in a sitting, this book may not be for you.
Enjoying Infinite Jest with its aggressive footnoting and endnoting is also a sign that this might be your sort of book.
If you do enjoy this sort of book, and you happen to be a manager, you might also enjoy Managing Humans.
This book is by Eliezer Yudkowsky who does several curiously nonstandard things like work on AI and write Harry Potter fanfiction.
The first two parts of this book are about rationality.
I came into this book thinking that rationality was rather overrated. I took a game theory class in college that turned me off the subject. In particular I was disappointed by the game-theoretic definition of rationality, which did not always seem to prescribe the right thing to do.
Here’s an example. We called this the “Microsoft game”, after the olden days in which dominant industry players frequently crushed newcomers just by copying their products. The way it works is, one player is Microsoft. There are thousands of other players. At each time step, one of them is chosen randomly to become a startup and challenge Microsoft.
The startup has two options:
- The “conservative option”. The startup gets $1M, Microsoft loses $0.
- The “aggressive option”. The startup gets $10M, Microsoft loses $1B.
The trick is, if the startup chooses the aggressive option, Microsoft then has the option to retaliate. If Microsoft retaliates, Microsoft loses an extra $1, and the startup loses all of its $10M winnings.
That’s all. The question is, what is the rational way for each side to play this game?
According to standard game theory and the theory of dominant strategies, there is no reason for Microsoft to ever retaliate, because the outcome for Microsoft is always just a dollar better when not retaliating.
The next step of standard game theory is to repeatedly eliminate dominated strategies. So if you eliminate the ability to ever retaliate, then it’s clear that all startups should choose the aggressive option. Therefore according to standard game theory the “rational” way to play this game is for startups to always be aggressive and for Microsoft to never retaliate.
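Here’s that argument spelled out as code - a toy sketch of my own, using the payoffs from the list above:

```python
# The "Microsoft game" from above, with payoffs written as
# (startup, microsoft) tuples in dollars. A toy sketch of my own.
def payoff(startup_choice, microsoft_retaliates):
    if startup_choice == "conservative":
        return (1_000_000, 0)  # retaliation never comes up here
    if microsoft_retaliates:
        # The startup forfeits its $10M; Microsoft loses an extra dollar.
        return (0, -1_000_000_001)
    return (10_000_000, -1_000_000_000)

# Step 1: retaliating is a dominated strategy. Whenever Microsoft gets to
# choose, retaliating is exactly one dollar worse for it.
assert payoff("aggressive", True)[1] < payoff("aggressive", False)[1]

# Step 2: once retaliation is eliminated, aggression dominates for the
# startup, so the "rational" outcome is aggressive startups and a passive
# Microsoft.
assert payoff("aggressive", False)[0] > payoff("conservative", False)[0]
print("game-theoretic outcome:", payoff("aggressive", False))
```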
In practice, this seems silly. Microsoft should obviously retaliate every time, and in a world where Microsoft will obviously retaliate every time, it’s clearly best for all startups to choose the conservative option.
I ended up arguing with my game theory professor, and we converged on this position: the “rational” thing to do according to game theory was one thing, and the intelligent thing to do in reality was something else. Which left me thinking that rationality was overrated.
The author makes a good case in this book that this is just misusing the term “rationality”. The right way to think of rationality is that if a logical argument indicates that a particular course of action is the wrong thing to do, then it should not be described as “rational”. So I now think this is just a case where game theory needs to fix a glitch.
This book contains many, many examples of ways in which you can trick yourself into believing an illogical argument, or leave a mental conclusion in place that really should be overturned on closer inspection. In particular, I think I have a tendency to underrate the likelihood that complex plans will fail for some unpredictable reason, compared to simple plans. I reflected on this tendency while reading this book and found it quite a fun experience.
I still think the author overrates rationality. Rationality is a great root philosophy when you need to determine whether a statement is true or false, or when you need to pick between a small number of clearly delineated choices. But often you need to take action when the space of possibilities is quite vast, or make a decision where your data is all so fuzzy and poorly categorized that reductionism offers little practical help. I suspect there are cases where human biases are actually really good for you, despite being irrational.
So far I would recommend this book to anyone who can stand it.
-
Hello World
It feels like a good time to start blogging again.
For a while I had a blog going on lacker.info, but unfortunately I was using some “Blogspot for your domain” product that was deprecated over the subsequent decade.
This time, I’m betting that plain old HTML will be a format for the ages. Also .info is so lame now! So lacker.io is where it’s at.
I’m using Jekyll to generate that HTML because I liked it when we used it for the original Parse blog, although the current Parse blog is just Wordpress. Also Jekyll is nice for `wordsLikeThis`.

```python
# Also it has nice built-in syntax highlighting.
def is_that_really_necessary(blogger):
    if blogger == "lacker":
        return True
    raise NotImplementedError("TODO: implement logic here")
```
And so it begins. I’m really curious what people would want me to blog about, so let me know.