Nonsense

some of it isn't, but opinions differ as to which is which

Book Review: Superintelligence, by Nick Bostrom

Because I am very late to all the worst parties, I have finally read Superintelligence by Nick Bostrom. The hold waitlist at the library where I got it was sixteen deep, and yet I got my hands on a copy in only a few weeks, which probably says something awkward and rude.

This is not a good book. It does not make particularly large amounts of sense. It drives itself in maddening circles of vicious, unanswerable doom and then presents a secular prayer to an AI god-child as the most plausible answer to the apocalypse. You might be forgiven for having a different impression. Maybe you’ve only heard of it from the Silicon Valley AI Safety set of precocious and adorable children who float high on the VC tides in their skiffs of concerned rationalism. I’m not likely to forgive you if you’ve actually read it and liked it.

Let’s get a few things out of the way. This book is somewhat overwritten. It also has a list of tables and figures and boxes. A few of them are related to the subject at hand. The subject at hand is how Nick Bostrom considers AI-based superintelligence1 to be the coming god-emperor, and involves figuring out a way to ask it politely to be good to us.

Superintelligence starts with a literary attempt called “The Unfinished Fable of the Sparrows.” Some sparrows have resolved to tame an owl to help them out with life, liberty, and the pursuit of happiness. A few smart and unheeded sparrows wonder about how they will control this owl once they’ve got it. There is no ending, and yes, it’s an unsubtle synopsis of the book as a whole.

If that were not enough, next is a preface in which Bostrom talks about how he’s written a book he would have liked to read as “an earlier time-slice of [himself],” and he hopes people who read it will put some thought into it. Please, we should all resist the urge to instantly misunderstand it. He would also like you to know how many qualifiers he’s placed in the text, and how they’ve been placed with great care, and he might be very wrong, but he’s not being falsely modest, because not listening to him is DANGER, WILL ROBINSON2.

Don’t ask me to make sense of that last bit, I’m not a professor of philosophy at Oxford.

The least boring parts of the book, where the argument is attempted, are Chapters 4-8 (What will this superintelligence be like, how quickly will it take over, and how bad will the resulting hellscape be?) and Chapter 13 (I bet we can avoid this hellscape through being very clever).

The first problem pops up when he starts discussing the explosive growth of the potential superintelligence in Chapter 4. Hardware bottlenecks are basically ignored. Software bottlenecks are ignored. Any other bottlenecks of any sort are definitely ignored. Instantly, there are no more bottles whatsoever, and suddenly the superintelligence is building nanofactories for itself because it thought up the blueprints very fast. At some point (handwaving) it achieves world domination. At some further point (faster handwaving) it is running a one-world government. And eventually (hands waving very excitedly) it overcomes the vast distances of spacetime and sends baby Von Neumann robots out to colonize the universe.

I can see how this sort of thing is very tempting. It’s difficult to imagine entities smarter than us, so it’s difficult to imagine them having any problems or hardships. I assume this is also difficult when your career and success therein seem to rely on your own intelligence, rather than sheer dumb luck. But failing to have the depth of imagination to consider a being-space between humanity and unknowable, omnipotent gods is concerning, on a scale comparable to H. P. Lovecraft.

A similar problem occurs in Chapter 8 (titled, excitingly, “Is the default outcome doom?”). Having satisfied himself of the inevitability of the new AI god, Bostrom runs through a titillatingly long list of ways in which it could turn out to be malignant, deceptive, or downright evil. Failing that, it could just misinterpret anything we try to tell it. Or maybe it could care about making paperclips way more than it cares about humans. We can’t do anything to stop it, therefore doom creeps over the horizon.

Again, this is largely a failure of imagination. There is no corresponding list of ways things could go right, or well, or even ambiguously. This is an argument about this worst possible case being more terrible than we can survive. This is not an argument for this worst possible case being the most likely case, or even a fairly likely case. It’s important to remember the difference.

There are times when it’s useful to base your discussion on the worst possible case. But the worst possible case here is already several branches down a large logic tree that may or may not actually exist. It is not actual existential danger. It is a theoretical possibility of an existential danger that may or may not come into play should certain possibilities all coincide.

There are hundreds, thousands, millions of other theoretical possibilities of existential danger that Bostrom is not writing entire books about. A planet could collide with a large asteroid in another solar system and send a huge chunk of itself ricocheting directly towards us on a vector we’re ignoring. That big supervolcano under Yellowstone might get triggered because of an awkward reaction between a solar flare, the Earth’s magnetic pole switching polarity, and a bad bit of shale drilling, and yes, I’m obviously making this stuff up, but that’s kind of the point.

At this point in the story (and it is a story, more on that later, help this is going to be pages and pages) we realize we need to be very clever to keep a vastly terrible superintelligence from doing terrible things to us in devilishly creative ways. We can’t keep it from arising, because Bostrom has already told us that’s impossible. Maybe we can guide it? Persuade it? Subjugate it first? Control it? Keep it in a tiny box? He considers some of these more or less promising. There is a table. He quickly moves on from how to make a baby superintelligence have values to how to decide which values it should have, as that’s where he believes the trouble truly lies. Values are like wishes; they may not be interpreted the way we assume they will be.

In fact, they probably won’t be. Which is where we need to be very clever. As far as I can tell, Bostrom’s Big Answer to getting a superintelligence to be nice to us is something he’s borrowed from Eliezer Yudkowsky3. It’s called Coherent Extrapolated Volition, which will always be referred to as CEV for short and because I mean, really, those big words. I’m going to quote Yudkowsky’s definition as it’s quoted by Bostrom:

“Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together, where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.”

I feel a little bad. It’s obvious that time and effort have been spent getting the wording just right, making sure all the bases are covered, trying to eliminate misunderstandings and misinterpretations. It’s a very lovely wishful statement about ourselves and our future!

But.

This is a religious statement. Despite the claims that this is a way of avoiding moral judgments and definitions, this is a prayer that the coming AI God will give us not what we want in our deeply irrational now, but what we ultimately will want in our best future, that God will be on our side, help us be better than we are. This is praying for God to be Good and Not Evil. This is praying away the Apocalypse.

We shouldn’t be surprised; we’re humans. Where else did you think we’d end up? Not writing religious stories about ourselves, our world, and our futures? This is the sort of thing we do in our sleep. We write stories about the things that go bump in the night, or the things we’re afraid will go bump in the night. We write magical incantations to protect ourselves from the vast, cool intelligences that exist outside our ken, because we have to sleep somehow now that we’ve thought them up. We write stories to tell us that we have some say in our own lives, and in our futures, and in the futures of our children. Sometimes we even dress these stories up as academic books with roots in philosophy and computer science.

At any rate, this is not cleverly and rationally avoiding certain existential danger. In the end, a superintelligent AI as defined by Bostrom is not controllable, is not guaranteed to grow up in the way we want it to, and this CEV is merely a suggestion that it behave itself for our sake. Bit of a shame really. I’d honestly like to see him put some good work into something like AI safety, maybe some acknowledgment that algorithms and learning systems don’t have to be smarter than us or even all that advanced to make a hash of things because of the ways we program our faulty assumptions into them. But instead, this is what we have.

So, no. Nick Bostrom’s Superintelligence isn’t a good book; it doesn’t do what it sets out to accomplish. Bostrom hasn’t given us a warning about a definite existential danger. He also hasn’t given us a way to clearly see or avoid said existential danger. It’s not even a very good story; there is far better science fiction and fantasy and theological work being written every day. Go read some of those.


  1. Superintelligence mostly just means comprehensively and substantially smarter than humans. [return]
  2. I apologize, I’ve been watching the Lost in Space remake. [return]
  3. Of excessive reaction to Roko’s Basilisk fame. [return]

Hugo: A Different Twitter Shortcode

Hugo has a handy little shortcode to embed tweets into a page. It takes the form1:

{{ < tweet [ tweet id ] >}}

So for example, in a Markdown page:

{{ < tweet 763188308662898691 >}}

Produces the standard embedded tweet on the rendered page.

Under the hood, it’s using the Twitter API to provide the embed code via GET statuses/oembed. Which is fine and all, but there are times when you don’t want the default embed style, and want to use some of the options the Twitter API provides. In a project, I wanted to hide the previous-tweet-in-the-thread that Twitter includes by default, using the hide_thread option. (If I want to display threading, I can do it myself with hardcoding and CSS, using styling that’s a bit easier to follow.) The easiest way to turn off the default Twitter thread display was to make my own shortcode that I could call instead of Hugo’s internal one2.
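For the curious, the request the shortcode ends up making looks roughly like this, and the useful part of the JSON that comes back is the html field (the other fields shown are standard oEmbed ones; the values here are placeholders):

https://api.twitter.com/1.1/statuses/oembed.json?dnt=1&hide_thread=1&id=763188308662898691

{
    "url": "https://twitter.com/[user]/status/763188308662898691",
    "author_name": "[user]",
    "html": "<blockquote class=\"twitter-tweet\">…</blockquote>",
    "type": "rich",
    "version": "1.0"
}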

In layouts/shortcodes, I have tweet-single.html, which looks like this:

<div>
  {{/* Fetch the tweet's oEmbed payload with threading hidden and tracking disabled, then emit its html field. */}}
  {{ (getJSON "https://api.twitter.com/1.1/statuses/oembed.json?dnt=1&hide_thread=1&id=" (index .Params 0)).html | safeHTML }}
</div>

And now I can call tweet-single just like I can call tweet:

{{ < tweet-single [ tweet id ] >}}

I, uh, also told Twitter to turn off tracking for that embed via dnt, because I’m a decent and not at all paranoid person.

Now that everything is exactly the way I want it, I should probably upgrade Hugo and see what needs to be fixed.


  1. I’ve added extra spaces to the start of the shortcode in these examples to keep Hugo from trying to run them as actual shortcode calls. You’ll need to remove the space between "{{" and "<". [return]
  2. All of this works until Twitter changes the API, or Hugo changes under my feet, obviously. [return]

Hugo: Section Sorted by Taxonomy

One of the other weird-ish things I needed to be able to do for this site setup was the Projects page over there to the side. It’s the section page for the Project section, and on it I wanted to sort all my project posts, and only my project posts, by some sort of assigned type, rather than listing by date or whatever.

I’m doing this the cheaty way by just using the ‘tags’ taxonomy for blog posts, and using the ‘categories’ taxonomy only for project posts. I didn’t just want to link to the taxonomy list page, because then I would have to fiddle with the page name and title and whatnot to make it what I wanted. This meant that I needed to pull the full “category” taxonomy listing into the “project” section list template.
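To make that concrete, a project post’s front matter carries a categories entry and never a tags one; something like this (the title and category here are invented for illustration):

+++
title = "Some Project Post"
date = 2018-04-02
categories = ["Electronics"]
+++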

In the /layouts/project/project.html template file I have this in the main content <div>:

<section>
    {{/* Walk each term in the "categories" taxonomy; $key is the term name, $value holds its pages. */}}
    {{ range $key, $value := $.Site.Taxonomies.categories }}
    <h2> {{ humanize $key }} </h2>
        <ul>
        {{ range $value.Pages }}
            {{ .Render "li" }}
        {{ end }}
        </ul>
    {{ end }}
</section>
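If I ever start using ‘categories’ on posts outside the project section, the inner loop will need a guard so only project pages show up. Something like this should do it (an untested sketch using Hugo’s where function):

{{ range where $value.Pages "Section" "project" }}
    {{ .Render "li" }}
{{ end }}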

It took me a surprisingly long time to find a bit of sample code to modify for this, so here’s hoping I’ve provided another potential search result for someone trying to figure this out. The ‘humanize’ bit in there is because I am difficult and I like capitalization in my organizational structure.

This is using Hugo version 0.37.1 at the time of writing.

Hugo and Footnotes

Hugo does Markdown footnotes! Excellent. However, Hugo tends to assume you will never have more than one post with footnotes visible on a page, because Pretty Links.1 Awkward if, like me, you like multiple full posts on the homepage and are inordinately fond of footnotes. This is easily fixable! I had to spend some time figuring out how, so I’m putting the details here. (Current as of Hugo version 0.37.)
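For reference, the Markdown footnote syntax that Blackfriday understands looks like this in a post:

Here is a claim that needs a caveat.[^1]

[^1]: The caveat, which gets rendered at the bottom of the post.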

Footnote reference link styles are part of the Blackfriday Markdown engine internal to Hugo. They get adjusted in a separate blackfriday section in your site config file, using the plainIDAnchors setting. To turn off plain ID anchors, and have footnote reference links that reference the post ID as well as the footnote number, this needs to be in config.toml (in the root of your Hugo site directory):

[blackfriday] 
plainIDAnchors = false

If you’re using config.yaml, it’ll be:

blackfriday:
    plainIDAnchors: false
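Either way, the rendered footnotes look the same; the difference is in the generated anchors, roughly like this (the unique ID is derived from the page, so this is illustrative):

<a href="#fn:1">1</a>                    <!-- plainIDAnchors = true, the default -->
<a href="#fn:[long unique ID]:1">1</a>   <!-- plainIDAnchors = false -->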

And then you should be able to put all the footnotes you want wherever you want.


  1. In other words, the default Hugo settings assume you’ll prefer short reference links that point to ‘fn:1’ rather than longer, more defined links that point to ‘fn:[long unique ID]:1’. [return]

Some Thoughts About Hugo

Let’s get one thing straight off the top. I really like Hugo. It’s quite fun conceptually, copying things from Scrivener into a Markdown file is not terrible, and that should get quite a bit easier and more automatic once I can get my hands on Scrivener 3 and get things set up properly. Now that I’ve just about got my site’s template files sorted, that’s all I really need to do.1 Hugo lets me fiddle with basically everything, and set most things up the way I want them. I think it was also probably the easiest way for me to make this type of website. I’m planning on using it for the foreseeable future.

However.

This whole, fairly uncomplicated site was notably more difficult to set up than I anticipated. I have a development background,2 and that helped quite a lot, and my prickly stubbornness got me the rest of the way.3 There are a few reasons why this was difficult.

First, I’m playing with version 0.39 (currently) of an open source tool. This probably means you should take most of my following complaints with a grain of salt, although you should also remember that I’m claiming that this was the easiest tool to use that would do what I needed. That says things about my available options. The nice thing about Hugo is that you can get it as a standalone executable. I didn’t have to install a package management system, or manage dependencies, or install an entire language interpreter. I did have to install git (a command-line version control system) to make installing themes easy. I did have to deal with documentation that was pretty inadequate at times.

I left development over five years ago, so practices and tools and workflows have changed, and now I get pretty grumpy when software assumes I’ll be some sort of developer when I use it. Hugo definitely does that. It’s really powerful, but a lot of that power comes from the fact that any theme’s template files are written in a combination of HTML and Go template inclusions, with maybe some JavaScript if the theme developer was feeling ambitious. And while a lot of settings could be abstracted out of the theme files into the site configuration file, they generally… aren’t. Hugo also doesn’t come with anything useful in the way of a default theme, so you end up needing to get one from the theme site, just about all of which are community provided. I think? It’s not very clear.

Which is fine, but an individual theme isn’t guaranteed to be up to date with your new version of Hugo, and you’ll probably end up wanting to change things a bit, and maybe the template files weren’t quite complete in the first place, and the next thing you know, you have a bunch of empty pages that should be holding Stuff.

Again, this is the static site generator that had the easiest entrance requirements on a Windows machine.

So I really enjoy what I can do with Hugo, and again, I intend to keep using it, but I’m not sure I’d be able to recommend it to that many people who want to set up their own site. The learning curve is waaaay steeper than the blogposts by adoring developers who used it to put up some project notes and a two-post blog about their Hugo Experience would have you believe. This is extra true for people running things on a Windows desktop.

A lot of this could be fixed with a really solid, useful template that abstracts a bunch of settings out to the site configuration file so users don’t have to dig into the template files until they’re ready, and that is also kept up to date. Which may already exist! But I was having a lot of trouble trying to find it. A lot of problems could also be fixed with clearer documentation for Hugo itself, particularly by describing how to put all the various templates and functions and variables together into a cohesive whole. It’s really very by-developers-for-developers currently, and I don’t think it has to be. There’s nothing about constructing a static website that demands that only professional software developers be able to do it, and I’d really like to see some more widely usable options here that don’t rely on software-as-a-service or drag-and-drop hand-holding.
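To be clear about what “abstract out settings” means here: Hugo templates can read arbitrary values from the site configuration via .Site.Params, so a theme can expose options without making users edit HTML. A minimal sketch, with an invented parameter, in config.toml:

[params]
subtitle = "opinions differ"

And in a theme template:

<p>{{ .Site.Params.subtitle }}</p>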


  1. Well, I also need to get a script or whatever sorted to SFTP the constructed site onto my webserver, but that should be pretty minor. [return]
  2. C++ embedded, nothing even remotely frontend, but I can look at unfamiliar code and not run screaming. More importantly, I know when to expect there to be code. [return]
  3. Ask my husband how often I turned down his offers to help because “I need to learn how to make this work myself, dammit.” Apparently I felt very strongly about how it was an Important Life Skill. [return]