Live from the Singularity Summit 2007

I’m currently at the Singularity Summit 2007 at San Francisco’s Palace of Fine Arts. I’ll post updates during the day.

I will eventually (swear to the Flying Spaghetti Monster) put these into a more coherent form. But here are some things that stuck in my head as a relative outsider to "futurism" per se, though someone fairly familiar with AI and a longtime reader of the SL4 list. Some of these are my own ideas, triggered by something mostly unrelated.

Day One


Day Two

Peter Norvig
His position is more along the lines of building the subcomponents, then trying to assemble them. He considers the key components to be probabilistic first-order logic and hierarchical representation and problem solving; I'm not too clear on what he means by the second. He talks about how whether you see an acceleration depends heavily on how you interpret the data: some data sets show no such acceleration, and it is unclear whether a perceived acceleration or recentness-of-important-events is real or caused by observer bias. My example: what we now see as different phyla are descended from what were once merely closely related species. He is asked about a smooth transition to AGI that nobody notices. He says it is possible, but that it will be noticed afterwards. I think it's a mostly pointless question, because a human-level AGI will be quickly followed by a post-human AGI, and then there will be no difficulty with the question. Annoying "question" that starts "it seems to me". Note to world: "it seems to me..." is not a question-word. But the question is about training technicians versus scientists and whether the university system fails at this. Some smart ass finds a contradiction in his points.
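
For anyone who hasn't run into the term, here's my own toy sketch of what "probabilistic first-order logic" buys you over ordinary first-order logic. The predicates and numbers are mine, invented for illustration, not anything Norvig presented:

```python
# A minimal sketch of probabilistic first-order logic: universally
# quantified rules carry probabilities instead of being hard
# implications. Predicates and numbers here are invented.

P_FLIES_GIVEN_BIRD = 0.9       # soft rule: forall x, Bird(x) -> Flies(x)
P_BIRD_GIVEN_HAS_WINGS = 0.8   # soft rule: forall x, HasWings(x) -> Bird(x)

def p_flies(p_has_wings: float) -> float:
    """Chain the soft rules (ignoring non-bird fliers for simplicity).

    With hard first-order rules, one penguin makes the knowledge base
    inconsistent; with weighted rules, a penguin is merely improbable.
    """
    p_bird = P_BIRD_GIVEN_HAS_WINGS * p_has_wings
    return P_FLIES_GIVEN_BIRD * p_bird

print(p_flies(1.0))  # ~0.72 for an individual known to have wings
```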
J. Storrs Hall: Asimov's Laws of Robotics -- Revised
I recognize the name as a frequent writer on SL4; interesting to put a face to him now. He has a book out on machine ethics. Analogy: Hammurabi trying to hand down a code to prevent Enron. Thesis: ethics considered as a behavioral ESS (evolutionarily stable strategy) converges to something like friendly behavior. This seems to me to omit the issue of how such machines will deal with us. It also seems to strongly conflict with Omohundro's point, but has the advantage of taking into account situations like Hammurabi's.
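
For reference, the ESS condition Hall's thesis leans on is easy to state in code. This toy payoff matrix is mine, not his; it assumes a repeated interaction where exploiters are recognized and gain less than cooperation pays:

```python
# Toy check of the ESS condition (Maynard Smith): strategy s is an ESS
# if no rare mutant m can invade a population playing s. Payoff numbers
# are invented for illustration only.

PAYOFF = {  # PAYOFF[(me, opponent)] = my payoff
    ("friendly", "friendly"): 3.0,
    ("friendly", "exploit"):  1.0,
    ("exploit",  "friendly"): 2.0,  # exploitation pays less than mutual cooperation
    ("exploit",  "exploit"):  0.5,
}

def is_ess(s: str, strategies: list[str]) -> bool:
    """s is an ESS if, for every mutant m: E(s,s) > E(m,s), or
    E(s,s) == E(m,s) and E(s,m) > E(m,m)."""
    for m in strategies:
        if m == s:
            continue
        if PAYOFF[(s, s)] > PAYOFF[(m, s)]:
            continue
        if PAYOFF[(s, s)] == PAYOFF[(m, s)] and PAYOFF[(s, m)] > PAYOFF[(m, m)]:
            continue
        return False
    return True

print(is_ess("friendly", ["friendly", "exploit"]))  # True under these payoffs
```

Whether friendly behavior toward *us* falls out of this is exactly the part I don't see: the condition only says friendliness resists invasion among the machines playing the game.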
Peter Thiel
If there is a singularity, then either the world goes to shit or there is a sustained boom. No sense in investing for the former. I think I've written before on this topic of ignoring extreme bad scenarios. How to invest for the singularity? Thiel seems to be handwaving over the issue that just because there may be massive growth, it doesn't mean that pets.com will have any part of it. He says Warren Buffett is moving toward investing in catastrophe insurance.
Michael Lindsey: XPrize
They are trying to formulate an XPrize in the area of educational software. Another long question-and-answer session. And by "question", I mean whiny hippie "why aren't you adopting my idiot pet method of looking at the world" soapboxing. Except the e-learning person, who points out exactly what I was thinking: this is not a well-defined prize like "2 space launches in 10 days".
Christine Peterson: Open Source Physical Security
I think she was walking a line between wanting to actually go into detail about how this has been thought out, at the risk of boring and boggling people, versus playing to the "Isn't open source great?" cheering section. She points out some good things about Open Source, but I'm skeptical of the claim that Open Source is good at debating things like physical security. The defining feature of physical security, or at least of the current usual example of airport security, is that the choice made by one person affects many other people. As I argue in [my previous post](/blog/2007/03/freedom-not-democracy-1.html), the value of open source comes from the market: the freedom to choose the solution that best fits the needs of the person doing the choosing. I'm not even sure I'm a fan of the principle of openness in physical security, at least not to the degree many people take it. Remember, the idea of security through obscurity was such a difficult idea to overcome in computer security precisely because it is such a useful idea in physical security. I am a fan of her libertarian decentralization ideas, but that isn't specific to anything related to the conference.
James Hughes
He made a good point in at least mentioning "millennialist cognitive biases", but I think his political vision is hopelessly obsolete. Data is easier to hide, easier to move, and vastly more potentially useful than guns, yet somehow the same political system that cannot prevent the misuse of guns, even when it does restrict legitimate use, will magically gain the ability to shackle AGI with regulations.
Eliezer
He also mentions the editing of value functions. He seems to want something more similar to Hall's model than to Omohundro's. He uses the terminology "terminal values" and "instrumental values". Possibly these are easier to understand when spoken; I think I prefer "ends values" and "means values" in writing. Someone, I'm not sure who, asked the panel of the three above whether AGI will impose its morality on us. Eliezer cites the importance of valuing freedom as an intrinsic / terminal / ends value. Hughes waves his hands and says that somehow multi-national governmental bodies are going to be better at regulating such things than designing for friendliness would be. I'll be sure to feel happy and safe when Iran is chair of the UN Commission on Permissible Uses of AGI.
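
Since the terminology trips people up, here is my own toy sketch of the distinction; the value names and numbers are invented for illustration and are not Eliezer's:

```python
# Terminal ("ends") values are scored directly, for their own sake.
# Instrumental ("means") values have no score of their own: they are
# worth only what they are expected to contribute to some terminal value.
# All names and numbers below are invented.

TERMINAL_VALUES = {"human_welfare": 1.0}  # valued intrinsically

# instrumental value -> (terminal value served, estimated contribution)
INSTRUMENTAL_VALUES = {
    "acquire_resources": ("human_welfare", 0.4),
    "self_preservation": ("human_welfare", 0.6),
}

def worth(value: str) -> float:
    """An instrumental value inherits worth from the end it serves;
    change the terminal values and every instrumental worth changes too."""
    if value in TERMINAL_VALUES:
        return TERMINAL_VALUES[value]
    end, contribution = INSTRUMENTAL_VALUES[value]
    return contribution * worth(end)

print(worth("self_preservation"))  # 0.6: derived, not intrinsic
```

The point of Eliezer's freedom example, as I read it, is that freedom has to go in the first dictionary, not the second.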
"All Watched Over by Machines of Loving Grace"
I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.

Text from the Richard Brautigan Bibliography and Archive. Brautigan originally published the poem under a vague free-for-non-commercial-use license.