Live from the Singularity Summit 2007
Posted 2007-09-09. I’m currently at the Singularity Summit 2007 at San Francisco’s Palace of Fine Arts. I’ll post updates during the day.
I will eventually (swear to the Flying Spaghetti Monster) put these into a more coherent form. But here are some things that stuck in my head as a relative outsider to “futurism” per se, but someone fairly familiar with AI and a longtime reader of the SL4 list. Some of these are my own ideas, triggered by something mostly unrelated.
Day One
- Morality is an attempt to avoid guilt. Guilt is an emotional response based on projecting the self onto others. We model other people, but we cannot separate the emotions we experience while simulating them from our own.
- The evolutionary origin of the brain, and the physical substrate it runs on, place constraints on how careful we have to be in mapping, measuring, and simulating it. The onus is on Penrose to explain how a dirty system like the brain could evolve to use some mystical microtubule computation method.
- Interesting phrase from Wendell on robot rights: "are they conscious in a way that we cannot prove they are not?" I like people who are this precise about exactly what they are trying to communicate.
- I think my idea of the doctrine of subjective immortality, that death only has negative value through its perception, means that we can freely turn such machines off, since in principle they can always be restarted.
- Motivated by Sam Adams's response to the rights question. Would a computer's "recognition of the other" be hindered if the "other" did not recognize it? That is, is the refusal to recognize robots as our moral equals a potential problem in the early childhood development of something like Joshua Blue?
- Does Kismet use the same subsystems for generating its own facial responses as it does for analyzing those of others?
- Marcos says the system will scale to 100 billion neurons using approximately 500 TB of RAM. I'm assuming that's real time. Rough calculation: 500 TB = 4 GB/machine × 125,000 machines, and at $1000/machine that's $125,000,000 (see the back-of-envelope sketch after this list).
- Jamais Cascio can give the "in a world where ..." guy a run for his money. His question "if we need to get more people involved, how do we get their attention?" makes me feel bad for not encouraging Sam to come.
- Steven Omohundro's presentation was fascinating to me, right up my alley with the game theory. I will be reading his paper, then starting on the references. His thesis is sort of like the Coase theorem for self-improving systems: a self-improving system will approach ideal economic rationality. An AGI will protect its utility function at all costs, while most other work concentrates on the belief function.
- Peter Voss is a little spooky. I find it interesting that he just automatically assumes that life, continued existence, is of value, and dismisses the need to justify this. I've heard that he is a hard-core Objectivist, and this position is consistent with Rand's. Voss says AGI could come as soon as five years from now.
- What in the world could "treaty verification" as a field for narrow AI mean?
- Ben Goertzel wants to embody AGI as a baby to be used as a fashion accessory in Second Life, so that it gets lots of interaction with humans. I'm looking forward to more Kraftwerk-scored demo reels. They may be ready to demo Novamente in Second Life by the Virtual Worlds conference in San Jose on October 10th. I should get his book The Hidden Pattern. I have metacog.org in my notes; I think it's background material. Also, a book called Probabilistic Logic Networks is coming out from Springer Verlag soon. He mentions releasing some of the Novamente framework as open source to get narrow AI implementers to start thinking in a more AGI way. I wonder if that could allow the development of standardized interfaces, allowing, for example, more or less biologically valid models of different components of mind for different purposes. In particular, you could cheat and feed predigested input to your AGI to get it interacting with a simulation well before you can start embodying it.
- Paul Saffo read the poem "All Watched Over by Machines of Loving Grace" by Richard Brautigan (the full text is at the end of this post). He encouraged us to spread the word of what's going on to those who are not geeks.
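Here is the back-of-envelope calculation from the Marcos item above, written out as a quick sketch. The 4 GB and $1000 per-machine figures are just the rough commodity-hardware numbers from my own note, not anything from the presentation itself.

```python
# Back-of-envelope check of the "100 billion neurons on ~500 TB of RAM" claim.
# The 4 GB and $1000 per-machine figures are rough commodity-hardware
# assumptions from my note above, not numbers from the talk.
total_ram_tb = 500
ram_per_machine_gb = 4
cost_per_machine_usd = 1000

machines = total_ram_tb * 1000 // ram_per_machine_gb   # 500 TB / 4 GB = 125,000 machines
total_cost_usd = machines * cost_per_machine_usd        # 125,000 * $1000 = $125,000,000

print(f"{machines:,} machines, about ${total_cost_usd:,}")
# -> 125,000 machines, about $125,000,000
```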
Day Two
- Peter Norvig
    - His position is more along the lines of building the subcomponents, then trying to assemble them. He considers the key components to be probabilistic first-order logic and hierarchical representation and problem solving; I'm not too clear on what he means by the second. He talks about how whether you see an acceleration depends heavily on how you interpret the data; some data sets show no such acceleration, and it is unclear whether a perceived acceleration, or recentness of important events, is real or caused by observer bias. My example: what we now see as different phyla are descended from what were once closely related species. He is asked about a smooth transition in which nobody notices the arrival of an AGI. He says it is possible, but it will be noticed afterwards. I think it's a mostly pointless question, because a human-level AGI will quickly be followed by a post-human AGI, and then there will be no difficulty with the question. An annoying "question" starts with "it seems to me". Note to world: "it seems to me..." is not a question word. But the question is about training technicians versus scientists and whether the university system is failing. Some smartass finds a contradiction in his points.
- J. Storrs Hall: Asimov's Laws of Robotics -- Revised
    - I recognize the name as a frequent writer on SL4; interesting to put a face to him now. He has a book out on machine ethics. Analogy: Hammurabi trying to hand down a code to prevent Enron. Thesis: ethics considered as a behavioral ESS (evolutionarily stable strategy) converges to something like friendly behavior. That seems to me to omit the issue of how such systems will deal with us. It seems to strongly conflict with Omohundro's point, but it has the advantage of taking into account situations like the Hammurabi one.
- Peter Thiel
    - If there is a singularity, then either the world goes to shit or there is a sustained boom, and there is no sense in investing for the former. I think I've written before on this topic of ignoring extreme bad situations (a toy expected-value sketch of the argument follows the Day Two notes). How do you invest for the singularity? Thiel seems to be handwaving over the issue that just because there may be massive growth, it doesn't mean that pets.com will capture any part of it. He says Warren Buffett is moving toward investing in catastrophe insurance.
- Michael Lindsey: XPrize
    - They are trying to formulate an XPrize in the area of educational software. Another long question-and-answer session, and by "question" I mean whiny hippie "why aren't you adopting my idiot pet method of looking at the world" soapboxing. Except for the e-learning person, who points out exactly what I was thinking: this is not a well-defined prize like "two space launches in ten days".
- Christine Peterson: Open Source Physical Security
    - I think she was walking the line between wanting to actually go into detail about how this has been thought out, at the risk of boring and boggling people, versus playing to the "Isn't open source great?" cheering section. She points out some good things about open source, but I'm skeptical of the claim that open source is good at debating things like physical security. The defining feature of physical security, or at least of the current usual example of airport security, is that the choice made by one person affects many other people. As I argue in [my previous post](/blog/2007/03/freedom-not-democracy-1.html), the value of open source comes from the market: the freedom to choose the solution that best fits the needs of the person doing the choosing. I'm not even sure I'm a fan of the principle of openness in physical security, at least not to the degree many people take it. Remember, the idea of security through obscurity was such a difficult idea to overcome in computer security precisely because it is such a useful idea in physical security. I am a fan of her libertarian decentralization ideas, but that isn't specific to anything related to the conference.
- James Hughes
    - He made a good point in at least mentioning the "millennialist cognitive biases", but I think his political vision is hopelessly obsolete. Data is easier to hide, easier to move, and vastly more potentially useful than guns, but somehow the same political system that cannot prevent the misuse of guns, even when it does restrict legitimate use, will magically gain the ability to shackle AGI with regulations.
- Eliezer
    - He also mentions the editing of value functions. He seems to want something more similar to Hall's model than to Omohundro's. He uses the terminology "terminal values" and "instrumental values"; possibly these are easier to understand when spoken, but I think I prefer "ends values" and "means values" in writing. Someone, I'm not sure who, asked the panel of the three above whether AGI will impose its morality on us. Eliezer cites the importance of valuing freedom as an intrinsic / terminal / ends value. Hughes waves his hands and says that somehow multi-national governmental bodies are going to be better at regulating such things than designing for friendliness would be. I'll be sure to feel happy and safe when Iran chairs the UN Commission on Permissible Uses of AGI.
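To make the Thiel investing point above concrete, here is a toy expected-value sketch. It is my own illustration with made-up numbers, not anything from his talk: if every asset pays roughly nothing in the catastrophe scenario, that branch contributes the same (zero) to every expected payoff, so the ranking of investments is decided entirely by the boom scenario, and you might as well invest as if the boom is coming.

```python
# Toy illustration of "no sense investing for the catastrophe" (made-up numbers).
# If payoffs in the doom scenario are ~0 for every asset, the doom branch cannot
# change the ranking of assets, whatever probability you assign to it.
p_boom = 0.5                 # hypothetical probability of a sustained boom
p_doom = 1 - p_boom          # hypothetical probability the world goes to shit

payoff_if_boom = {"growth stocks": 3.0, "cash": 1.0}   # made-up multiples on $1
payoff_if_doom = {"growth stocks": 0.0, "cash": 0.0}   # money is worthless either way

for asset in payoff_if_boom:
    expected = p_boom * payoff_if_boom[asset] + p_doom * payoff_if_doom[asset]
    print(f"{asset}: expected payoff {expected:.2f}")
# The ordering comes entirely from the boom column, so plan for the boom.
```

And since Saffo read it aloud on day one, here is the Brautigan poem in full: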
I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.
I like to think
(right now please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.
I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.
Text from the Richard Brautigan Bibliography and Archive. Brautigan originally published it under a vague free-for-non-commercial-use license.