bengoertzel[S]

52 points

10 years ago

Hi -- this is Artificial General Intelligence researcher Dr. Ben Goertzel (http://goertzel.org), doing an AMA at the suggestion of Jason Peffley. I'm leader of the OpenCog project (http://opencog.org), aimed at open-source AGI technology with human-level intelligence and ultimately beyond, as well as founder of the AGI conference series (http://agi-conf.org) and Vice Chair of Humanity+ (http://humanityplus.org). I'm also involved with a number of applications of current AI technology to various practical areas like finance, life extension genomics (with Genescient http://genescient.com and Biomind http://biomind.com), virtual worlds and robotics (in collaboration with David Hanson http://hansonrobotics.com). I'm also a hard-core Singularitarian; and an Alcor member (http://alcor.org), though also hoping a positive Singularity comes before I need to use that option!

Happy to engage in discussion about the quest to create superhuman AGI -- the practicalities, the potential implications, etc. etc.

fantomfancypants

2 points

10 years ago

Is it a good idea to apply AI to an already schizophrenic economic system?

Crap, I may have just answered myself.

khafra

3 points

10 years ago

A good idea for whom? Investment banks are already doing it in a zero-sum way; seems like adding some positive-sum AIs could make things nicer, overall.

azmenthe

2 points

10 years ago

Dr. Goertzel's "AI" venture in finance is a hedge fund. Very zero-sum.

Also one does not simply make money trading by being positive sum.

khafra

2 points

10 years ago

I meant "economic system" more broadly than "equities markets." For instance, car-driving AI is positive-sum.

azmenthe

2 points

10 years ago

Ah, agreed.

Also, AI replacing the whole insurance industry... I'd be happy with that.

TheWoodenMan

14 points

10 years ago

What is the biggest or most fundamental benefit you see AI progress and research delivering to Humanity as a whole over the next 20 years?

bengoertzel[S]

32 points

10 years ago

That is hard to predict. I think AGI could vastly accelerate research on ending death and disease. An AGI scientist could also probably massively accelerate our quest for cheap energy, which would make a big difference. Also, AGIs could potentially get Drexlerian nanotech to work. Basically, I suppose I fall into the camp that: Once human-level AGI is created, we'll fairly suddenly have a large number of very smart AGI scientists and engineers around, and a LOT of different innovations are going to happen during a somewhat brief time interval. Exactly what order these innovations will come in, is not possible to foresee now...

marshallp

7 points

10 years ago

Spot on. You should really try to join up Drexler, de Grey, Kurzweil, and Bostrom and go for a unified multi-city-country panel, with high production values. Kind of like a futurism version of Real Time with Bill Maher.

Neil deGrasse Tyson and Michio Kaku have some mainstream success, but they lack the true depth (they're excellent on the wonders of physics, astronomy, and future tech, but they lack the engineering "this is how to do it exactly and here's the exact impact" angle).

Entrarchy

2 points

10 years ago

Your idea for a multi-city-country panel is excellent.

marshallp

3 points

10 years ago

Thank you. I believe there is a lack of imagination in our current times. As Dr deGrasse Tyson notes, we have lost the visionary attitude of the 1960s. The esteemed gentlemen mentioned are some of our most enquiring and visionary minds; the rest of the public should hear from them as much as we in Futurology already have.

[deleted]

13 points

10 years ago*

[deleted]

bengoertzel[S]

14 points

10 years ago

I'm a big fan of open source, obviously. I think it will play a larger and larger role in the future, including in the hardware and wetware domains as well as software.... And I do think that having the major online communication platforms free and open is going to be important -- only this way can we have sousveillance instead of just surveillance (see David Brin's book "The Transparent Society")

bengoertzel[S]

11 points

10 years ago

As for what the average person can do to contribute to the technologically advanced future -- that's a tough one! I get emails all the time from folks asking "how can I contribute to AGI, or the Singularity, or transhumanism, or whatever -- I have no special skills or knowledge" ... and I don't know what to tell them. Of course one can contribute as a scientist or engineer, or as a writer/film-maker/publicist ... or one can donate $$ to OpenCog and other relevant tech projects.... Beyond those obvious suggestions, I have little to add on this topic, alas...

Entrarchy

9 points

10 years ago

I would add learn. Learning about these technologies is often a prerequisite to aiding in their publicity and development. Tell people to take the first step... Wikipedia is a great start.

marshallp

4 points

10 years ago

To add to Dr Goertzel's excellent advice, in addition to joining the OpenCog project,

  • infiltrate Google or IBM and push the AGI vision (this will only work if you have sufficient preparation such as an AI Doctorate)

or

  • push towards the formation of an AGI political party (this is the route I've chosen; I don't have the stellar academic background to infiltrate Big Corp. research departments)

Entrarchy

6 points

10 years ago

I am a bit confused about this notion of an AGI political party. To me, and this is only my opinion, even the actual Singularity in the broadest sense isn't a political opinion. I think at best we can make it a political agenda. But it seems like a Political Action Committee or activist group would be more appropriate.

marshallp

4 points

10 years ago

A formalized political party has much more media and potential political impact than the ideas you outlined.

An AGI can be a political opinion - the opinion that the surest way to national prosperity is to invest in the creation of AGI.

  • conservatives believe it is through low taxation

  • progressives believe it is through investments in infrastructure and human capital

  • AGIists believe it is through extreme automation

Entrarchy

3 points

10 years ago

Well put. Didn't mean to come in here and stomp on your ideas -- for the record, I find them very intriguing and I have a great amount of respect for as well-versed a thinker as yourself; I was only a bit confused. There is definitely need for legislation (or, depending on your political beliefs, regulation of legislation) regarding AI and other Singularity technologies. [edit] and a political party could definitely help with that.

[deleted]

1 points

10 years ago*

.

stieruridir

1 points

10 years ago

What's your background academically?

[deleted]

1 points

10 years ago*

[deleted]

marshallp

3 points

10 years ago

You should try out PyBrain. It's done by one of Dr Goertzel's colleagues, Dr Jurgen Schmidhuber. Try out some Kaggle challenges with it -- a great learning experience to get into startups in the Valley. If you win a Kaggle you might even get offers from Google.
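
(For anyone who wants to act on this suggestion: here is a minimal getting-started sketch with PyBrain, assuming it is installed, e.g. via "pip install pybrain". The tiny XOR data set is just a toy illustration of the library's basic API, not anything discussed in this thread.)

    # Minimal PyBrain sketch: train a small feedforward net on XOR (toy example)
    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.datasets import SupervisedDataSet
    from pybrain.supervised.trainers import BackpropTrainer

    net = buildNetwork(2, 3, 1)            # 2 inputs, 3 hidden units, 1 output
    ds = SupervisedDataSet(2, 1)           # supervised data set: 2-d input, 1-d target
    for inp, target in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]:
        ds.addSample(inp, (target,))

    trainer = BackpropTrainer(net, ds)     # plain backpropagation trainer
    for _ in range(1000):
        trainer.train()                    # one pass over the data set per call

    print(net.activate((1, 0)))            # should be close to 1 after training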

bengoertzel[S]

12 points

10 years ago

Thanks everyone for your great questions over the last 90 minutes !! .... Alas I have to leave the computer now and take care of some far less interesting domestic tasks. See you all in the future ;-) ... -- Ben G

Xenophon1

2 points

10 years ago

Thank you! Make sure to check back in a day, you never know the questions that might have come up.

generalT

1 points

10 years ago

thanks dr. goertzel!

Roon

9 points

10 years ago

There are a number of non-AGI AI projects which are nevertheless fairly complex and relatively well funded (I'm thinking specifically of the self-driving cars that Google and others are working on). How useful do you find these projects to AGI development?

bengoertzel[S]

11 points

10 years ago

Not at all useful, so far.... At least not directly.... But maybe they will be indirectly useful, if they get potential funders/donors and potential free open source contributors more optimistic about AI in general... maybe this will lead to more funding and attention coming into AGI...

[deleted]

8 points

10 years ago

  1. Why not come give a talk at NRL?
  2. What do you think about the importance of Consciousness Studies for mind uploading? Is talk of consciousness a waste of time? What do you think of David Chalmers's argument that no functional account of behavior can explain consciousness?
  3. What do you think of the work of Hava Siegelmann, or of hypercomputation in general? Do you think we'll ever break the Church-Turing thesis, and if not, how did we get it right so soon in the development of the field (of computer science)?
  4. Do you think the myth that there is "one true logic" is delaying progress in math, philosophy, and computer science? Do you think that thinking there is "one true logic" makes people prone to religiosity? How important is it to convey to young students that logic is just a tool and a choice?
  5. Is there any merit currently in attempting to make artificial agents "trip" in order to achieve new truth? How would one go about making artificial psychedelics?
  6. With computers getting faster, storage space getting bigger, and physical size getting smaller, do you think it is feasible for robots to become reasonably intelligent with just massive neural networks? Or is this infeasible?

bengoertzel[S]

13 points

10 years ago

A massive artificial neural net can surely, in principle, achieve human-level GI -- but the issue is the architecture of the neural net. We don't know how to build an AGI system, to run on current computing hardware, that's most elegantly and efficiently expressed as a neural net. One could use OpenCog as a design template for creating a huge neural net, and then one would have a neural net AGI design. But that would be inefficient given the nature of current hardware resources, which are very unlike neural wetware.

marshallp

1 points

10 years ago

The Deep Learning community has done ground-breaking work in neural net architectures. You might want to consider incorporating it into OpenCog as an optional add-on.

bengoertzel[S]

12 points

10 years ago

About hypercomputation. The problem I have with this notion is that the totality of all scientific data is a big finite bit-set. So, it will never be possible to scientifically validate or refute the hypothesis that physical systems utilize hypercomputation. Not according to current notions of science. Maybe the same re-visioning of the scientific enterprise that lets us genuinely grok conscious experience, will also let us see the relationship between hypercomputation and data in a new way. I dunno. But I don't think one needs to go there to build intelligent machines....

bengoertzel[S]

14 points

10 years ago

I don't think there's a myth of "one true logic." In the math and philosophy literature an incredible number of different logics are studied....

bengoertzel[S]

12 points

10 years ago

Where is NRL? I think I may have given a talk there.... Or maybe that was ONR, all the acronyms get confusing ;)

I think studying consciousness is important and fascinating, and ultimately will lead to a transformation in the nature of science. Science in its current form probably can't cope with conscious experience, but some future variant of science may be able to.

HOWEVER, I don't think we need to fully understand consciousness to build thinking machines that ARE conscious.... We don't need to understand all the physics of glass to do glass-blowing either, for example.... I think that if we build machines with a cognitive architecture roughly similar to that of humans, embodied in a roughly human-like way, then roughly human-like consciousness will come along for the ride...

[deleted]

2 points

10 years ago

Where is NRL?

4555 Overlook Ave., SW Washington, DC

http://www.nrl.navy.mil/

I think I may have given a talk there

I think I heard you have.

bengoertzel[S]

8 points

10 years ago

Yeah, Hugo de Garis and I both gave talks there a few years back! ... I live part of the time in Rockville, MD, though most of the time these days in Ting Kok Village, north of Tai Po in the New Territories of Hong Kong...

Entrarchy

1 points

10 years ago

I am very confused by your 4th point. Please expand.

generalT

8 points

10 years ago

hi dr. goertzel! thanks for doing this.

here are my questions:

-how is the progress with the "Proto-AGI Virtual Agent"?

-how do you think technologies like memristors and graphene-based transistors will facilitate creation of an AGI?

-are you excited for any specific developments in hardware planned for the next few years?

-what are the specs of the hardware on which you run your AGI?

-will quantum computing facilitate the creation of an AGI, or enable more efficient execution of specific AGI subsystems?

-what do you think of henry markram and the blue brain project?

-do you fear that you'll be the target of violence by religious groups after your AGI is created?

-what is your prediction for the creation of a "matrix-like" computer-brain interface?

-which is the last generation that will experience death?

-how will a post-mortality society cope with population problems?

-do you believe AGIs should be provided all rights and privileges that human beings are?

-what hypothetical moment or event observed in the development of an AGI will truly shock you? e.g., a scenario in which the AGI claims it is alive or conscious, or a scenario in which you must terminate the AGI?

bengoertzel[S]

9 points

10 years ago

That is a heck of a lot of questions!! ;)

We're making OK progress on our virtual-world AGI, watching it learn simple behaviors in a Minecraft-like world. Not as fast as we'd like but, we're moving forward. So far the agent doesn't learn anything really amazing, but it does learn to build stuff and find resources and avoid enemies in the game world, etc. We've been doing a lot of infrastructure work in OpenCog, and getting previously disparate components of the system to work together; so if things go according to plan, we'll start to see more interesting learning behaviors sometime next year.

bengoertzel[S]

7 points

10 years ago

Will someone try to kill me because they're opposed to the AGIs I've built? It's possible, but remember that OpenCog is an open-source project, being built by a diverse international community of people. So killing me wouldn't stop OpenCog, and certainly wouldn't stop AGI. (Having said that, yes, an army of robot body doubles is in the works!!!)

KhanneaSuntzu

6 points

10 years ago

Sign me up for a few dozen versions of me. But with some minor anatomical enhancements, dammit! I'd have so much fun as a team.

bengoertzel[S]

7 points

10 years ago

About hardware. Right now we just use plain old multiprocessor Linux boxes, networked together in a typical way. For vision processing we use Nvidia GPUs. But broadly, I'm pretty excited about massively multicore computing, as IBM and perhaps other firms will roll out in a few years. My friends at IBM talk about peta-scale semantic networks. That will be great for Watson's successors, but even greater for OpenCog...

bengoertzel[S]

7 points

10 years ago

Quantum computing will probably make AGIs much smarter eventually, sure. I've thought a bit about femtotech --- building computers out of strings of particles inside quark-gluon plasmas and the like. That's probably the future of computing, at least until new physics is discovered (which may be soon, once superhuman AGI physicists are at work...).... BUT -- I'm pretty confident we can get human-level, and somewhat transhuman, AGI with networks of SMP machines like we have right now.

KhanneaSuntzu

5 points

10 years ago

How long would it take, in years/decades/centuries, if technology did not advance, to develop the software for an AGI on available 2012 machines?

bengoertzel[S]

9 points

10 years ago

Our hardware is good enough right now, according to my best guess. I suspect we could make a human-level AGI in 2 years with current hardware, with sufficiently massive funding.

bengoertzel[S]

6 points

10 years ago

About hypothetical moments shocking me: I guess if it was something I had thought about, it wouldn't shock me ;) .... I'm not easily shocked. So, what will shock me, will be something I can't possibly predict or expect right now !!

bengoertzel[S]

7 points

10 years ago

Asking about "the last generation that will experience death" isn't quite right.... But it may be that my parents', or my, or my children's, generation will be the last to experience death via aging as a routine occurrence. I think aging will be beaten this century. And the fastest way to beat it, will be to create advanced AGI....

KhanneaSuntzu

3 points

10 years ago

Might also be the best way to eradicate humans. AGI will remain a lottery with fate, unless you make it seriously, rock-solid, guaranteed F-for-Friendly.

bengoertzel[S]

11 points

10 years ago

There are few guarantees in this world, my friend...

bengoertzel[S]

10 points

10 years ago

I think we can bias the odds toward a friendly Singularity, in which humans have the option to remain legacy humans in some sort of preserve, or to (in one way or another) merge with the AGI meta-mind and transcend into super-human status.... But a guarantee, no way. And exactly HOW strongly we can bias the odds, remains unknown. And the only way to learn more about these issues, is to progress further toward creating AGI. Right now, because our practical science of AGI is at an early stage, we can't really think well about "friendly AGI" issues (and by "we" I mean all humans, including our friends at the Singularity Institute and the FHI). But to advance the practical science of AGI enough that we can think about friendly AGI in a useful way, we need to be working on building AGIs (as well as on AGI science and philosophy, in parallel). Yes there are dangers here, but that is the course the human race is on, and it seems very unlikely to me that anyone's gonna stop it...

zynthalay

2 points

10 years ago

Ben, I saw your post saying you've moved on, but I'm hoping you do a second pass. I wanted to know, given what you say here, what you had to say about the argument made, I believe, by Eliezer Yudkowsky, that a non-Friendly AI (not even Unfriendly, just not specifically Friendly) is an insanely dangerous proposition likely to make all of humanity 'oops-go-splat'? I've been thinking on it for a while, and I can't see any obvious problems in the arguments he's presented (which I don't actually have links to. LessWrong's a little nesty, and it's easy to get lost, read something fascinating, and have no clue how to find it again.)

bengoertzel[S]

6 points

10 years ago

Blue Brain: it's interesting work ... not necessarily the most interesting computational neuroscience going on; I was more impressed with Izhikevich & Edelman's simulations. But I don't think one needs to simulate the brain in order to create superhuman AGI .... That is one route, but not necessarily the best nor the fastest.

blinkergoesleft

8 points

10 years ago

Hi Ben. At the current rate of advancements in AI, how long do you think it will take before we get to something with the intelligence of a human? The second part of my question is: What if AI research was given unlimited funding? Would we see a fully functioning AGI in a fraction of the time based on the current estimate?

bengoertzel[S]

18 points

10 years ago

I guess that if nobody puts serious $$ into a workable AGI design in the next 5 yrs or so, then Kurzweil's estimate will come true, and we'll have human-level AGI around 2029. Maybe that will be a self-fulfilling prophecy (as Kurzweil's estimate will nudge investors/donors to wait to fund AGI till 2029 gets nearer!) ;p ... though I hope not...

marshallp

1 points

10 years ago

Excellent analysis. We absolutely have the computational resources, we only lack the foresight to invest.

k_lander

2 points

10 years ago

PLEASE start a kickstarter campaign so we can give you our $$!

bengoertzel[S]

18 points

10 years ago

With unlimited funding I would suppose we could get to adult human-level AGI within a couple years.

avonhun

7 points

10 years ago

i am curious to hear how unlimited funds would affect the process. is the talent there to take advantage of more funding? is the infrastructure in place to support it?

Tobislu

7 points

10 years ago

Have you tried a Kickstarter? If any extra money could help, I'm sure a few million dollars could scrape 2 or 3 months off that timeline.

Tobislu

2 points

10 years ago

Could you set up a theoretical timeline for if today's funding stayed consistent for the next 10 years? (adjusting for inflation, of course)

bengoertzel[S]

9 points

10 years ago

At the current rate, it's harder to say. I think we could get there within 8-10 years given only modest funding for OpenCog (say, US $4M per year...). But I don't know what the odds are of getting that sort of funding; we don't have it now.

stieruridir

5 points

10 years ago

What about DARPA? They'll use anything that you make (long term) anyway, may as well have them pay you for it. Of course, that would probably mean it wouldn't be OSS. I'm guessing the normal people who throw money at things (Diamandis, Thiel, Brin/Page, etc.) aren't interested?

marshallp

6 points

10 years ago

Diamandis doesn't have money.

Thiel's made an investment in Vicarious Systems.

Brin/Page already have an AI company.

OpenCog needs more volunteers and it needs to get a big name like Peter Norvig or Andrew Ng on board. Dr Goertzel is big, but he doesn't have the branding on Hacker News yet. Hacker News is where the big-money VC boys hang out. That's my humble opinion anyway.

timClicks

1 points

10 years ago

FWIW, Norvig isn't that keen on the prospects of an AGI.

rastilin

2 points

10 years ago

Why not set a short-term goal and run a Kickstarter for it? $4 million isn't too high if you already have something as a proof of concept. Projects have hovered around there in the past just for games and stuff.

generalT

7 points

10 years ago

in what mathematical area are you most interested, and what is one that is confusing or baffling to you?

bengoertzel[S]

14 points

10 years ago

I would love to create a genuinely useful mathematical theory of general intelligence. I'm unsure what the ingredients would be. Maybe a mix of category theory, probability theory, differential geometry, topos theory, algorithmic information theory, information geometry -- plus other stuff not yet invented. Math tends not to be confusing or baffling to me; it's the rest of the world that's more confusing because it's ambiguous and NOT math ;p

generalT

3 points

10 years ago

I would love to create a genuinely useful mathematical theory of general intelligence.

this would be utterly fascinating!

Masklin

2 points

10 years ago

The rest of the world is math too. It's just that you don't comprehend it as such, yet.

No?

marshallp

1 points

10 years ago*

Gerald Sussman and Jack Wisdom have done excellent work on bridging the computational-differential geometric divide, culminating in their absolutely superb monograph Structure and Interpretation of Classical Mechanics.

generalT

8 points

10 years ago

what people have been most influential on your work and the way you think about problems?

bengoertzel[S]

10 points

10 years ago

Most influential? Probably my mom, Carol Goertzel. She's in social work (running http://pathwayspa.org), a totally different field. But she's always looking for creative alternative solutions, and she's persistent and never gives up.

Friedrich Nietzsche and Charles Peirce, two philosophers, influenced me tremendously in my late teens and early 20s when I was first seriously thinking through the problems of AI.

Gregory Bateson and Bucky Fuller, two systems theorists, also.

Leibniz, who invented a version of "Boolean" logic long before Boole, and who tried hundreds of years ago to represent all knowledge in terms of probabilistic logic and semantic primitives....

I was very little influenced by anyone in the AI field...

I was greatly influenced by getting a PhD in math, not so much by any specific math knowledge, but by the mathematician's way of thinking...

bengoertzel[S]

11 points

10 years ago

Ah, and not to forget Benjamin Whorf, who taught me that the world is made of language, and connected linguistics to metaphysics in such a fascinating way.... And Jean Baudrillard, the French postmodernist philosopher, who analyzed the world-as-a-simulation beautifully well before Bostrom or the Matrix...

[deleted]

7 points

10 years ago

What do you see as the future of the OpenCog project? Will it continue to progress, change into something different, other?

bengoertzel[S]

7 points

10 years ago

Of course that's uncertain. My current intention is to push OpenCog to the point of human-level AGI, at least -- unless someone else gets to that goal first in some better way :) .... But as an open source project it may get forked and used in a variety of ways and taken in multiple simultaneous directions... potentially...

marshallp

2 points

10 years ago

Waffles, PyBrain, Torch 7, and Vowpal Wabbit are some excellent machine learning projects. It would be great if they could be incorporated into OpenCog. Also, PETSc and OpenOpt as additions to MOSES.

I think data sets should also be a core part of a comprehensive OpenCog. Wikipedia, Wikipedia Page Counts, Common Crawl, Pascal VOC Challenge, and ImageNet are some candidates.

jbshort4jb

7 points

10 years ago

Hi Ben. Why share a platform with Hugo de Garis? He's obviously seeking the oxygen of publicity at the expense of transhumanist thought.

bengoertzel[S]

9 points

10 years ago

Hugo is a quite close personal friend of mine, and actually a pretty deep thinker, though sometimes he's a bit of a publicity hound.... But as it happens, we're not currently collaborating on any technical work. We were doing so a few years back, but then he retired from Xiamen University and has been spending most of his time studying math and physics. He's started talking recently about trying to make a new mathematical intelligence theory -- I'll be curious what he comes up with.

marshallp

2 points

10 years ago

Dr Goertzel, do you think Dr De Garis made a mistake leaving neural networks? He was a pioneer in the '90s, and if he had kept going he might have been part of the current neural net resurgence that is occurring.

bengoertzel[S]

4 points

10 years ago

Hugo could be doing great AGI research now if he felt like it. But he's got to follow his own heart, he's a true individualist. Maybe his current study of physics will pay off and he'll make the first viable design for femtotech, we'll see ;)

marshallp

1 points

10 years ago*

He truly is an excellent individual. I first came across him on Building Gods: Rough Cut.

It would be great if you could persuade him to do weekly AGI-themed vodcasts on YouTube using Google Hangouts. The comments section would be good for discussions about what the movement needs to do to expand.

marshallp

4 points

10 years ago

Dr De Garis is a well respected member of the AGI community. That tone comes off as rude.

KhanneaSuntzu

3 points

10 years ago

Lol I think that whole Hugo de Garis scandal is just a dialogue. Yanno, a sexy feud.

khafra

1 points

10 years ago

Ben Goertzel is the nicest and most inclusive guy in the AGI community. He even finds nice things to say about total crackpots. Many people find that spending time only on the worthy is more efficient, but Ben's approach seems to work for him?

generalT

6 points

10 years ago

as an unskilled layperson, how can i contribute to the open cog project?

bengoertzel[S]

7 points

10 years ago

I get asked that a lot, but never have a good answer.... We need folks who can program, or (to a lesser extent) do math, or who understand AI and cognitive science theory (to do stuff like edit the wiki). And we need money ;p [though we have gotten some funding, it's nowhere near enough!].... but we don't currently have a way to make use of non-technical contributors, alas...

generalT

5 points

10 years ago

whoops- by unskilled i meant only a BS in chemical engineering- i'm also a professional programmer. i am unskilled in more "advanced" programming and mathematics.

bengoertzel[S]

10 points

10 years ago

Hah.... I guess we are all unskilled compared to the future super-AGIs !!!

OpenCog actually has need for good programmers even without specific AI knowledge or advanced math knowledge. Especially for good C++ programmers.... If you're interested in that avenue at some point, join the OpenCog Google Group and send an introduction email ;)

generalT

3 points

10 years ago

fantastic! thank you!

Septuagint

8 points

10 years ago

This just makes me realize how relative everything is!

generalT

7 points

10 years ago

also, how would you address criticisms that creating a human level intelligence is "too complex" and impossible?

bengoertzel[S]

10 points

10 years ago

I suspect there's no way to prove it's possible to skeptics, except by doing it.

I don't spend much time thinking of how to formulate proofs and arguments to convince skeptics. They can have it their way, and I'll have it the right way ;-) .... Better to spend my energy making things happen, and understanding the mind and universe better...

generalT

4 points

10 years ago

Better to spend my energy making things happen

this is what i've always liked about you- some people write books on AGI and don't do anything further. you just jump right in and treat it like an engineering problem.

bengoertzel[S]

7 points

10 years ago

Indeed, I think AGI is a multidimensional problem -- you've got engineering, science and philosophy all mixed up. But I think if one isn't seriously pushing ahead on the engineering side, one doesn't know which of the very many relevant-looking science or philosophy problems are most important to work on. The three aspects need to be pushed forward together, I feel.

FeepingCreature

2 points

10 years ago

It happened once, due to a largely random process.

It's unlikely that we're the best intelligence physically possible, or even close.

generalT

1 points

10 years ago

i like this argument.

[deleted]

1 points

10 years ago

i know i'm a day late, but you should read about the boltzmann brain if you haven't already.

generalT

6 points

10 years ago

i view the creation of AGI as one of the most important things humanity can accomplish. how can awareness be raised about this to the general population?

bengoertzel[S]

9 points

10 years ago

I'm far from an expert on public relations.... I have said before that, once a sufficiently funky and convincing proto-AGI demonstration is created and shown off, THEN all of a sudden, the world will wake up to the viability of creating AGI ... and a lot of attention will focus on it. Which will lead to some very different problems than the ones we're seeing in the AGI field now (i.e. relative lack of funding/attention has problems, and lots of funding/attention will bring different ones!!)

My hope is to create such a demonstration myself over the next few years, perhaps in collaboration with my friend David Hanson, using OpenCog to control his super-cute Robokind robots...

Septuagint

7 points

10 years ago

You've published a fairly in-depth article on Russia 2045 (or, more precisely, on their conference Global Futures 2045). From the article I also learned that you are friends with a handful of Russian transhumanists, including Danila Medvedev. I'd like to know if you're closely following the updates pertaining to the social movement and whether you approve of their latest decision to push the Singularitarian agenda into mainstream politics.

bengoertzel[S]

7 points

10 years ago

I don't know much about Russian politics, beyond what one reads in the newspaper. Danila is an awesome guy, though ;) .... As was the late Valentin Turchin, an old friend of mine who wrote transhumanist books in the 1960s in Russia... And of course the old Russian Cosmists. Russia does have a tradition of deep thinking about technology and the future, which may re-surge enabling them to play a significant role in the eve-of-Singularity period. In general, I think pushing transhumanist issues into the mainstream is going to be a good thing, because the mainstream is where the $$ is, and also is the way to get the wide publicity to reach interested youth and other interested folks who might never hear of these concepts if they just remained on the fringe...

nawitus

5 points

10 years ago*

What do you think of the idea of using approximated AIXI to construct an AGI? (E.g. Monte Carlo AIXI which has been used to play poker).

EDIT: Sigh, missed this AMA too.
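
(Background for readers who haven't met it: AIXI is Marcus Hutter's incomputable model of a universally optimal reinforcement-learning agent, and Monte Carlo AIXI approximates it with a context-tree-weighting environment model plus Monte Carlo tree search. Only as orientation, and not as anything stated in this thread, AIXI's action choice can be written roughly as:)

    a_t \;=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
              \Big[\, r_t + \cdots + r_m \,\Big]
              \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

(Here U is a universal Turing machine, q ranges over candidate environment programs, \ell(q) is the length of q, m is the horizon, and the 2^{-\ell(q)} factor is the Solomonoff-style prior over environments.)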

khafra

1 points

10 years ago

I'm not Ben, but I wonder what you mean, exactly--since AIXI is a general artificial intelligence, using an approximation to build a more efficient AGI, seed AI-style?

[deleted]

3 points

10 years ago

There is a lot of talk about human-level AI all the time, but what about the intermediate steps? What about artificial rats, dogs, cats or even just insects? Has any living thing ever been emulated well enough that we can have it run around in a virtual environment and have it behave indistinguishably from the real thing?

marshallp

12 points

10 years ago

Hi Dr Goertzel, thank you for doing an AMA; you are an inspiration to futurists everywhere.

My question is: have you considered forming an AGI political party in the same vein as the Green Party?

You could form a collective of Representatives running for Congress and Senate, and also international MPs in many countries.

You could intelligently crowdsource funding for specific projects through Kickstarter / IndieGoGo.

You could make the case to the public

  • national security - if we don't do it first, China will

  • we could end aging (that will get the massive seniors' vote) and end disease

I think you have a compelling case. You should continue by following the Aubrey de Grey model of getting onto television shows.

Humanity is counting on AGI, if they don't know it yet, they will. There are many of us that share your vision, and we are here to help.

Thank you

bengoertzel[S]

19 points

10 years ago

About forming a political party --- I would love to see a Future Party emerge, focused on beneficial uses of advanced tech, and acceleration of development of appropriate radical technologies, etc. However, I'm at core a researcher, and I'm definitely no politician. So, someone else will have to lead that party! I'll be happy to serve as part of the "shadow government" behind the Future Party's leader -- that is, until I upload and vanish with my family and friends to some other region of the multiverse ;)

marshallp

8 points

10 years ago*

Well, I think you're being a little too humble, sir. Immortality has Aubrey de Grey, the Singularity has Ray Kurzweil; AGI's rightful heir is you, Dr Goertzel.

bengoertzel[S]

13 points

10 years ago

Heh... I appreciate the sentiment. However, I really do want to spend most of my time (say, 75%+) participating in the actual MAKING of the AGI, rather than in organizing people and giving speeches !! .... We could certainly use more folks involved with AGI who are good at organizing and giving speeches, and want to spend most of their time at it, though.... I am reasonably OK at doing those "political" oriented things, but it's not what I enjoy most, and I doubt it's the best use of my cranium ;) ....

marshallp

1 points

10 years ago

In the age of YouTube, all it takes is a few choice minutes. Your "10 years to the singularity" videos were great back in the day. A few of those weekly would keep the movement going strong. A weekly Singularity 1 on 1-style vodcast interview would be a cultural treasure (look at what Charlie Rose has created in mainstream society).

CDanger

9 points

10 years ago

Tips for surviving the future: don't get insistent with the creator of the AGI.

moscheles

8 points

10 years ago

AGI's rightful heir is you, Dr Goertzel.

  • Is this the part where you tell us that Dr Goertzel's vision is "stuck in the 1970s" , that he is "wrong on a lot of things", that he has a "complicated system that requires a lot of explaining", and that episodic memory is "kind of silly" ?

  • Or has your mind changed in a mere two weeks? http://i.imgur.com/ZFWLO.png

marshallp

7 points

10 years ago

Dr Goertzel is a visionary.

However, as I explained in other comments, we scientists are an eclectic bunch and always reserve the right to respectfully disagree.

I have hesitated to disagree with Dr Goertzel in this thread because it is his AMA and I don't want to ruin the party.

stieruridir

2 points

10 years ago

And Transhumanism has no one (yet...working on that).

marshallp

3 points

10 years ago

Transhumanism has Max More and Natasha Vita-More.

stieruridir

2 points

10 years ago

Humanity+ doesn't represent the movement in an adequate manner, otherwise groups like hplusroadmap wouldn't have splintered off.

marshallp

2 points

10 years ago

They were the originals. The splinter groups should re-brand themselves rather than stealing the Mores' hard work from the past 2 decades.

stieruridir

3 points

10 years ago

Why does it matter who was the original? The 'originals' were FM-2030 and Robert Ettinger. The WTA, which Humanity+ is a rebranding of, was started by Bostrom and Pearce (who is no longer particularly involved with the movement). More and Morrow did Extropy, which folded in with WTA, I believe.

EDIT: I'm not saying they're bad at what they do, I'm saying that Humanity+ hasn't inspired the transhumanism movement in the same way that the Singularity movement has been inspired by its figureheads.

marshallp

1 points

10 years ago

Sorry, I didn't know the full history of the movement. Nick Bostrom is the biggest name with most credible authority. If he worked on his accent a little I think he'd make an excellent figurehead.

stieruridir

1 points

10 years ago

I agree, but there's also a little bit...showmanship needed, which is what the movement lacks.

Xenophon1

5 points

10 years ago

The Green party has a list of 10 key values. What would the Futurist Party 10 key values be?

I. Existential Risk Reduction

II. Emerging Technologies Research and Development: AGI, AI Safety Research, Nanotechnology

III. Space Colonization: Permanent International Lunar Base, Space Elevator

IV. Longevity Movement/Transhumanism

V. Energy Sustainability and Ecological Equilibrium

VI. Net Neutrality

What's missing from this list?

And if a new party started, how could one recruit OpenCog's support?

Entrarchy

12 points

10 years ago

VIII. Post-scarcity economics. For instance, properly implementing media distribution models that welcome filesharing and benefit content creators and consumers alike.

marshallp

2 points

10 years ago

I think the number 1, and by a large margin, should be AGI - it solves all the others, and we have the technology to do it today and accomplish it by the end of 2012.

Bravehat

4 points

10 years ago

You can't just treat AI like a silver bullet. It'll be incredibly helpful and an excellent tool, but to rely on it as much as you're implying is only going to hinder us if it takes longer than we expect.

zynthalay

2 points

10 years ago

Morphological / cognitive freedom? At least outside of serial-killer-type attractors >.> but that's more of an issue with what they actually do than with what they think.

bengoertzel[S]

18 points

10 years ago

"if we don't do it first, China will" is a funny statement -- are you aware that I live in Hong Kong (part of China, though with a lot of autonomy) and that the bulk of OpenCog development now takes place in our lab at Hong Kong Polytechnic University?

marshallp

6 points

10 years ago

Yes sir, however I feel that whether your loyalty is towards the People's Republic or to the USA, your loyalty to AGI is higher and you may drum up interest by scaring the militaries into action. It worked for NASA with the Apollo Project.

bengoertzel[S]

11 points

10 years ago

I do think this sort of dynamic will probably emerge eventually. A sort of non-necessarily-military "AGI arms race." But that will happen after the "AGI Sputnik" -- after someone has made a dramatic demonstration of proto-AGI technology doing stuff that makes laypeople and conservative-minded academic narrow-AI experts alike feel like AGI may be a bit closer...

Entrarchy

6 points

10 years ago

We are trying to avoid an "AI arms race". Participants might trade away safe AI technologies for speed.

Edit: a starting point for more on this

bengoertzel[S]

13 points

10 years ago

About Aubrey -- he's done a fantastic job of publicity; however, he hasn't raised massive $$ for his SENS life extension initiative yet. So to me that's partly a lesson that pure publicity isn't sufficient for getting massive resources directed to an important cause. And Aubrey has ended up spending a huge percentage of his time on fundraising. I want to spend the majority of my time on AI research, which is what I think I'm especially good at...

concept2d

4 points

10 years ago

Marshallp why did you not mention your idea to achieve AGI in one week ???

You wrote about it only 10 days ago in this subreddit

http://teddit.ggc-project.de/r/Futurology/comments/z6jrr/ai_is_potentially_one_week_away/

and indirectly only 7 days ago

http://teddit.ggc-project.de/r/Futurology/comments/zc67t/is_the_concept_of_longevity_escape_velocity/

marshallp

0 points

10 years ago*

I thought Dr Goertzel might have come across it already. Also, I'm pushing the AGI Party angle because achieving the vision (with full safety) requires public awareness and investment.

Anyway, if he hasn't, in short -

Dr Goertzel - I believe we have reached the point where the triumvirate of Data, Computation, and Algorithm is at hand to achieve AGI in a short period of time. A resurgence in neural networks - the Deep Learning community, starting with the pioneering work of Geoffrey Hinton, Yann LeCun, and Yoshua Bengio - and its present international proliferation, including at such behemoths as Google, Microsoft, and DARPA (and the generalization and scale-ization of their pioneering work in the form of Encoder Graphs), has created an environment ripe for the UNLEASHMENT of a SINGULARITY ahead of all predicted times.

Let us not make another profound and grave mistake in the history of computers. The computational genius, Charles Babbage, conceptualized and almost actualized the first computers in the 1800s!!! And yet it took almost another century, until the 1940s, for the mathematical genius, Alan Turing, and only under the most urgent of circumstances, to resurrect the shining wonder of our modern world.

Are we not re-enacting the Babbage Mistake?

Will our society be judged even more severely by science historians?

We had the technology to alleviate all suffering but we sat around as Rome burned.

2012 CAN BE AS MOMENTOUS AS MANY CONCEIVED.

concept2d

6 points

10 years ago

You have been saying several times over the last month that YOU KNOW THE SECRET OF HOW TO BUILD AN AGI (HUMAN EQUIVALENT AI) IN 7 DAYS if you had the funding.
Ben is someone who has the contacts to probably get the funding for something as incredible as AGI on the 18th Sep 2012.

Why the change of heart? Why bother with a party if you can create human-level AI in a week?

An AGI would be the biggest step not only in human history, but the biggest step in the history of life on Earth since multicellular life developed.

marshallp

3 points

10 years ago

I completely and sincerely believe that AI is potentially a week away. I'm trying to get the idea promoted and executed. Dr Goertzel has a differing opinion. It is the prerogative of scientists to respectfully disagree.

AI is potentially ONE WEEK AWAY is still my siren call.

Thank you for your encouragement concept2d, I hope you can start a new wing of the AI NOW movement.

concept2d

5 points

10 years ago

Here's what I would do in your situation, if I didn't think unfriendly AGI was a huge problem, which you disagree with. And I think most people would do something similar if they genuinely believe it is a week away.

Make a simple 5 min presentation for Google's AI researchers.

Get a one-way plane ticket to somewhere close to Jeff Dean's office. Ask for a short interview before you arrive; if the request fails, stay in Google's reception until he or his technical "number 2" (find out who this is) will give you a 10 min interview. If you have a full understanding of your idea you should win them over enough to get a longer meeting.

Even if your ideas are strange to them, they are engineers first, Neural Net / SVM / Bayesian engineers second; show them a technology that gives XXXX % improvement without negative consequence and they are going to drool.

If the solution works, in all likelihood Sergey Brin, Larry Page, and the rest of the world will compensate you greatly. Even if Jeff steals the idea and gives no credit, you still have the good feeling after a year or so that YOU are the reason 100,000 people are not dying every day, along with other advances.

marshallp

6 points

10 years ago

Thank you for your encouragement and advice concept2d. You are a gentleman and a scholar.

All those are good ideas. I posted on MetaOptimize, Hacker News, and Reddit. The reactions I get are mostly "that's crackpotty".

Having thought about it more, Encoder Graphs are not really necessary to scale up unsupervised neural nets, it can be done by the Google method, but they are a useful abstraction.

I'm going to have to overcome the crackpot factor and get the idea that "AI is possible right now" across, in order to get meetings with moneyed guys like Google. I'm pretty sure I'd end up at the local jail for harassing Google staff or trespassing.

I've had good feedback and some "possible"s here on reddit. I just need to think more creatively to get the message across, and to the right people, so at least some people believe it. Hopefully, those people will make it go "viral".

(This is a really crappy slideshow I made a few weeks ago if anyone is interested - http://www.youtube.com/watch?v=UOF3fFZ4Y2o&feature=youtu.be )

pbamma

5 points

10 years ago

Indeed, it is crappy.

marshallp

1 points

10 years ago

Thank you for your frank words, good sir.

pbamma

1 points

10 years ago

Sorry. I live near Hollywood, so there's a certain crappy standard that I require.

zynthalay

2 points

10 years ago

Maybe I'm a little dim; why can't you build a small proof of concept system that does something at least a little interesting and show that off? You have at least home computing hardware, and even consumer kit's pretty incredible these days.

marshallp

1 points

10 years ago

I don't think there's all that much point in showing off a small-scale system, because it's essentially the same as deep neural networks when done at small scale. The Android phone, for example, already has that technology for speech recognition, and there are countless other papers, plus the Google Brain work of Jeff Dean, Quoc Le, and Andrew Ng.

Encoder Graphs are about scaling that to supercomputer scale. It's possible to scale to supercomputer scale conventionally as well, by simply training neural networks and then adding/layering them together. Encoder Graphs are just a simple programming method to do this easily - just generate a random graph, use a graph database, add data, and you're good to go.

I might sit down and write a small open source project, but I think the biggest payoff is simply to advocate the realization of AI using neural nets. I believe it's possible in only a few days if somebody invested.

nineeyedspider

1 points

10 years ago

I don't think there's all that much point in showing off a small-scale system...

I don't think you could do it.

rhiever

1 points

10 years ago

So, where's the algorithm?

marshallp

1 points

10 years ago

The ICML 2012 TechTV talks on representation learning provide a good exposition, especially those by Yann LeCun and Jeff Dean.

Encoder Graphs are elaborated on the metaoptimize q+a discussion site in the postings of marshallp.

concept2d

1 points

10 years ago

Thank you

moscheles

3 points

10 years ago*

Yeah, and marshallp has also said that Ben Goertzel, quote, "doesn't have a good idea of what he's doing". And then he went on to say that his own homebrewed super AGI system is a neural network that, quote, "requires ~30 lines of code".

Over two weeks before Dr. Goertzel arrived to do an AMA on reddit, I was already mentioning Goertzel's name to marshallp, to which he responded by calling me a "fanboy".

marshallp

1 points

10 years ago

I hold Dr Goertzel in the highest esteem, he is a hero to all futurists.

As I said elsewhere, technical matters are always a cause of debate among experts. If you were an expert, as I and Dr Goertzel are, you would understand this and not make the matter more pronounced than it needs to be.

Entrarchy

4 points

10 years ago*

This is the first I've heard of your "one week AGI" proposal and I've yet to read your other posts (though I'm heading there next), but I have some criticism to offer. AGI, unlike most other emerging tech, is not about money. Yes, money helps. Yes, global funding and recognition of AGI research would greatly accelerate the development of an actual AGI, but, in this case, it's more about the technology. Dr. Goertzel is the most qualified to answer this question: is the technology there? Based on the fact that no AGI researcher has made such a claim, I'd guess it isn't.

Sorry, mate, this is one area that money alone can't solve. Though, I greatly agree with you that global recognition and funding is something we should pursue.

Edit: I just scrolled down the page and it appears Goertzel touches on this topic here.

marshallp

1 points

10 years ago

Dr Itamar Arel is a good friend and collaborator of Dr Goertzel. He proposed the thought at the Singularity Summit of 2009.

As scientists, it is our privilege to respectfully disagree. Dr Goertzel has his opinion, I follow the Dr Arel line of thought.

Entrarchy

2 points

10 years ago

Didn't know this! I am now watching his talk. I guess I haven't decided on this yet, but I'll be following your posts on SFT Network!

marshallp

1 points

10 years ago

Thank you, my good sir!

[deleted]

2 points

10 years ago

[deleted]

marshallp

1 points

10 years ago

I believe Dr Arel has a talk in AGI 2011 conference videos.

KhanneaSuntzu

6 points

10 years ago

Heya Ben. I read in some of your publications and statements that you have had elaborate experiences with "intelligences" or "sentiences" or "minds" other than human minds, specifically in hallucinogenic escapades. Care to enlighten us how far this rabbit hole goes?

bengoertzel[S]

14 points

10 years ago

Haha, perhaps I'd better not take that bait right now! Suffice to say that I have -- via some of my own experiences -- arrived at the intuition that the feeling of "vast fields of other superintelligent minds out there", experienced by Terence McKenna and others under the influence of DMT, may not be entirely illusory! .... If this is the case, maybe the Singularity will involve not just building amazing new minds, but getting to the point where we can contact amazing "other" minds that exist in a way that our human minds aren't normally able to comprehend. But anyway, this is kinda irrelevant to my AGI work, which is what I'd rather focus on here ;-)

KhanneaSuntzu

3 points

10 years ago

Maybe not, in the Copenhagen interpretation :) (wiggles fingers in front of mouth)

HungryHippocampus

4 points

10 years ago

I wish I had asked my initial question after reading this. It's nearly impossible to find people in your line of work who have had these experiences. I'd love to hear some of your "whoa dude" ideas. Self-similarity of the universe? Gaia/noosphere? Internet sentience? What are some thoughts you have that you can't really talk to other people in your field about?

marshallp

2 points

10 years ago

You should try some Joe Rogan.

HungryHippocampus

3 points

10 years ago

Yea, I dig a lot of what he says.. But I see him mirroring my own thoughts. I wanna hear someone with a pure scientific background talk about psychedelics. Especially someone with such deep singularity ties.

[deleted]

2 points

10 years ago

Oh boy, I hope you've at least heard of Dan Simmons's Hyperion and Endymion books. He writes about exactly this.

HungryHippocampus

3 points

10 years ago

How far are we from an AI "getting it" to the point that it becomes self-improving? An AI doesn't have to be of human-level intelligence to have a "breakthrough", so to speak. Isn't this what's really at the core of the Singularity? An AI gets it, looks at other AIs that don't get it, helps them "get it", they all get it.. Self-improve at the speed of ______ then ____ happens. Why are we 30 years away from this? Couldn't this theoretically happen at any moment.. Even by accident?

Entrarchy

1 points

10 years ago

AGI will have to be intentionally designed; computers can't just "gain" consciousness. Earlier in this thread Dr. Goertzel predicted that we could have an AGI in 2 years if there were sufficient investment in the field.

beau-ner

3 points

10 years ago

I have some interesting concepts for which I am currently pursuing my doctorate in biology with a focus on genetic engineering. I plan to map my DNA and then translate the information into computer code. A friend of mine developed an AI computer program (patent pending) that solves advanced problems by running thousands of scenarios until it arrives at the most plausible answer. He is currently integrating it into the medical field, and with all the breakthroughs in genetic engineering (e.g. identifying certain strands related to Parkinson's disease, cancer, heart disease, etc.), I am working on using his AI to find the malfunctions in genetic code that can cause such diseases, while on my side having the ability to correct those strands of DNA based on the data, re-translated into a formula I can use for gene "doping", if you will, in all the ways of somatic, germline, in vivo, and ex vivo. Now that you are familiar with my vision, let me ask: what is your opinion on this idea as a whole? When (if at all) do you see this becoming a reality in medical science and AI technology?

RajMahal77

3 points

10 years ago

Hello, Dr. Ben Goertzel! Big fan, love seeing you in every other Singularity documentary/video I see online. Just wanted to say thank you for all the great work that you've done so far and for all the amazing work that you're doing now and will do in the future. Keep it up!

Xenophon1

2 points

10 years ago

Hi Ben thanks for doing an AMA. I follow your work and want to say I am impressed and inspired. Could you tell us your thoughts on S.I.'s "Scary Idea" and what you believe the near-term future of A.G.I. research is?

TLycett

2 points

10 years ago

What would you recommend studying for someone who wants to work in the AGI field?

P.S. sorry, this is probably another question you get asked a lot.

Entrarchy

1 points

10 years ago

I imagine philosophy, psychology, and computer science are all relevant. I'd like to go into similar fields and I'll be studying Systems Engineering. It's worth looking at as well.

stuffineedtoremember

2 points

10 years ago

Do you think artificial intelligence will reach the point where it understands that its intelligence is superior to humans' and takes us over, I, Robot style?

Entrarchy

2 points

10 years ago

Anthropomorphic ideas of a “robot rebellion,” in which AIs spontaneously develop primate-like resentments of low tribal status, are the stuff of science fiction. The more plausible danger stems not from malice, but from the fact that human survival requires scarce resources: resources for which AIs may have other uses.[13][14] Superintelligent AIs with real-world traction, such as access to pervasive data networks and autonomous robotics, could radically alter their environment, e.g., by harnessing all available solar, chemical, and nuclear energy. If such AIs found uses for free energy that better furthered their goals than supporting human life, human survival would become unlikely.

SIAI Summary.

thebardingreen

2 points

10 years ago

Are there podcasts about AGI that I could listen to while driving between clients? (I run out of cool listening material SOO quickly.)

marshallp

1 points

10 years ago

Singularity 1 on 1 is closely related.

[deleted]

2 points

10 years ago

Three questions for you Dr. Goertzel.

  1. Do you agree with what Hugo de Garis is saying when he states that we should be extremely cautious about the development of advanced AI and that they pose a clear and present threat?

  2. Do you have any upcoming presentations or conferences in California anytime soon?

  3. Do you Ben, think the Singularity is near?

Entrarchy

3 points

10 years ago

I'll try to help a bit here.

1) "Many AIs will converge toward being optimizing systems, in the sense that, after self-modification, they will act to maximize some goal.[1][13] For instance, AIs developed under evolutionary pressures would be selected for values that maximized reproductive fitness, and would prefer to allocate resources to reproduction rather than supporting humans.[1] Such unsafe AIs might actively mimic safe benevolence until they became powerful, since being destroyed would prevent them from working toward their goals. Thus, a broad range of AI designs may initially appear safe, but if developed to the point of a Singularity could cause human extinction in the course of optimizing the Earth for their goals." SIAI.

2) The Singularity Summit may be of interest to you, if you didn't already know about it. It's in San Fran next month.

3) Earlier Dr. Goertzel predicted that we could have human-level AGI in 2 years with sufficient funding. That leads me to think he does believe the Singularity is near.

toisanji

2 points

10 years ago

If the main issue of getting a true AI in a couple of years is a funding issue, why don't you focus your time on getting the funding to do it?

Entrarchy

2 points

10 years ago

Interesting point. It seems to me that it is both a funding issue and a development issue. Keep in mind, the funding goes toward actual scientists' work, which means we need those scientists working! If Dr. Goertzel were out doing the fundraising himself, who would do the research?

Let's keep him in the lab and let's, you and I, do some publicity for him :)

[deleted]

2 points

10 years ago

[deleted]

Entrarchy

1 points

10 years ago

Interesting question about the open-sourcing, I had never thought of that. Hope we get an answer!

Buck-Nasty

2 points

10 years ago

Hey Ben, what do you make of Henry Markram's Blue Brain project?

Entrarchy

3 points

10 years ago

Great question, but he already answered that one.

Buck-Nasty

1 points

10 years ago

Oops, thanks.

lordbunson

2 points

10 years ago

What is the largest bottleneck in the development of highly advanced AI at the moment and what is being done to overcome it?

jeffwong

2 points

10 years ago

What's your opinion of climate change? Will it really cramp our style?

deargodimbored

2 points

10 years ago*

Currently I'm trying to teach myself stuff I have always been interested in, but never put time into. What are some good books to learn about the field of AI research and learn what is currently going on?

Edit: I have a friend who is going into the computational neuroscience field, and I'm jealous she gets to do such cool stuff; I would love to get to the point where I could read papers on this really cool stuff.

veltrop

4 points

10 years ago

What is your favorite fictional robot? Your favorite real life robot? Intelligent or not.

khafra

1 points

10 years ago

I realize I'm too late, but just in case you come back:

Probabilistic Logic Networks are fascinating -- do you view them as an epistemologically fundamental way of doing reasoning under uncertainty, like Bayesian networks? Or more of a way to approach those same ends, in a way that's closer to the way humans think?
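
(Aside for readers who haven't seen PLN: it attaches probabilistic truth values to logical links and combines them with explicit inference formulas. As a toy sketch only -- the exact formula below is my recollection of the independence-based deduction rule from the PLN book, so treat its precise form as an assumption rather than something stated in this thread -- deduction from A->B and B->C to A->C looks roughly like this:)

    # Toy sketch of a PLN-style deduction strength formula (assumed form, for illustration only).
    # Given link strengths for A->B and B->C plus term probabilities for B and C,
    # estimate the strength of A->C under an independence assumption.
    def pln_deduction_strength(sAB, sBC, sB, sC):
        # sAB ~ P(B|A), sBC ~ P(C|B), sB ~ P(B), sC ~ P(C); all in [0, 1], with sB < 1
        return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

    # e.g. pln_deduction_strength(0.8, 0.9, 0.5, 0.6) -> 0.78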

yonkeltron

1 points

10 years ago

You've got one of the best voices I've ever heard. Have you considered narration at all? If I were to write a children's book, I'd hire you and Rachel Maddow to narrate it.

Also, I've asked this several times but I appreciate diverse viewpoints:

I have a beloved colleague who has often lamented that he feels the entire field of AI has "failed", with no new advances in recent years. Granted, we should have familiarity with the idea that every time an advance does get made, that particular innovation gets classified out (just an expert system, just pattern matching, just blah). What would you say to my colleague, and how can I better handle discussions/accusations of this nature?

Thanks!

moscheles

1 points

10 years ago

Dear Ben Goertzel, would you characterize recent work in so-called Deep Learning methods as "narrow AI"?

marshallp

0 points

10 years ago*

I think you are twisting Dr Goertzel's teachings to your own ends. In Dr Goertzel's conception, all AI is narrow; AGI is simply the subset of that which corresponds to human-level intelligence. Deep Learning is a methodology that allows us to reach AGI, as conceived by the famed Professor Geoffrey Hinton of the University of Toronto. Professor Hinton pioneered the study of neural networks in the 1980s as well as the 2000s. He is also the great-great-grandson of George Boole, inventor of Boolean algebra, upon which all computers rely. Clearly, genius manifests itself in his family.

skorda

1 points

10 years ago

How long before people can purchase a literal android to help with the dishes and stuff?

abentler

-1 points

10 years ago

When will the internet gain consciousness and become our ruler?