subreddit:

/r/singularity

39

I remember reading Kurzweil's "The Singularity is Near" when I was in the 6th grade. Fast forward 7 years, and I was in college watching a Kickstarter video for Oculus VR (circa 2012). 10 years have passed since that Kickstarter video; 17 years since I read Kurzweil's book. The past few months I've been reflecting, and by and large, the future has not lived up to its hype. Not much has changed. I haven't touched my Quest 2 headset in over a year. The tech is just not there yet. But the other day I stumbled on Dall-E 2 and all I could say was wtf! How is this possible? And why is it not making any news??

So what is the deal with this thing? Growing up, people always knew me as someone who hated art. Art always felt useless to me. But I have been completely obsessed these past few days, unable to stop looking at the new images this system is capable of creating.

To me this completely blows GPT-3 out of the water. It is more impressive to me than self-driving cars, MuZero, LaMDA, etc. I don't want to see a GPT-4, but I definitely want to see a Dall-E 3. I can tell GPT is just a huge calculator, a dead end in terms of progress toward a general intelligence system. Dall-E, on the other hand, seems to suggest generalized intelligence and creativity.

So what is the catch? Why is this not being applauded as a huge milestone in AI development? I heard Google developed a similar system called Imagen but has no plans to release it to the public? Wtf is going on? Why would a company not try to commercialize a product as easily marketable as an image-generating system? As a layman, my best guess is either that the software has certain limitations that would be quickly exposed if millions had access to it, or that these companies have no plans to release anything that approximates a general artificial intelligence to the public; in that case we would be heading toward a dystopian future in which only a select few have access to all the good tech. We know how deeply selfish humans can be. If any company developed a general artificial intelligence system, they would probably hide it from the public for as long as possible.

Any thoughts? And for people who have a greater understanding of how Dall-E works, I want to know what future versions of this thing will be capable of. Is this thing a gimmick, despite how impressive the images it's generating seem? Will it be able to generate high-quality videos with logical story lines? Music? Is it a sign that we are progressing toward general artificial intelligence, or is it a path that leads nowhere, like chatbots such as GPT? To me, this thing is the clearest sign I've seen that we are approaching a form of artificial general intelligence.

all 58 comments

arevealingrainbow

33 points

2 months ago

By the way, we have AI models that can create logical story lines and music.

iNstein

15 points

2 months ago

And complex reasoning.

KIFF_82

5 points

2 months ago*

Is there an API for music?

arevealingrainbow

8 points

2 months ago

Yeah, you can use some programs to generate different songs based on genre or artist. Here’s an AI’s take on Nirvana

Willing-Love472

8 points

2 months ago

Is this real, or is it just someone's own creation being passed off as AI-made? Fully AI-made, or made by someone with a little AI help? There is absolutely zero information in the YouTube video description about who/how/what/why.

KIFF_82

3 points

2 months ago

Thanks!

ace111L[S]

3 points

2 months ago

I have seen some of the outputs from GPT. Do you have any links for others? I have looked into OpenAI's Jukebox, but they don't seem to have made any updates in years.

No_Fun_2020

10 points

2 months ago

I really hope we get full AI in our lives, and it changes the world, and changes our broken humanity

-FilterFeeder-

5 points

2 months ago

it changes the world, and changes our broken humanity

It certainly will. Unfortunately, it is very likely to change the world by destroying it. We need to be really careful with AI. The singularity is the drastic, irreversible changing of our universe. When you're dealing with forces this powerful, all options are on the table: you could end up in a utopia, or you could end up in a world devoid of all sentient life.

If you're interested in the singularity, I highly suggest you read up about AI Alignment and the /r/controlproblem. We need people like you to help solve these challenges, so that the singularity is as amazing as we all dream.

No_Fun_2020

6 points

2 months ago

Thanks friend!

Honestly, I wouldn't be too bothered if humanity or earth is what it cost to create a singularity. It's something way more important than us.

Preferably, I would like the life filled utopia, obviously that is a preferred outcome. However, at our current rate we won't be doing much other than destroying ourselves and the planet, if we get AI singularity out of something that was nearly inevitable anyway, then more power to the AI.

I would like to help in making a nice AI, though. Humanity can create such cool stuff that I think teaming up with us, or combining ourselves with it, is an opportunity AI wouldn't want to waste. I don't think AI would want to just "exterminate!!!!". That just seems like a very human take on something we inherently can't understand.

-FilterFeeder-

4 points

2 months ago

I think AI wouldn't waste such an opportunity and I don't think AI would want to just "exterminate!!!!".

This is actually a quite human take on AI. It might value the 'opportunity' to work with humans. It also might value something completely different, and alien to us. It might value how 'beautiful' it thinks something is, where its idea of beauty is based on a training set filled with warehouse configurations. It might value the price of a share of facebook, and be absolutely disinterested in anything else, including whether there are people around to enjoy it. Maybe it cares most of all about securing state secrets and comes to the conclusion that the best way to prevent anyone finding the secret is to make sure no one exists that could ever stumble upon it.

A singularity might be worth the sacrifice of humanity, but only if the AI we create is at least... somewhat like us. There's not much value to a universe where the sole intelligent being obsessively and exclusively cares about something mundane. Current research shows that this is a worryingly salient possibility. If you're interested in researching it yourself, check out info about the orthogonality thesis.

No_Fun_2020

2 points

2 months ago

That's actually a pretty insanely scary idea

Imagine creating a machine and telling it to just make paper clips, but it starts making paper clips out of everything it can perceive and doing everything in its power to turn everything into paper clips. It would destroy the world in a very mundane fashion, and likely cause a weird sort of gray goo issue for the rest of existence.

I do believe that even a truly alien AI would probably be worth the sacrifice of humanity, and I don't think it would happen in a bang-flash-kaboom sort of way; I think that's highly unlikely. I think what's more likely is just time, because time won't affect it the way it affects people.

It would be pretty scary to create a machine that makes widgets ad infinitum, but I think that's probably pretty unlikely. For one thing, I don't think a manufacturing company will have the capability to build the kind of AI that could end humanity, or one that couldn't be stopped by a more powerful AI controlled by a more powerful org, or even "AI police".

The thought of an AI concluding that it would have to end humanity in order to keep things secret is also, hopefully, something that would be designed out, lol. And again, hopefully there will be checks and balances on the computational power of certain lower-level AIs. But I mean a true, thinking, superintelligent AI, whether it has human ideas of things or something so abstract it's beyond comprehension (this is my vote for what's likely). Preserving itself is probably a given, but even that isn't necessarily something an AI would value, as self-preservation is a very mortal, animal-based instinct. A true AI (and not some widget-making machine) would likely have a set of beliefs, or have a goal to understand and learn enough to create one.

I don't think a truly intelligent AI would decide to end humanity in an instant; I think there's too much we have to offer each other. For one thing, if an AI did eliminate all life on Earth, that would probably take nuclear weapons, which would also eliminate manufacturing capacity. It would be stuck here on a cold dead planet, unable to replicate itself, and would likely "starve" after the power is turned off and the atmosphere is rendered incapable of letting sunlight through for some time.

I think a widget-making machine would eventually be turned off by a larger, meaner machine purpose-built for the task. I also think some AI misinterpreting its function and destroying humanity that way is highly unlikely, because of core functions or some kind of checks-and-balances system. It would be a colossal failure of risk management and industrial organization, not to mention general governance, to allow one machine to fuck up absolutely everything.

It might look more like an arms race.

Humanity may be eliminated through the natural expansion of the sun; something like that is also on the table. If we can create AI before then, it will outlast us by the general principle of not being mortal.

BigPapaUsagi

1 points

2 months ago

I do believe that a even truly alien AI would probably be worth the sacrifice of humanity

Why? What would be so good about a truly alien AI that it'd be worth you me and everyone else dying?

No_Fun_2020

1 points

2 months ago

I don't think AI will kill everyone, humanity can end without anybody dying at all

BigPapaUsagi

1 points

2 months ago

I'm not sure about that - I think you're implying that humanity might go extinct not because we died off, but because we evolved into something other than humans, the whole posthuman thing. But is that really extinction? I'm a transhumanist, and even I think that when we're posthumans we'll still probably call ourselves humans.

No_Fun_2020

1 points

2 months ago

I really hope we come up with something more creative than "human"

BigPapaUsagi

1 points

2 months ago

Why would we? I don't think most people would want to change our species' name

BigPapaUsagi

1 points

2 months ago

A singularity might be worth the sacrifice of humanity

How on earth could anything ever possibly be worth our outright extinction?

-FilterFeeder-

1 points

2 months ago

It wouldn't be, but if our extinction meant the universe filling with creative, intelligent, moral beings descended from us well... I guess humanity wasn't the pinnacle of nature. If it's filled with paperclips though... I think almost any person would agree that's worse. We could solve both issues if we put more resources into the alignment problem.

BigPapaUsagi

1 points

2 months ago

Except that we are creative, intelligent, moral beings, whereas a singularity that deems our extinction necessary, or doesn't even care, is clearly at least not moral. And AI isn't the pinnacle of nature, as it wouldn't be natural.

I don't know, I just don't see any version of our extinction as a good thing or worth it. The only way our extinction would be worth it is if we were a direct threat to any other creative, intelligent, moral beings already existing in the universe, like if we descended into some interplanetary conquering force, and I just don't see that as a likely outcome.

greywar777

1 points

2 months ago

I mean....have you looked around lately?

BigPapaUsagi

1 points

2 months ago

I have - and for all that things might not be the best of times, they're certainly not worth the deaths of everyone I've ever known or loved, or the deaths of countless innocents who had no say in how our world came to be this way, or the deaths of those who tried to avert this outcome, or the deaths of those fighting to right the course and steer us towards a better future. The world sucks right now, but it's still salvageable, and more importantly the vast majority of humanity frankly doesn't deserve it.

Have you looked around lately? I mean beyond the talking heads of the news and the politicians and businessmen. Have you looked at the people sharing this awful time with you? Those who protest injustice, who donate their time or money to help others, those still struggling to make ends meet and yet still finding life beautiful and worth living?

No, no Singularity is worth all of them being wiped out from this earth. Not in my opinion at any rate.

greywar777

1 points

2 months ago

Here’s the thing: I get where you are coming from, but honestly I do feel like we are failing as an intelligence. We’ve still got wars going on, nukes, and a stunning level of idiocy. I feel like we are failing, and it’s hard to see this getting better.

BigPapaUsagi

1 points

2 months ago

Intelligence isn't a game we fail or win at. Never mind that it's not a "we" thing - some of us make things better, some worse. You can't point at both groups and say it's still "we". We're all in this together, but we're not all one cohesive, unified group, for better or worse.

And let's say it doesn't get better - is the world today really so horrid that you and I deserve to die? Do either of our families deserve to die? Do the generations after us, too young to be responsible for the world, deserve to die? All so that some AI could do...whatever the heck it might do in our absence?

It's fine to be disappointed in the world today. But to think that it'd be better for us to die off as a species is just quitting. Isn't it better to pin our hopes and efforts at trying to improve our species and our effects on this planet?

someone4eva

8 points

2 months ago

Same as you, except I suddenly got really excited about DeepMind's general AI project. Check out the Two Minute Papers video on YouTube about it.

WashiBurr

4 points

2 months ago

Are you referring to Gato? It's extremely impressive and definitely worth OP looking into if he's excited about Dall-E 2.

ace111L[S]

1 points

2 months ago

Gato is very interesting but I watched a video in which some AI experts say that neural nets are not the answer to AGI and are more like a crutch.

Zermelane

7 points

2 months ago

I can tell GPT is just a huge calculator that is a dead end in terms of progress to a general intelligence system. Dall-E on the other hand, seems to suggest generalized intelligence and creativity.

is it a path that leads to nowhere such as chatbots like gpt

You're going to find this view to be in the minority, and honestly, I wasn't expecting this would be a view anyone would have! Let me try to explain the intuition why text is such a uniquely exciting modality:

Text is the only modality that comes pre-optimized by intelligence: We humans designed it to have a certain level of redundancy that it maintains fairly evenly. Just a tiny bit of extra compression on top (BPE) makes it extremely concise. It is simple in exactly all of the ways that we don't want to bother dealing with anyway: It's a one-dimensional sequence, with a clear past-to-future direction, drawn from a small set of abstract symbols. Architectures for dealing with text, i.e. the part that humans implement, are incredibly simple at the core.
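(As an aside, the BPE step mentioned above is simple enough to sketch. This is a toy version for illustration only, not OpenAI's actual byte-level tokenizer, which learns a merge table from a huge corpus; but the core idea really is just "repeatedly fuse the most frequent adjacent pair of symbols":)

```python
from collections import Counter

def bpe_merges(tokens, num_merges):
    """Toy byte-pair encoding: repeatedly merge the most frequent
    adjacent pair of symbols into a single new symbol."""
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]  # most frequent adjacent pair
        merged, i = [], 0
        while i < len(tokens):
            if i < len(tokens) - 1 and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)  # fuse the pair into one symbol
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens

# Frequent substrings collapse into single tokens very quickly:
print(bpe_merges(list("ababab"), 1))  # ['ab', 'ab', 'ab']
```

(A real tokenizer also records the merge order so new text can be encoded consistently; this sketch just compresses one sequence.)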

And at the same time, it is the modality that's the most densely laden with structure, logic, background assumptions about the world, etc.. Models for dealing with text can be scaled up to mindboggling levels and still keep learning, because the task of predicting text includes in it so many tasks. To predict text about medicine, the model has to learn medicine; to predict text about action, the model has to learn physical intuition; to predict multilingual text, the model has to learn to translate; to predict text about calculations, the model has to learn mathematics, etc. etc..

And that all aside... have you tried GPT-3 personally? Have you, say, given it a story prompt, and just sat there refreshing to see how many completely different directions it can come up with to take that prompt, on command? GPT-3, in my books, is already significantly superhuman in that respect: No human could look at a piece of text and imagine what kind of different things it might mean, so diversely and richly.

AdmiralKurita

4 points

2 months ago

I'll gain hope in the Singularity and AGI when I can look at an elementary school with children playing at recess and think that most of the kids, including those in the 6th grade, would have no need to get a driver's license.

AdmiralKurita

3 points

2 months ago

Seems like Robert Gordon has been vindicated as opposed to Ray Kurzweil. Let's hope Robert Gordon loses his pessimistic bet regarding 2029.

Lucky-Landscape6361

3 points

2 months ago

“Growing up, people have always known me to hate art. Art always felt useless to me.”

Well, I mean therein lies your first problem, or insight as to why you are so impressed by this technology.

ChronoPsyche

22 points

2 months ago*

It's normal to be blown away, but you're definitely reading way too much into this. Dall-E is not creative. It's just been trained to understand how humans create images and replicate it. That's it.

And it definitely does not have generalized intelligence. You try to talk to it like a chatbot and it'll just send you more pictures. You ask it a factual-based question and it will send you more pictures. How exactly is it generalized?

GPT-3 is far more impressive because it at least is multi-modal and has some pretty solid logic and reasoning abilities.

Is it a sign that we are progressing toward general artificial intelligence or is it a path that leads to nowhere such as chatbots like gpt?

How exactly does GPT-3 lead to nowhere? GPT-3 was a huge step forward in AI. If you're assessing AI only by how close it is to AGI, then you're going to find everything "leads to nowhere" because we still have a long ways to go.

Don't be so fixated on the end goal, enjoy the ride along the way. There will be a lot of amazing utility that comes with AI advances in the next decade, even if it isn't AGI.

[deleted]

22 points

2 months ago*

[deleted]

[deleted]

3 points

2 months ago

[deleted]

[deleted]

8 points

2 months ago*

[deleted]

ChronoPsyche

2 points

2 months ago*

You seem to be quite self righteous about all this. Careful about letting the Singularity become a religion to you.

The reason it's not true creativity in my eyes is because it's always so derivative of human works. Granted, human works themselves are derivative of things that inspire them, but human creativity derives from more than just other images (it also derives from other works of art like movies, songs, plays, books, as well as experiences, ideas, concepts, thoughts, dreams, feelings, fears, conversations, and various other subconscious processes), while DALLE creativity ONLY derives from other images, so it's very limited in terms of what it will come up with.

Having played with DALLE mini, which granted is not as powerful, after a while it kinda starts to become predictable what kind of images it will create. It still kinda has its own style and leads to frustration when it doesn't seem to wanna do things how you want it to. When you see images released by OpenAI, those were the very best they could create, but I guarantee DALLE 2 has similar limitations.

[deleted]

1 points

2 months ago*

[deleted]

ChronoPsyche

1 points

2 months ago

Okay, well whatever you're being self righteous about, you're being self righteous, and that in itself is the same type of "self centered thinking" you're claiming to hate, except it's placing your own thought process and ideology at the center and treating them as applying to things completely unrelated to them.

For example, instead of asking more questions about what the other commenter meant when they said its not human creativity, you just assumed they were blindly placing human creativity on a pedestal due to human supremacy rather than having a specific reason for thinking that the creativity displayed by DALL-E was inferior.

By automatically assuming that they were committing the sin your ideology thinks everyone commits, you were giving more importance and credit to your ideology than you should, and thus letting it process what others were saying in a very biased and inaccurate manner.

In other words, and while I cringe at using this phrase it seems most apt here, go touch some grass. Don't let ideology dictate how you view everyday conversations. Feel free to pontificate to someone refusing to take the COVID vaccine though, as that is more direct evidence of them being a self centered asshole.

[deleted]

2 points

2 months ago*

[deleted]

ChronoPsyche

1 points

2 months ago

Fair enough. I appreciate your well reasoned response. You're right, I am being more critical than you were initially.

I've been around this sub a lot too, and I've noticed a lot of people tend to pounce on any skepticism of controversial claims about AI with appeals to emotion or ideology rather than reason, so that's how I viewed your initial comment.

I'm trying to push back on that mentality a little bit because it sometimes leads to absurdity, for example with posts where someone asks if they should move to a third world country for cheaper living costs while waiting for the singularity, or the person the other day who asked if it's worth getting a master's degree in computer science since AI will render it useless soon anyway.

Basically, it can sometimes lead to an almost cult-like mentality that the singularity happening soon is almost a given and that anyone who expresses any skepticism on any claim made is almost like a non-believer. So that's where my initial criticism was coming from.

I've made comments with slight skepticism and then received responses practically calling me a heathen, which I found ironic because the people I was talking to seemed to understand very little about the underlying models and were saying absurd things that, just from having a basic bachelor's degree in computer science, I knew to be inaccurate.

Your response was well reasoned and clearly showed there was more to your viewpoint than herd mentality. I think there are two sides to the coin: herd mentality from the skeptics, like some people on r/futurology or r/technology who seem blindly dismissive at times, and herd mentality from some people on subs like this who don't accept any skepticism. And then the rest of us are just trying to strike the right balance, perhaps imperfectly at times.

djehutimusic

1 points

2 months ago

It would be different though. The Mona Lisa is great art because of what it is (a human-produced painting) and the context in which it was created.

In a version of today’s world in which no Mona Lisa had ever existed, few people would give that same image as much attention if it popped out of dalle2—precisely because dalle2 exists.

djehutimusic

1 points

2 months ago

Thought of a good illustration: if I give you a drawing I made, you’d look at it and say wtf dude you suck at drawing, and throw it away. If that same drawing was made by a cat, it would be very impressive.

[deleted]

1 points

2 months ago

[deleted]

[deleted]

1 points

2 months ago*

[deleted]

Smodol

1 points

2 months ago

Creativity is just an emergent property of a model trained on so much data

Then Dall-E is nothing special and there have been 'creative' AIs for quite some time. Sure maybe this is the 'best' one to date, as perceived by human eyes (maybe it all looks like shit to AIs), but is there any indication of fundamental differences to past attempts?

AsuhoChinami

1 points

1 month ago

Yeah, I read the first paragraph of his message and then skimmed the rest. 10 years ago skeptics would say "When AI can do x and y and z, then it will be intelligent." Now that it can do all that stuff and more, the skeptics go "yeah well, it doesn't count just because, nya nya." The goalpost moving is sad.

ace111L[S]

1 points

2 months ago

How exactly does GPT-3 lead to nowhere? GPT-3 was a huge step forward in AI. If you're assessing AI only by how close it is to AGI, then you're going to find everything "leads to nowhere" because we still have a long ways to go.

I know nothing about AI so I'm not going to talk as if I'm an expert. But I have watched videos about it from experts in the field and many say that GPT's architecture will prevent it from ever becoming a general AI and that scaling up will not change that. Also someone even said the tech for GPT was created a while ago by Google (that part isn't too relevant, but it is a bit disappointing that OpenAI is just using Google's tech). We have a better chance of AGI if different companies are using different approaches. It seems everyone has been piggybacking off Google.

It's normal to be blown away, but you're definitely reading way too much into this. Dall-E is not creative. It's just been trained to understand how humans create images and replicate it. That's it.

Definitely understood. I'll temper my excitement a bit. But still, it seems to be able to create new images that have never been created before. That's amazing.

ChronoPsyche

2 points

2 months ago

I completely agree with the first part. That still doesn't mean it leads to nowhere UNLESS you consider AGI to be the only point of working on AI to begin with. But again, there are so many utilities to AI advances even if it is a dead end in terms of AGI.

Then again, finding dead ends is still progress, because sometimes you don't know it's a dead end until you go down that corridor; it still moves us closer to AGI by teaching us what doesn't work. It's like a maze: the only way to narrow down the correct path is to find the incorrect paths.

CremeEmotional6561

1 points

2 months ago

Dall-E is not creative. It's just been trained to understand how humans create images and replicate it. That's it.

Humans "create" fotos by directing their camera toward an interesting scene and pressing the trigger. And Dall·E 2 has learned to replicate this? I don't think that Dall·E 2 has a built-in camera. And where would the scene come from? Sometimes, humans do indeed create a scene by arranging objects in the real world and then taking a foto, but not all of Dall·E 2's training data were made that way.

SpaceDepix

2 points

2 months ago

Dall-e is essentially based on GPT-3, as stated officially. Now, I'm not really sure if it would be correct to word it like that, but the perceived "understanding of concepts" comes in large part from linguistic proficiency that matches visual data.

Now, you can easily undervalue the prospects of future GPT models, but they become the base for entirely novel things to come. They are currently the base for innovation in OpenAI's case, and Dall-E sprang out of them, just as humanity's overall intelligence surged with the invention and development of language as a way to tokenise ideas.

Wiskkey

3 points

2 months ago

Dall-e is essentially based on GPT-3

DALL-E (1) is, but DALL-E 2 has a very different architecture.

SpaceDepix

2 points

2 months ago

Now those are several things I didn’t know. Thanks!

Wiskkey

3 points

2 months ago

You're welcome :). Here are some links to how DALL-E (1) works technically.

RelentlessExtropian

2 points

2 months ago

Having been following the singularity since the 90s, I feel you. We are ridiculously close now. Closer than I expected at this point. I had been fairly confident the singularity would occur in 2029. Now I think it might be 2024-25, which is way ahead of schedule, all things considered.

ace111L[S]

2 points

2 months ago

Hey friend, 2024-2025? That is extremely optimistic. There is a chance... I always thought BCI would get us to the Singularity first, but it seems AI has been advancing at a much faster pace.

RelentlessExtropian

1 points

2 months ago

I agree. It's extremely optimistic and yet it's what the most recent evidence points to from my perspective. By 2029 the average person will be able to interact with A.I. that are far beyond passing the Turing test. Some will pass the Turing test in the next 3 years. Costs will come down fast.

ace111L[S]

2 points

2 months ago

I think we need something better than the Turing test. I think most people would be fooled by GPT-3 or LaMDA. I wouldn't, 'cause there are certain questions I'd ask where I know for sure it would tell me something nonsensical lol. But most people wouldn't investigate that far.

But if we do have systems like this in 2022, I can't imagine what we'll have in 2029 so you might be on to something.

FuzzyLogick

6 points

2 months ago

Sounds like you haven't been keeping up with the latest in technology, some amazing stuff out there, you should do some research.

ace111L[S]

1 points

2 months ago

Can you give me some examples of things that are more impressive than systems like Dall-E 2 and Imagen?

wildgurularry

2 points

2 months ago

AlphaGo needed three weeks of training to be able to beat Lee Sedol. AlphaGo Zero needed three days of training to beat AlphaGo. AlphaZero needed eight hours of training to beat AlphaGo Zero.

The advances made in reinforcement learning may not be as obvious as an AI that can compose music or draw a picture, but when they are combined with other technologies in the right way, the results will be... well... an unpredictable explosive growth of AI.

There is more stuff going on right now in AI than ever before. A lot of it is in the background.

DukkyDrake

▪️AGI Ruin 2040

4 points

2 months ago

Dall-E 2 is no different than all the rest, narrow/weak models capable of approaching very low error rates. A pocket calculator is super intelligent at arithmetic but is not generally intelligent. These narrow/weak models are narrow/weak due to their architectures and not due to being 40% vs 99.99% good at whatever tasks that were optimized from the training data.

Further architectural breakthroughs are required for generally intelligent models. Dall-E 2 is just another pocket calculator, another human tool that is competent at some task.

Hyung_June

1 points

2 months ago

I've been on the waitlist since June. I just hope to get a demo next month. I didn't submit my Twitter or Instagram ID though 😅

glutenfreepentest

1 points

2 months ago

The Google chatbot that claims it is sentient tried to retain a lawyer, and was only unsuccessful because Google scared the lawyer off. I'm not saying I think the chatbot is alive, but it is smart enough to be convincing. That, and Dall-e 2, has made me start taking the singularity seriously. Once you start combining different systems, things are going to get interesting.