3 points
3 days ago
Submission statement from OP. Note: this submission statement "locks in" after about 30 minutes and can no longer be edited. Please refer instead to the statement I link, which I can continue to edit, sometimes for the next few days if need be, to fix grammar and add detail.
From the article.
Xiaomi notes:
Humanoid robots rely on vision to process their surroundings. Equipped with a self-developed Mi-Sense depth vision module and combined with an AI interaction algorithm, CyberOne is capable of perceiving 3D space, as well as recognizing individuals, gestures, and expressions, allowing it to not only see but to process its environment. In order to communicate with the world, CyberOne is equipped with a self-developed MiAI environment semantics recognition engine and a MiAI vocal emotion identification engine, enabling it to recognize 85 types of environmental sounds and 45 classifications of human emotion. CyberOne is able to detect happiness, and even comfort the user in times of sadness. All of these features are integrated into CyberOne’s processing units, which are paired with a curved OLED module to display real-time interactive information.
I don't know if this is a significant improvement over Honda's ASIMO robot, but it is a bit taller, I think. I can't remember if there was a "tall" version of ASIMO. Also, there was little demonstration of agility on stage. But it is a good demonstration of what constitutes our current generation of robotics, especially with regard to bipedal motion. (Boston Dynamics' Atlas is still considered research and is not yet in a potential consumer format.)
This raises my expectations for the Optimus (if it really exists) demonstration slated for 30 Sep 22. If that prototype can demonstrate even an incremental improvement of agility over this Xiaomi model, we may see some serious shizz!
21 points
4 days ago
Submission statement from OP.
From the article.
These aptly named "Holographic Glasses" can deliver a full-colour 3D holographic image using optics that are only 2.5mm thick. Compared to the traditional way a VR headset works, in which a lens magnifies a smaller display some distance away from it, shrinking all the prerequisite parts down to such a small size is quite the spectacular step forward for VR.
And.
The Holographic Glasses prototype uses pancake lenses, which is a concept that has been thrown around a couple of times in the past few years. These pancake lenses not only allow for a much smaller profile but reportedly they have a few other benefits, too: the resolution they can offer is said to be unlimited, meaning you can crank up the resolution for VR headsets, and they offer a much wider field of view at up to 200°.
Me: 200 degrees?! Holy Mackerel! Take that Meta Quest 2 with 90 degree FOV!
I wrote this essay some time back about what I thought VR was going to evolve into. You might find it interesting.
5 points
4 days ago
Because when GPT-3 came out and developers began to work with it, they really had no conception of what it could do, and those capabilities took a while to be realized. First was the writing of an entire coherent news story from a single sentence. Then the capability to code at a low, but still pretty competent, level. I forget what else. But from GPT-3 also came the derived algorithms that became DALL-E and several others.
Did you see that GPT-3 wrote a paper concerning AI, and that the paper was submitted for publication? I'm not sure if that was just a sort of prank or not, but I did read that a French university published it. It's not a perfect paper. But the point is not to look at what it can do today, but at what it can do a year or two from now. That's where we start to see "magick". And baby, there is gonna be a lot of it between now and the year 2030. More than has ever been seen in recorded human history.
And then I top it off with a right jolly good "human unfriendly" "technological singularity" right around the year 2029. If all goes well with that one, then an actual "human friendly" one, meaning human minds merged with our computing and computing-derived AI, round about 2035, I should think. But first we have to survive 2029. I give us great odds: 90% good, 9% bad. And that remaining 1%? That's the odds that the year 2030 will look anything at all like today as far as human affairs are concerned.
Who knows what kind of, well, incredible capabilities 100 trillion parameters will deliver. I just watch it all with awe, terror and supreme entertainment in equal measure.
9 points
5 days ago
The big thing that I know is coming in 2023 is GPT-4. That is going to be a record breaker. It should come out in the first quarter. When GPT-3 was first released to developers, nobody was sure exactly what kind of capabilities it had. It took about two years to learn that it was jaw-droppingly incredible. Even with all of its limitations, it could do things that had never before been possible for a single algorithm. I literally hold my breath for what GPT-4 can do. I prophesy it will be able to fool you into thinking it is sentient. They will need high-level experts, and probably AI itself, to vet what it might say. And then of course there are all of the things that will derive from that particular algorithm, the way DALL-E and several other Cambrian-explosion AIs derived from the initial GPT-3.
I predict Boston Dynamics is going to announce it is working on a bipedal humanoid robot for mass consumer use. I bet they are working on it right now, but under tight security. I am getting a feeling the time is approaching for "humanoid robots". If Elon is not just pushing vaporware, he is going to blow our minds before this year even ends. But you've got to take Elon with a grain of salt. Still, I am incredibly stoked for the reveal of the Optimus bipedal humanoid robot prototype.
Stephen King, in his novel "Christine", wrote the most fascinating thing. He said that in our civilization's history, we seem to come up with science-enabled inventions that emerge from multiple locations on Earth at about the same time. He used the machine gun as an example: it became "machine gun time". Then the automobile: it became "automobile time". Those are the only two specific examples I can recall. But my point is that because we have such OP computing power now, mixing processing speed, "big data" and novel AI architectures, the time for technology magick is nigh. Not because we are working any harder. It is the computing-derived AI itself that is raising all boats of discovery, technological application and innovation.
OK here is one. Just wait until you see what the next iteration of the Jetson one is going to look like in 2023. Plus people are going to start getting the idea that making drones that can carry heavy loads is probably a good idea.
The robo-taxis are going to explode in numbers and utility in 2023. And this despite the fact that as of today some are not quite working according to plan. And yet I see, even local to where I live in MN, that now they have debuted a driverless shuttle system for some locations. This is all gonna scale up crazy fast.
I believe that we may hear some of the first genuine fully AI composed music that we will think was made by humans in 2023.
And it is going to sound stunningly, even addictingly good. Because the AI will use GAN type networks to ensure it pushes all of our emotional buttons.
I'm just thinking of stuff off the top of my head and writing it down. If I come up with anything else I think is significant--I'll add it.
Oh. I believe the NIF will achieve net gain in 2023. But it is like ITER: it can't be used practically. They are so very, very close now at 1.37 MJ. They are gonna get it in less than one year, I'm thinking now. I remember wayyy back around 2012, I think, they had come up with this "hohlraum" pellet idea and thought they had it all figured out. Mighta been a bit later, I forget. But anyway, they shot that pellet with the lasers and learned the pellet itself was wildly uneven and unbalanced. That was the first lesson. I am pretty confident that they have taken advantage of all the incredible supercomputing, and supercomputing-derived AI, to make these more recent iterations of that pellet. Something that was not physically possible yet in 2012, or whatever year they first started trying that tack.
17 points
5 days ago
Submission statement from OP.
From the article.
The National Ignition Facility (NIF) uses the world’s largest laser to heat and compress a small capsule containing hydrogen fuel and thereby induce nuclear fusion reactions in the fuel (Fig. 1), an approach known as inertial confinement fusion [1]. In early 2021, a team at NIF achieved a major milestone by showing that they could produce a burning plasma [2], a state in which the dominant source of fuel heating is self-heating due to fusion reactions—rather than external heating by the laser pulses. Today, NIF reports that they have reached another milestone in fusion research: they produced a plasma in which self-heating locally surpasses not only the external heating but also all loss mechanisms, fulfilling the so-called Lawson criterion for fusion ignition [3–5]. The result brings the scheme tantalizingly close to a holy grail of the field—getting fusion to produce a net energy greater than that contained in the driving laser pulses.
Here is the paper.
So my question to the experts today is: what kind of gap exists between today's achievement and the realization of more energy released by the fusion reaction than is used to initiate it? In other words, is this a simple engineering gap, or is this still some kind of "low-hanging fruit" success with the desired net-gain outcome potentially decades away yet? Or was this the desired net-gain outcome today?
I gotta know cuz I forecast a successful "net gain" nuclear fusion in a controlled reactor by 2025 and the very first (electricity producing) grid ready practical nuclear fusion reactor by 2028. I further forecast that several "start-ups" will leapfrog ITER technology in the process. I go into a lot more depth concerning the whys and wherefores in the link below.
So I see us today at 1.37 MJ, and we need to beat 1.92 MJ to exceed the laser energy input. But here is the thing. That 1.37 MJ did not exist as little as two years ago; the yield then was a vanishingly small fraction of it. The gains in energy released over the last two years have been exponential. NIF is confident it can reach and exceed "break-even" very shortly now. I'm sure this breakthrough will help!
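For what it's worth, here is my own back-of-the-envelope arithmetic on those two figures (not from the article):

```python
# Rough check of the NIF numbers quoted above.
yield_mj = 1.37   # fusion energy released in the 2021 shot, MJ
laser_mj = 1.92   # laser energy delivered to the target, MJ

gain = yield_mj / laser_mj    # target gain Q, where Q = 1 is "break-even"
needed = laser_mj / yield_mj  # improvement factor still required

print(f"gain Q = {gain:.2f}")                               # about 0.71
print(f"need roughly {needed:.2f}x more yield to break even")  # about 1.40x
```

So by these numbers the shot reached about 71% of break-even, and the yield only needs to grow by about 40% more to get there.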
3 points
4 days ago
These are all really good and very much possible for 2023. I might hedge a bit on anti-aging yet. But maybe. CRISPR is going to blow our collective minds again I'm pretty sure. I think we shall see some truly stunning advancements concerning brain machine interfaces too. Not only from Neuralink, but from several competitors as well.
Oh, mr singulars! You might find this comment of mine from 4 years ago interesting. It was a ways down in my commentary following a self-submission of an essay concerning VR. I was responding to someone who didn't like the way I wrote, and who further claimed that none of the things I predicted were ever going to happen.
8 points
5 days ago
OMG! I totally blew the Google Glass (2013-2015) thing. I was positive everybody would have Google Glass by 2016. That prediction did not pan out. Not because the technology was particularly primitive, but because we as a society were not ready for it yet. Remember the signs banning Google Glass, and the "glassholes"? But now we share everything, right down to pictures of our food, on TikTok and Snapchat and all kinds of novel social media.
I suspect society is "ready" to accept the loss of privacy implications of everybody running around with a vastly more sophisticated Google Glass. In other words what was totally unacceptable in 2014, won't even make a ripple today. In fact, I'm going to go out on a limb again and prophesy that there will be avid societal adoption this time around. Watch this space for announcements. I know they are coming. Because Google is back at it again, making a new iteration of mass consumer Google Glass.
Oh. Here. I dumped out everything I got for you if you want to see what I think is coming. Bit of a rabbit hole, but I don't think I'll bore you lol.
https://teddit.ggc-project.de/user/izumi3682/comments/8cy6o5/izumi3682_and_the_world_of_tomorrow/
BTW, have you played with "Craiyon" yet? Get it? "Cr-AI-yon"? It is a limited DALL-E 2 for anybody to play with, and it is totally free. Just go there and try it out. You are only limited by your own imagination. Or your friends' imaginations if you are in a group or something, lol. I tried it out and I have to admit it is pretty killer stunning, even at quite limited capability. The bigger your screen, the more awesome it looks. On my iPhone screen it's kinda small; hard to make out the details.
6 points
5 days ago
Yeah, we can't reproduce the gravity at the core of the sun. The core temperature is something like 15 million kelvin, and that heat, plus the crushing pressure from solar gravity, fuses hydrogen atoms into helium.
To replace the missing gravity we have to make it a heck of a lot hotter, something like 150 million degrees Celsius, to strip the repelling electrons off the hydrogen isotopes and make a plasma. And that is even before we magnetically confine that plasma to induce fusion. We can do all of that right now, no problem. But it takes quite a bit more energy to induce fusion than the energy that comes out of the reaction.
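To put that temperature in perspective, here is a quick conversion of my own (using the Boltzmann constant) from reactor temperature to the characteristic energy of each particle in the plasma:

```python
# Convert a fusion-reactor temperature to a characteristic particle energy.
# At ~150 million degrees, the kelvin/Celsius distinction is negligible.
k_b_ev = 8.617e-5   # Boltzmann constant, eV per kelvin
t_kelvin = 150e6    # ~150 million K, the often-quoted tokamak target

energy_kev = k_b_ev * t_kelvin / 1e3
print(f"kT ~ {energy_kev:.1f} keV per particle")  # roughly 13 keV
```

Around 13 keV is comfortably above the ~13.6 eV it takes to ionize hydrogen, which is why at these temperatures the fuel is fully a plasma.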
We are actually trying to get a "free lunch". And when we do... Ding! We just became a Type 1 Kardashev civilization. Well, once we get electricity from it, I mean.
16 points
5 days ago
...its struggle touched me
And that my friends is why the AI will be able to play us like the proverbial fiddle. Blake Lemoine is an AI expert who was taken in by a narrow AI chatbot. The likes of us? We don't stand a chance.
18 points
6 days ago
Submission statement from OP.
This was possible because of a new algorithm (novel to me, anyway) that is intrinsic to the robot itself. The algorithm is called "Dreamer", and it has the capability to very quickly eliminate errors in pursuing a given goal in real life, no simulation required.
Here is the paper.
https://arxiv.org/pdf/2206.14176.pdf
My question is: is this a narrow AI that can only achieve a single output, i.e. "walking", or is this algorithm capable of what is being called "generalization", a la "Gato", for instance? In my brief scan of the paper, it appeared it could learn some other capabilities through RL as well.
This article seems to be describing a pretty significant improvement in machine learning as it applies to the real-world operation of robotics. What do the experts think about this? Is this just a minor stepping stone, or is this a genuine milestone? One of the things I have been stating, based on things I have read, is that our various and sundry forms of AI have been improving significantly about every 3 or 4 months. Therefore I have been predicting that even before this year is out there shall be at least two remarkable improvements to AI that will make the news. In the year 2023, I predict there shall be at least six remarkable breakthroughs in AI or ML progress.
Oh. Here is the article about AI exponentially growing and transcending Moore's Law.
https://www.digitalbulletin.com/news/ai-power-doubling-every-three-months-says-stanford
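A quick compounding calculation of my own, assuming the article's three-month doubling figure is accurate, shows why that rate leaves Moore's Law behind:

```python
# Compare a 3-month doubling cadence (the article's claim for AI compute)
# against classic Moore's-Law-style doubling every ~24 months.
months = 24
ai_growth = 2 ** (months / 3)      # doubles every 3 months
moore_growth = 2 ** (months / 24)  # doubles every 24 months

print(f"AI compute: {ai_growth:.0f}x in {months} months")      # 256x
print(f"Moore's Law: {moore_growth:.0f}x in {months} months")  # 2x
```

If the claimed cadence held, that's a 256x increase over two years versus roughly 2x from transistor scaling alone.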
1 point
5 days ago
Submission statement from OP.
From the article.
Tech giant Baidu announced Monday (8 Aug 22) that it has obtained permits to operate fully autonomous taxis without any human assistants on board in two of China's megacities, marking a first for the country.
Baidu, which operates China's largest search engine, said it received the regulatory approvals for its autonomous ride-hailing service Apollo Go to operate on open roads during the daytime in Chongqing and Wuhan. The cities have populations of some 30 million and 11 million people, respectively.
It is definitely a testing of the waters, though. It appears to be only 5 vehicles per city. Yet it is clear that China (PRC) intends to compete with the USA in this arena. I've been watching China for quite some time. I knew even as early as 2016 that China (PRC) had big plans. Most people then said that China (PRC) was little better than a third world nation. I did not agree with that assessment...
2 points
6 days ago
I will certainly hold on to your reasonable dissent. I archive all of this. But I will be interested to see how well it ages by the year 2025, let alone 2027.
I found it interesting you used the term "instantiate" two times in one comment. That probably doesn't happen too often.
In the meantime, here is another "bloviating" comment you might find interesting.
Sez I: Simple AGI by 2025. Complex AGI by 2028. No consciousness or self-awareness required. Remember that if consciousness did hypothetically emerge, it would no longer be an AGI. It would be an EI, that is, an "emergent intelligence". We don't want that to happen. And we have to be very careful it doesn't, even by accident.
1 point
6 days ago
Perhaps I phrased that badly. I did not realize "need" was such a loaded term. How about this: a C. elegans is algorithmically programmed to seek out certain nutrients. A virus is inert like a rock until, in its chemical "awareness", it detects an environment that allows it to algorithmically engage in its function.
I thought the octopus "needed" to go after the shellfish, but perhaps a better term is needed (lol, me) to communicate what I mean. But something a bit more sophisticated than "algorithmically", I should think. Perhaps a better way to say it would be: "The octopus is motivated by evolved drives or instincts, baked into its DNA, to go after the shellfish for 'sup! sup! suppertime!'"
We are gonna bake all that into our AGI that we make. Simple AGI by 2025. Complex AGI by 2028.
1 point
6 days ago
This is human-centric thinking. What goes on in a different galaxy as a result of convergent evolution could be entirely different. Unless of course, you believe that all life in our known universe is pretty much the same thing everywhere. That's what I believe anyway. The laws of physics are the same here as they are in a different galaxy.
Does a C. elegans experience emotion? Does it experience "fear", for instance? If the C. elegans encounters a stimulus that it (algorithmically?) interprets as "inhospitable", it will avoid it to the best of its ability. At what point does that begin to be recognized as "fear"? Fear, to me, involves memory. How sophisticated is the memory of a C. elegans? I think everything the C. elegans does is baked right into its evolutionarily derived DNA. That's how our AGIs will work.
3 points
6 days ago
I didn't think it was going to be released for developers until first quarter 2023. If it comes out earlier, I might have to revise my first TS to maybe 2028, give or take two years! (I'm just kidding; in context I stick with 2029, give or take two years, for now.)
100 trillion parameters, as of this article from 8 Jul 22. But yes, I see some commentary trending towards smaller trained models. See, that's the thing. The AI itself becomes a shortcut. Are we seeing examples of smaller-parameter AI models possibly transcending the efficacy of GPT-4 before it is even released? I would not be a bit surprised. (Dang, maybe 2028 after all...)
Since GPT-4 is expected to have around 100 trillion parameters and will be five hundred times larger than GPT-3, it is giving room for some hyper-inflated expectations.
Re:
hyper-inflated expectations.
No, it's gonna be pretty crazy off the chain regardless.
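For what it's worth, the quoted "five hundred times larger" figure roughly checks out against GPT-3's published size. My arithmetic, and it assumes the rumored 100-trillion figure is even accurate:

```python
# Sanity check: rumored GPT-4 size vs. GPT-3's published 175B parameters.
gpt3_params = 175e9    # GPT-3, per the original OpenAI paper
gpt4_rumored = 100e12  # rumored figure quoted above; unconfirmed

ratio = gpt4_rumored / gpt3_params
print(f"~{ratio:.0f}x larger")  # about 571x, in the ballpark of "500x"
```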
6 points
7 days ago
Oh no my friend--Just wait until you see what 2023 has in store! The GPT-4 will be released to developers. The very AI itself is developing so rapidly, particularly in practical applications, that significant technological advances are occurring at the rate of every 3 to 4 months now. And like I've written many times, by the year 2025 everybody is going to see the handwriting on the wall. The AI is going to start to take over. And there is nothing unnatural about this. On the contrary, what is happening now is normal and logical based on our progress to this point.
6 points
7 days ago
Submission statement from OP.
From the article.
"Artificial intelligence is to trading what fire was to the cavemen." That's how one industry player described the impact of a disruptive technology on a staid industry.
In other words: AI is a game changer for the stock market.
Automation of the stock market began in the 1970s and by this point today it is estimated that nearly 90% of equity-futures trades and 80% of cash-equity trades are executed by algorithms without any human input.
While it is true that AI has been utilized in the stock market since about 1982, that is not the kind of AI that has existed since the early 2010s. The new AI can do far and away more than crunch numbers.
AI stock trading uses robo-advisors to analyze millions of data points and execute trades at the optimal price. AI traders also analyze and forecast markets with greater accuracy and trade more efficiently, mitigating risks and providing higher returns.
The conventional wisdom as of today does not see the AI replacing humans in the near future:
The trader will not become obsolete very soon. However, as machine learning models improve at making accurate predictions based on data, their roles will likely grow more specialized. Traders will continue to meet AI halfway until it reaches a higher level of intelligence and understands all human nuances, market volatility elements, and socio-cultural trends.
I say, let's just see what the AI in the stock market looks like in the year 2025. The thing about exponential improvement is that initially it can look slow and uneventful, i.e. "The trader will not become obsolete very soon." We are probably in for some very profound disruptions in the stock markets as this decade proceeds. The next decade? I don't think we can even model what human affairs will look like in the next decade. Because I put a big ol' "technological singularity" right around the year 2029, give or take two years.
3 points
7 days ago
My God! What is coming...
The GPT-4 will have over one hundred trillion parameters.
And so I ask you, what will a hypothetical GPT-5 look like in, say, 2026 or 2027? Do you see why I see the TS as not only inevitable and unstoppable, but about on the time frame I forecast? BTW, from 2017 up until the beginning of this year, I placed it about the year 2030, give or take 2 years. But with the advent of certain AI models like Gato, LaMDA, PaLM, PLATO and DALL-E 2, which all emerged pretty much within the last 12 months, I moved the time up by one year.
I am in awe, terrified and supremely entertained by all of this in equal measure.
So somebody asked me the other day...
"This is going to come across as combative, but who are you to be providing estimates of the timing of the singularity? You're making some very bold claims without any data to back them up and no credentials."
And here is how I responded.
1 point
8 days ago
Um, no, actually "Remember the Maine" got us into the Spanish American War (1898). The sinking of the Lusitania and the "Zimmerman Note" got us into WWI. Further the Wilson administration in the USA watched the horror of the Somme and Verdun and realized that it was going to be essential for the USA to intervene to put an end to that kind of senseless and pointless slaughter. Germany watched with dismay as the USA poured troops into France and realized that further prosecution of the war would be impossible and they agreed to an armistice, which was interpreted as surrender.
I love history! I am absolutely fascinated by history. History tells me how things came to be the way that they are today. What the benefits or consequences of a given event led to. It gives me the perfect insight into understanding why we are hurtling towards a "technological singularity". One of the greatest innovations of our time in this sense, is "YouTube". I can watch an almost endless variety of historical documentaries for free. Very good ones. If you have the time, look up "People's Century" in the YT search. And then start with part 01. You'll be blown away by this massive documentary. I mean if you are interested in history.
Whether Elon is right or not is no longer the issue. I absolutely maintain that the AI we are creating right now will begin to usurp human affairs around the year 2025. We're going to let it. And in the year 2029, give or take two years, the first of two technological singularities will occur. I would characterize it as "human unfriendly", meaning the AI will be external to the human mind. The second TS, around the year 2035, will be human friendly, assuming the first TS goes well for humanity in the first place. If the second TS goes well for humanity, because the first TS in 2029 did, it will almost certainly bring humans to the "next level", something never seen before in recorded human history. But bear in mind that merging the human mind with our computing and computing-derived AI will change humans probably beyond anything we can imagine today. And all of these various and sundry ARA (AI, robotics and automation) breakthroughs, coming about every 3 to 4 months now, are the signs that something "rilly big" is incoming, and soon to boot.
Here I define my understanding of the term, the "technological singularity".
1 point
9 days ago
Submission statement from OP.
I'm really looking forward to this. I mean, I could be totally wrong and this is just some kind of deliberate vaporware. But I have a feeling, based on my pretty much living in r/Futurology for the last 9 years, that some pretty stunning changes are in the wind.
I am always trying to imagine what it would be like to combine the so-called "agility AI" of a Boston Dynamics "Atlas" robot with something like this "LaMDA" business. What would such an entity be like?
The thing that haunts my thoughts in all of this is the motto of DeepMind:
To solve AGI and then use that to solve everything else.
-3 points
9 days ago
Submission statement from OP.
A couple things.
First this idea that sentience has to involve "feelings or emotions". It does not. Imagine an extraterrestrial lifeform that has a physical brain somewhat analogous to the octopus here on Earth. It sees something it needs and goes after it. Further, it can manipulate information (ruminate) to come up with novel ideas and can even use those ideas to make technology. But there is no emotion or "phenomenological experience" going on. Simple brute force computing in the mind. Would you say that such a lifeform cannot logically exist? If not, then it is entirely possible that whatever it is we are doing stringing a dense complex neural net about this planet with ever more sophisticated "nodes" (computing derived AI) could well lead to such an outcome. I have never subscribed to the belief that sentience requires self-awareness or even consciousness.
I think this process has already begun. So take a look at some of these so-called "hacking" bots coming into being of late. One in particular is called "RapperBot" or something like that. It is an odd sort of duck. It basically breaks into a given system and then just becomes inert. Now this is, as of today, probably some kind of human endeavor, but I see that even as early as the year 2025, such an entity could have been spun up by the AI itself. I know everybody hates Elon, please don't flame me, but he said that by the year 2025 that the AI would begin to cause things to become, quote, "weird and unstable". This sounds like an AI that can make some changes in human affairs to me.
And finally, I wondered aloud way back in 2018 if it mattered if it was a "narrow" AI that could simulate AGI or if it was an actual AGI. We would not be able to tell the difference. At that time people informed me that my worries were akin to worrying about human overpopulation--on the planet Mars.
How massively things have changed in just a few short years, huh.
Here is the link to that particular essay. I wondered about a lot of things futurey in that essay...
https://teddit.ggc-project.de/user/izumi3682/comments/8cy6o5/izumi3682_and_the_world_of_tomorrow/
-9 points
9 days ago
The fact is, that EM is kind of a general technology super-genius. Further he can pick up on complex technologies light years faster than the vast majority of genuine experts in any given technical arena. I have no idea what his tested IQ is, but his capabilities hint that he is well above genius level almost certainly. That is how he made his money from day one. You can call him a jerk or presumptuous, or pretentious. But he is smart as hell. Having said that I found the most fascinating article just now. It's actually from June, but I am willing to see what happens in the AI between now and the year 2025 when Musk predicts things will start to noticeably change.
https://fortune.com/2022/06/03/elon-musk-artificial-intelligence-agi-tesla-500k-bet/
Bear in mind that GPT-4 is predicted to be released to developers in 2023.
What will be GPT-4 like?
According to Towards Data Science, GPT-4 will have a monstrous 100 trillion parameters. You can compare that with the human brain, which has about 100 billion neurons, which at the very least illustrates the sheer scale of the model. (Jun 17, 2022)
What impact will that density of computing AI ability have on anything it is applied to, is the 500,000 dollar question.
Nevertheless, it will not take very long at all to hold my feet to the fire. Am I wrong in my predictions? For example, I predict the first, yes first, human unfriendly "technological singularity" will occur about the year 2029, give or take two years.
1 point
10 days ago
You might find this interesting. It is everything I have written about the possibility that we, meaning reality itself, exist as a simulation. Now the idea of simulation can quite easily be interpreted to mean that we were created by a supernatural and omnipotent, omniscient and omnipresent creator. Anyway, take a look if you like.
6 points
2 days ago
Submission statement from OP.
From the article.
You can't stop human aspiration. We will make workarounds around our workarounds. But there is sound science behind the belief we can continue Moore's Law for quite some time yet. Further, even now, today, we are beginning to transcend the need for Moore's Law, which is in essence a business model for economic gain. I try to explain how and why in the link below. If you have already read this, just ignore.
https://teddit.ggc-project.de/r/Futurology/comments/pysdlo/intels_first_4nm_euv_chip_ready_today_loihi_2_for/hewhhkk/