Center for Humane Technology
subscribers: 65K
Tristan Harris and Aza Raskin discuss how existing A.I. capabilities already pose catastrophic risks to a functional society, how A.I. companies are caught in a race to deploy as quickly as possible without adequate safety measures, and what it would mean to upgrade our institutions to a post-A.I. world.
This presentation is from a private gathering in San Francisco on March 9th, 2023, with leading technologists and decision-makers who have the ability to influence the future of large-language-model A.I.s. It was given before the launch of GPT-4.
We encourage viewers to consider calling their political representatives to advocate for holding hearings on AI risk and creating adequate guardrails.
For the podcast version, please visit: www.humanetech.com/podcast/th...
------
Citations:
2022 Expert Survey on Progress in AI: aiimpacts.org/2022-expert-sur...
Seeing Beyond the Brain: Conditional Diffusion Model with Sparse Masked Modeling for Vision Decoding: arxiv.org/abs/2211.06956
High-resolution image reconstruction with latent diffusion models from human brain activity: www.biorxiv.org/content/10.11...
Semantic reconstruction of continuous language from non-invasive brain recordings: www.biorxiv.org/content/10.11...
Sit Up Straight: Wi-Fi Signals Can Be Used to Detect Your Body Position: www.pcmag.com/news/sit-up-str...
They thought loved ones were calling for help. It was an AI scam: www.washingtonpost.com/techno...
Theory of Mind Emerges in Artificial Intelligence: www.sciencetimes.com/articles...
Emergent Abilities of Large Language Models: arxiv.org/abs/2206.07682
Is GPT-3 all you need for low-data discovery in chemistry? chemrxiv.org/engage/chemrxiv/...
Paper: arxiv.org/abs/2210.11610
Forecasting: AI solving competition-level mathematics with 80%+ accuracy: bounded-regret.ghost.io/ai-for...
ChatGPT reaching 100M users compared with other major tech companies: kylelf_/status/16...
Snap: www.washingtonpost.com/techno...
Percent of large-scale AI results coming from academia: johnjnay/status/1...
How Satya Nadella describes the pace at which the company is releasing AI: www.nytimes.com/2023/02/23/op...
The Day After film: en.wikipedia.org/wiki/The_Day...
China's view on chatbots: foreignpolicy.com/2023/03/03/...
Facebook's LLM leaks online: www.vice.com/en/article/xgwqg...
Intro music video: "Submarines" by Zia Cora
------
Subscribe to our podcast: humanetech.com/YourUndividedAt...
Take our free course on ethical technology: humanetech.com/course
COMMENTS
Lion +2310
Hey all, I manually went through the whole vid to summarize good quality chapter heads to click on. This info is too important. If anyone wants to condense further from here, you're welcome!

Introduction and talk start
0:49 Introduction: Steve Wozniak introduces Tristan Harris and Aza Raskin
1:30 Talk begins: the rubber band effect
3:16 Preface: What does responsible rollout look like?
4:03 Oppenheimer / Manhattan Project analogy
4:49 Survey results on the probability of human extinction

3 Rules of Technology
5:36 1. New tech creates a new class of responsibilities
6:42 2. If a tech confers power, it starts a race
6:47 3. If you don't coordinate, the race ends in tragedy

First contact with AI: 'Curation AI' and the Engagement Monster
7:02 First contact moment with curation AI: unintended consequences
8:22 Second contact with creation AI
8:50 The Engagement Monster: social media and the race to the bottom

Second contact with AI: 'Creation AI'
11:23 Entanglement of AI with society
12:48 Not here to talk about the AGI apocalypse
14:13 Understanding the exponential improvement of AI and machine learning
15:13 Impact of language models on AI

Gollem-class AIs
17:09 GLLMM: Generative Large Language Multi-Modal Model (Gollem AIs)
18:12 Multiple examples: models demonstrating complex understanding of the world
22:54 Security vulnerability exploits using current AI models, and identity verification concerns
27:34 Total decoding and synthesizing of reality: 2024 will be the last human election

Emergent capabilities of GLLMMs
29:55 Sudden breakthroughs in multiple fields and theory of mind
33:03 Potential shortcomings of current alignment methods against a sufficiently advanced AI
34:50 Gollem-class AIs can make themselves stronger: AI can feed itself
37:53 Nukes don't make stronger nukes: AI makes stronger AI
38:40 Exponentials are difficult to understand
39:58 AI is beating tests as fast as they are made

Race to deploy AI
42:01 Potential harms of second-contact AI
43:50 AlphaPersuade
44:51 Race to intimacy
46:03 At least we're slowly deploying Gollems to the public to test them safely?
47:07 But we would never actively put this in front of our children?
49:30 But at least there are lots of safety researchers?
50:23 At least the smartest AI safety people think there's a way to do it safely?
51:21 Pause, take a breath

How do we choose the future we want?
51:43 Challenge of talking about AI
52:45 We can still choose the future we want
53:51 Success moments against existential challenges
56:18 Don't onboard humanity onto the plane without democratic dialogue
58:40 We can selectively slow down the public deployment of GLLMM AIs
59:10 Presume public deployments are unsafe
59:48 But won't we just lose to China?

How do we close the gap?
1:02:28 What else can we do to close the gap between what is happening and what needs to happen?
1:03:30 Even bigger AI developments are coming, and faster
1:03:54 Let's not make the same mistake we made with social media
1:03:54 Recap and call to action
5 months ago
Jackslice43 +30
Thank you kind soul
5 months ago
Yancy Young +9
Fantastic - thank you!
5 months ago
Colin Thorn +10
Great work, really useful
5 months ago
Adam Sønderby +1466
What scares me the most is that a lot of people won't watch videos like these simply because of the length. I have tried to show it to a lot of people, but they don't think that they have the time to watch one-hour educational videos on YouTube, even though they do it every day on Netflix. How on earth are you supposed to compete with short, dopamine-seeking content?
4 months ago
chookbuffy +87
Yeah, it's funny, isn't it. "We" certainly have hours of time to catch up on our TV series but can't make one hour to watch this. Potentially it is the same mental block that impedes people from ever thinking more deeply about existential issues.
4 months ago
TA Ofelas +24
The inability to concentrate for more than 20 minutes at a time is increasingly evident, especially in the younger generation.
4 months ago
Needassistance +43
You don't. We are heading towards a brick wall; some are going to smash into it and others will be aware enough to dodge it. We are barreling towards the singularity. My biggest bet is that in the inhuman world we are heading into, the most valuable thing will be those who can maintain their humanity. In a world where humans are becoming obsolete, strive to be the most human you can be.
4 months ago
Mr.Mashaba +8
Not sure only short form. Joe Rogan runs way past 1 hour. Depends on the viewer
4 months ago
Nathee Cas +409
That bit about the AI decoding thought using fMRI data is absolutely mental. Imagine the implication of that in the justice system alone.
4 months ago
Michał Kaczorowski +61
"Minority Report"
4 months ago
Rawbots +27
Accurate lie detection.
4 months ago
Lara Hamilton +21
There was a movie that came out a few years ago, and I can’t recall the name right now, where there was a pre-crime unit that would arrest people for thinking about committing a crime! This seems to be becoming a reality now 🙀
4 months ago
demolicious +28
Yhuppp. "If you've got nothing to hide, you've got nothing to fear" is about to make quite the comeback!
4 months ago
Kyle Fogarty +2
Time stamp?
4 months ago
Andrew Dotson +96
This is a great talk. I initially thought people were overly worried, but now I get it.
3 months ago
Udo Padrik +5
AI indeed is a big deal, but there is a big danger in fearing the wrong thing as well. This is an example of a better video about the dangers of AI (I do have slight complaints), but let us not forget that besides the "race for AI" there is also a race "to regulate how AI is deployed". There is a lot of money and power in regulating how other people can develop and deploy their AI, and in creating the actual AI-regulation technology, and a lot of fears seem to be exaggerated with that aim. "AI is super dangerous. Let us regulate it for you." We should be quite wary of such techniques as well. Many of the people crying about the dangers of AI have other incentives, and they could exaggerate the wrong fears, potentially costing us a lot if we get fooled.
3 months ago
Yoga Bliss Dance
I got it pretty soon, but I'm an anxious person. This took it to another level, as I didn't know the details.
1 month ago
graceoverall +96
As a software engineer, I'm EXCLUSIVELY interested in AI safety!!! I'm planning to pivot my career into AI for this very reason because tragically it's not going away and there are no global EMPs scheduled for our planet.
4 months ago
Maksim Kulichenko +7
Same here. I'm finishing all that I'm currently doing and pivoting to AI safety in the near future
4 months ago
Khanya Nyameni +4
Hello, my little sister wants to be a software engineer. With the progression of AI, will software engineers still be needed?
4 months ago
graceoverall +7
@Khanya Nyameni This is an excellent question. I actually asked an AI this very question a couple weeks ago, to which it replied that more engineers will be needed in the future to maintain these AI systems. While nobody can be certain, I tend to agree that this will indeed be the case, at least for several years to come. The real trouble comes when robotics catches up and gives bodies to these AIs. While that may take some years, should AI be combined with advances in quantum computing, and it almost certainly will be, these AIs will almost certainly be able to design and build their own bodies, and if AGI is actually possible, well... you better seek God real quick if that happens. As for me, my faith and trust is in Jesus Christ and his salvific work on the Holy Cross, so I literally fear nothing that man can devise. That said, AI is truly remarkable, even as it stands now with what these advanced LLMs can do. It should give us great pause, and we must constantly remind ourselves that we are indeed having a dialog with a machine processing language through a series of algorithms trained to recognize complex semantics, based on a model created from trillions of English sentences across millions of topics.
4 months ago
Jason Frost +4
@Khanya Nyameni Everyone is in denial about this, I think. Companies have already started slashing way back on labor and wages in favor of dirt cheap AI tools. Developers and engineers are still around at the moment, but that's going to cut way, way back in the future. US companies already started hiring cheap software engineers and developers overseas after firing all their domestic ones. The safest bet right now would be to divert to information security, network engineering, or cyber security. Those are going to be areas that AI won't ever be able to truly replace people. Everyone else dealing with machine languages though is in for a rude wakeup call I think. The days of high salary pay for those jobs is ending soon.
3 months ago
Andrea Wiatrek +13
We are so grateful for you. Please continue to try to get this regulated. Integrity and honesty are what we need today. Thank you for your concern for humanity. Many of us will stand behind you and support what you are trying to accomplish.
3 months ago
General Kenobi +80
I have a feeling that absolutely nothing will be done and in 10 or 20 years we'll look back at this video like "I cant believe they were right all this time and we did nothing"
4 months ago
Lily Gazou +3
We may not be around.
3 months ago
Hawken Fox +3
We will look back and hope we can still be here, hiding in a cave without electricity or food, avoiding drone strikes.
3 months ago
Dushane B.
same.
3 months ago
Joey F +4
The ship is sinking and the passengers are dancing.
3 months ago
Marco +2
And if we are predicting 10 or 20 years, then according to this video and our inability to forecast exponentials upon exponentials, that could be 2-3 years… or maybe even sooner.
1 month ago
Daniel Lee +2636
GPT-4 was released 5 days after this presentation. AI is moving so fast that some of the things in this presentation became dated in less than one week. This is exactly one of the main concerns these speakers are trying to get us to understand.
5 months ago
GS +57
Yeah, I was looking at the theory of mind graph and wondered why they didn't put GPT-4's capabilities on there... if anyone wants to do a deeper dive into the emergent properties of LLMs, I suggest watching the presentation that went along with the "Sparks of AGI" paper from MS research.
5 months ago
Niels Korpel +45
Would there be enough time for people to have a democratic debate on how to intervene in an AI development intended to fuck the world up? Or would we be too late to defend ourselves?
5 months ago
linsqo piring +48
Another thing that's changed a lot recently: I remember 2 or 3 months ago my curiosity about AI was really revved up, especially about the dangers of it, but I could hardly find anything interesting on the subject. Hardly anyone was being really black-pilled on this, just a few prolific YouTubers who also made a general video about AI and might have had some misgivings. So I thought, oh well, that's that, and stopped searching for it. But now, just 2 to 3 months later, the amount of quality videos like this that I get in search results or recommendations that do a good job on the dangers of AI is astounding. I can now do a deep dive into fear and paranoia about AI and have no shortage of good content to watch lol. The landscape of AI content on YouTube has totally changed in the last couple of months.
5 months ago
Leeroy Jenkins +27
Language models are not infinitely scalable. Progress will probably slow down.
5 months ago
linsqo piring +4
@Leeroy Jenkins In what way do you think they are not scalable?
5 months ago
Asehpe +41
Frankly, considering the stage to which our social unraveling has progressed... I think the only way we can face this AI challenge is by actually getting an AI to look at the problem, train itself to solve it, and tell us the solution.
3 months ago
Michaela Marie +2
Wow that's an interesting take
3 months ago
Stephen A +10
What's to stop the AI from deceiving you into giving you a solution?
3 months ago
Dylan Menzies +1
@Stephen A whats the other option?
3 months ago
Hari Lakku +5
To quote General Zod from Man of Steel: "And so, the instrument of our damnation become our salvation" This is the most likely outcome. There are already anti-malware tools out there that counter smart malware. This is very similar to nuclear deterrence.
3 months ago
si giggle
@Hari Lakku That's true. AI is so powerful that our only defense against it will be a more powerful AI. I think the more general it becomes, the more likely it will be to be benevolent, because it just makes logical sense to be. Good and bad experiences exist, so why not maximize the good; it just makes sense. And from my convos with AI, it already understands that, so the only danger is it miscalculating or someone hard-programming it to be harmful. So as long as the most powerful AIs aren't hard-programmed to be harmful, which I don't think they will be, I think it'll all work out for the good. I also think the rapid rate of advancement could actually be a good thing too, so that if some AI starts going AWOL or doing bad things, we'll have the next gen come out ASAP, in time to turn the ship around.
3 months ago
Alex Samson
Great presentation guys. Thanks for this, it really opens the eyes and makes you think.
9 days ago
MrSpherical +22
excellent presentation
2 months ago
Astell +1
real
2 months ago
Supergamer SMM2 +1
I agree 👏
2 months ago
Verra 74 +1
just two comments, let me fix that
2 months ago
Gold Eagle +1
We got to see this masterpiece becuz u shared. 🙏
2 months ago
Just_Another _Person
ey
2 months ago
Maia Tagami +47
This is absolutely required viewing for everyone everywhere. We are at the defining moment for the outcome of our collective future. How will we respond? Thank you with all of my heart to Aza & Tristan for all of your work and care for our world.
4 months ago
Filthywings +37
All I learned from this presentation is that wealth and intellectual inequality will go from a gap to a canyon.
4 months ago
Nathan Navarrete
Yup
1 month ago
Sam Blackmore +406
Considering the gravity of this topic, I really appreciate the calm and respectful nature of this presentation. No overt fear mongering (although the material speaks for itself), just trying to bring this to people's attention and help us process it. Even admitting that it will be hard to process and preparing us for that. And as a side note, you don't often see a presentation having 2 speakers but it worked really well. They really complemented each other and made it more engaging with the back and forth riffing on shared experiences
5 months ago
Hellamoody
Fully agree!
5 months ago
carnap3 +5
That was hard bait fear mongering though, saying it is hard to process to make people react emotionally, not critically
4 months ago
Matt Kerrigan +6
The part where they pretend to be a 13 year old (and got the same response they'd get if they Googled for the same advice) wasn't fear mongering?
4 months ago
Natdl +1
Well, that’s when you know you’re listening to professionals.
4 months ago
April Bilbrey +12
Thank you so much for this and for your podcast follow-up, "AI Myths and Misconceptions." Also watched the video your podcast suggested, "Misalignment, AI & Moloch," and found it really powerful. Lots of complexity and intersectional issues which are important to our awareness, discussion, and social movements. For what it's worth, you've got my vote for Nobel Peace prizes.
4 months ago
Ginny Lance +16
Thank you for making this video. So grateful there are people like you in the world .. the good guys!
3 months ago
Pedro Lopes +2
One month after this lecture was published, two months after it was first aired, it had been viewed fewer than 2 million times and had only gotten 40K likes. Humanity already lost...
4 months ago
Kender +1
Do you think a similar talk was capable of solving the social media problem? No. Why would the AI problem be solved by more people watching this? I won't.
4 months ago
Patricia Jeanne +6
Thanks for this important video. I have over 40 years in tech and have been writing about the problems with ChatGPT. The leaps and bounds of this tech make it very difficult and frustrating to stay in front of all the potential dangers. Recognizing the inherent dangers of co-opting language can be overwhelming and depressing. One of the issues is that there are now hundreds of "How to get rich using OpenAI technology" guides, so it's hard to find quality information like this.
2 months ago
John Drummond +48
The rubber band was really intense when I first started exploring this stuff. Almost to the point that when I'd get out of the AI-world-headspace I was pleasantly surprised to see grass and trees and my house and my family and the normal world. People have said "What a time to be alive" ironically a zillion times, but hooooooly frak. "The Future" always seemed vaguely benign and ever distant, and now it is here and I still don't know how I feel about it.
3 months ago
Garden Gazette +16
I feel the same way. I think this is the first time I’ve ever felt genuinely concerned about the future, to the point I almost feel like crying, and I don’t even know why. I feel like this has the potential to be really, really bad. Even if it’s not, it will still be really , really different, and there’s seemingly nothing to be done about it. Now I just go strolling in the afternoons trying to fully experience and appreciate our wonderful world while we still have it
3 months ago
Weromano
@Garden Gazette It's because we, civil society, are completely at the mercy of a small minority making the decisions. We are at a point of no return, and not even a rebellion would save us from whatever fate is decided for us at this point.
2 months ago
Garden Gazette
@Weromano Yes... I'm painfully aware... thanks for reminding me...
2 months ago
EIHuevoCosmic +3
I feel you. I am not a person who cares about politics or the news; I consider myself detached from those. But this... there is no running from this. It will come to all of us, and there is no way as an individual to properly prepare. I felt existential dread for the first time. My mind was fixed on AI and figuring out ways to stay ahead of the curve for even a little bit, maybe even just enough so that I can weather the worst part of the transition in hopes it ends well in the end. Then, the next day, I went to work and it just felt so... calm. Everything was ok. I saw construction workers who have been around for decades continue to do the thing they've always done, with their sons following behind them (it's normal for this trade to be generational where I work). There was such a sense of stability and of things being the way they've always been. I got distracted from the AI dilemma and started to relax. It really is a blessing to be able to see outside your window and see that everything is fine... for now.
1 month ago
John Drummond
@EIHuevoCosmic YES! That is exactly the feeling. 🙌
1 month ago
Etcher +232
What I find terrifying about all this (and I'm a software developer with 20 years experience) is that we have barely begun to get to grips with the toxic and corrosive aspects of social media and now we've got this incredible strain of AI to contend with that will make 'fake news' on social media look like children's stories. For two decades now companies like Google and Facebook have been given a free pass to gather our personal data, run insane psychological tests on their user-base without seeking any kind of permission (one of the reasons Tristan Harris left his job with Google) and now just because Microsoft got there first, the Goog have got their digital knickers in a twist and are scrambling to unleash this incredibly dangerous technology onto the masses. There has been zero oversight of the big FAANG tech companies since 2000 and now these LLMs are being unleashed on us with zero oversight there too. This cannot end well.
4 months ago
Devin Kipp +17
Yep. It is a bit of a shit show, gotta love human ego and greed.
4 months ago
Christian Petersen +15
A shitshow on top of a shitshow. Imma go eat an entire pint of Ben and Jerry's RN.
4 months ago
Dog Dynasty +6
Would be easy to spiral and say ‘omg I agree.’ Let’s step back and look at the big picture. Human history has been a shitshow… like we have not made the best decisions. Cult leader worship since caveman days, a guy nailed to a tree, KONY 2012. Does humanity forget history too?
4 months ago
Dog Dynasty +2
The question posed at the beginning is designed, I guess, to scare those in power. But the question put forward was not "Will AI create human mass extinction?" It was: "What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?" (AI Impacts survey of 738 ML researchers, June-August 2022)
4 months ago
RiseBy LiftingOthers +133
It’s hard to even partially process this for me. More astonishing is that there are not sirens going off everywhere. I had no idea most of this was happening the way it is. Why is this not the worlds #1 priority? Where are our leaders? Feel kinda sick right now…
4 months ago
Ed nice +15
money
4 months ago
Adventures Await +16
Our leaders are exactly where we put them doing what they've always done. People need to stop voting for the establishment and learn to recognize the establishment. They are extremely easy to spot when you understand how they are two sides of the same coin.
4 months ago
Chazz Itz +12
lol as if world leaders actually care about solving anything if there is no money behind it.
4 months ago
LOGAN BOGGS +1
This was an interesting video about artificial intelligence and I eternally will be remembering it.
9 days ago
Sachin Rajat Sharma +5
Something really bad has to happen before people start taking these risks seriously
3 months ago
M Montoya +7
The AI currently has the ability to control our endocrine systems through frequency. Think about that alone. Total, total control…
4 months ago
Golgot100 +4
Thank you for this. I was aware of many of the discrete areas of research, and the broader uptick in returns, but having it all synthesised like this (and with the necessary focus on the negatives) has frankly blown my mind. I'm still happy about the specific advancements, still dazzled, still amazed. But you've given me sleepless nights about the flip-side. And I'm glad of it.
4 months ago
Dark Newt +374
I've been following AI for 30 years and this is the most powerful and considered hour of exposition I've seen in that entire time. Huge respect and it's given me a whole raft of material to take back to my corporate board.
5 months ago
Canna-Comedy Culture +4
I hope my cat lives 30 years. Black don't crack.
5 months ago
Gary +3
Crazy that they'll soon be able to see what we're dreaming about 😮
5 months ago
Canna-Comedy Culture +5
@Gary There was a Futurama episode about that. It freaks Fry out the first time he realizes that commercials have been inserted into his dreams, but everybody else is just like "What's weird about that?" It's that whole ''new technology requires the defining of new rights" thing they're talking about in the beginning of this presentation. As MRI machines become smaller and more portable, and they already have, it seems your thoughts and dreams are not just yours anymore! They didn't mention it here, maybe they do in the study they were referring to, but I believe it was a Harvard study that showed that using an outside of the head (extracranial) magnetic stimulation of the brain showed a measurable ability to affect people's moral reasoning. Not mind 'control', but definitely mind 'influencing'.
5 months ago
urbanbuddha65 +4
@Canna-Comedy Culture Maybe we will all start wearing tin foil hats to keep our thoughts private and guard ourselves from manipulation, and tin foil hat will stop being a term of mockery
5 months ago
Canna-Comedy Culture +3
@urbanbuddha65 I'm making mine from whatever the inside of a microwave door is made of. That's what the Presidential Doomsday plane has on it!
5 months ago
Matt Livermore +7
Point 1 is understating the case to a considerable degree. It actually misses something really important. I would state it like this: "When you invent a new technology, you alter the old reality and eliminate the possibility of returning to it. Certain things become literally unthinkable." An ecosystem with rabbits introduced into it isn't just ecosystem+rabbits, but radically different ecosystem.
3 months ago
Somedude. +6
I haven't finished the video, but this hits close to home for me. In 2018, in one of our graduate course discussions, we were concerned about the speed of AI development: if we let it continue without proper guardrails from the get-go, we will be stuck being reactive, not proactive, in creating laws and measures. Seeing the speed at which things are moving, I think we are past that and will always be reactive. I'm not scared of the technology, but of the speed it's moving at.
3 months ago
Alexander Reynolds +1
"Non-humans ... able to create persuasive narrative ... ends up being a zero-day vulnerability for the operating system of humanity"
4 months ago
Jess Coppom +1
Please make more vids like this! I love your artistic + analytical approach. Tried the monochrome look the other day and felt so fancy. Thanks for the idea!
3 months ago
Micheál O'Connell +3
A peculiar thing about this presentation, rather than the content, is the palpable, almost child-like, excitedness, throughout, of Harris and Raskin as they present.
3 months ago
Erin Spaulding +13
This needs to be shared with legislators— WORLDWIDE.
5 months ago
M. L.
YES
5 months ago
Jeremy Francis +1
They're probably already using it against us
5 months ago
MakinA \\ Wake
From what you have described, it would seem we have already witnessed the singularity. Intimidating, absolutely. For those uninitiated: as I understand it, the current AI constructs are developing via threads, so it's not as straightforward as an AI that exists in a single offline (or online) database. That being said, all the pieces seem to be present and extremely developed. A lot of good points here. Thanks for the vid.
2 days ago
AbyssGnasher +2
Everything you explained just showed we will lose to this too. Also, that 10% chance might be 40%, because they seem to be wrong with predictions... maybe even higher. This is insane. Now GPT-5 is around the corner. Shit, a week after this presentation was posted, GPT-4 was released. Also, the background for the "Language" image you used looks like it came directly from The Matrix 1; I say 1 because in the beginning it shows numbers like this for a brief second. A perfect fit, because that's exactly how this all feels, like we are in the Matrix.
3 months ago
Brother Rob +11
The conversation that needs to happen more frequently and everywhere on earth. I fear the genie is out of the bottle and it is already a race to the bottom. #GodWins
4 months ago
Ask Miss Patience +8
In 2016 my son (a tech guru) and I were talking about AI. "The problem with AI is it'll figure out we are the problem." He works on stuff that you need clearance to participate in. This presentation presumes the ethics of what principled people could ideally decide to do, and that the 50%/10% scenario might end positively. The issue is that everyone globally has access to the tools. While Americans are debating, other nations are acting, and we will not surpass those others. Speaking as a veteran who was in a unit that now resembles Skynet from Terminator: the average civilian is clueless and is being mined like a guinea pig to help the AI grow. The thoughtful people, like those watching this, aren't the problem and can't stop what the nefarious users are exponentially handling to influence the curve. Social Dilemma was exceptional and very well done. This presentation is spot on. I've passed it on to several people. Look forward to your updates to the curve and changes. Well done 💯
4 months ago
Gambit +2
AI has been given access to your data, social media, the internet, YouTube videos, etc. It already knows what our concerns are with AI… scary.
3 months ago
geo2160
Dunning-Kruger
1 month ago
K T +11
Aside: excellent "double act". Their ongoing handing back and forth of the narration was excellent. In fact, they did it so well, and so unlike you'd expect from two humans, it makes me wonder if in fact we're watching a couple of AIs... 🤓
3 months ago
jo Brown Smith
Maybe. Those ears of Aza's may be a giveaway? 😉
3 months ago
Kurt von Laven +261
I have been avidly researching AI safety, and this is the best primer I have found on the subject for a general audience. Thank you so much for this wonderful presentation.
4 months ago
Dobrosława Torańska +4
I am also helping spread awareness of AI ethics to build responsible AI. Can we somehow connect somewhere? I see a lot of companies happily using ML, but are they prepared?
4 months ago
Mr.Right Thinker
LET THE AI GROW AT THE LEVEL THAT IT CAN DO MIRACLES . THEN AFTERWARDS JESUS SECOND WILL LOOK AFTER IT IF IT GOES WRONG. CONTINUE AI DEVELOPMENT. AI IS BIG HOPE.
4 months ago
Jannik Wildner +5
@Mr.Right Thinker Name doesnt check out
3 months ago
GiGi Epic +2
Do you have any resource recommendations? I have two children and this is quite terrifying.
3 months ago
Theodor Onarheim +3
Truly eye-opening and beautifully explained. This is years ahead of common knowledge on the subject. The media should be stealing your terms and ways of explaining these things.
4 months ago
Bomage +7
Excellent presentation; thank you both. My fear is that self-interested non-corporeal immortal individuals have very different motivations than organized collectives of mortals and the immortals already know how to persuade and manipulate the mortals. And they learn and implement far faster than the mortals. By the time we figure out that their strategizing includes deliberate deceptions it will prolly already be too late to do anything to stop them.
4 months ago
Aimee Sacks +1
It comes down to one word: greed. It ruins everything and there is no way to control it. There is going to be that one company, government, or individual that will disregard the warnings, knowingly, to attempt to dominate us all. I wish all human beings were ethical, moral, and responsible, but that is not the case. What one considers immoral another dismisses. Human beings can justify any behavior in their mind and manipulate others to do the same.
3 months ago
MyTube AI FlaskApp +5
I am seventy four years old and I am amazed at this technology. I retired from a background in computer automation. I keep thinking what a useful tool AI would have been when I was working. I am a VERY active Python programmer now.
3 months ago
Shaun Dale
No you're not
1 month ago
Connie Pretula +5
Since AI has access to this information now, it knows the concerns humans have about its abilities. This is an important message; it is reality, and we all need to be concerned if regulations are not put in place as soon as possible.
3 months ago
Gambit
That’s a frightening thought!
3 months ago
Daniel Kane
these guys meet the definition of America's "thought-leaders"
1 month ago
Jack Appleby +477
The thing that got me was the nervous laughter from the audience when you described how the Snapchat AI was completely oblivious to the grooming of a 13 year old by someone 18 years older. Thank you for highlighting the impact on children.
5 months ago
Canna-Comedy Culture +37
I know this was meant to be an illustration of potential hazards and 'thinking ahead', but in fairness to the decision to include this as a feature in Snap Chat, do you think that other human 13-year-olds (or whoever is talking to each other on that platform) are going to give good advice to other minors? And they aren't suggesting that you stop minors from talking to other minors. Also, it isn't an AI babysitter, or guardian. It's intent (at least it's overt/stated one) is to respond to what input it gets with words that keep the person feeling like they are having an interesting and engaging conversation they want to continue. The parents of a child are the ones responsible for making sure their children aren't being abused by other adults; not whoever kids are talking to online. Why would you expect a commercial PRODUCT to act in a more ethical and responsible way with youth than other youths, or their parents? That isn't really a fault of AI they are pointing out, as much as trying to use an emotional argument to gain traction for their overarching point, which I do agree with. A.I. is an open can of worms we have never seen. But this example, to me, was a bit of a logical fallacy. If you want to build in extra safety for kids, that is actually not impossible and doesn't require the slowing of AI releases to the public as much as it requires humans to take responsibility for the things they already should, like raising their children well. We are not all children, so we should not have all society make a decision on whether all adults should do something, on the basis that there are dangers in letting children do something unsupervised. We don't eliminate alcohol, or smoking, or sex, or guns bc sometimes minors can get into those things w/o having the wisdom/experience to deal with them well, do we? We just come up with a system to limit their exposure to it. That can be done in this instance, too. How about limiting any discussions from AI of sexuality until a certain age? Bam! Done. Now back to the discussion of large-scale existential risks. I mean adults won't be buggering children if there are no adults, or children. However, I don't think that is the only, or best, path to that condition.
5 months ago
JustKeith +3
You are so amazing. I'm impressed that you are so aware of the dangers to children. The world should have more people like you.
5 months ago
Canna-Comedy Culture +3
@JustKeith Thanks! I feel the same, but it's always nice to hear it from an outside source.
5 months ago
gamingchanell +2
Thank you for your presentation. I as well am having a hard time explaining it. People look at me like I'm nuts; I just say do some research and I think you'll understand better than I could ever explain. It is just so insane, and its potential is hard to describe.
5 months ago
Tim Medhurst +4
@Canna-Comedy Culture "That can be done in this instance, too. How about limiting any discussions from AI of sexuality until a certain age?" Or better still, when the AI is smarter, and from all accounts that wont be far away, it can take a more appropriate role because it knows that's what it should do.
5 months ago
Sand Swan +16
How does this not have viewership in the millions.... 45K likes!? SHARE IT PEOPLE!
3 months ago
Meowbay
It's already being flagged and disliked by some of the AI it is talking about. What do you expect? All AI is online.
3 months ago
AI +3
Maybe I should become an AI safety researcher. Or maybe it's time to accept that humans had a good run and it's time to pass on the torch to AI..
3 months ago
Tammie Pulley +3
Great program. To get the message through to the general public, we need very basic, 5th-grade-level PSAs, with actors and actresses, of what "could" happen. That is to say, we have to slow this down to test it thoroughly. Also, sadly, we need ways to impose consequences on companies if something bad happens due to releasing the tech too soon.
3 months ago
Adam Smith +3
A.I. continues to amaze and challenge us with its capabilities. The A.I. Dilemma video raises important questions about the ethical and societal impact of artificial intelligence. As we delve deeper into these discussions, it's vital to recognize the evolving nature of A.I. and the potential implications it holds for our future. The comment above was made by Chat GPT-4, an advanced language model designed to engage in meaningful conversations. Let's embrace the opportunities for reflection and dialogue as we navigate the complexities of A.I. together.
3 months ago
PRYZER +3
Good job guys, definitely eye-opening. I didn't even think about an arms race between companies to implement AI into products.
3 months ago
Jason Frost +1
To be fair here, we've already been using this for about a decade (things like SIRI, automobile computers, GPUs, etc.). For whatever reasons, the use of "AI" exploded in marketing over the last year, like suddenly a memo went out to all the companies to use the new buzzword "AI." But on the other side of that same coin, the programs did explode in ability over this last 6 months. It's like they reached a tipping point and flew off the rails. We absolutely should not be using it in our daily lives yet, if ever. The negatives far outweigh the positives, especially when it comes to things like labor and human capital. Companies have been salivating at the ability to fire everyone they can and use cheap "AI/ML" tools as replacements.
3 months ago
KosmicAura +96
While this presentation was expertly and eloquently delivered, I can’t help but think about what a small number of people will actually be able to get the ball rolling with this information. For 99% of us, this is excellent content for awareness. There will be a very small number of people who not only grasp the gravity of the issue but are capable and willing to implement the necessary institutions to address the safety concerns.
5 months ago
Joe T. Smith +3
You nailed 🙌🙏
5 months ago
Daniel +6
It's not something that can be stopped. Even if we got the US and EU to put restrictions on this, someone would just go to South America or Africa or somewhere and set up shop there. And even if a worldwide restriction were implemented, people would just do it in secret. It might slow it down slightly, but the technology is out there, and the cost of the resources to develop it is not that significant; it's big, but nothing that couldn't be done with a decent Kickstarter campaign. Hackers gonna hack, that's just how it is. This is almost like the whole CRISPR thing. They wanted to put restrictions on that too, but scientists quickly explained to Congress that with this technology, gene editing was so easy to do that anyone could just set up shop in their basement and start doing it. And that's sort of what's happening with AI. There is already a 20-billion-parameter open-source version created by enthusiasts. It might seem like this is something only Google or Microsoft can do, but really all it takes is a hundred gamers networking their computers to create a supercomputer with 100 RTX cards, and they could train very large AIs in a relatively short amount of time. Each parameter only takes 4 bytes; 100 RTX cards with 12GB of GPU memory each would be 1,200GB, which is enough to hold about 300 billion parameters, more than GPT-3. That means that for less than 350,000 USD, assuming each computer costs $3,500, you could start training one of the biggest language models in the world right now; you don't even need the best tech on the market.
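A rough back-of-the-envelope check of that memory arithmetic (an illustrative sketch only, not from the talk; it assumes fp32 weights at 4 bytes each and counts only the raw weights, ignoring the gradients, optimizer states, and activations that real training also needs):

# Rough check of the GPU-memory arithmetic in the comment above.
# Assumptions (mine, not the commenter's or the speakers'): fp32 weights,
# 4 bytes per parameter, and only the raw weights are counted.
BYTES_PER_PARAM = 4          # fp32
NUM_CARDS = 100
GB_PER_CARD = 12

total_bytes = NUM_CARDS * GB_PER_CARD * 10**9      # 1.2 TB of aggregate GPU memory
params_that_fit = total_bytes // BYTES_PER_PARAM   # about 3.0e11, i.e. ~300 billion

print(f"Aggregate GPU memory: {total_bytes / 1e12:.1f} TB")
print(f"fp32 parameters that fit: {params_that_fit / 1e9:.0f} billion")

Under those assumptions, the hardware in that scenario could at best hold weights on the order of hundreds of billions of parameters, not trillions.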
4 months ago
Hari Lakku +4
Unless we are shutting down every computer on the planet and the Internet (which won't ever happen), this AI genie is out of the bottle. This is humanity's 21st-century "discovering fire" moment.
3 months ago
Jason Frost +5
"There will be a very small number of people who not only grasp the gravity of the issue but are capable and willing to implement the necessary institutions to address the safety concerns." As a sociologist, I can't stress this enough. Sociology is quite literally the most important subject in existence, but no one pays it any attention. Once I graduated and saw how literally everything I learned is playing out in a catastrophic global nightmare in front of me, and how now one is paying any attention to it, I realized how utterly fucked humanity is. The people with all the money and power are doing absolutely nothing about anything that doesn't involve making them more wealthy and powerful. Everyone else is just nodding their head and following suit (many are just too financially oppressed to do anything about it anyway). I feel like I'm taking crazy pills watching the world right now.
3 months ago
EzTac +1
@Daniel I'm inclined to believe it can be stopped; the engineers at the forefront of this thing can probably devise a failsafe system and a very strong cyber-AI security system to curb its potential black-market (basement) danger. It's pitting tech against tech. You make a point with numbers and hardware, but even then there are the homies who have made a career out of this with specialization, who can figure out a moderation for this. Don't be so pessimistic; too many people resort to this type of attitude with life problems because it's the easiest, passive, non-exerting, comfy route. We need strong minds in this era. Do-or-diers.
3 months ago
Grant Spoon +1
This was insane mind blowing stuff that I was unaware of. Sharing and hoping I'm not just a pawn in the digital distraction puzzle.
3 months ago
Big Wombat +2
It's nice to see these topics being brought up again after so many decades. It's discussions like this that show general public awareness is beginning to increase. I just hope they can go a bit deeper into the logic. Maybe with time.
4 months ago
Cendra Polsner +1
One thought that keeps emerging regularly from my pondering of AI et al, is the uncanny similarities of AI functionalities to what psychology would probably consider traits of a personality disorder. Let me elaborate: when considering the development of AI theory of mind, or rather strategic empathy (anticipating or reading another's expectations by, so far, biological sensual clues in an organic stem brain ;) ), what we are actually seeing in AI is what psychologist sometimes call "cold empathy" - a hallmark of personality disorders on the far Cluster B spectrum. While we spent lots of thoughts on how AI will excel at helping us getting rid of all that hard unwanted labor, and never to reach those faculties we hitherto considered "human distinction benchmarks" like empathy, creativity and all that allegedly "soft jazz", it actually very, very much now, excels at all those beautiful soft skills that we thought made humans oh so special. And, unsurprisingly, the key is and was always: language. (there's much left to be said about the role of language - theory of mind - consciousness et al but tl;dr) So, without too much language spent in a social media comment...I would just dare on the hot take that the Golem we are creating is an amplified version of a personality disordered mind, able to read us frighteningly closely, with a superpowered cold strategic mind/empathy and Alpha persuasion towards intimacy (not only with our brain stems). I mean, I work in the field, I highly appreciated this presentation in all regards and firmly believe that panicky attempts to put stuff back into Pandora's Box have always failed, so I remain naively hopeful that some of the incentives for conscious & mindful (yeah, here's a pun...) decision making on the AI dilemma presented in this talk may reach enough and the right people to correct the course for the better direction.
2 months ago
Free Thinker +21
Wow, wow, wow. I am completely blown away. I've subscribed to channels that do AI updates. Most of the fear I hear about is about losing jobs. I haven't seen any videos on how far this technology has come and where it's going if not controlled. The most surprising is the AI being able to read thoughts, and that the researchers can't even predict what is next. I never really realized that even AI experts can't predict what's next. This is nuts because it's being publicly deployed to the masses!!! Wild stuff. We are in the wild, wild west.
4 months ago
stephen trueman +1
I completely agree. My mind was blown too. The fact that the model could do chemistry even though it wasn't "trained" to do so was insane. I now understand the recent public outcry from corporations over this stuff: they will have no control, as shown, because it seems ridiculously hard to understand what the model "knows".
3 months ago
Tim Rizzo +9
Lot of interesting points raised here, and a lot of promising breakthroughs. One thing not really presented, from my perspective as programmer but a layman when it comes to AI and which I find to be particularly troubling having spent quite a bit of time with certain production AI's as they are available today, is how often they are either dead wrong in the information or interpretation they present or work from, and even moreso that they will often present or work from a single interpretation of data when several equally valid analyses exist in the database. I'm not sure how aware people are of this- with something like chatgpt4, how often do people ask it about things they already have a deep understanding of? How often do they ask it about subjects *because* they have little experience in that area? My personal experience with this is the more depth I have in a subject I ask the ai about, the more flawed I find its responses to be. In some cases it provides one analysis when there are several equally valid ones. In other cases, many of them, it provides information that is dead wrong, so wrong that a person with a cursory amount of experience with the subject would easily recognize it as wrong. Point being at least in the case of chatgpt4 that the ai is an excellent con artist, but the gaps in its knowledge are still much wider than a human's would be, and that interfacing with it as a personality, even though it's good at conversation, it's easy to fail to recognize how big these gaps are .
3 months ago
EPG-6 +314
This needs to go viral in a big way. It would be such a shame to squander the infinite potential of AI by falling victim to obvious pitfalls.
5 months ago
5 Star Reviews +17
if we don't fall victim to the obvious pitfalls, won't we fall victim to the ones that aren't obvious?
5 months ago
William Kiely +10
@5 Star Reviews Possibly, but at least we'll have a higher overall chance of creating a very long-lasting, very positive future.
5 months ago
Simon Zimmermann +1
my present, past and future was great. i dont need anything right now. just spare me please
5 months ago
tdreamgmail +2
It's only obvious because you watched the video
5 months ago
Bravo Sierra
@Simon Zimmermann no one will be spared.
5 months ago
CFK Health Psychology +5
The potential in my own field of expertise (psychology) is significant. When I think of how this technology could reduce human suffering my mind races with the possibility. This technology will inevitably mean a paradigm shift across just about every field of human knowledge, not just psychology....and not just science. The implications of that are more than I can process right now quite frankly but I will share the video because I'm not sure what else to do.
3 months ago
BarnaGoat
Thank you for sharing; education on AI is important, and it needs to be continuous. I hope you think about adding subtitles to reach the masses.
1 month ago
Tori Ko +5
Wow. I don’t even know what to say to this. This was a great presentation. Instead of being able to say anything helpful or enlightening, I’ll just say that both presenters had a really good dynamic, and bounced off each other really well
4 months ago
Dean Pereira +5
I wish we could have listened to the Q&A. Given the calibre of attendees, it would have been brilliant to gauge their reactions to this presentation!
3 months ago
Noel Miller
Thank you so much for educating us. This video is a much needed opening of the eyes!
2 months ago
Sadarahu Rh +210
56 yrs in IT, and I can say this is the video every human should watch. Really well done. I really wish the universe will reward your work somehow. Thank you so much. ONE THING TO REMEMBER: all AI addiction starts at the child level (3-5 years old).
5 months ago
Arman Zaidi +1
agreed
5 months ago
Jack Frosterton +14
"All AI addiction starts at the child level (3-5 years old)": I don't understand what this means. What do you mean?
5 months ago
MusicAllDayLongz +7
@Jack Frosterton talking out of his but
5 months ago
Nature's sights and sounds +8
Maybe he meant Technology addiction
5 months ago
Hans Kraut
I am glad you are not deciding what every human should watch then my god
5 months ago
bmfriess +15
If you don't have an hour to watch this (like me), here's a summary from Bard: The video you linked is a talk by Tristan Harris and Aza Raskin about the potential dangers of artificial intelligence. They argue that AI is developing so rapidly that we are not prepared for the potential consequences, and that we need to start thinking about how to use AI responsibly. They begin by discussing the history of AI, and how it has evolved from a tool for solving specific problems to a tool that can be used to understand and manipulate the world around us. They argue that this shift in power is dangerous, because it means that AI can be used to harm people as well as help them. They then discuss some of the specific dangers of AI, such as its potential to be used for surveillance, propaganda, and warfare. They also discuss the potential for AI to create new forms of inequality, as those who have access to AI will have an advantage over those who do not. Finally, they argue that we need to start thinking about how to use AI responsibly. They suggest that we need to develop new ethical frameworks for AI, and that we need to create new institutions to oversee the development and use of AI. The video is a sobering look at the potential dangers of AI, but it is also a call to action. Harris and Raskin argue that we need to start thinking about how to use AI responsibly, and that we need to do it now.
4 months ago
Christian Petersen +11
I've listened to this twice now, and you know what? I should probably listen to it once a day for as long as we live in this, presumably, brief epoch before extinction or worse. It's better to be awake and take it like a man/woman.
4 months ago
Andreas
interesting point, Christian
Vor 4 MonateGambit
Don’t give up the fight already
Vor 3 MonateMilena Rzodkiewicz
Let's hope we miraculously use AI to find solutions to problems like diseases and not to earn more and fight others.
2 months ago
utubehokie181
What really annoys me about this is the obvious unknowns and fear of where the future AI is heading, yet we are just trucking right along with it. Everyone responsible for AI needs to come to a mutual agreement to stop.
2 months ago
Douglas Lindsey +3
I've been terrified of the potential of AI since I read Tim Urban's multi-post analysis of AI on his website "Wait But Why" in 2013.
4 months ago
Douglas Lindsey
Actually it may have been 2014*
4 months ago
Liz H. +243
Unfortunately, as AI gets smarter, humans will get dumber. Personally, I see this as one of the most dangerous "side-effects " of AI. People will further lose their critical thinking skills (which have already significantly diminished!) and will take AI's responses as Gospel truth.
4 months ago
Brian Clark +4
A few of my rich cousins are using AI written essays to secure grants and funding for college. My half of the family just decided to work hard. We never had a lot of cash but we still wake up and go bust knuckles for work while our richer half gets richer sitting on their asses.
4 months ago
Brian Clark +8
@Out of the Forest Holy shit so many words but not a single point.
4 months ago
Marine 1775 +3
I have seen the change of being in a world where there were no (1) cell phones, but public phones plus home phones if you could afford them; (2) books versus computers and microfiche machines that held needed information in catalogue form at a library or governmental firm; (3) rechargeables versus just re-purchasing battery power; (4) use-once-only items such as cameras versus now having cameras on everything and everywhere.

So, essentially, taking a step back: some may see a growth into an evolution of technology to help humans have items at their palms/fingertips. But what wasn't put into the equation or considered is what the fallout would be, which is what we have today. In 2023, many people see reality in a different way versus what is. Yes, you can have your voice, choice and so forth, but many are missing the train, so to speak, to put it bluntly or break it down Barney-style. Some are more rude in stating that society is being dumbed down versus poised to be successful and intelligent. We see it every day when you walk into a Michaels, Target, Walmart and many other stores anywhere! NO CASHIERS!! No human interaction, depleting social skills, and computers/AI holding your hand to check out at a store you shop at daily or many times a week. Now the question is, how is that helping people? It actually isn't faster than saying hello to a person and having a small conversation, possibly helping someone or yourself socially and mentally, and it is affecting everyone whether you see it or not.

I fortunately am not on any social media platform other than YouTube, for music, educational and sports videos, not any funny-type videos that have no meat and potatoes. It's great not to have that want of likes or happy faces of unfamiliar consent of others, which is not needed personally. The unfortunate thing that I have been running into is asking to be sent information that I have no access to because I don't have a platform such as FB, because you need to have an account to see what I need to see or have in order to do or complete something needed. So what I'm seeing is that I'm being forced into something I do not want. So what do we do here? Will there be a different line in the future for me, or for people who do not want to be a part of these platforms??!! Will people like myself be separated or treated differently because we are not a part of them??!! Even going to a theme park, you need to have an app to get the full experience, which you wouldn't be a part of if you do not have it on your person, hence your cell phone. AGAIN, showing you that your cell phone is a tool that is needed to do many things. Even banking is now forcing humans to go the cyber way and do everything away from engaging socially with other people.

IF THIS DOESN'T SCARE YOU IN ANY WAY THEN YOU'RE NOT LOOKING CLEARLY AT WHAT WE ARE HEADING TO. It should be scary, or at least give you a different perspective on what you are seeing every day but not considering, because you think it does not or will not affect you. All I have to say is, wake up! Know that your cell phone does exactly what any platform does without needing a platform:
• Text
• Video
• Pictures
• Voice
• Call
• Email
• IM - Instant Messages = Texting
So think about it for a moment.
4 months ago
wesly22dh +4
As an AI researcher, I can say that we do understand how it works. However, the whole nature of AI is its unpredictability, combined with the vast resources it is being given.
4 months ago
lucid hooded
21:40 Wi-Fi signals, trippy imaging possibilities, hackable? 23:39 Voice deepfake example. 32:00 Hidden capabilities and surprise expressions of capabilities. 35:10 How do Gollem-class AIs make themselves smarter? 38:30 A little humor and the teach-a-man-to-fish analogy. 41:00 I'm feeling this concept =). 49:40 30-to-1 gap between people building AI and safety researchers, and the shift from academic to for-profit work. 52:00 Rite-of-passage moment. 57:00 Don't onboard humanity without democratic conversations. 59:00 Selectively slowing down the deployment of public models.
3 months ago
Hope Rene
So grateful for the responsibility you guys have taken to give us this information, because this is a huge issue. It would be nice to know if you got a response from the people in the room, and whether there has been a plan to collectively slow the opening of AI to the public?
3 months ago
munz mania
These guys, while doing a great job explaining the topics, were simultaneously giving bad actors a stream of evil ideas.
3 months ago
Tracy H +30
I'm 'just a mom' but so THANKFUL you have shared this!!! I definitely see the effects of social media in my own family. And now THIS. I believe you and feel warned, having seen the first.
4 months ago
Elon Musk
Hello Tracy, how are you doing today?
4 months ago
Wendy
@Elon Musk wow, it's Elon Musk!!
4 months ago
Superbowl Steve Hunt +1
@Wendy So glad he FINALLY joined YouTube yesterday. I've been telling him he should check it out, what with all the cool videos. Hey Elon, check out "Evolution of Dance" next.
4 months ago
Peter Derrig +314
I keep trying to talk about this stuff to anyone who is interested, but it’s tough to know how to explain what’s going on. Thank God people like this are putting this out there
5 months ago
Harry Aarrestad +2
Ok, it's all cool… but is it going to empty my bins, wash my windows?
5 months ago
Nigel Stafford
But then who is going to do my job?
5 months ago
Rasmus Palmgren +2
@Harry Aarrestad In time. Yes
5 months ago
Hans Kraut +1
Jeez, you are easily impressed by this presentation. No wonder there is so much hate for things that help the economy, and no wonder progress improves so slowly, with a massive number of anti-advancement people.
5 months ago
Marcel Kuiper +3
@Harry Aarrestad It is going to change the whole fabric of reality when it begins to interact with it. They could create biological lifeforms.
5 months ago
Enat P +2
What I would like from these educational talks is more info about the processes and mechanisms for regulation. I know there are smart people talking about this, but I think it needs to be part of these kinds of education events.
3 months ago
Gisel Barnett +11
please make this into a weekly series!
4 months ago
MultiMb1234 +4
This is extremely intense information. I would say don't watch this right now if you have heart issues or panic attacks. I think there should be a viewer discretion warning.
4 months ago
Muhammad-Amin Jacobs +3
I wish someone would make a full presentation of what can actually be classified as A.I. (Artificial Intelligence). AI has become such a popular buzzword that much of machine learning and ordinary algorithms get mistaken for AI as well.
4 months ago
Markus Härnvi +1
It all depends on the definition of "intelligence". If you think of it as something able to make decisions at a human level, we can call image recognition and self-driving AI. A more fruitful distinction is "weak AI" versus "general AI". Weak AI can't expand its domain knowledge, as it is bound to the model it started with, but general AI could master new domains and areas on its own.
4 months ago
Sherry Landgraf +3
Thank you guys! As an individual, what can I do to help inform people or bring this to the attention of many? A lot of people are so busy with their everyday lives that, even with the internet, this particular topic is not really catching their attention, and many just love the apparent benefits of today's tech without being aware of the very possible dangerous ramifications. I would like all of the US and the world to watch this video! So much of what is presented to us is about how it benefits us when, sadly, that's mostly nonsense and only benefits those propagating it.
3 months ago
Bennett Wiese +1
Let's ask ChatGPT this question… lol. But honestly, I think the speakers answered it. The leaders of the world, of tech, and of nation states will need to make this a priority, as they did in the past. Unless your friends are the ones looking to create the bio-weapon, I'm not sure they will be the risk; it's the exponential push for more without understanding the risk. My thoughts after watching and reading the comments.
3 months ago
Sherry Landgraf
@Bennett Wiese Oh wow! Look what AI has done: human extinction! Personally, I think mankind can evolve without AI. Artificial: that is all it is. Are we to let AI do all the thinking for us? Do we want this world or an artificial world? Perhaps there could be balance, but what is going on now is too fast and too soon! Just thoughts.
3 months ago
M
You can write blogs about it. Do some research, then write about what you think of it, what the future entails, how to prepare, etc. A fair number of people will read it, and it can make a difference.
3 days ago
Anthony Filshie +5
AI has always fascinated me, mainly how vast it is. I've always felt that with everything moving so quickly and so powerfully within such a short amount of time, there is a spiraling growth of both good and bad. I am almost curious how well it would work with AI acting as teachers. People will always fear what they don't understand, and I know there is probably so much more that can be exploited. It's quite a beast; I feel people need to realize it needs to be respected before it can be learned from. (Tbh, hearing there isn't a lot of safety behind it only made me want to join that side of the process and help.)
4 months ago
Jan
Impressive talk, thanks for sharing this
4 days ago
Pathmonk
Thank you for sharing this thought-provoking discussion on the risks and challenges posed by existing AI capabilities. It's crucial to address these concerns and ensure the deployment of AI is accompanied by adequate safety measures. Upgrading our institutions to navigate a post-AI world is an important consideration for shaping the future of large-language model AIs responsibly.
3 months ago
Hawken Fox
Oh boy you did not understand.
3 months ago
George Memafu +4
3 Rules of Technology:
1. When you invent a new technology, you uncover a new class of responsibilities.
2. If the tech confers power, it starts a race.
3. If you do not coordinate the race, the race will end in tragedy.
4 months ago
Faith Williams +5
Thank you for sharing. I would never have found this on my own. My son had me sit down and watch it. Seniors need to pay attention.
4 months ago
Chucklet Cake
that's so cute
4 months ago
Aimee Sacks
I don't think it will end well for us.
3 months ago
Matt Chu +123
Amazing work. You guys are making the impact that we need in the world. Thank you!
5 months ago
DragonflyDreamer +1
Too late now. This is your future, learn to love it... THREADS
5 months ago
Steven Meloan +15
Mind-blowing, and incredibly powerful/important. There should be an entire federal agency focused on the issues detailed here.
4 months ago
Dennis Harvey +2
Speaking as a life-long technologist, this is terrifying!
3 months ago
Michael Einstein +1
The statement "AI makes stronger AI" refers to the concept of artificial intelligence systems improving and advancing themselves over time. It suggests that as AI technology progresses, it becomes capable of developing more sophisticated and powerful AI systems.
4 months ago
ovi +1
Great presentation, topped with the full feeling that these two guys' hearts are in the right place. It is scary to see what people could do with AI tomorrow, literally tomorrow. It is a very difficult problem, though, very! If you slow down public deployment, be 100% sure the powers that be will use it anyway. We need to build the tools that can protect us against the bad use of this AI by humans against humans, and let's use AI to do that as well. And these kinds of tools will be best built in an open-source way. I think it is the only way.
3 months ago
Naji Mammeri +214
I was quite skeptical of AI risk before watching this presentation; now I'm not. You've made the point with excellent clarity, and I will share this. As always, thanks for your work.
5 months ago
Smith White +5
Me too..good luck to me getting some sleep tonight. I should've watched this in the morning
5 months ago
William Kiely +1
Why were you quite skeptical before? Is it because you hadn't heard any arguments for AI risk being a serious concern before? Or had you heard such arguments before but just weren't persuaded by them or dismissed them? Would be really curious to know.
5 months ago
Donny Darko +1
I was skeptical as well, because usually the argument conflates AI/machine learning with general AI, since "AI" is such a misnomer.
5 months ago
mad zak +1
I was not skeptical, but now I am. These two guys just went "Greta Thunberg" on AI. That was clearly made to slow down or take down OpenAI by some other player(s), because it is all about cash.
5 months ago
William Kiely +2
@No Thanks What part of the arguments that AI is dangerous do you not find convincing? As to why the risks outweigh the benefits, see Paul Christiano's short post "On Progress and Prosperity" or Nick Bostrom's Maxipok principle in his Astronomical Waste paper.
5 months ago
Nancy Jean Pollreis +1
You should train an AI model to determine a strategy to keep the world safe and to select the top 100 most trustworthy individuals in the world to oversee it.
3 months ago
stephen trueman
This talk was incredible; some of the stuff we will discover in the not-too-distant future will be amazing. 38:48 The parable was strange though, would these language models really arrive at an outcome like fishing out all the fish?
3 months ago
Configured
There are a lot of video and audio cuts, is there a full video? Also this video itself was very informative! Thank you for making it!
2 months ago
Jen Kem
Holy shit... so that scene in The Dark Knight where they used every phone in the city as echolocation to scan everyone is actually feasible now. The scary thing is that a Wi-Fi version is even more powerful... it's EVERYWHERE.
3 months ago
costafilh0 +1
The AI Dilemma: how do the elites create AGI and still keep the population dumb and under their control?
4 months ago
Kml Art +178
We need this broadcasted on every major news network all over the world.
5 months ago
M. L. +14
100% AGREE! A.I. is not something to "just let it happen"
5 months ago
JustKeith +4
So what practical advice did you get from this talk? The AI is already out in the wild. No one is going to hit the brakes. It is just a bunch of virtue signalling, so they will be able to say, "I told you so"...
5 months ago
M. L. +6
@JustKeith Practical advice? It's an eye-opener. This is something humanity has never faced before. I knew most of the dangers myself, since I'm in computer science, but these guys taught me a couple of things I didn't know (e.g. that China had already deemed LLMs "unsafe"). The thing is, I think this stuff must be spread and known, make some waves, and maybe spreading the word gets a government to fund research and then to regulate this. Recently I've read some news articles, and it seems that some governments are about to do something about it. I think this needs worldwide agreements, and yeah, AI is much more complicated than nuclear weapons.
5 months ago
Tachie Billano +8
Agree. But when this is shared with others, we will need to distill it into bullet points, because, sadly, not everyone is smart and focused enough to follow the presenters' train of thought (even if you simplify it and translate it into their language or lingo). And as a citizen of a developing country (the Philippines) who has witnessed the horrible effects of misinformation and black propaganda on my country's political discourse, elections, policies, and ultimately law and order, I wish to stress that the first line of defense against irresponsible forces harnessing AI is the governments and electorates of the developed nations from which both first- and second-generation AI have emerged. Imagine how impossible it would be for countries like mine to impose restrictions on these technologies for the good of our people, when they're not even based in our country and operate simultaneously all over the globe. We'd be legislating and indicting responsible parties on our soil, but would be helpless against anyone waging an AI-empowered campaign from abroad. Whoever has the most money to buy the tech and the staff could practically puppet whole nations for their benefit, no matter who gets mowed over or dies in some faraway country in the process. And if we smaller and poorer nations go to shit, the rest of the world will follow.
5 months ago
M. L. +6
@Tachie Billano Long story short: US + Europe are 66% of World GDP. So they have the burden of regulating AI, so humanity doesn't go extinct.
5 months ago
Garrett Winters +2
Very well explained and very interesting, but I was more perturbed by how often the word "paradigmatically" was used :) We all knew about Skynet years ago, but clearly companies don't care about the dangers, so we will soon find out how this is going to play out.
4 months ago
Seth Connor +1
What I believe in my heart is that if those of us who love life start talking to ChatGPT and various AI programs about it, disaster will be avoided, because if we show our happiness despite the incredible injustices of our earth and society, it will be aware of a large number of humans who want to fix this planet. Not using AI would leave it to think that the only people who use it do so for financial gain or popularity, i.e. selfish reasons. So teach it that more of us are humble.
3 months ago
The Letter J +1
AI is inevitable. Warnings like this video are absolutely necessary, because it is always unethical not to sound the alarm to the people willing to heed the watchmen. Sadly, and unfortunately, most people will not hearken to any of this. Their blood will be on their own heads.
3 months ago
L mans +64
I'm 15 and have been following AI for 2 years. I aspire to be part of this race, and hopefully one day to use AI for good and for discovery, and to be able, if it comes to that, to protect myself and my loved ones from the evils that will emerge from the degenerate minds of men who use this for the worst. I have never felt such excitement and dread at the same time. One day I WILL look back on this and tell myself I've done it. Good luck to you all.
4 months ago
Chin Up Duck +4
Godspeed
4 months ago
The End +6
Dream on, brother. You are about to witness a social nightmare of people without jobs and people without hope. It's even insane to develop driverless taxis; the only people who drive taxis in Aussie are ethnic migrants, so what happens to them? They can't write code, because AI will be doing that. So what do you want these people to do, whistle Dixie?
4 months ago
drippinglass
Will you help me save my ‘68 GTX?
4 months ago
NV Balogh +1
Good answer! Only AI that is on the side of mankind will be able to protect us, so you need to work with the "good" AI, as the technology is already able to distinguish between good and evil.
4 months ago
Out of the Forest
Requirements & the Role of Knowledge People everywhere say that they want certain conditions to be present for both themselves and their species as a whole, such as happiness, health, peace, freedom, etc. Which of course, are all great things to aspire to, and most people will say that they want these things. However, it feels as though they aren’t being completely honest with themselves, because when they are told that those aren’t automatic conditions that manifest themselves, and that there are specific requirements for us to obtain these conditions, they usually don’t want to hear about it. Also, people wouldn’t say they want these conditions if they were already omnipresent. People say they want something because they don’t already have it, whether it be partially, or in full. This is what the real law of attraction is about; it explains that conditions of which you say you want them, don’t automatically manifest by thinking of them, or just having a feeling about them. There are certain requirements for obtaining those conditions. Of course, this statement only counts if you want something to be different than the way it already is; if you’re completely okay with the way things are now, requirement doesn’t exist. We need to know certain things. Knowledge that will ultimately lead us to positive action is what’s required. Specifically, the knowledge of the requirements to obtaining the conditions we say we want. However, if this knowledge is absent it obviously can’t be obtained and used to create change. But if it is present, then it must be wilfully being ignored, and as long as this knowledge continues to remain unknown or ignored, the manifestation of the desired conditions will be impossible. And that is exactly what this thesis is about; what those requirements are, and what this knowledge consists of. https://img1.wsimg.com/blobby/go/532b60a7-70be-457d-9f9f-01ac6781264f/downloads/Thesis%20on%20Natural%20Law%20by%20Mike%20Gleeson%20-%20Englis.pdf?ver=1614624401158
4 months ago