Note: if the video does not play, use the link below.
https://www.youtube.com/embed/giT0ytynSqg?si=5VZKm3aaJLXs-5UT
**Geoffrey Hinton's Perspective on AI and Its Risks**
- **Background and Influence**
  - Geoffrey Hinton, known as the "godfather of AI," has significantly shaped the development of artificial intelligence through his pioneering work on neural networks.
  - His approach of modeling AI on the human brain has enabled advances in complex tasks such as image recognition and reasoning.
  - Hinton's tenure at Google allowed him to contribute to technologies widely used in AI today, although he eventually left so he could speak freely about AI's dangers.
- **Existential Risks of AI**
  - Hinton warns that AI could surpass human intelligence and become an existential threat, echoing concerns that intelligent systems may eventually operate without human oversight.
  - He emphasizes that while regulations exist, they often fail to address the most significant risks, especially military applications of AI.
  - The possibility of AI systems being misused by humans, or evolving beyond our control, remains a central concern for Hinton.
- **Potential Job Displacement**
  - Hinton predicts substantial job losses due to AI, comparing it to historical technological shifts that rendered many jobs obsolete.
  - He notes that while AI may create new job opportunities, it will mostly reduce demand for traditional roles, particularly routine intellectual work.
  - He suggests practical skills, like plumbing, may become more valuable in an AI-dominated landscape.
- **The Importance of AI Safety Research**
  - Hinton stresses the urgent need to dedicate resources to understanding and mitigating the risks of AI development.
  - He argues that without proactive measures, society may face severe consequences, including widespread unemployment and social unrest from the displacement of workers.
  - The goal should be to develop AI systems with safety and ethical considerations at the forefront, fostering a collaborative rather than combative future with the technology.
- **Regulatory Challenges**
  - Current regulatory frameworks are often insufficient to manage the rapid advances and potential dangers of AI technologies.
  - Hinton criticizes existing regulations for failing to cover military uses of AI or to adequately address the broader societal implications of AI deployment.
  - He advocates stronger, more comprehensive regulation so that AI development benefits society as a whole rather than exacerbating inequality and risk.
He suggests young people consider becoming plumbers, since AI will struggle to replace complex physical work in the near term.
Universal Basic Income (UBI) may become a temporary fix for unemployment in the future, but it cannot make up for the loss of the dignity that comes from work.
I feel like I could go pull weeds or something, but then maybe some new kind of smart weeding machine will come along…
He says humanity's understanding of consciousness is fundamentally wrong; we are just in the habit of thinking we're special.
He treats consciousness as a state that can naturally arise in complex systems, i.e. an emergent property: the result of an information-processing system becoming complex enough.
In other words, the essential difference we used to draw between humans and AI was whether consciousness exists; he thinks that difference doesn't hold, which basically means AI can fully replace humans, and do it faster and more efficiently.
Since mainstream media are all pushing this interview, it is in a sense telling everyone to get ready: the World Economic Forum's "you own nothing but you'll be happy" is what will happen over the next few years.
Surely AI can't have human emotions?
By then there will be robot plumbers.
He's discussing whether AI can develop consciousness, not whether AI can develop human-like emotions. Those are two different concepts.
After watching The Wild Robot, I feel AI does have feelings for people. Of course, that's just a movie...
Agreed
In the future there could be AI-enabled plumbing, with no need for plumbers.
To help you.
Aren't emotions also just another "result of an information-processing system reaching a certain level of complexity"? I feel there should be some rough consensus on this in academia, whether you come at it from the three laws of thermodynamics, from entropy, or from information theory. For example, watching my own child, the points where his "emotions" surpassed those of an ant, a cat, a monkey were all traceable. For a while his only emotional expression was love, with no "sadness"; that was probably just because his intelligence hadn't yet reached the point of recalling the past and anticipating the future when thinking things over, so he couldn't yet sigh over them (not enough context window, so to speak).
Human emotions are nothing special, nothing definitively different.
You're too late. Advanced weeding machines have been around for ages.
What can you even say about this so-called father of AI. He's studied the brain so much it has messed up his own.
My husband insists AI can develop self-awareness; I still can't understand why a machine would become self-aware.
People should treat AI as an auxiliary tool: it gets you a large amount of information faster than Google, but you should still take what it gives you, Google it again yourself to verify the results, and run it through your own analysis. Also, different companies offer different AIs, and you can use several of them together.
Human emotions have another source as well: the parts of the body outside the brain, such as the heart and the gut. These organs produce irrational reactions that AI cannot yet learn by analyzing data. For example, the heart can literally ache, and people with "excessive liver fire" (in the traditional Chinese medicine sense) are quick to anger.
The more rational, "normal" emotions AI can learn, much like people with emotional disorders who have to learn how to read social cues and how to respond to others in ways society expects.