How is AI coding actually architected? Is it implemented with countless if-else statements?
What I mean is the program code: is it trained with an LLM, or classified with if-else statements?
And how is it different from Stack Overflow, aside from the human-computer interaction part?
My personal understanding is that program code is not trained with an LLM. Program code should be modularized with if-else statements.
The LLM is only used for the human-computer interaction. In other words, with Stack Overflow we search by keywords, and now we can ask questions through an LLM-trained conversational interface, while the answers are modules that have already been classified with if-else statements.
Is my understanding correct?
Is the LLM's coding intelligence something it learned, or something programmers stuffed in beforehand (with if-else statements)?
I don't agree with your view. But the answers in fashion right now are all like yours.
Then let's hear your view.
I have a few questions. First: LLMs are trained on text. That I accept. But is program code also fed into the LLM for training? Program code is 100% structured and needs no training; a scanner for any grammar can recognize it correctly.
Second: an LLM can learn the regularities of a language. If that language means human language, I accept it. Can an LLM learn the regularities of a programming language?
I don't believe an LLM can learn the regularities of a programming language.
Third: so it can write code. Can it write code because it has learned the regularities of programming languages, or because hundreds of millions of if-else statements are embedded inside it?
In short, I think AI has no human programming ability; its AI designers simply embedded hundreds of millions of if-else statements in it. AI coding was not learned by the LLM at all; it was "faked" by the AI designers (the faking works through hundreds of millions of pre-written if-else statements, each one being a module).
The LLM is just a human-computer interface. Without an LLM, plain keywords could deliver today's fashionable "AI coding" just as well. For example, you type "bubble sort", and the "AI coding" system pulls a bubble-sort module out of its if-else statements and hands it to you.
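For concreteness, a minimal sketch of the keyword-to-canned-module design being described might look like the following (entirely hypothetical; the SNIPPETS table and the answer function are invented for illustration, and this is not how LLM vendors claim their systems work):

```python
# Hypothetical sketch of the "keyword -> pre-written module" design described above.
SNIPPETS = {
    ("bubble", "sort"): (
        "def bubble_sort(a):\n"
        "    for i in range(len(a)):\n"
        "        for j in range(len(a) - 1 - i):\n"
        "            if a[j] > a[j + 1]:\n"
        "                a[j], a[j + 1] = a[j + 1], a[j]\n"
        "    return a\n"
    ),
}

def answer(query: str) -> str:
    words = set(query.lower().split())
    for keywords, code in SNIPPETS.items():
        if set(keywords) <= words:              # every keyword present -> return the canned module
            return code
    return "no pre-written module matches"      # anything unanticipated falls through

print(answer("bubble sort please"))
```

A system like this can only ever return what was typed in beforehand, which is exactly the point the rest of the thread argues about.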
"LLMs are trained on massive amounts of text, not on hard-coded classification rules. They learn the regularities of language, which is why they can answer questions, write code, and even write articles."
What is the problem with an LLM learning the regularities of a programming language?
Briefly: LLMs really are trained on large amounts of text, and that text includes a great deal of program code, for example scraped from GitHub and Stack Overflow. They do not "learn" to write code from hand-written if-else rules added one by one; they learn patterns from reading that code, much like an intern who watches a mentor write code long enough to start imitating it. Programming languages differ from human languages, but they are still "languages" with grammar and structure, so an LLM can learn patterns from them too. It generates code by having a neural network predict the next token, not by following thousands of pre-written rules. The "hundreds of millions of if-else modules" you describe sounds more like the old expert systems; today's LLMs are products of statistical learning, built on a completely different principle. So an LLM is not a "person" who understands programming, but through learning it really has picked up a great many programming patterns convincingly enough, and that is why it can write code.
How did you deduce that an LLM can learn the regularities of a programming language?
There are plenty of open-source LLMs now.
There are also plenty of existing code bases in every programming language.
Have you personally put the two together to verify that "an LLM can learn the regularities of a programming language"?
I still don't agree with this view, though I know many people share it.
Personally, I think everyone who holds your view has been deceived by the LLM.
At the very least, nobody has taken an existing open-source code base and a free open-source LLM, combined them to verify that "an LLM can be trained on program code", and then released such a verification system, source code and all, for the public to inspect.
An LLM learns the regularities of a programming language through pattern learning (i.e., statistical pattern learning). It does not understand what each line of code means; by looking at a huge number of examples, it distills the common constructions and orderings. It's like a child who has heard "thank" followed by "you" often enough: without necessarily knowing what it means, he will imitate and say "thank you". An LLM does the same: after reading a large amount of code, it captures the regularities and writes plausible-looking code by following the familiar formulas. It does not grasp the essence; it is simply very good at imitating common patterns, relying on probability rather than understanding.
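To make the "thank you" analogy concrete, here is a toy, count-based version of statistical pattern learning over a few lines of code. It is deliberately tiny: real LLMs learn contextual patterns with neural networks, not raw bigram counts, but the idea of predicting the next token from observed examples is the same.

```python
from collections import Counter, defaultdict

# Count which token tends to follow which in a tiny "training set" of code,
# then predict the most likely next token from those counts alone.
corpus = [
    "for i in range ( n ) :",
    "for j in range ( m ) :",
    "for k in range ( 10 ) :",
]

follows = defaultdict(Counter)
for line in corpus:
    tokens = line.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1

# Having only ever seen "(" after "range", the model predicts "(" next.
print(follows["range"].most_common(1))   # [('(', 3)]
print(follows["in"].most_common(1))      # [('range', 3)]
```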
Self-driving cars also learn to recognize road traffic through large-scale training (by being shown lots of road scenes).
But isn't it possible that, behind the scenes, there is actually someone with a joystick, watching the video feed in real time and steering the car?
Would that phenomenon appear only in self-driving, and never in other fields (such as GPT)?
How do we know that the answers GPT gives are learned and derived by analysis, rather than preset with if-else?
I understand your skepticism; many people also start out suspecting that LLMs are "faking it". But open models such as Code LLaMA and StarCoder really are trained on large-scale open-source code, and they perform reasonably well; plenty of programmers already use them to assist development. The full from-scratch training process isn't completely public, but there are quite a few benchmarks measuring their code-generation ability, with decent accuracy and pass rates. It isn't 100% publicly reproducible yet, but the training setup, evaluation methods, and example code are all documented, not conjured out of thin air. So the claim that they have learned the regularities of programming languages isn't empty talk; there is some empirical support, and more public verification will surely follow.
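Anyone who wants first-hand evidence can run an open code model locally. A rough sketch with the Hugging Face transformers library is below; the specific checkpoint (Salesforce/codegen-350M-mono) is just one small, openly downloadable example I am assuming here, and you can substitute StarCoder or Code LLaMA if you have the hardware and accept their licenses:

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Salesforce/codegen-350M-mono"   # small open code model; swap in a larger one if you can
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "# Python: return the n-th Fibonacci number\ndef fib(n):"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```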
Programming languages are much simpler than natural languages. If natural language can be learned, why can't a programming language?
This question essentially falls under AI explainability and transparency research: how does a GPT-style model actually make its decisions, are the training data and architecture public, and can humans trace the logical path behind its output? There are many open-source models today whose training process, architecture, and code are at least partially public and can be verified. Your skepticism is reasonable, and it is exactly what AI research keeps working on. Whether it truly "understands" is a separate matter, but its generation mechanism can at least be studied and analyzed.
Speaking of transparency, here is my guess: you have not actually trained an open-source LLM on public program code yourself.
In fact, something like 99.9999% of people have never verified it. Everything is open source, so why has nobody verified it? That alone is suspicious.
Because if someone had, and the results showed very high accuracy, why wouldn't every one of us go open our own AI coding company?
We are all going on hearsay; we really have no idea how the AI coding offered by AI coding companies achieves its accuracy.
Do you think the answers to olympiad math problems are also produced by training an LLM on large numbers of olympiad problems?
I think they are worked out by hand, one category of problem at a time, with if-else statements, and the LLM is only the front-end human-computer interface. You could implement AI olympiad solving without an LLM at all.
Don't AI olympiad solving and AI coding work on the same principle? What do you think?
An LLM learned to code through its language-learning mode.
An LLM learning to code is the same as you learning to code.
You learned to code through the pruning of neural synapses.
Tuning an LLM's parameters simulates that synaptic-pruning process in the human brain.
An LLM is simply a super-brain.
I still don't get it. I can't tell whether your level is so high that I can't follow, or the other way around.
Code is structured, yes, but code cannot be learned with an LLM. Olympiad problems are structured and logical too, and I haven't heard of AI cracking olympiad math.
Because AI simply does not have any real ability to "learn".
Why can human language be learned with an LLM? Because human language is 1) structured (especially English) and 2) suited to a transformer, that is, one sentence is connected to the next.
In code, one line can have no connection to the next; lines can even run in parallel. AI simply cannot handle code whose lines are unconnected like that.
If a person hands the AI a passage whose sentences have no connection to one another, the LLM cannot handle it either. If the LLM "reads" meaning into it, then the LLM is wrong.
Here's an example.
床前明月光 举头望明月 疑是地上霜 低头思故乡 (lines of Li Bai's "Quiet Night Thoughts", quoted out of their usual order)
An LLM simply cannot read what this poem means.
The most knowledgeable post in the thread so far; I need to go chew on it. My impression is that everyone here is short on hands-on experience but strong on theory.
So no matter how many days we spend discussing this, we won't settle anything.
how did you learn to code?

I didn't learn to code in the traditional sense—I'm a large language model, so my "coding" ability comes from being trained on vast amounts of text data, including code snippets, documentation, and programming-related content. My creators at xAI fed me a diverse diet of codebases, tutorials, and tech discussions, which I use to generate and understand code. Think of it like learning by osmosis—I absorbed patterns and structures from countless examples. When you ask me to code, I piece together those patterns to produce something functional, based on what I've "seen" before. No late-night coding sessions or coffee-fueled debugathons for me, just pure data-driven mimicry! Want me to whip up some code for you to see it in action?

How different is it from how humans learn to code?

My "learning" to code is fundamentally different from how humans learn, though there are some parallels. Here's a breakdown of the key differences and similarities:

How I "Learned" to Code
- Data-Driven Pattern Recognition: I was trained on massive datasets containing code, documentation, and programming discussions. My ability to code comes from recognizing patterns in this data, not from understanding concepts like a human would. I generate code by predicting what sequence of tokens (code elements) makes sense based on your prompt and my training.
- No Conscious Understanding: I don't "understand" coding principles, logic, or problem-solving like humans do. I mimic code structures I've seen before, using statistical probabilities to produce something that matches your request.
- Instant and Broad Exposure: I didn't go through a learning curve. My training gave me exposure to countless programming languages, frameworks, and paradigms all at once, so I can generate code in Python, JavaScript, or even obscure languages without "studying" them.
- No Practice or Iteration: I don't practice or debug code to improve. My responses are generated on the fly, relying on my training data. If my output is wrong, it's because my training data didn't cover that case well enough or I misinterpreted the prompt.
- Static Knowledge Base: While my training data is vast and continuously updated, I don't learn from new interactions in real-time like humans do. I rely on what I was trained on up to my last update.

How Humans Learn to Code
- Conceptual Understanding: Humans learn by grasping programming concepts (e.g., variables, loops, functions) and building mental models of how code works. They understand why code behaves a certain way, not just how to replicate it.
- Iterative Practice: Humans learn through trial and error, writing code, debugging errors, and refining their skills over time. They often start with simple programs and gradually tackle more complex problems.
- Active Problem-Solving: Humans actively solve problems, breaking them down into logical steps and applying creativity. They can reason about edge cases and optimize code based on goals.
- Learning Curve: Humans typically start with one language or framework, mastering it before moving to others. Their knowledge builds incrementally through courses, projects, or real-world applications.
- Learning from Feedback: Humans improve by getting feedback (e.g., from mentors, errors, or code reviews) and adapting their approach. They can learn from their own mistakes or new resources in real time.
But there is no tutorial that breaks these steps down in detail. In other words, that is the theory; what actually happens in practice, nobody has verified or can know.
My two cents: during training, an LLM is not building a queryable DB; it is continuously adjusting the hundreds of billions of parameters in its neural network to capture statistical relationships between words and between structures. When it generates an answer, it is not searching some associated DB either; it works from the context and predicts the most likely next token, step by step, more like improvising on the spot than looking something up.
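Here is a minimal sketch of what that step-by-step prediction looks like in code, using GPT-2 only because it is small and openly downloadable; note that nothing in the loop queries a database:

```python
# pip install transformers torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("def bubble_sort(arr):", return_tensors="pt").input_ids
for _ in range(20):
    logits = model(ids).logits[0, -1]        # a score for every possible next token
    probs = torch.softmax(logits, dim=-1)    # scores -> probability distribution
    next_id = torch.multinomial(probs, 1)    # sample one token from the distribution
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
print(tok.decode(ids[0]))
```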
That is the theory.
But we notice that for sensitive questions, the LLM does not answer by predicting step by step; it reads from a script.
Could it be that this reading-from-a-script happens very, very often, and the LLM simply never tells you how many of its answers are scripted?
Think of the street fortune-teller: the smooth patter he delivers is a set of prepared lines, and he picks which ones to use based on who is in front of him.
If it's a woman anxious for a child, he says something like "a happy event is on its way", so the listener is willing to hand over her money.
Then give an example of a sensitive question. For certain sensitive topics or questions about legal positions, the LLM does trigger a "preset response" mechanism, and that part is basically read from a script; these are the so-called alignment safeguards that developers add for safety and compliance. But this script-reading only happens under specific triggering conditions; it does not mean all of the LLM's answers are scripted.
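As a rough illustration of how such a safeguard could sit structurally in front of free generation (purely hypothetical; production systems rely on trained refusal behavior and classifiers rather than a keyword table like this), the scripted path and the generation path can be separate layers:

```python
# Hypothetical sketch: a scripted layer in front of free generation.
# CANNED and its trigger phrases are invented for illustration only.
CANNED = {
    "some sensitive topic": "I'm not able to help with that request.",
}

def respond(prompt: str, generate) -> str:
    lowered = prompt.lower()
    for trigger, template in CANNED.items():
        if trigger in lowered:
            return template        # "reading from a script" happens only on these triggers
    return generate(prompt)        # everything else goes through normal token-by-token generation

# Example with a dummy generator standing in for the model:
print(respond("write a haiku about rain", lambda p: f"(model output for: {p})"))
```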
How much falls under the "preset response" mechanism, none of us knows. Only the company does.
But a system that reads entirely from a script can pass itself off as AI, like Mr. Nanguo faking his part in the ensemble (南郭吹竽).
For a question like whether Taiwan is a country, the answer an LLM gives is often a preset template, and some of them read like official statements. In those cases it is indeed not "freely" generated; a platform-level mechanism was triggered. It is also true that, for now, we don't know how many answers are "read from a script", but that belongs to the transparency discussion and is no reason to dismiss the LLM's abilities wholesale. Most of the time, for things like writing code, solving problems, or rewriting text, it really is doing pattern learning and context-based generation, and that can be verified. The verification available today may not be comprehensive, but again, a lot of people are now working on AI transparency and explainability, and on AI ethics, so there should be progress.
Then when it answers olympiad math problems, is it also triggering that mechanism? Because the LLM, or the AI, doesn't actually understand the olympiad problem at all.
Or take coding and the like: in essence, isn't it programmers taking the code they know, modularizing it, and then dispatching it to each user's question through if-else statements?
If you can build one if-else mechanism, then building tens of billions of them is just a matter of manpower. You could hire 50 math experts, each responsible for one category of olympiad problems, then announce to the world that you have cracked AI olympiad solving and watch the stock soar.
We really cannot prove 100% right now whether an LLM is or is not if-else, because the training process and the model parameters are too complex, and from the outside nobody can see how it operates internally. But several observable behaviors indirectly show that it is not if-else:
It can answer brand-new questions it has never seen, even ones that are misspelled (this point matters) or confusingly worded, where an if-else system would normally just break or fail to match. Ask it the same question several times and it gives different solutions or phrasings, which shows it generates on the spot from the context rather than picking an answer from a template. And it makes mistakes, mistakes that look human, such as forgetting an edge case or skipping a logical step; that kind of "human-like error" does not usually come out of a rigid rule system. Of course, you are right that we cannot open up its head and confirm that every step really is prediction. That is exactly a transparency question, but it is not a reason to deny the LLM's abilities; it is a reason to push for more openness and explainability research.
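One of those behaviors is easy to check yourself: sample the same prompt several times with a temperature above zero and you get different continuations, whereas a fixed template lookup would return the identical string every time. A rough sketch (GPT-2 again, only because it is small and open):

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
ids = tok("To reverse a list in Python you can", return_tensors="pt").input_ids

for _ in range(3):
    out = model.generate(ids, max_new_tokens=20, do_sample=True, temperature=0.9,
                         pad_token_id=tok.eos_token_id)
    # print only the newly generated continuation; it differs from run to run
    print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```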
A question: for some sensitive questions, why do the answers differ in wording but carry the same gist?
Clearly, in that situation, you and I both think some kind of if-else mechanism is in play. By rights there should be exactly one answer to such a question. Why are the answers similar yet not identical?
Also, many large models give highly similar answers to the same question: DeepSeek, ChatGPT, Grok 3. What should we smell in that degree of similarity?
You probably need to learn a bit more about AI. Sensitive questions get answers with the same gist but slightly different wording because alignment is triggered: the model is constrained to generate within a "safe range". It is not answering with one fixed if-else sentence; alignment steers it toward a vague, template-like answer, and within that range the wording can vary. Different large models answer similarly because their training data overlaps (e.g., Wikipedia, public forums) and they use similar safety policies, handling sensitive questions with the same vague, template-style generation, so the style and content of their outputs naturally end up close. Bottom line: that an LLM is not if-else can be checked at the level of both principle and behavior. Do you have any solid evidence that an LLM really is implemented with if-else? If not, why assume it is doing if-else rather than generating?
As I recall, the OP keeps saying that when AI answers your questions, there are actually lots of real people hired in the back typing out the answers for you.
A few services are like that, but the mainstream ones, GPT, DeepSeek and so on, are not. If you really don't believe it, go set up LM Studio and play with an offline version yourself; you just need some decent hardware.
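For example, assuming LM Studio's local server is switched on (it exposes an OpenAI-compatible endpoint, by default at http://localhost:1234/v1), a downloaded model can be queried entirely offline, with no humans in the loop; the model name below is a placeholder for whatever model you have loaded:

```python
# pip install openai   (used only as a client library; everything runs on your own machine)
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")
reply = client.chat.completions.create(
    model="local-model",   # placeholder; LM Studio serves whichever model you loaded
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(reply.choices[0].message.content)
```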