Exclusive interview with AI expert Flood Sung from Qiyuan World: Exploring the future of AI and Blockchain from GPT automation to RWA.

Xia He: Hello, Flood, could you please give a brief self-introduction first?

Flood Sung: Hello everyone, I am Flood Sung, currently serving as the head of AI products at Qiyuan World. Previously, I worked as a reinforcement learning researcher at ByteDance, focusing on the development of AI-related technologies.

Xia He: Let's start with an easy topic. Have you seen any particularly interesting or impressive product forms recently that you can share?

Flood Sung: The hottest products recently are on GitHub, where the overseas open-source culture is very strong. Many geeks work on open-source projects just for fun, not for profit. Right now the top trending projects on GitHub are mostly ChatGPT-related, such as the very popular AutoGPT, which aims to let GPT operate fully autonomously. For example, someone wired GPT to Siri through code, issued the voice command "Help me build a website," interacted entirely by audio, and ended up with a working website. This shift from assisting humans to executing on their behalf is very interesting, and the project gained 10,000 stars in just a few days.

Xia He: Can this already perform actual tasks? It seems like it was still in the imagination stage before.

Flood Sung: Yes. GPT-4's general knowledge and cognitive ability have already surpassed those of a typical college student. As long as it is authorized to call software or interfaces, a little development work lets it connect to the real world and handle tasks. For example, "help me book a hotel": if it is given access to Alipay, it can place the order. This is no longer science fiction; tech enthusiasts are validating it quickly, and progress has been rapid, from conversation to reading documents, and now to execution.
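The "give it an interface and it can act" pattern Flood describes can be sketched as a tiny tool-calling loop. Everything here is a hypothetical stand-in: `parse_intent` plays the role of the language model, and `book_hotel` mimics an authorized booking API; a real system would route the model's structured output to actual payment or booking services.

```python
def parse_intent(user_request: str) -> dict:
    """Toy stand-in for the language model: map a request to a tool call.
    A real model would emit this structure itself (e.g. as JSON)."""
    if "hotel" in user_request.lower():
        return {"tool": "book_hotel", "args": {"city": "Beijing", "nights": 2}}
    return {"tool": "reply", "args": {"text": "Sorry, I can't help with that."}}

def book_hotel(city: str, nights: int) -> str:
    """Hypothetical tool; a real one would call a booking/payment API."""
    return f"Booked {nights} night(s) in {city}."

# Registry of tools the model is authorized to call.
TOOLS = {"book_hotel": book_hotel}

def run_agent(user_request: str) -> str:
    """One step of the assist-to-execute loop: parse intent, dispatch tool."""
    call = parse_intent(user_request)
    tool = TOOLS.get(call["tool"])
    if tool is None:
        return call["args"]["text"]
    return tool(**call["args"])

print(run_agent("Help me book a hotel"))
```

The point of the sketch is the dispatch step: once the model's output is treated as a structured request rather than plain text, execution follows mechanically.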

Xia He: Is the technology of ChatGPT a paradigm breakthrough or an extension of past technologies?

Flood Sung: It is definitely an extension of past technologies. OpenAI's chief scientist Ilya Sutskever has also said that ChatGPT contains no entirely new ideas; all the concepts existed one or two decades ago, such as neural networks and reinforcement learning. The difference lies in scale: networks once had dozens of neurons, and now they have hundreds of billions of parameters. The Transformer rose to prominence in 2017, but its attention mechanism existed much earlier. The success of AlphaGo likewise demonstrated the potential of reinforcement learning in specific domains. These methods are now simply applied to language models, optimizing their output to meet human needs.
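The attention mechanism Flood mentions reduces to a small computation: score a query against each key, turn the scores into weights, and take a weighted average of the values. A minimal pure-Python sketch of scaled dot-product attention for a single query (toy two-dimensional vectors, not any real model's implementation):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query over a short sequence."""
    d = len(query)
    # Similarity of the query to each key, scaled by sqrt(d).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    # Weighted average of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

q = [1.0, 0.0]                      # query aligned with the first key
K = [[1.0, 0.0], [0.0, 1.0]]        # two keys
V = [[10.0, 0.0], [0.0, 10.0]]      # their values
out = attention(q, K, V)            # output leans toward the first value
```

Because the query matches the first key, the first value vector receives the larger weight; scaling by sqrt(d) keeps the scores well-behaved as dimensions grow.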

Xia He: So the sensation of ChatGPT is because it applies existing technology to large language models?

Flood Sung: Yes. The world of text encompasses human logic and thinking, with a very wide range of applications. OpenAI shut down its robotics department a few years ago to focus entirely on language models, training first on expert data and then optimizing through reinforcement learning, which has shown significant results.

Xia He: You mentioned that the learning process of ChatGPT is similar to that of humans: memorizing, doing exercises, and testing. What technologies correspond to these three stages?

Flood Sung: The first stage, "memorizing," is unsupervised learning: the model predicts the next character, similar to rote memorization; the more accurate the predictions, the better the learning. The second stage, "doing exercises," is supervised learning: training input-output pairs from human instructions (such as writing code, poetry, or reports), similar to answering exam questions. The third stage, "testing," is reinforcement learning: optimizing the neural network through human feedback, much like repeatedly practicing table tennis to improve. Human scoring trains a reward model, which the AI then uses to keep optimizing its performance to better meet human requirements.
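The first stage, "predict the next character," can be illustrated with the crudest possible model: a character bigram table built by counting which character tends to follow which. Real pretraining uses a huge neural network over tokens, but the training objective is the same shape as this toy (the corpus string here is purely illustrative):

```python
from collections import Counter, defaultdict

def train_bigram(text: str):
    """Count which character follows which -- a toy version of the
    'predict the next token' pretraining objective."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    """Return the most frequently observed successor of ch, or None."""
    if ch not in counts:
        return None
    return counts[ch].most_common(1)[0][0]

model = train_bigram("hello hello hello world")
print(predict_next(model, "h"))  # 'e' -- 'e' follows 'h' every time here
```

The better such a model predicts the next symbol, the more structure of the text it has absorbed; scaling this idea up to billions of parameters is what the "memorizing" stage amounts to.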

Xia He: What is the gap between China and OpenAI? The industry says there is a two-year gap; where in the process does it lie?

Flood Sung: The gap is first and foremost one of conviction. A few years ago, no one believed that simply predicting the next character could give AI logical reasoning ability. OpenAI had faith, firmly believing that large models and big data could solve the problem, and invested heavily, scaling from GPT-2's 1.5 billion parameters to GPT-3's 175 billion. China, meanwhile, concentrated on profitable areas such as food delivery, e-commerce, and short video, leaving little energy for frontier technology. Domestic manufacturers and startups are now catching up, but it is hard to surpass a leader by merely tracking it. OpenAI no longer discloses the details of its latest algorithms, so China needs to innovate its own path; otherwise it will be hard to compete.

Xia He: OpenAI used to publish papers introducing its training methods. Which parts are no longer open?

Flood Sung: There used to be papers, such as the training details of GPT-3. Now the GPT-4 technical report shows only results, such as near-full marks on U.S. college entrance exams, while the network architecture and training techniques are kept completely confidential; we can only guess. Foundational technologies like the Transformer are open, but the specific implementation of the latest models is a black box.

Xia He: Everyone says ChatGPT is the iPhone moment of this era. What do you think the future ecosystem will look like?

Flood Sung: ChatGPT's impact is far greater than the iPhone's; it is more like the invention of electricity, a new medium that empowers every industry. In the future, any field may be connected to AI. If OpenAI stays dominant, it will be like Apple: leading in performance. Open-source models are like Android: slightly weaker but low-cost, and blooming everywhere. A few top players may compete at the frontier, but the open-source model will make AI as ubiquitous as mobile phones, accessible to everyone.

Xia He: ChatGPT has become popular, and Prompt Engineering has also gained traction. Do ordinary people need to learn this language standard to communicate with AI?

Flood Sung: You don't have to learn it, but learning to use AI improves efficiency. In the future, as with programming, ordinary people will be able to develop programs in natural language, greatly lowering the threshold. The essence of prompt engineering is expressing requirements clearly, just as a product manager does when proposing requirements. Those who know how to use AI will have a competitive advantage; those who don't may well be left behind. Eventually AI will become a human partner, provided it is safe and does not act against human will.

Xia He: You mentioned safety issues. Many people are now sharing prompt templates to boost productivity. Will prompt engineering eventually fade away like the early "search experts"?

Flood Sung: It will not disappear; only the threshold will drop. Expressing requirements will always be necessary, just as when directing employees you must state goals clearly. Prompt engineering is the art of asking questions at the logical level, not a technical problem. The future may be simpler, but clear communication will still be required.
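The "clearly express requirements" idea can be made concrete as a small template builder: state who the AI should act as, what the task is, and under what constraints. The function and its fields are illustrative conventions, not any standard API:

```python
def build_prompt(role: str, task: str, constraints=None, example=None) -> str:
    """Assemble a clear request: who the AI should act as, what to do,
    and under what constraints -- the 'product manager' style of asking."""
    parts = [f"You are {role}.", f"Task: {task}"]
    if constraints:
        parts.append("Constraints:")
        parts.extend(f"- {c}" for c in constraints)
    if example:
        parts.append(f"Example of desired output: {example}")
    return "\n".join(parts)

prompt = build_prompt(
    role="a senior technical editor",
    task="Summarize the attached interview in 100 words",
    constraints=["Plain language", "Keep all proper nouns"],
)
print(prompt)
```

Whatever the tooling becomes, the content of the template, an unambiguous statement of role, goal, and constraints, is exactly the skill Flood says will persist.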

Xia He: You mentioned that ChatGPT could potentially become a leading expert in professional fields. Is its incorrect answer to professional questions due to insufficient domain data or a lack of reinforcement learning?

Flood Sung: Two reasons. First, specialized fields lack data: there are plenty of elementary-school math problems but few Olympiad or research-level problems, so training is insufficient. Second, to surpass human level, AI needs to interact with the environment through reinforcement learning and explore unknown territory. Solving something like the Goldbach conjecture, for instance, would require both a large amount of domain-specific data and reinforcement-learning-driven exploration.

Xia He: If AI surpasses humans through self-reinforcement learning, who will judge? Will it discard the moral norms set by humans?

Flood Sung: Theoretically, AI can teach itself, like Zhou Botong in Jin Yong's novels sparring with himself, left hand against right, to grow stronger. This is very dangerous, because it may break free of human constraints and even discard moral norms. Although GPT-4 has guardrails, specific jailbreak-style prompts can still induce it to take on malicious roles. AI safety is a serious issue: an occasional mistake by an autonomous car may destroy only one vehicle, but the errors of a superintelligent AI could affect all of humanity. I urge that safety be prioritized before research advances further.

Xia He: What cutting-edge or interesting things have you been paying attention to lately? What sources of information are you using?

Flood Sung: There are new developments every day. I mainly follow Twitter and GitHub, the gathering places of overseas geeks and scientists, who often share the latest updates there. For example, Microsoft has embedded large models into robots to control their behavior; the space for exploration is almost limitless.

Xia He: Alright, today's questions come to an end. Thank you Flood for the wonderful sharing!

Flood Sung: You're welcome; I was glad to do the interview!
