The rise in popularity of artificial intelligence (AI) has probably led many to wonder if this is just the next tech fad that will be over in six months.
However, a recent benchmarking test conducted by developer catid revealed just how far GPT-4 has come – suggesting it could be a game changer for the web3 ecosystem.
Debugging test for AI code
The data below shows several tests of available open-source Large Language Models (LLMs) alongside OpenAI’s GPT-3.5 and GPT-4. catid ran the same set of C++ code samples through each model, recording false alarms raised on known-good examples and the number of planted bugs each model identified.
LLaMa 65B (4-bit GPTQ) model: 1 false alarm in 15 good examples. Detects 0 of 13 bugs.
Baize 30B (8-bit) model: 0 false alarms in 15 good examples. Detects 1 of 13 bugs.
Galpaca 30B (8-bit) model: 0 false alarms in 15 good examples. Detects 1 of 13 bugs.
Koala 13B (8-bit) model: 0 false alarms in 15 good examples. Detects 0 of 13 bugs.
Vicuna 13B (8-bit) model: 2 false alarms in 15 good examples. Detects 1 of 13 bugs.
Vicuna 7B (FP16) model: 1 false alarm in 15 good examples. Detects 0 of 13 bugs.
GPT 3.5: 0 false alarms in 15 good examples. Detects 7 of 13 bugs.
GPT 4: 0 false alarms in 15 good examples. Detects 13 of 13 bugs.
Across six models, the open-source LLMs caught only 3 of the 13 bugs between them and raised four false alarms. Meanwhile, GPT-3.5 caught 7 out of 13, and OpenAI’s latest offering, GPT-4, caught all 13 bugs without a single false alarm.
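To make the scoring concrete, here is an illustrative example of the kind of short, planted bug such a test asks a model to find. This snippet is our own invention in Python (catid's suite used short C++ samples), not taken from the actual benchmark:

```python
# A planted off-by-one bug of the sort a debugging benchmark might use.
# A model scores a "detection" if it flags the buggy loop bound, and a
# "false alarm" if it flags the correct version.

def max_subarray_sum_buggy(xs):
    """Kadane's algorithm with a planted bug: the loop skips the last element."""
    best = current = xs[0]
    for i in range(1, len(xs) - 1):  # BUG: should be range(1, len(xs))
        current = max(xs[i], current + xs[i])
        best = max(best, current)
    return best

def max_subarray_sum_fixed(xs):
    """Corrected version: the loop visits every element."""
    best = current = xs[0]
    for i in range(1, len(xs)):
        current = max(xs[i], current + xs[i])
        best = max(best, current)
    return best

data = [1, -2, 3, 4]
print(max_subarray_sum_buggy(data))  # 3 -- the trailing 4 is never seen
print(max_subarray_sum_fixed(data))  # 7 -- the subarray [3, 4]
```

Bugs like this are trivial for a human reviewer on four lines of code, which is exactly why the benchmark uses short samples: it isolates whether the model can reason about the code at all.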
The leap forward in debugging could be groundbreaking for smart contract implementation in web3, beyond the many web2 industries that will also benefit from it. Web3 connects digital activity and property with financial instruments, earning it the nickname “the Internet of Value.” It is therefore vital that the code running on the smart contracts powering web3 is free of bugs and vulnerabilities: a single entry point for a bad actor can result in billions of dollars being lost in moments.
GPT-4 and AutoGPT
The impressive results of GPT-4 show that the current hype is justified. They also suggest that AI capable of helping secure and stabilize the evolving web3 ecosystem is within reach.
Applications such as AutoGPT have gained momentum, using OpenAI’s models to spawn subordinate AI agents and delegate work tasks to them. AutoGPT also uses Pinecone for vector indexing, giving it both long- and short-term memory storage and working around GPT-4’s token limitations. Last week, the app was trending globally on Twitter several times as people raised their own armies of AI agents.
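The vector-memory idea is simple to sketch: past context is embedded as vectors, and the nearest stored vectors are recalled when relevant, so the agent can “remember” more than fits in one prompt. Below is a toy stand-in for that mechanism; all names are ours, and a real deployment would use Pinecone with learned embeddings rather than hand-written two-dimensional vectors:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class ToyVectorMemory:
    """Toy stand-in for a vector index such as Pinecone."""

    def __init__(self):
        self.items = []  # list of (vector, text) pairs

    def store(self, vector, text):
        self.items.append((vector, text))

    def recall(self, query, k=1):
        """Return the k stored texts most similar to the query vector."""
        ranked = sorted(self.items, key=lambda it: cosine(it[0], query),
                        reverse=True)
        return [text for _, text in ranked[:k]]

mem = ToyVectorMemory()
mem.store([1.0, 0.0], "task: audit withdraw() for reentrancy")
mem.store([0.0, 1.0], "task: write unit tests for deposit()")
print(mem.recall([0.9, 0.1]))  # recalls the reentrancy audit note
```

The design point is that recall is by semantic similarity, not exact match, which is what lets an agent pull old context back into a size-limited prompt window.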
By using AutoGPT as a benchmark, it may be possible to develop a similar or forked application to continuously monitor, detect bugs, and suggest solutions to the code in upgradable smart contracts. These edits can be manually approved by developers or even a DAO so that there is a “human in the loop” to authorize code implementation.
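That monitor–detect–propose–approve loop can be sketched in a few lines. Everything here is a hypothetical placeholder (no such application exists yet in the form described): the detector stands in for an LLM-based scanner, and the approval gate stands in for a developer sign-off or DAO vote:

```python
# Sketch of the proposed "human in the loop" review cycle for upgradable
# smart contracts. All function names are hypothetical placeholders.

def run_review_cycle(source, detect_bugs, propose_fix, approve):
    """Run one monitoring pass; apply a fix only if a human (or DAO) approves."""
    findings = detect_bugs(source)            # e.g. an LLM-based bug scanner
    for finding in findings:
        patch = propose_fix(source, finding)  # e.g. GPT-4 suggesting a change
        if approve(finding, patch):           # the human-in-the-loop gate
            source = patch                    # only approved code is deployed
    return source

# Toy usage with stub components standing in for the AI and the reviewers:
detect = lambda src: ["off-by-one"] if "len(xs) - 1" in src else []
propose = lambda src, finding: src.replace("len(xs) - 1", "len(xs)")
approve = lambda finding, patch: True  # a DAO vote or dev review in practice

code = "for i in range(1, len(xs) - 1): ..."
print(run_review_cycle(code, detect, propose, approve))
# -> for i in range(1, len(xs)): ...
```

The key property is that the AI only ever proposes patches; nothing reaches the chain without passing the `approve` gate.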
A similar workflow can also be created for implementing smart contracts through bug review and simulated transactions.
Reality check?
However, technical limitations need to be resolved before AI-managed smart contracts can be deployed in production environments. catid’s results also come with a caveat: the test’s scope is limited to a short piece of code, precisely the setting where GPT-4 excels.
In the real world, applications span multiple files of complex code with numerous dependencies, which would quickly exceed GPT-4’s token limits. Unfortunately, this means that GPT-4’s performance in real-world situations isn’t as impressive as the test suggests.
Yet it is now clear that the question is no longer whether a flawless AI code writer and debugger is feasible; the question is what ethical, regulatory, and agency issues will arise. Applications such as AutoGPT are already close to autonomously managing a codebase through vector memory and additional AI agents. The remaining limitations mainly lie in robustness and scalability, as such applications can still get stuck in loops.
The game is changing
GPT-4 has only been out for a month, and there is already a plethora of new public AI projects, such as AutoGPT and Elon Musk’s X.AI, that are reshaping the conversation about technology’s future.
The crypto industry seems ideally placed to leverage the power of models such as GPT-4, as smart contracts provide an ideal use case for creating truly autonomous and decentralized financial products.
How long will it take to see the first truly autonomous DAO without humans in the loop?
The post Here’s why GPT-4 outperforms GPT3.5, LLMs in code debugging appeared first on CryptoSlate.