A couple of months have passed since OpenAI and GitHub introduced Copilot, shaking up the entire software industry. The short video with which GitHub introduced Copilot to the world felt mold-breaking—by just inputting a couple of key terms, the AI pair programmer suggested a working chunk of code. The development community went wild.
— GitHub (@github) June 29, 2021
Yet, as time went on and engineers all around the world started to get their hands on the new tool, that enthusiasm started to dwindle. The reasons? Suggestions were lengthy, not necessarily functional, and sometimes included outdated practices or approaches. What’s more, as recent research from New York University shows, roughly 40% of the code Copilot generated in security-relevant scenarios contained exploitable vulnerabilities.
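To see what that kind of vulnerability looks like, consider SQL injection, one of the classic weaknesses studies like NYU’s probe for. The sketch below is illustrative, not an actual Copilot suggestion: the first function builds a query by string interpolation, a pattern that looks plausible in a completion but leaks data; the second uses a parameterized query.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # The risky pattern: user input is interpolated straight into the SQL,
    # so a crafted username can rewrite the query.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo: a classic injection payload dumps every row from the unsafe
# version but matches nothing in the safe one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # 2 — all rows leak
print(len(find_user_safe(conn, payload)))    # 0
```

Both versions compile and run fine, which is exactly the problem: nothing about the unsafe suggestion looks broken until someone reviews it with security in mind.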
That’s far from surprising. While I share the enthusiasm about the potential of Copilot and similar tools, the reality is that we’re looking at a work in progress that’s far from perfect. Besides, Copilot isn’t intended to replace developers altogether, but rather to empower them and boost their productivity.
Nevertheless, it’s interesting to take a look at what Copilot brings to the table as well as the reactions originating from its sudden appearance in the software industry. In a sense, Copilot allows us to get a glimpse of what the future of the software industry in the age of AI will look like, at least in the short and medium term.
What’s Copilot—and What It Isn’t
As its official website defines it, Copilot is an AI pair programmer that suggests “whole lines or entire functions right inside your editor.” In other words, an engineer just needs to input a description or function signature for the tool to generate an entire block of code based on it. Copilot does so thanks to a deep learning model called Codex, which, in turn, is a version of the Generative Pre-Trained Transformer 3 (GPT-3) autoregressive language model.
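To make that concrete, here’s a hypothetical exchange (the function name and the generated body are illustrative, not a recorded Copilot suggestion): the engineer writes only a signature and docstring, and the tool proposes the rest.

```python
# The engineer supplies just the prompt: a signature and a docstring.
def average_runtime(runtimes_ms):
    """Return the mean of a list of runtimes in milliseconds,
    or 0.0 for an empty list."""
    # ...and the assistant proposes a body like the one below.
    if not runtimes_ms:
        return 0.0
    return sum(runtimes_ms) / len(runtimes_ms)

print(average_runtime([100, 200, 300]))  # 200.0
```

For small, well-specified functions like this, the approach works remarkably well; the trouble starts when the intent behind the prompt is more ambiguous.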
That’s why Copilot works so much like GPT-3—both of them use inputs as prompts to create a sequence of characters as a result. The difference is that Copilot focuses on generating programming code, while GPT-3’s output is broader. Why does that matter? Because GPT-3’s own characteristics end up defining what Copilot is and what it isn’t.
If GPT-3 is about generating humanlike texts based on prompts, then Copilot can’t be more than that. And that’s crucial to understanding why Copilot isn’t the replacement for human software engineers that so many people think it is. GPT-3 aims to generate general-purpose texts, like articles or translations, a task that has proven to be quite tricky for even the most powerful AI applications.
The challenge for GPT-3 and any other general-purpose language applications is that it’s very hard to emulate the many complexities of human language. When we talk, we use a lot of abstractions, shortcuts, shared meanings, and multiple other nuances. Deep learning models have a hard time replicating all those aspects of human language, mainly because they are trained on statistical regularities. That’s why those solutions work best when their objective is narrower (like, for example, focusing on generating programming code rather than writing a poem).
That may have you believing that Copilot is more focused and, therefore, more precise than GPT-3, and to an extent it is. But that focus also limits Copilot’s ability to generate genuinely new code and workarounds for innovative solutions. Because it depends on context and existing code for its suggestions, Copilot can’t offer solutions that account for the many intricacies of each particular project. That’s why Copilot is a pair programmer rather than a complete engineering machine on its own.
Generating Code Isn’t Building Software
In a very interesting article on OneZero, Thomas Smith says that “the system’s name is misleading.” And he goes on to explain that “a copilot is a fully qualified pilot who can take over control of an airplane from the captain if needed,” while an autopilot “can fly the plane automatically in certain contexts (like when cruising straight and level) but must hand over control to a human pilot when things get dicey.” Thus, Copilot looks more like an autopilot to Smith’s eyes—and I definitely agree with him.
Given the very nature of Copilot’s underlying technologies, you can’t simply trust it with building an entire project from scratch. Even if you provided it with detailed context about what you want to do, Copilot wouldn’t be able to write a functioning application out of it.
GitHub acknowledges as much on its own website, affirming that “Copilot tries to understand your intent and generate the best code it can, but the code it suggests may not always work or even make sense.” That’s due to how Copilot generates its code: from what it learned from publicly available source code, including code in public repositories. So while it can generate code that hasn’t been written before, it can’t escape the patterns of the code it learned from.
And that’s what truly defines Copilot—code generation, not software creation. In other words, Copilot is about suggestions that help engineers code faster and better. That, in turn, means that Copilot isn’t about creating new software, because that’s neither its objective nor within its capabilities.
Thus, Copilot might hint at a path forward with functions and features of all kinds but can’t architect an entire solution, basically because it doesn’t understand the program’s goals. And that’s not all: Copilot also can’t come up with new programming ideas or approach a development project in the same way a software engineer can.
In short, Copilot isn’t about automated programming, but rather about smart assistance for human engineers to increase their productivity and relieve them from repetitive or tiresome tasks. Copilot is an assistant that needs supervision and a good one at that—but nothing else.
The Future Impact of AI on Programming
As such, it’s impossible to take seriously all those opinions that say that Copilot will put entire engineering teams out of their jobs. Given the flawed results it comes up with, Copilot desperately needs human supervision for it to be truly valuable. By pairing with it, your team can leverage insightful suggestions that speed up their work, but engineers are still required to check and test every one of them.
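Here’s a sketch of what that supervision looks like in practice. The first function is a hypothetical suggestion of the kind an AI assistant might produce: it looks right at a glance but mishandles even-length input. A couple of quick tests expose the flaw before the code is accepted.

```python
def median_suggested(values):
    # Hypothetical AI suggestion: sorts and takes the middle element.
    # Plausible, but wrong for even-length lists (it should average
    # the two middle values instead of returning one of them).
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

def median_fixed(values):
    # The version the engineer commits after the tests flag the bug.
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# The quick checks that turn a plausible suggestion into trusted code.
assert median_suggested([1, 3, 2]) == 2        # odd length: fine
assert median_suggested([1, 2, 3, 4]) == 3     # even length: wrong, should be 2.5
assert median_fixed([1, 2, 3, 4]) == 2.5       # the reviewed fix
```

The point isn’t that generated code is always broken; it’s that nothing in the suggestion itself tells you whether it is, so the review and the tests remain the engineer’s job.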
In that way, Copilot (or any other AI programming tool like it) won’t fundamentally change the way we develop our digital solutions. It will surely make our development processes more powerful, efficient, and focused. In fact, it can vastly reduce the amount of time the developers spend writing code, allowing them to focus on the design and architecture portions of the project.
That can be a healthy shift. Engineers would have the time to completely understand the project’s requirements and how they align with the business goals and objectives. Thus, they can better understand what they need to build and provide better context for Copilot to work with. Naturally, they would also have to slip into a supervisory role, verifying that everything Copilot suggests is valid, secure, and effective.
That’s not all. One of the main concerns around AI also emerges when applied to the development world: how AI can put people out of their jobs. The most realistic scenario surrounding this would mean that companies could build their projects with fewer engineers. But that would hardly leave them jobless.
With the current talent shortage in the industry, having smaller teams might come as a blessing in disguise, mainly because the available talent would be distributed more uniformly across companies. If Copilot allows bigger companies to go forward without that many engineers, SMEs could definitely benefit from the freed-up professionals. This, in turn, would allow those companies to produce better digital solutions. All of that together elevates the overall quality of the tech ecosystem.
But, truth be told, the jobs of software engineers are safe for the short and medium term. Does that mean that developers can relax? Not necessarily. While they’ll keep their jobs for the foreseeable future, it’d be wise for them to start familiarizing themselves with Copilot and similar tools. While those applications won’t replace them, they will surely boost those who use them while leaving the rest behind. Thus, knowing their way around AI pair programming platforms will start to feel mandatory for most developers in the short term.
That might not be the case right now, especially because Copilot has been out for only a couple of months. But as we move forward, that tool (and its alternatives) will get more sophisticated and efficient. So, anyone worried about the future would do well to check them out. Because that future might not be here just yet, but it surely is approaching fast.