Promoting Developer “Deep Learning”: Stack Overflow vs. AI by Ed Lyons

Image by Ed Lyons using Midjourney

All software developers run into this problem repeatedly: you need to code something, but you can’t figure out how to write it. Often it’s because you haven’t implemented this feature before. Or it’s a new framework. Or you have a bug you cannot figure out. So you google it, and soon you are in the forest of Stack Overflow answers, trying to find a question just like yours and then searching through the answers. The searching can go on and on, and it is often frustrating.

Generative AI technologies, such as GitHub Copilot, are a way out of the Stack Overflow woods. You just open the chat, ask for the function you want, and then… voila! The answer appears. The answer. Not the “accepted answer,” or the “accepted answer is wrong but another one looks OK” answer, or the “accepted answer does not compile for some reason, but at least it is a different error” answer. But the answer. The AI answer feels like a big step forward for productivity.

When I talk to managers about AI code assistants, I often bring up this Stack Overflow comparison. I say, “If you’re worried your developers are pasting in code ‘from the internet,’ that is already happening: we are all doing the same thing with Stack Overflow.” I have heard others say, “It’s just like Stack Overflow, but more efficient.” Still others call Generative AI a “Stack Overflow killer.”

Yet this week, while attending the Generative AI World 2024 conference with my EQengineered colleagues, we heard someone say this about their firm: junior developers who use AI are not getting better at writing software as quickly as they did before AI. Pasting in all this code is not increasing their core skills.

This feels so instantly believable that it requires contemplation and caution. Was there some value in noodling through Google results that is being lost with the AI “pair programming” available right in your IDE? Is the time and energy we invest in evaluating various questions and answers online a form of learning?

I have taken a few classes in Cognitive Science in the past. My professors said that experimentation and contemplation are valuable for retaining information. Learning is not just being told the correct answers. It is about engaging the mind, testing it, forcing it to evaluate alternatives. For example, studies have shown that merely taking tests on material increases retention, even if you don’t get any of the questions right. Engagement is that important.

Thinking back on my many years of using Stack Overflow, I now realize that saying I was “pasting code” from the answers was a poor description. It was far more than pasting. It was a deeper learning experience than I have given it credit for. Still, it’s hard for me to fully accept all the time spent, and the frustration.

For example, one of my long-running complaints about looking for answers online is, “I can’t stand having to learn something fundamentally when I only want to use it trivially.” This comes from the many times I wanted to do one small thing with a technology but could not figure it out without learning a great deal about the technology just to understand how my little feature would work. Yet this frequent problem has led me to master many smaller technologies, as well as crucial patterns and ideas that are relevant throughout the world of software. AI would have just told me how to implement my feature, and I would have moved on without learning.

At the conference, I spoke to my colleagues about this. What do we need to know well? Certainly there are things you don’t need to master, such as regex, string formatting, datetime functions, and other menial tasks. And perhaps there is some orphan framework that you only have to use once, where the instant correct answer would be a better use of time. But where do you draw the line?

Yesterday I spoke to another developer about this, and he reminded me that you can also ask the AI assistant to explain things and to help you improve code. It doesn’t have to only give you the answer. It can be more of a teacher than it gets credit for. He’s right.

I am going to demonstrate this with a very simple Python program I just asked GitHub Copilot to generate for me: a little calculator with a `while True` loop that keeps asking for user input until the user types “exit.” It worked great on the first try.
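Here is a minimal sketch of the kind of program it generated (my own reconstruction for illustration, not Copilot’s exact output):

```python
# app.py -- a simple calculator loop that runs until the user types "exit"

while True:
    expression = input("Enter a calculation (or 'exit' to quit): ")
    if expression.lower() == "exit":
        break
    try:
        # eval() is fine for a toy example like this one
        print(eval(expression))
    except Exception as error:
        print(f"Invalid input: {error}")
```

I then asked it: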

#file:app.py why did you use the statement 'while True'?

It said:

The `while True` statement is used to create an infinite loop that will keep running until it is explicitly broken out of. In the context of the `app.py` program, it allows the user to perform multiple mathematical operations without restarting the program. The loop continues to prompt the user for input until they type "exit", at which point the `break` statement is executed to exit the loop and end the program.

I decided to ask for an alternative.

#file:app.py could I use an alternative to the 'while True' statement?

It answered:

Yes, you can use a different approach to avoid the `while True` statement. One alternative is to use a `while` loop with a condition that checks if the user wants to exit. Here's how you can modify the code:

[It then updated the code with the alternative]
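For illustration, the condition-based version might look something like this (again, my own sketch rather than the assistant’s actual edit):

```python
# app.py -- sketch of the alternative: loop on a condition instead of `while True`

expression = ""
while expression.lower() != "exit":
    expression = input("Enter a calculation (or 'exit' to quit): ")
    if expression.lower() != "exit":
        try:
            # eval() is fine for a toy example like this one
            print(eval(expression))
        except Exception as error:
            print(f"Invalid input: {error}")
```

Whether this is clearer than `while True` with a `break` is debatable, and that is exactly the kind of tradeoff worth pausing to discuss with the assistant.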

I tried other examples and questions, and the assistant can be a teacher to some extent if you decide to engage it this way, though I don’t think it’s as valuable as searching through answers and iterating on your own attempts.

But if you need to get your ticket closed, which method are you going to choose? And will developers who get a working answer right away stop and chat with the assistant about alternatives? I doubt it. We did not spend many years wandering around Stack Overflow for our education; we did it because we often had no alternative.

And there is pressure to move quickly! We are all using some form of agile, where velocity really matters. AI improves velocity. But there must also be time for contemplation and for considering alternatives.

Perhaps this will make pull requests and team code reviews more important. Team leads can make more time there to discuss the code, and “well… the AI wrote that” should not be the accepted answer.

Of course, there are now AI tools to review pull requests and provide feedback. That is also very efficient, and I doubt that one LLM is going to have a big problem with code another LLM wrote. As the role of AI grows in all phases of software development, we must figure out how to keep the humans in the loop, as the saying goes, even if our roles change.

In conclusion, we should not pursue developer velocity to the exclusion of all other considerations. We need the people who write and generate code to understand why it works, what the alternatives were, and what effect those choices have on the overall product. We also need them to improve their skills over time.

Many senior developers have seen code quality decrease and tech debt increase when code is written too quickly and without proper reviews, even though “it works.” We must start thinking about the consequences of using AI to write it far faster, and what we will do if developers cannot improve their skills enough through coding tasks alone.

Ed Lyons