
Prompt engineering can slash development time - in the right hands

By Dewald Mienie, Head of Architecture and Technology, Global Kinetic.
22 May 2024

The rise of prompt engineering has given software developers a unique opportunity to drive efficiencies. But only if they have the right skills to define requests properly, are disciplined about how they manage the process, and formulate ruthlessly transparent policies that are shared with clients.

Prompt engineering – the practice of crafting AI inputs that produce optimal AI outputs, also referred to as in-context learning – is rapidly gaining traction with software development companies. And having people skilled in prompt engineering has become vital in a world where AI is fast reshaping every aspect of our working lives.

Research firm McKinsey estimates that half of today’s work activities could be automated between 2030 and 2060, almost a decade earlier than it had previously predicted. It has also estimated that “gen AI and other technologies have the potential to automate work activities that absorb up to 70 percent of employees’ time today.”

Finding ways to leverage that trend and tap into the efficiencies that can come with the smart use of AI can significantly benefit companies. However, there are some challenges that should not be overlooked.

“One of the big challenges for software developers is getting stuck on an AI hamster wheel. Developers, especially less experienced developers, may ask ChatGPT to generate a piece of code. If the generated code is not exactly what they need, they get stuck in a loop refining the prompt in an effort to reach the right answer. In this case it would clearly have been easier to simply write or amend the code themselves. It takes some experience and insight to maximise the value from AI,” shares Dewald Mienie, Head of Architecture and Technology at Global Kinetic.

Mienie says developers must secure efficiency gains by finding the right balance between using AI and coding themselves. More experienced developers are better placed to do this, as they can compare their AI-assisted output with what they produced before.

He also warns that large language models (LLMs) such as ChatGPT can find it difficult to self-correct. If an inexperienced developer asks for something, realises an element of the response is incorrect, and requests a correction, they can end up in a loop of incorrect responses: in each variation the requested element is fixed, but a new mistake creeps into the response. Again, if the developer does not timebox this process, they will burn time and increase the cost of development.
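What that timeboxing might look like in practice is sketched below. It is a minimal illustration only: the `ask_llm` and `run_tests` callables are hypothetical stand-ins for a real LLM client and test suite, and the budgets are arbitrary. The point is simply that refinement stops after a fixed number of rounds or a fixed amount of time, after which the developer writes or amends the code by hand.

```python
import time
from typing import Callable, Optional

def generate_with_timebox(
    task: str,
    ask_llm: Callable[[str], str],    # hypothetical wrapper around an LLM client
    run_tests: Callable[[str], str],  # hypothetical check; returns "" when code passes
    max_attempts: int = 3,            # cap on refinement rounds (arbitrary)
    time_budget_s: float = 300.0,     # overall wall-clock timebox (arbitrary)
) -> Optional[str]:
    """Ask an LLM for code, but stop refining once the budget is spent."""
    deadline = time.monotonic() + time_budget_s
    prompt = task
    for _ in range(max_attempts):
        if time.monotonic() > deadline:
            break
        code = ask_llm(prompt)
        failures = run_tests(code)
        if not failures:
            return code  # the output validates: stop here
        # Fold the concrete failures back into the next prompt rather than
        # vaguely asking the model to "try again".
        prompt = (
            f"{task}\n\nYour previous attempt failed these checks:\n"
            f"{failures}\nFix only these issues."
        )
    return None  # budget exhausted: write or amend the code yourself
```

Returning nothing rather than looping indefinitely is the whole point: it makes the hand-off back to the human developer explicit.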

Building accountability and security with robust policies

Mienie says all businesses should have an AI policy, but it is especially important for software development businesses.

“It’s vital to have an accountability clause in your AI policy. This means that every person who uses AI acknowledges that they are responsible for what the AI generated. As the user you must be able to validate what the AI has generated for you and be able to back it up with the necessary references. AI must always attract the same quality checks and balances as human-generated code, going through the same DevSecOps process including vulnerability scanning to ensure the code is secure. Thorough code reviews must continue. AI is simply there to accelerate the coding process and teams must be aware that, just as if they were writing the code themselves, they are responsible for the quality of the work submitted,” he explains.

A balanced, risk-based approach to protecting IP should also be applied. Mienie says each case should be assessed individually and AI used only where appropriate, ensuring the client’s IP is protected at all times. This is especially important in sectors such as financial services, where code should never be shared with public platforms such as hosted LLMs. He also says the company’s AI policies should be shared up front with clients to help build trust between the partners from the beginning of any engagement.

The whole company can benefit from good prompt engineering

Mienie says AI is also being used successfully in many companies in all industries to assist with ideation and marketing campaigns, driving efficiencies into multiple areas of the business. And even in these functions, the rules of great prompt engineering apply.

“By giving the LLM information about your role as well as the kind of company you are working for, you are giving it the context it needs to formulate an appropriate response. The AI can only provide a response based on the data it was trained on, and therefore you need to augment this by providing the extra information in the prompt. At the same time you need to be cognisant not to leak sensitive information, as one is prone to provide more and more information to improve the response,” Mienie shares.
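A minimal sketch of that kind of context-setting, with a crude guard against the leakage Mienie warns about, might look like the following. The `build_prompt` helper and its `SENSITIVE_PATTERNS` list are illustrative assumptions, not a real library, and a production redaction filter would need to be far more thorough.

```python
import re

# Illustrative patterns only; real redaction needs a proper DLP check.
SENSITIVE_PATTERNS = [
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # credential-like assignments
    re.compile(r"\b\d{13,19}\b"),                 # card-number-like digit runs
]

def build_prompt(role: str, company: str, request: str) -> str:
    """Prepend role and company context, refusing obviously sensitive input."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(request):
            raise ValueError("Request appears to contain sensitive data; redact it first.")
    return (
        f"You are assisting a {role} at {company}.\n"
        f"Answer with that context in mind.\n\n"
        f"{request}"
    )

# Example usage: context without client-identifying detail.
prompt = build_prompt(
    role="backend developer",
    company="a retail bank",
    request="Suggest a retry strategy for idempotent HTTP calls.",
)
```

The guard errs on the side of refusing the request outright, reflecting the point that it is easier to withhold context than to claw it back once it has been sent to a hosted model.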

Summing up, Mienie says prompt engineering has a great deal to offer software development companies if teams are properly trained and have all the supporting policies and checks and balances in place.

“There is no doubt that prompt engineering requires a good understanding of the AI tool being used and a linguistic proficiency that enables good problem formulation. Without a well-formulated problem, even the most sophisticated prompts won’t deliver the right answers, and will certainly not save your business time and money. AI has the potential to be a game changer for software development, but only if it's used in the appropriate ways and with robust checks and balances built into the process,” Mienie says. 
