In the third of a series of articles, Gareth Smith, General Manager of Software Test Automation at Keysight, looks at key trends in AI in 2024.
As AI becomes increasingly embedded in software, systems will become more autonomous, which increases risk and complexity and makes testing a real challenge. As a result, a fixed set of scripted tests will no longer suffice when evaluating intelligent systems. Instead, AI will be needed to test applications automatically and continuously. The future of software testing is autonomous test design and execution, says Smith.
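To make the idea of continuous, AI-driven test generation concrete, here is a minimal sketch of such a loop. It assumes a hypothetical propose_tests() function standing in for whatever model generates candidate cases from observed coverage gaps; none of these names come from Keysight's products.

```python
# Minimal sketch of an autonomous test cycle (illustrative only).
# propose_tests() and execute() are hypothetical placeholders, not a real API.
import random
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    inputs: dict

def propose_tests(coverage_gaps: list[str]) -> list[TestCase]:
    """Hypothetical: an AI model would rank gaps and emit new test cases."""
    return [TestCase(name=f"auto_{gap}", inputs={"target": gap})
            for gap in coverage_gaps]

def execute(test: TestCase) -> bool:
    """Stub for running a test against the system under test."""
    return random.random() > 0.1  # stand-in for a real pass/fail verdict

def autonomous_test_cycle(coverage_gaps: list[str]) -> dict[str, bool]:
    """One iteration: generate tests, run them, record verdicts for the next cycle."""
    return {test.name: execute(test) for test in propose_tests(coverage_gaps)}

if __name__ == "__main__":
    print(autonomous_test_cycle(["login_flow", "payment_timeout"]))
```

In practice the verdicts from each cycle would feed back into the model so that test design, execution, and learning run continuously rather than as a fixed script suite.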
As AI permeates every system, and complexity and sophistication soar, there is a risk that quality will decline. The sheer number of permutations makes exhaustive testing impossible, so decisions will need to be made about what, how, and when to test to ensure quality is maintained.
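The permutation problem is easy to see with a back-of-the-envelope example. The sketch below uses invented configuration factors and toy risk weights to show how quickly combinations multiply, and one simple way to pick a risk-weighted slice to test first.

```python
# Illustrative only: exhaustive combinations vs. a risk-weighted selection.
# Factors and risk scores are invented for the example.
from itertools import product

browsers = ["chrome", "firefox", "safari", "edge"]
os_list  = ["windows", "macos", "linux", "android", "ios"]
locales  = ["en", "de", "fr", "ja", "zh"]
networks = ["wifi", "5g", "4g", "offline"]

all_combos = list(product(browsers, os_list, locales, networks))
print(f"Exhaustive permutations: {len(all_combos)}")  # 400 from just four factors

# Toy risk model: weight each option by assumed usage and failure likelihood.
risk = {"chrome": 3, "safari": 2, "firefox": 1, "edge": 1,
        "windows": 3, "android": 3, "ios": 2, "macos": 1, "linux": 1,
        "en": 3, "zh": 2, "de": 1, "fr": 1, "ja": 1,
        "5g": 3, "wifi": 2, "offline": 2, "4g": 1}

def combo_risk(combo):
    return sum(risk[item] for item in combo)

# Test only the highest-risk slice this cycle instead of everything.
top_combos = sorted(all_combos, key=combo_risk, reverse=True)[:20]
print(f"Selected for this cycle: {len(top_combos)}")
```

Add a few more factors and versions and the exhaustive count runs into the millions, which is why prioritisation decisions become unavoidable.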
As the risks associated with AI are recognized, enterprises will need to appoint an AI and security compliance officer to the C-suite. Over time, this role will merge with that of the CSO. With live learning, it will be vital to have guardrails in place to keep AI on track. Constant checks and balances will be essential to validate that an intelligent system is behaving and hasn't gone rogue. Live surveillance will become standard. However, as these systems develop, it will also be necessary to test that they haven't learned how to look like they are behaving while undertaking nefarious activity. Reinforcement learning and similar techniques can inadvertently drive an AI to cover its tracks to reach its goal, and this will be a huge challenge to address before the end of the decade. These problems will create a slew of new opportunities for companies that can help clean up, control, and put guardrails in place for AI.
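As a rough illustration of what such a runtime guardrail might look like, the sketch below checks every action an intelligent system proposes against an allow-list and an anomaly threshold before it runs. The policy, scores, and action names are invented for the example; real deployments would use far richer checks and independent monitoring.

```python
# Illustrative runtime guardrail (assumptions: allow-list policy and an
# external anomaly score per proposed action; all names are hypothetical).
from dataclasses import dataclass

ALLOWED_ACTIONS = {"read_telemetry", "generate_test", "report_result"}
ANOMALY_THRESHOLD = 0.8

@dataclass
class ProposedAction:
    name: str
    anomaly_score: float  # e.g. produced by a separate monitoring model

def guardrail_check(action: ProposedAction) -> bool:
    """Return True only if the action is permitted and looks normal."""
    if action.name not in ALLOWED_ACTIONS:
        return False  # outside the allow-list: block
    if action.anomaly_score >= ANOMALY_THRESHOLD:
        return False  # behavioural drift: block and escalate
    return True

for act in [ProposedAction("generate_test", 0.1),
            ProposedAction("delete_logs", 0.05),    # a "cover its tracks" attempt
            ProposedAction("report_result", 0.95)]:
    print(f"{act.name}: {'allowed' if guardrail_check(act) else 'blocked'}")
```

The point of the example is the separation of duties: the system that proposes actions is never the system that approves them, which is exactly the kind of check-and-balance Smith expects to become standard.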