
Improved AI Methods to Detect Programming Errors

On behalf of the German Federal Office for Information Security (BSI), researchers from the University of Bremen and employees of the Bremen-based company team neusta investigated the possibilities of using artificial intelligence in code analysis. The study is available free of charge from the BSI.

Software manufacturers often examine their programs during the development phase in order to detect errors at an early stage. This process can be partly automated with so-called SAST (static application security testing) tools. Although the value of such tools has been demonstrated in practice, they often overlook errors or report many false alarms, which limits their usefulness. Machine learning (ML), i.e. automatic learning from data, can help reduce the error rates of such SAST tools.
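To illustrate the basic idea, the following sketch shows one way ML is commonly applied to SAST output in the research literature: a classifier trained on previously triaged findings is used to suppress likely false alarms. This is our illustration, not code from the study; the Finding fields, the feature extraction, and the training data are hypothetical.

```python
# Hedged sketch: filtering SAST findings with a classifier trained on past triage decisions.
# All names and features here are illustrative assumptions, not part of the BSI study.
from dataclasses import dataclass
from sklearn.ensemble import RandomForestClassifier

@dataclass
class Finding:
    rule_id: str        # which SAST rule fired
    path_length: int    # length of the reported data-flow path
    nesting_depth: int  # nesting depth at the reported location
    in_test_code: bool  # whether the finding lies in test code

def features(f: Finding) -> list[float]:
    """Turn a finding into a numeric feature vector (deliberately simplistic)."""
    return [len(f.rule_id), f.path_length, f.nesting_depth, float(f.in_test_code)]

# Historical findings that were triaged by hand: 1 = real defect, 0 = false alarm.
history = [
    (Finding("sql-injection", 7, 3, False), 1),
    (Finding("unused-variable", 0, 1, True), 0),
    (Finding("buffer-overflow", 5, 4, False), 1),
    (Finding("dead-code", 0, 2, True), 0),
]

clf = RandomForestClassifier(random_state=0).fit(
    [features(f) for f, _ in history],
    [label for _, label in history],
)

def keep(finding: Finding, threshold: float = 0.5) -> bool:
    """Report a new finding only if the model deems it likely to be a real defect."""
    return clf.predict_proba([features(finding)])[0][1] >= threshold
```

In practice, the quality of such a filter stands and falls with the labeled training data, which is exactly the bottleneck the study describes below.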

A large-scale study has now examined how effective machine learning methods are in this context and how they can best be implemented on the basis of the latest research approaches. The Software Engineering working group at the Center for Computing Technologies (TZI) of the University of Bremen, together with the companies neusta software development and neusta mobile solutions, determined the state of the art on behalf of the German Federal Office for Information Security (BSI). As part of the project “Machine Learning in the Context of Static Application Security Testing – ML-SAST,” they carried out surveys, expert interviews, and a systematic literature search. They also examined commercially available SAST tools with regard to their ML functionality and error detection rates.

Highest Potential in “Unsupervised Learning”

They summarized the results in a comprehensive study, identifying in particular the most promising approaches and the remaining research needs in this area. A key finding is that supervised machine learning approaches are used in most cases, even though they have significant disadvantages. “If you want to use supervised learning approaches, you need good data sets for training the tools, and there are currently none,” explains TZI employee Lorenz Hüther. Developing the required data sets is “somewhat unrealistic” in the short term and could only be achieved in the longer term with considerable effort.

In addition, supervised learning requires a high degree of explainability of the results: both the developers and the users of the tools must be able to understand whether the system's decision criteria make sense.

The project team therefore currently sees the greatest potential in unsupervised learning based on clustering. The system first identifies similar functions within a program and groups them so that they can be compared with one another. If one function deviates noticeably from the others in its group, the tool flags the deviation as a potential error.
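As a rough sketch of this clustering idea (our illustration, not code from the study), functions can be mapped to simple numeric feature vectors, grouped with a standard clustering algorithm, and those that deviate strongly from their group flagged for review. The feature extraction and the outlier threshold below are deliberately crude placeholders for real code embeddings.

```python
# Hedged sketch of clustering-based anomaly detection over program functions.
# The embedding and the threshold are illustrative assumptions, not the study's method.
import numpy as np
from sklearn.cluster import KMeans

def embed(source: str) -> np.ndarray:
    """Crude numeric representation of a function (stand-in for a real code embedding)."""
    return np.array([
        len(source),               # overall size
        source.count("if"),        # amount of branching
        source.count("return"),    # number of exit points
        source.count("free"),      # resource handling
    ], dtype=float)

def suspicious_functions(functions: dict[str, str], n_clusters: int = 3, z: float = 2.0) -> list[str]:
    """Group similar functions and report those unusually far from their cluster centre."""
    names = list(functions)
    X = np.stack([embed(functions[name]) for name in names])
    model = KMeans(n_clusters=min(n_clusters, len(names)), n_init=10, random_state=0).fit(X)
    distances = np.linalg.norm(X - model.cluster_centers_[model.labels_], axis=1)
    cutoff = distances.mean() + z * distances.std()   # simple statistical outlier rule
    return [name for name, d in zip(names, distances) if d > cutoff]
```

A function that, for example, handles a resource differently from all of its structurally similar neighbours would then surface as a candidate for manual inspection.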

Prototype to Be Released by the End of the Year

However, further research and development is needed to increase the potential of these methods for practical application. By the end of the year, the project participants want to develop a prototype that uses the best currently available methods in the field of ML-SAST. The prototype will be implemented as an open-source project so that all interested manufacturers can use it for their product development. The BSI is financing the development of the prototype.


Further Information:

The study “Machine Learning in the Context of Static Application Security Testing – ML-SAST” is available from the BSI at: https://www.bsi.bund.de/DE/Service-Navi/Publikationen/Studien/ML-SAST/ml-sast.html


Contact:

Lorenz Hüther
Center for Computing Technologies (TZI)
University of Bremen
Phone: +49 421 218-64476
Email: lorenz1@uni-bremen.de

Graphic
The best methods of automated code analysis can detect different dependencies within a program and translate them into graphs for further investigation.