
Evaluating SAST Rule Performance: Autofix Behavior Under the Microscope

Original Post: Testing autofix behavior of SAST rules

The passage discusses how easy it is to write custom rules for Semgrep and introduces autofix tests for those rules. Autofix is a feature that automatically transforms non-compliant code into compliant code; until now there was no way to test it automatically, and the new autofix tests close that gap. The author recalls an experience from 2016, revisited during their Ph.D., of criticizing a similar tool for lacking automated rule verification. They express satisfaction with Semgrep’s existing testing functionality, which is simple to use and integrates well into the rule-writing workflow.

To write and test a rule in Semgrep, developers annotate small code fragments with comments marking where the rule should and should not fire. The new autofix testing feature extends this workflow: a companion file, named according to a simple convention, holds the code expected after the fix is applied, and Semgrep compares the autofixed output against it, showing a clear diff when the autofix does not produce the expected result. A minimal illustration of the workflow follows.
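
As a minimal sketch (not taken from the original post; the rule id, file names, message, and fix are hypothetical), a Semgrep rule that rewrites bare exit() calls to sys.exit() might look like this, with the fix key driving autofix:

    # rules/use-sys-exit.yaml
    rules:
      - id: use-sys-exit
        languages: [python]
        severity: WARNING
        message: Call sys.exit() instead of the bare exit() helper
        pattern: exit($CODE)
        fix: sys.exit($CODE)

A test target annotates the line where the rule is expected to fire:

    # rules/use-sys-exit.py
    import sys

    # ruleid: use-sys-exit
    exit(1)

and, assuming the file-naming convention described in the post, a companion file such as use-sys-exit.fixed.py holds the code expected after autofix:

    # rules/use-sys-exit.fixed.py
    import sys

    # ruleid: use-sys-exit
    sys.exit(1)

Running semgrep --test over the directory then checks both the finding annotations and the autofix output, printing a diff if the fixed code and the expected file disagree.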

The author, part of r2c’s security research team, describes internal testing practices and ongoing improvements to the rules. They emphasize the role of user feedback in improving rules and mention scanning open-source repositories, as well as deliberately vulnerable repositories, to check rule accuracy. Future plans include evaluating Semgrep’s performance against past CVEs and exploring Java’s XML parsing vulnerabilities in an upcoming blog post.

