AI-powered Bughunting
Alfredo Ortega

Dates

12th-15th of May 2025

Capacity

20

Price

€4,800

Overview

This course equips students with the skills needed to leverage modern AI tools to enhance and automate vulnerability discovery and code analysis. The primary focus is code analysis and auditing, with a particular emphasis on vulnerability research. Students will learn how to identify security flaws in software, using cutting-edge AI techniques to streamline the process. By the end of the course, students will be proficient in applying AI to automate and enhance many aspects of code analysis and vulnerability discovery, preparing them for careers in cybersecurity, software development, and related fields.

Topics Covered / Objectives

Students will gain knowledge about:

  • Basic concepts of Large Language Models (LLMs), including open vs. closed-source models, context, parameters, evaluations, and limitations.
  • Code-analysis-oriented prompt engineering.
  • Available open-source tools.
  • Automatic triage of potential vulnerabilities.
  • AI-enhanced decompilers and recompilers.
  • Administration of private AI systems.

Who Should Attend / Prerequisite Knowledge

The primary audience for this training includes bug hunters and security auditors who wish to enhance their capabilities by utilizing modern Large Language Models (LLMs) as reasoning and code-analysis tools. This course may also be beneficial for professional developers seeking to learn how to quickly identify vulnerabilities in their own code.

Prerequisites:

Required:

  • Knowledge of vulnerability classes and taxonomy.
  • Intermediate experience in C/C++ programming.
  • Intermediate experience in Python programming.

Desirable:

  • Experience in C/C++ code auditing and vulnerability research.
  • Experience in reverse-engineering compiled C code.

Required Materials / Hardware

Trainees will use APIs and private LLMs provided by the trainer. No additional hardware is needed beyond a regular laptop with an internet connection.

Schedule / Agenda

LLMs introduction

    • Open vs. closed LLMs
    • Parameters and performance
    • Strengths and weaknesses of AIs in code analysis
    • Special requirements of LLMs for vulnerability research

Prompt engineering

    • Optimization for inference speed
    • Refactoring code for optimal detection
    • Techniques to work around LLM limitations
    • Vendor prompting recommendations
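
As a rough illustration of what code-analysis-oriented prompting looks like, here is a minimal sketch of an audit prompt template. The template wording, function names, and the example snippet are purely illustrative assumptions, not the course's actual material:

```python
# Minimal sketch of a code-audit prompt builder (illustrative only).
# A real workflow would send the resulting prompt to an LLM API or a
# local model; here we only construct and inspect the prompt text.

AUDIT_TEMPLATE = """You are a security auditor. Analyze the following C \
function for memory-safety bugs (buffer overflows, use-after-free, \
integer overflows). Report each finding as: line, vulnerability class, \
and a one-sentence rationale.

--- BEGIN CODE ---
{code}
--- END CODE ---"""


def build_audit_prompt(code: str) -> str:
    """Embed the target code in the audit template."""
    return AUDIT_TEMPLATE.format(code=code)


# Hypothetical target: a function with an obvious missing bounds check.
snippet = """void copy(char *dst, const char *src) {
    strcpy(dst, src);  /* no bounds check */
}"""

print(build_audit_prompt(snippet))
```

Structuring the prompt with explicit delimiters and a fixed output format is one of the techniques covered above for making LLM findings easier to triage automatically.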

Automatic tools

    • Review of state-of-the-art automatic tools
    • Code-analysis tools
    • AI-powered fuzzing
    • AI-powered decompilers
    • Usage and demos

Private LLM administration

    • How to choose available LLMs
    • Hardware/power requirements
    • How to run inference software
    • State-of-the-art available models
    • Demo: vulnerability hunting using a local private LLM
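
To give a feel for the hardware-requirements topic above, here is a common rule-of-thumb sketch for estimating the memory needed to hold a quantized model's weights. The 20% overhead factor is an assumption for illustration; real requirements also depend on context length, KV-cache size, and the inference runtime:

```python
# Rule-of-thumb memory estimate for hosting a quantized LLM locally
# (a rough sketch; KV cache and runtime overhead vary in practice).

def vram_gb(params_billion: float, bits_per_weight: int,
            overhead: float = 1.2) -> float:
    """Approximate memory needed for the weights, in GB.

    params_billion  -- model size in billions of parameters
    bits_per_weight -- quantization level (e.g. 4 for 4-bit)
    overhead        -- assumed multiplier for runtime overhead
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return round(bytes_total * overhead / 1e9, 1)


# A 7B-parameter model at 4-bit quantization vs. a 70B one:
print(vram_gb(7, 4))   # ~4.2 GB
print(vram_gb(70, 4))  # ~42.0 GB
```

Estimates like this are why the choice between small local models and large hosted ones is a central trade-off when administering a private AI system.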

Bio

Alfredo Ortega, CEO and founder of Neuroengine.ai, brings over 20 years of professional experience as a cybersecurity expert and bug hunter. He holds a PhD in Computer Science from the Instituto Tecnológico de Buenos Aires. Throughout his career, Alfredo has made significant contributions to the field by discovering and publishing numerous high-impact vulnerabilities in prominent software systems such as OpenBSD, Signal, and voting machines. Additionally, he has published the open-source software Autokaker, one of the first AI-assisted frameworks for automatic vulnerability discovery and annotation.

Limited Seats - Remember to reserve your ticket!

Register now