
CodeAttack

Generating adversarial examples for pre-trained code models

This repo contains the artifacts for the adversarial attack section of the paper *An Extensive Study on Pre-trained Models for Program Understanding and Generation*, published at ISSTA '22.

After preparing the model and the dataset, first run `dataset/*/transform_*.py` to generate adversarial input samples, then run `run.sh` to start all experiments.
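To illustrate what an "adversarial input sample" for a code model looks like, the sketch below applies a simple identifier-renaming transform, one common semantic-preserving perturbation in this line of work. This is a minimal, hypothetical example, not the repo's actual `transform_*.py` logic; the function and variable names here are illustrative only.

```python
import re

def rename_identifier(code: str, old: str, new: str) -> str:
    """Rename one identifier in a code snippet, respecting word
    boundaries so substrings of other names are left untouched.
    A toy stand-in for a semantic-preserving adversarial transform."""
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

src = "def add(a, b):\n    return a + b"
# Renaming `a` to `var_0` preserves the program's semantics but
# changes the surface tokens the pre-trained model sees.
adv = rename_identifier(src, "a", "var_0")
print(adv)
```

The transformed snippet behaves identically to the original, which is what lets such perturbations probe a model's robustness rather than its correctness.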

This project is a fork of TextAttack.