It stores the source code of the 13 applications used in our experiment. They come from FSE19-submission-material-DIG and FSE19-submission-material-TEDD.
It stores the files crawled from the 13 applications by Crawljax.
It stores the files that contain the labels manually assigned by us. Each file contains:

- `id`: unique id assigned by Crawljax
- `details`: dictionary format of the clickable
- `current dom`: DOM of the clickable
- `context dom`: DOM forest of the context of the clickable
- `source state url`: the url of the screenshot of the source state
- `target state url`: the url of the screenshot of the target state
- `state diff url`: the text difference between the two states
- `manual label`: the label that represents the meaning of the clickable
- `manual label action`: the action verb in the manual label
- `manual label context`: the degree to which meaningful information can be extracted from the DOM of the clickable and its source and target states. It takes one of three scores: 0 means the manual label is irrelevant to the clickable and states, 1 means a part of the manual label exists in the clickable and states, and 2 means the manual label exists completely in the clickable and states
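For illustration, a single labeled record could be represented as a Python dictionary like the sketch below. Only the field names follow the description above; all values are invented and do not come from the actual dataset.

```python
# Hypothetical example of one labeled record; field names follow the
# description above, all values are invented for illustration.
record = {
    "id": "state12-clickable3",                   # unique id assigned by Crawljax
    "details": {"tag": "A", "text": "Sign In"},   # dictionary format of the clickable
    "current dom": "<a href='/login'>Sign In</a>",
    "context dom": "<nav><a href='/login'>Sign In</a></nav>",
    "source state url": "screenshots/state12.png",
    "target state url": "screenshots/state13.png",
    "state diff url": "diffs/state12-state13.txt",
    "manual label": "sign in to the application",
    "manual label action": "sign in",
    "manual label context": 2,  # 0 = irrelevant, 1 = partially present, 2 = fully present
}

# The context score is always one of the three values 0, 1, 2.
assert record["manual label context"] in (0, 1, 2)
```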
It stores the model output of the Supervised approach and its evaluation results.
- `${project_name}-labels.xlsx`: the "heuristic label" column is the model output
- `${project_name}-labels-eval.xlsx`: the scores of the evaluation metrics for each clickable
It stores the model output of the Unsupervised approach and its evaluation results.
- `${project_name}-pcfg-labels.xlsx`: the "pcfg label" column is the model output
- `${project_name}-labels-eval.xlsx`: the scores of the evaluation metrics for each clickable
- `${project_name}-pcfg-labels_no_context.xlsx`: the model output after removing the clickable context from the input (clickable only)
- `${project_name}-pcfg-postag.xlsx`: the production rules and the probability of each production rule
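To make the production-rule file easier to interpret, here is a generic sketch of what a probabilistic context-free grammar (PCFG) over POS tags looks like. The rules, tags, and probabilities below are invented; the real ones are learned by the tool and stored in `${project_name}-pcfg-postag.xlsx`.

```python
from collections import defaultdict

# Hypothetical PCFG production rules over POS tags (illustration only):
# each rule maps a left-hand-side symbol to a right-hand-side sequence
# with a probability.
rules = [
    ("LABEL", ("VB", "NP"), 0.7),   # e.g. "click button"
    ("LABEL", ("VB",), 0.3),        # e.g. "submit"
    ("NP", ("DT", "NN"), 0.6),      # e.g. "the form"
    ("NP", ("NN",), 0.4),           # e.g. "form"
]

# In a valid PCFG, the probabilities of all rules sharing the same
# left-hand side sum to 1.
totals = defaultdict(float)
for lhs, rhs, p in rules:
    totals[lhs] += p
assert all(abs(t - 1.0) < 1e-9 for t in totals.values())
```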
It stores the model output of KeyBERT (one of the baselines) and its evaluation results.
- `${project_name}-labels.xlsx`: the "keybert label" column is the model output
- `${project_name}-labels-eval.xlsx`: the scores of the evaluation metrics for each clickable
It stores the model output of PreProcess (one of the baselines) and its evaluation results.
- `${project_name}-labels.xlsx`: the "preprocess label" column is the model output
- `${project_name}-labels-eval.xlsx`: the scores of the evaluation metrics for each clickable
Folder `labelled-testcasegenerator-plugin` contains the source code of our tool CrawLabel, which assigns labels to test cases generated by Crawljax.
- Go to folder `python` and run `pip install -r requirements.txt` to install the necessary Python libraries.
- (1) If using the supervised approach, go to folder `python` and run `python -m ui.main rank_attributes --projects={project_names}` (here `{project_names}` is a comma-separated string, such as "addressbook,jpetstore"). The output is a list of ranked attributes stored in `results/ranked_attributes.json`. You can add your own training file in `results/training`; remember to follow the format used there. Then, in the same folder, run `python -m ui.main labeled_tests --project={project_name} --crawlfolder={crawl_folder_path}` (here `{project_name}` is the project name and `{crawl_folder_path}` is the absolute path of the crawl folder generated by Crawljax, for example `python -m ui.main labeled_tests --project=addressbook --crawlfolder=/Users/xyz/AST-2022-submission/crawl-results/addressbook`; make sure the file `${crawl_folder_path}/Crawlpaths.json` exists). The output is a file with labels (column name "heuristic label") in `results/heuristic/{project_name}-labels.xlsx`.
- (2) If using the unsupervised approach, go to folder `python` and run `python -m ui.labels.PCFG --project_name {project_names} apply_pcfg_postag` and `python -m ui.labels.PCFG --project_name {project_names} apply_pcfg_postag_forest`.
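Since `labeled_tests` requires `Crawlpaths.json` to be present in the crawl folder, that prerequisite can be verified up front. The helper below is a hypothetical sketch and is not part of the tool; it only checks the file layout described in the steps above.

```python
from pathlib import Path


def check_crawl_folder(crawl_folder_path: str) -> Path:
    """Hypothetical helper: verify that the crawl folder generated by
    Crawljax contains the Crawlpaths.json file that labeled_tests expects."""
    crawlpaths = Path(crawl_folder_path) / "Crawlpaths.json"
    if not crawlpaths.is_file():
        raise FileNotFoundError(
            f"{crawlpaths} not found; pass the absolute path of the crawl "
            "folder produced by Crawljax"
        )
    return crawlpaths
```

Running this check before invoking the command gives a clearer error message than a failure deep inside the labeling step.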
Folder `example` contains the labeled test cases. Here the default approach is KeyBERT.
- `example/jpetstore/src/generated/GeneratedTests.java` is the test class generated by Crawljax
- `example/jpetstore/src/generated_labels/GeneratedTests.java` is the test class generated with our tool