Publication | Closed Access
Estimating Web Attack Detection via Model Uncertainty from Inaccurate Annotation
Citations: 12
References: 25
Year: 2019
Venue: Unknown
Keywords: Abuse Detection, Web Attack Detection, Machine Learning, Engineering, Information Security, Information Forensics, Targeted Attack, Data Science, Data Mining, Uncertainty Quantification, Web Security, Adversarial Machine Learning, Management, Statistics, Threat Detection, Predictive Analytics, Knowledge Discovery, Data Privacy, Computer Science, Deep Learning, Data Security, Attack Model, Threat Hunting, Annotation Error, Model Uncertainty, Threat Model
Over the past decades, Machine Learning (ML) techniques have become a hot topic in the web security field. Deep Learning (DL), a sub-field of machine learning, has proved effective at learning diverse attack patterns from raw input data. To reach high accuracy, DL models are usually trained on labelled data; in the security field, however, annotation errors can significantly degrade model training. Under this premise, we introduce model uncertainty into DL-based web attack detection. Model uncertainty is used to estimate the credibility of each prediction the model makes; to the best of our knowledge, we are the first to apply this concept to web security. In our work, model uncertainty is expressed as the variance of a Bayesian model. By training our attack detection model on real web logs containing annotation errors, we show that wrongly tagged web logs tend to exhibit higher variance. By analyzing the variance, security operators can therefore easily locate mistagged web logs, which helps uncover unknown attacks missed during data annotation and refine existing attack detection methods.
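The abstract does not specify how the Bayesian variance is computed; a common way to obtain it is Monte Carlo dropout, where dropout is kept active at inference time and the variance across repeated stochastic forward passes serves as the uncertainty estimate. The sketch below is a minimal, hypothetical illustration of that idea (the network shape, weights, and feature vectors are invented for the example, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, W1, W2, T=100, p_drop=0.5):
    """Monte Carlo dropout sketch: run T stochastic forward passes with
    dropout active, and report the mean (attack score) and variance
    (model uncertainty) of the sigmoid outputs."""
    preds = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0.0)                  # ReLU hidden layer
        mask = rng.random(h.shape) >= p_drop         # random dropout mask
        h = h * mask / (1.0 - p_drop)                # inverted-dropout scaling
        logit = h @ W2
        preds.append(1.0 / (1.0 + np.exp(-logit)))   # sigmoid attack score
    preds = np.asarray(preds)
    return preds.mean(axis=0), preds.var(axis=0)

# Toy usage: random weights and three hypothetical web-log feature vectors.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16,))
x = rng.normal(size=(3, 8))
mean, var = mc_dropout_predict(x, W1, W2)
# Logs with unusually high var would be flagged for re-inspection as
# possible annotation errors or previously unknown attacks.
```

In this reading of the paper's method, an operator would sort predictions by variance and manually review the top of the list, since high variance correlates with mislabeled logs.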