Mischief: Deceiving A Transformer Model With A Simple "Prank" Attack

Transformer

3 main points
✔️ An adversarial example that humans can read, but the AI cannot.
✔️ It can significantly reduce the accuracy of machine learning models.
✔️ It can also be used for data augmentation to improve the robustness of the model.

Mischief: A Simple Black-Box Attack Against Transformer Architectures
written by 
Adrian de Wynter
(Submitted on 16 Oct 2020)

Comments: Accepted at arXiv
Subjects: Computation and Language (cs.CL); Cryptography and Security (cs.CR); Machine Learning (cs.LG)
The sutdy is besad on the ieda of tciikrng the artiacifil itlinglnecee with sncenetes that are not unetodrsod by the atiaificrl ilincelegtne but can be read by hunsam.

If you could read that sentence without much trouble, you have just seen the trick in action: the study is based on the idea of deceiving the artificial intelligence with sentences that the model cannot understand but that humans can still read, produced simply by shuffling the interior letters of each word.
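To make the idea concrete, here is a minimal sketch of such a perturbation (my own illustrative code, not the paper's released implementation; the function names and the exact sampling scheme are assumptions): the interior letters of each word are shuffled while the first and last letters stay fixed, so the text remains readable for humans even though a subword-tokenized Transformer sees very different tokens.

```python
import random
import re


def scramble_word(word: str, rng: random.Random) -> str:
    """Shuffle the interior letters of a word, keeping the first and last
    letters in place so the word stays readable for humans."""
    if len(word) <= 3:  # nothing to shuffle in very short words
        return word
    interior = list(word[1:-1])
    rng.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]


def mischief_perturb(sentence: str, rate: float = 1.0, seed: int = 0) -> str:
    """Scramble each alphabetic token with probability `rate`.

    Illustrative only; the paper's exact sampling procedure may differ.
    """
    rng = random.Random(seed)
    # Split into alphabetic runs and everything else (spaces, punctuation).
    pieces = re.findall(r"[A-Za-z]+|[^A-Za-z]+", sentence)
    return "".join(
        scramble_word(p, rng) if p.isalpha() and rng.random() < rate else p
        for p in pieces
    )


if __name__ == "__main__":
    text = "The study is based on the idea of tricking the artificial intelligence."
    print(mischief_perturb(text))  # scrambled, but still human-readable
```

Feeding such perturbed sentences to a Transformer-based model is the "prank" attack itself; mixing them back into the training set is the data-augmentation use from the third main point above, which improves the model's robustness.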
Tak (Ph.D., Informatics)
