As AI Technology Shifts To Smartphones, Security Becomes Increasingly Important: The technology we use every day is about to get an intelligence upgrade. Soon, devices such as smartphones, security cameras, and speakers will be able to run software powered by artificial intelligence, and integrating AI into these gadgets is expected to speed up tasks such as image and voice processing.
What is making this possible? A compression technique called quantization. By shrinking deep learning models to a more manageable size, the method cuts the amount of computation and energy they require. The downside is that smaller models also become attractive targets for attackers.
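To see what quantization does in practice, here is a minimal sketch of uniform 8-bit quantization in NumPy. It is an illustration, not any framework's actual implementation: storing weights as unsigned bytes instead of 32-bit floats shrinks them roughly fourfold, at the cost of a small rounding error.

```python
import numpy as np

def quantize_int8(w):
    # Map the float range [w.min(), w.max()] onto 256 integer levels.
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    # Recover an approximation of the original float weights.
    return q.astype(np.float32) * scale + lo

w = np.random.randn(4, 4).astype(np.float32)
q, scale, lo = quantize_int8(w)
w_hat = dequantize(q, scale, lo)
# Each recovered value is off by at most half a quantization step.
assert np.max(np.abs(w - w_hat)) <= scale / 2 + 1e-6
```

Real deployments add refinements such as per-channel scales and calibration, but the core idea is this round-trip between floats and a coarse integer grid.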
That convenience makes it easier for malicious attackers to break into an AI system and tamper with its behavior. A new study by researchers from MIT and IBM demonstrates just how vulnerable compressed models are.
The study also offers a potential remedy: imposing a mathematical constraint during the quantization process to make the resulting model less vulnerable to attack.
What Kind Of Malfunction Can Happen To An AI Model?
Because a quantized model uses a shorter bit width, it is more likely to misclassify images that have been subtly altered. This happens through a process known as error amplification: each successive stage of processing produces an increasingly distorted representation of the altered image.
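A toy simulation can illustrate error amplification. In this sketch (the layer sizes, gains, and bit width are made-up assumptions, not values from the study), a tiny input perturbation passes through several low-bit quantized layers whose gain exceeds 1, and the gap between the clean and perturbed activations is tracked:

```python
import numpy as np

rng = np.random.default_rng(0)

def quantized_layer(x, W, bits=4):
    # Linear layer + ReLU, then round activations to a coarse grid,
    # mimicking low-bit activation quantization.
    y = np.maximum(W @ x, 0.0)
    top = y.max()
    step = top / (2 ** bits - 1) if top > 0 else 1.0
    return np.round(y / step) * step

x = rng.standard_normal(64)
x_adv = x + 0.01 * rng.standard_normal(64)  # tiny perturbation

drift = []
a, b = x, x_adv
for _ in range(6):
    W = rng.standard_normal((64, 64)) / 4.0  # layer gain > 1
    a = quantized_layer(a, W)
    b = quantized_layer(b, W)
    drift.append(float(np.linalg.norm(a - b)))
# drift records how far apart the two activations have moved after
# each layer; with gain > 1, the perturbation is amplified, not damped.
```

Because both the linear gain and the rounding noise compound from layer to layer, the final gap dwarfs the initial one, which is exactly the mechanism that lets a small adversarial nudge flip the model's answer.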
By the end of processing, the model might, for example, mistake a frog for a deer. AI models compressed to 8 bits or fewer are especially vulnerable to adversarial attacks, but controlling the Lipschitz constraints imposed during quantization can restore some of that lost robustness.
A quantized model that withstands such attacks can even outperform a full-precision 32-bit model.
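The defense rests on keeping each layer's Lipschitz constant small so that perturbations cannot grow from layer to layer. As a rough, hypothetical illustration of that idea (not the authors' actual training procedure), a linear layer can be rescaled by its spectral norm so that it never amplifies a perturbation:

```python
import numpy as np

def spectral_norm(W, iters=50):
    # Largest singular value of W, estimated by power iteration.
    v = np.random.default_rng(0).standard_normal(W.shape[1])
    for _ in range(iters):
        v = W.T @ (W @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(W @ v)

def lipschitz_normalize(W):
    # Rescale so the layer's Lipschitz constant is at most 1:
    # ||W @ x - W @ y|| <= ||x - y|| for all inputs x, y.
    s = spectral_norm(W)
    return W / s if s > 1.0 else W

W = np.random.default_rng(1).standard_normal((32, 32))
Wn = lipschitz_normalize(W)
x, y = np.random.default_rng(2).standard_normal((2, 32))
# The normalized layer cannot push two nearby inputs further apart.
assert np.linalg.norm(Wn @ (x - y)) <= np.linalg.norm(x - y) + 1e-6
```

Stacking such non-expansive layers keeps the network's end-to-end distortion bounded, which is what the paper means by limiting error amplification.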
“Our technique limits error amplification and can even make compressed deep learning models more robust than full-precision models,” said Song Han, an assistant professor in the Department of Electrical Engineering and Computer Science at MIT and a member of MIT’s Microsystems Technology Laboratories. “We are able to put a cap on the inaccuracy if we use quantization correctly.”
Chuang Gan, a co-author on the paper and a researcher at the MIT-IBM Watson Artificial Intelligence Lab, said the team plans to further improve the technique by training it on larger datasets and applying it to a wider range of models. “As we progress toward a world where everything is connected to the internet, deep learning models will need to be both quick and secure. Our Defensive Quantization strategy is beneficial on both fronts,” he said. Ji Lin, a recent graduate of MIT, is also part of the research team, which will present the findings at the International Conference on Learning Representations in May.
Han himself is using AI to push the limits of quantization-based model compression. In recent work, he and his colleagues showed that reinforcement learning can automatically discover the smallest bit width for each layer of a quantized model. Han said, “This flexible bit width approach reduces latency and energy use by as much as 200 percent compared to a fixed, 8-bit model.”
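Why search for a per-layer bit width at all? Fewer bits mean a coarser grid and a larger rounding error, so sensitive layers may need more bits while others tolerate fewer. Here is a small, self-contained sketch of that trade-off (the reinforcement learning search itself is far beyond a few lines, so only the underlying accuracy-versus-bits curve is shown):

```python
import numpy as np

def quantize(w, bits):
    # Uniform quantization of w to 2**bits levels over its own range.
    lo, hi = float(w.min()), float(w.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((w - lo) / scale) * scale + lo

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)
errors = {b: float(np.abs(w - quantize(w, b)).max()) for b in (2, 4, 8)}
# Fewer bits -> coarser grid -> larger worst-case rounding error.
assert errors[2] > errors[4] > errors[8]
```

A bit-width search exploits exactly this curve, spending bits where the error hurts accuracy most and saving them where it does not.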