
USENIX Security '20: Interpretable Deep Learning under Fire - Detailed Analysis & Overview



Video Gallery

USENIX Security '20 - Interpretable Deep Learning under Fire
USENIX Security '20 - High Accuracy and High Fidelity Extraction of Neural Networks
USENIX Security '21 - You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
USENIX Security '20 - DeepHammer: Depleting the Intelligence of Deep Neural Networks through Target
USENIX Security '19 - DEEPVSA: Facilitating Value-set Analysis with Deep Learning for
USENIX Security '21 - Blind Backdoors in Deep Learning Models
USENIX Security '24 - Fast and Private Inference of Deep Neural Networks by Co-designing...
USENIX Security '24 - SecurityNet: Assessing Machine Learning Vulnerabilities on Public Models
USENIX Security '24 - AI Psychiatry: Forensic Investigation of Deep Learning Networks in Memory...
USENIX Security '22 - Label Inference Attacks Against Vertical Federated Learning
USENIX Security '20 - Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks
USENIX Security '23 - Rethinking White-Box Watermarks on Deep Learning Models under Neural...

USENIX Security '20 - Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries (Fnu Suya, Jianfeng Chi, David Evans, and ...)

USENIX Security '23 - Differential Testing of Cross Deep Learning Framework APIs: Revealing...

USENIX Security '20 - Fawkes: Protecting Privacy against Unauthorized Deep Learning Models

USENIX Security '24 - How Does a Deep Learning Model Architecture Impact Its Privacy?...

USENIX Security '20 - Justinian's GAAvernor: Robust Distributed Learning with Gradient Aggregation

USENIX Security '20 - Exploring Connections Between Active Learning and Model Extraction

USENIX Security '18 - Formal Security Analysis of Neural Networks using Symbolic Intervals (Shiqi Wang, Columbia University). Abstract: "Due to the increasing deployment of ..."