(ICLR) *Measures the distance between two remote models using LIME.*
(ICML) *Studies query-based auditing algorithms that can estimate the demographic parity of ML models in a query-efficient manner.*
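As a rough illustration of what such a query-based estimate computes (not the paper's query-efficient algorithm), here is a minimal sketch of demographic parity measured from black-box answers; `model_api` and the toy data are hypothetical:

```python
import numpy as np

def estimate_demographic_parity(model_api, samples, group):
    """Estimate |P(f(x)=1 | g=0) - P(f(x)=1 | g=1)| from black-box query answers.

    model_api: callable mapping a feature vector to a 0/1 prediction (the black box).
    samples:   feature vectors drawn from the population of interest.
    group:     binary array giving the protected-group membership of each sample.
    """
    preds = np.array([model_api(x) for x in samples])
    rate_0 = preds[group == 0].mean()  # positive-prediction rate for group 0
    rate_1 = preds[group == 1].mean()  # positive-prediction rate for group 1
    return abs(rate_0 - rate_1)

# Toy usage with a hypothetical black box that predicts 1 when the first feature is positive.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
g = rng.integers(0, 2, size=1000)
print(estimate_demographic_parity(lambda x: int(x[0] > 0), X, g))
```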
(Neural Computing and Applications) (Alternative implementation) *Checks if a remote machine learning model is a "leaked" one: through standard API requests to a remote model, extracts (or not) a zero-bit watermark that was inserted to watermark valuable models (e.g., large deep neural networks).*
(KDD) *Reverse engineering of remote linear classifiers, using membership queries.*
(AAAI Workshop on Deep Learning on Graphs: Methodologies and Applications) *Introduces GNN model extraction and presents a preliminary approach for this.*
(Security and Privacy) *Introduces measures that capture the degree of influence of inputs on outputs of the observed system.*
(IEEE S&P) *Evaluates the individual, joint and marginal influence of features on a model using Shapley values.*
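For readers unfamiliar with the technique, below is a small Monte Carlo (permutation-sampling) Shapley estimator for a black-box score; it is a generic illustration under a baseline-replacement assumption, not the paper's exact procedure:

```python
import numpy as np

def shapley_values(predict, x, baseline, n_perm=2000, rng=np.random.default_rng(0)):
    """Monte Carlo (permutation sampling) estimate of per-feature Shapley values
    for a black-box scoring function; 'absent' features are set to a baseline."""
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_perm):
        order = rng.permutation(d)
        z = baseline.copy()
        prev = predict(z)
        for i in order:
            z[i] = x[i]              # add feature i to the coalition
            cur = predict(z)
            phi[i] += cur - prev     # marginal contribution of feature i
            prev = cur
    return phi / n_perm

# Toy black box: a linear score, so the Shapley values match the linear terms.
predict = lambda v: 2.0 * v[0] + 1.0 * v[1] + 0.0 * v[2]
print(shapley_values(predict, np.array([1.0, 1.0, 1.0]), np.zeros(3)))  # ~[2, 1, 0]
```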
(WWW) (Code) *Develops a methodology for detecting algorithmic pricing, and uses it empirically to analyze its prevalence and behavior on Amazon Marketplace.*
(WebSci).
(AIES) *A practical audit for a well-being recommendation app developed by Telefónica (mostly on bias).*
(ICDM) *Evaluates the influence of a variable on a black-box model by "cleverly" removing it from the dataset and looking at the accuracy gap.*
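A common black-box proxy for this remove-and-compare idea is permutation importance: scramble the audited column and measure the accuracy drop. This is only an approximation of the paper's approach; `predict` and the toy data are hypothetical:

```python
import numpy as np

def accuracy_gap_without_feature(predict, X, y, feature_idx, rng=np.random.default_rng(0)):
    """Black-box proxy for a feature's influence: accuracy on the original data
    minus accuracy when the audited column is scrambled (its information removed)."""
    base_acc = np.mean(predict(X) == y)
    X_scrambled = X.copy()
    X_scrambled[:, feature_idx] = rng.permutation(X_scrambled[:, feature_idx])
    return base_acc - np.mean(predict(X_scrambled) == y)

# Toy black box whose decision only depends on feature 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] > 0).astype(int)
black_box = lambda Z: (Z[:, 0] > 0).astype(int)
print(accuracy_gap_without_feature(black_box, X, y, 0))  # large gap
print(accuracy_gap_without_feature(black_box, X, y, 2))  # ~0
```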
(NeurIPS) *Measures the level of data minimization satisfied by the prediction model using a limited number of queries.*
(NeurIPS) [[Code]](https://github.com/bchugg/auditing-fairness) *Sequential methods that allow for continuous monitoring of incoming data from a black-box classifier or regressor.*
(Information Processing & Management) *Shows how to unveil whether a black-box model that complies with regulations is still biased or not.*
(NeurIPS) *Gives the (prohibitive) query complexity of auditing explanations.*
(ICWSM) *Audit study of Apple News as a sociotechnical news curation system (trending stories section).*
(FAT*) *Studies the reachability of radical channels from each other, using random walks on static channel recommendations.*
(WWW) *A Chrome extension to survey participants and collect the Search Engine Results Pages (SERPs) and autocomplete suggestions, for studying personalization and composition.*
(arXiv) *Audits the fairness of Yelp’s business ranking and review recommendation systems.*
(Transactions on Recommender Systems) *What it takes to “burst the bubble,” i.e., to reverse the bubble enclosure induced by recommendations.*
(NIPS) *Learns from a binary classifier paying only for negative labels.*
(Security and Privacy) *Black-box analysis of sanitizers and filters.*
(ICML) *A budget-constrained Bayesian optimization procedure to extract properties of a black-box algorithm.*
(dat workshop) *Measures the rankings produced by TaskRabbit's search algorithm.*
(NeurIPS) *Replicates the functionality of a black-box neural model, yet with no limit on the number of queries (via a teacher/student scheme and an evolutionary search).*
(AAAI) *Auditing as a black-box optimization problem where the goal is to automatically uncover input-output pairs of the target LLMs that exhibit illegal, immoral, or unsafe behaviors.*
(SIGKDD) *Proposes SVM-based methods to certify absence of bias and methods to remove biases from a dataset.*
(ICLR) *Proposes fair decision tree learning algorithms along with zero-knowledge proof protocols to obtain a proof of fairness on the audited server.*
(IJCNN) (Code) *Steals the knowledge of black-box models (CNNs) by querying them with random natural images (ImageNet and Microsoft COCO).*
(Harvard Journal of Law & Technology) *To explain a decision on x, find a counterfactual: the closest point to x that changes the decision.*
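The counterfactual recipe in the entry above can be written as a tiny search problem; the brute-force sketch below (real systems solve an optimization instead) uses a hypothetical `predict` function:

```python
import numpy as np

def nearest_counterfactual(predict, x, candidates):
    """Return the candidate point closest to x (in L2 distance) that receives
    a different decision than x from the black-box classifier `predict`."""
    original = predict(x)
    flipped = [c for c in candidates if predict(c) != original]
    if not flipped:
        return None
    return min(flipped, key=lambda c: np.linalg.norm(c - x))

# Toy black box: approve when income - 2*debt > 0.
predict = lambda v: int(v[0] - 2 * v[1] > 0)
x = np.array([1.0, 1.0])                       # rejected applicant
candidates = np.random.default_rng(0).normal(loc=x, scale=1.0, size=(5000, 2))
print(nearest_counterfactual(predict, x, candidates))
```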
(Neurocomputing) *Reverse engineers remote classifier models (e.g., for evading a CAPTCHA test).*
(AIES) *Treats black box models as teachers, training transparent student models to mimic the risk scores assigned by black-box models.*
(CHI) *Makes the case for "everyday algorithmic auditing" by users.*
(USENIX Security) *Extracts verbatim text sequences from the GPT-2 model’s training data.*
(arXiv) *Performs a training data extraction attack to recover individual training examples by querying the language model.*
(Information Processing & Management) *Presents a pipeline to detect and explain potential fairness issues in Clinical DSS, by comparing different multi-label classification disparity measures.*
(ECAI) *Considers multiple
(arXiv) *Proposes an alternative paradigm to traditional auditing using cryptographic tools like Zero-Knowledge Proofs; gives a system called FairProof for verifying the fairness of small neural networks.*
(CVPR) (Code) *Crafts adversarial examples to fool models, in a pure black-box setup (no gradients, inferred class only).*
(arXiv) *A method for identifying the underlying GPU architecture and software stack of a black-box machine learning model solely based on its input-output behavior.*
(CAEPIA) *Determines which kind of machine learning model is behind the returned predictions.*
(ICLR) *Presents a framework for running membership inference attacks against classifiers, in audit mode.*
(FATML Workshop) *Performs feature ranking to analyse black-box models.*
(arXiv) *Proposes a way to extend the shelf-life of auditing datasets by using language models themselves; also finds problems with the current bias auditing metrics and proposes alternatives -- these alternatives highlight that model brittleness superficially increased the previous bias scores.*
(CVPR) *Asks to what extent an adversary can steal the functionality of such "victim" models based solely on black-box interactions: image in, predictions out.*
(NIPS) *Reconstructs graphs by observing random-walk commute times.*
(Complex Networks) *Queries LLMs for known graphs and studies topological hallucinations. Proposes a structural hallucination rank.*
(NeurIPS) *Sobol indices provide an efficient way to capture higher-order interactions between image regions and their contributions to a (black box) neural network’s prediction through the lens of variance.*
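As a reminder of what a first-order Sobol index measures (the paper applies the idea to image regions, which is not reproduced here), below is a generic pick-freeze estimator on a toy black-box function:

```python
import numpy as np

def first_order_sobol(f, d, n=20000, rng=np.random.default_rng(0)):
    """Pick-freeze (Saltelli-style) estimator of first-order Sobol indices for a
    black-box function f of d independent U(0,1) inputs."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    indices = []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # A with its i-th column taken from B
        indices.append(np.mean(fB * (f(ABi) - fA)) / var)
    return np.array(indices)

# Toy black box: output dominated by the first input, untouched by the third.
f = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.0 * X[:, 2]
print(first_order_sobol(f, d=3))  # roughly [0.94, 0.06, 0.0]
```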
(arXiv) *Investigates how an adversary can optimally use its query budget for targeted evasion attacks against deep neural networks.*
(WWW) *Develops a methodology for measuring personalization in Web search results.*
(Symposium on Security and Privacy) *Given a machine learning model and a record, determine whether this record was used as part of the model's training dataset or not.*
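The decision the auditor must make can be illustrated with the simplest baseline, a confidence-threshold rule; the audited paper actually relies on shadow models, so the sketch below is only a toy illustration with made-up confidence scores:

```python
import numpy as np

def membership_guess(confidence_on_true_label, threshold=0.9):
    """Confidence-threshold baseline for membership inference: records on which
    the model is unusually confident are guessed to be training-set members.
    (The audited paper trains shadow models instead; this is only a baseline.)"""
    return confidence_on_true_label >= threshold

# Hypothetical black-box confidences: members tend to receive higher confidence.
rng = np.random.default_rng(0)
member_conf = rng.beta(8, 2, size=1000)      # confidences on training records
non_member_conf = rng.beta(4, 4, size=1000)  # confidences on unseen records
tpr = membership_guess(member_conf).mean()
fpr = membership_guess(non_member_conf).mean()
print(f"guessed as members: {tpr:.2f} of true members, {fpr:.2f} of non-members")
```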
(SNAM) *Models the trapping dynamics of users in rabbit holes in YouTube, and provides a measure of this enclosure.*
(CCS) *Model inversion approach in the adversarial setting, based on training an inversion model that acts as an inverse of the original model. Even without full knowledge of the original training data, an accurate inversion is still possible by training the inversion model on auxiliary samples drawn from a more generic data distribution.*
(arXiv) *Through the acquisition of memory access events from bus snooping, layer sequence identification by the LSTM-CTC model, layer topology connection according to the memory access pattern, and layer dimension estimation under data volume constraints, demonstrates that one can accurately recover a similar network architecture as the attack starting point.*
(KDD) *Provides an adaptive process that automates the inference of probabilistic guarantees associated with estimating fairness metrics.*
(WWW) *Measures the incentive compatibility (IC, via a regret metric) of black-box auction platforms.*
(Flairs-32) *Audit of Google's Top Stories panel that provides insights into its algorithmic choices for selecting and ranking news publishers.*
(ECAI) *Proposes a mutually beneficial collaboration for both the auditor and the platform: a privacy-preserving and non-iterative audit scheme that enhances fairness assessments using synthetic or local data, avoiding the challenges associated with traditional API-based audits.*
(IMC) *Infers implementation details of Uber's surge pricing algorithm.*
(Asia CCS) *Assesses how vulnerable a remote service is to adversarial classification attacks.*
(NeurIPS - best paper) *A scheme for auditing differentially private machine learning systems with a single training run.*
(CCS) *Privacy Oracle: a system that uncovers applications' leaks of personal information in transmissions to remote servers.*
(AAAI) *Divides model fingerprinting into three core components, to identify ∼100 previously unexplored combinations of these and gain insights into their performance.*
(JMLR) *Evasion methods for convex classifiers. Considers evasion complexity.*
(Nature Machine Intelligence, volume 2, pages 529–539) (Code) *Shows the impossibility (with one request) or the difficulty of spotting lies in the explanations of a remote AI decision.*
(ICML) *Formally establishes the conditions under which an auditor can prevent audit manipulations using prior knowledge about the ground truth.*
(ICLR) *Considers backdoor detection under the black-box setting in machine learning as a service (MLaaS) applications.*
(Journal of Information Science) (Code) *Audits multiple search engines using simulated browsing behavior with virtual agents.*
(INFOCOM) (Code) *Considers the possibility of shadow banning in Twitter (i.e., the black-box moderation algorithm), and measures the probability of several hypotheses.*
(ICNN) *Composite method which can be used to attack and extract the knowledge of a black-box model even if it completely conceals its softmax output.*
(Usenix Security) (Code) *Aims at extracting machine learning models in use by remote services.*
(arXiv) *Stealing/approximating a model through timing attacks using queries.*
(CCS) *Steals the type and hyperparameters of the decoding algorithms of an LLM.*
(ISSRE) *Algorithms to craft inputs that can detect the tampering with a remotely executed classifier model.*
(Cambridge Forum on AI: Law and Governance) *Aims to simulate the evolution of ethical and legal frameworks in society by creating an auditor that sends feedback to a debiasing algorithm deployed around an ML system.*
(Netys) (Code) *Parametrizes a local recommendation algorithm by imitating the decisions of a remote, better-trained one.*
(Complex Networks) *Proposes a bias detection framework for items recommended to users.*
(ICLR) (Code) *Infers inner hyperparameters (e.g., number of layers, non-linear activation type) of a remote neural network model by analysing its response patterns to certain inputs.*
(ICWSM) *Performs an adversarial audit on multiple systems APIs and datasets, making a number of concerning observations.*
(CSCW) *Aims at identifying which centrality metrics are in use in a peer ranking service.*
(SATML) *Relates the difficulty of black-box audits to the capacity of the targeted models.*
(FAccT) *Do Amazon private label products get an unfair share of recommendations and are therefore advantaged compared to 3rd party products?*
(arXiv) *Formalizes the role of explanations in auditing and investigates if and how model explanations can help audits.*
(arXiv) *Searches for bias in the black-box model by training an unsupervised implicit generative model, then summarizes the black-box model's behavior quantitatively by perturbing data samples along the data manifold.*
(USENIX Security) *Audits which user profile data were used for targeting a particular ad, recommendation, or price.*
(arXiv) *Infers a link between the Amazon Echo system and the ad targeting algorithm.*
(arXiv) (Code) *Explains a black-box classifier model by sampling around data instances.*
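In the same spirit (sampling around an instance and fitting a weighted linear surrogate), here is a heavily simplified LIME-like sketch; the Gaussian perturbation scheme and `predict_proba` are assumptions, not the tool's actual implementation:

```python
import numpy as np
from sklearn.linear_model import Ridge

def local_surrogate(predict_proba, x, n=2000, scale=0.3, rng=np.random.default_rng(0)):
    """Fit a distance-weighted linear surrogate to a black box around instance x
    by sampling perturbations (simplified: no interpretable binary features,
    plain Gaussian perturbations)."""
    Z = x + rng.normal(scale=scale, size=(n, x.shape[0]))         # samples around x
    y = predict_proba(Z)                                          # black-box responses
    weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, y, sample_weight=weights)
    return surrogate.coef_                                        # local feature importances

# Toy black box: a smooth score driven mostly by the first feature.
predict_proba = lambda Z: 1 / (1 + np.exp(-(3 * Z[:, 0] + 0.5 * Z[:, 1])))
print(local_surrogate(predict_proba, np.array([0.2, -0.1, 0.4])))
```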