Specializations

  • Machine Learning and Natural Language Processing
  • Artificial Intelligence Ethics
  • Artificial Intelligence for Social Good

Biography

Aylin Caliskan's research interests lie in artificial intelligence (AI) ethics, bias in AI, machine learning, and the implications of machine intelligence for privacy and equity. She investigates the reasoning behind biased AI representations and decisions by developing theoretically grounded statistical methods that uncover and quantify the biases of machines. Building these transparency-enhancing algorithms involves using machine learning, natural language processing, and computer vision to interpret AI and gain insights about bias in machines as well as in society. Caliskan was selected as a Rising Star in EECS at Stanford University in 2017 and was a Postdoctoral Researcher and Fellow at Princeton University's Center for Information Technology Policy. In 2021, she was named a Nonresident Fellow at the Brookings Institution.
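
The bias-quantification approach described above can be illustrated with a WEAT-style association test, in the spirit of the Science article listed under Publications. The following is a minimal sketch, not the published implementation: it computes a Cohen's-d-style effect size for how differently two target word sets associate with two attribute word sets under cosine similarity, with random toy vectors standing in for real word embeddings.

# Minimal, illustrative WEAT-style association test (toy data only).
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    """Mean similarity of word vector w to attribute set A minus attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Effect size of the differential association of target sets X, Y
    with attribute sets A, B (Cohen's-d-style, pooled standard deviation)."""
    x_assoc = np.array([association(x, A, B) for x in X])
    y_assoc = np.array([association(y, A, B) for y in Y])
    pooled = np.concatenate([x_assoc, y_assoc])
    return (x_assoc.mean() - y_assoc.mean()) / pooled.std(ddof=1)

# Toy example: random vectors stand in for embeddings of, e.g.,
# flower words (X), insect words (Y), pleasant words (A), unpleasant words (B).
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 50))
Y = rng.normal(size=(8, 50))
A = rng.normal(size=(8, 50))
B = rng.normal(size=(8, 50))
print(f"Effect size: {weat_effect_size(X, Y, A, B):.3f}")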

Publications and Contributions

  • Journal Article, Academic Journal
    A Set of Maximally Distinct Facial Traits Learned by Machines is not Predictive of Appearance Bias in the Wild (2021)
    AI and Ethics Authors: Ryan Steed, Aylin Caliskan
  • Conference Paper
    Automatically Characterizing Targeted Information Operations Through Biases Present in Discourse on Twitter (2021)
    15th IEEE International Conference on Semantic Computing (ICSC) Authors: Autumn Toney, Akshat Pandey, David Broniatowski, Wei Guo, Aylin Caliskan
  • Journal Article, Academic Journal
    Bias in Natural Language Processing (2021)
    Author: Aylin Caliskan
  • Conference Paper
    Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases (2021)
    The 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) Authors: Ryan Steed, Aylin Caliskan
  • Conference Paper
    If I Tap It, Will They Come? An Introductory Analysis of Fairness in a Large-Scale Ride Hailing Dataset (2020)
    Academy of Marketing Science Annual Conference (AMS) Authors: Aylin Caliskan, Begum Kaplan
  • Docket
    Comments in response to the National Institute of Standards and Technology Request for Information on Developing a Federal AI Standards Engagement Plan (2019)
    National Institute of Standards and Technology (NIST) Authors: David Broniatowski, Aylin Caliskan, Valerie Reyna, Reva Schwartz
  • Conference Paper
    Git Blame Who?: Stylistic Authorship Attribution of Small, Incomplete Source Code Fragments (2019)
    19th Privacy Enhancing Technologies Symposium (PETS) Authors: Edwin Dauber, Aylin Caliskan, Michael Weisman, Richard Harang, Gregory Shearer, Frederica Nelson, Rachel Greenstadt
  • Conference Poster
    Privacy and Security via Machine Learning and Natural Language Processing. (2018)
    Cybersecurity Retreat, Princeton University Author: Aylin Caliskan
  • Journal Article, Academic Journal
    Semantics derived automatically from language corpora contain human-like biases (2017)
    Science Authors: Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan
  • Conference Paper
    Stylistic authorship attribution of small, incomplete source code fragments (2018)
    IEEE/ACM 40th International Conference on Software Engineering: Companion Authors: Edwin Dauber, Aylin Caliskan, Richard Harang, Rachel Greenstadt
  • Journal Article, Academic Journal
    Stylometry of Author-Specific and Country-Specific Style Features in JavaScript (2018)
    NDSS Authors: Dennis Rollke, Aviel J. Stein, Edwin Dauber, Mosfiqur Rahman, Michael J. Weisman, Gregory G. Shearer, Frederica Nelson, Aylin Caliskan, Richard Harang, Rachel Greenstadt
  • Conference Paper
    When Coding Style Survives Compilation: De-anonymizing Programmers from Executable Binaries (2018)
    Network and Distributed System Security Symposium (NDSS) Authors: Aylin Caliskan, Fabian Yamaguchi, Edwin Dauber, Richard Harang, Konrad Rieck, Arvind Narayanan
  • Conference Paper
    A Story of Discrimination and Unfairness (2016)
    Hot Topics in Privacy Enhancing Technologies (HotPETs) Authors: Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan
  • Conference Paper
    De-anonymizing Programmers via Code Stylometry (2015)
    USENIX Security Symposium (USENIX Security) Authors: Aylin Caliskan, Richard Harang, Andrew Liu, Fabian Yamaguchi, Arvind Narayanan, Clare Voss, Rachel Greenstadt
  • Conference Paper
    How do we decide how much to reveal? (Hint: Our privacy behavior might be socially constructed.) (2015)
    Special Issue on Security, Privacy, and Human Behavior, ACM Computers & Society Author: Aylin Caliskan
  • Conference Paper
    Doppelgänger Finder: Taking Stylometry To The Underground (2014)
    IEEE Symposium on Security and Privacy Authors: Sadia Afroz, Aylin Caliskan, Ariel Stolerman, Rachel Greenstadt, Damon McCoy
  • Conference Poster
    Doppelgänger Finder: Taking Stylometry To The Underground. (2014)
    Computer Science PhD Open House Author: Aylin Caliskan
  • Conference Workshop Paper
    Privacy Detective: Detecting Private Information and Collective Privacy Behavior in a Large Social Network (2014)
    Workshop on Privacy in the Electronic Society (WPES) Authors: Aylin Caliskan, Jonathan Walsh, Rachel Greenstadt
  • Conference Workshop Paper
    Approaches to Adversarial Drift (2013)
    6th ACM Workshop on Artificial Intelligence and Security (AISec) Authors: Alex Kantchelian, Sadia Afroz, Ling Huang, Aylin Caliskan, Brad Miller, Michael Carl Tschantz, Anthony Joseph, J. D. Tygar
  • Conference Workshop Paper
    From Language to Family and Back: Native Language and Language Family Identification from English Text (2013)
    Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop (NAACL-SRW) Authors: Aylin Caliskan, Rachel Greenstadt
  • Conference Workshop Paper
    How Privacy Flaws Affect Consumer Perception (2013)
    3rd Workshop on Socio-Technical Aspects in Security and Trust (STAST) Authors: Aylin Caliskan, Jordan Santell, Aaron Chapin, Rachel Greenstadt
  • Conference Paper
    Translate once, translate twice, translate thrice and attribute: Identifying authors and machine translation tools in translated text (2012)
    6th IEEE International Conference on Semantic Computing (ICSC) Authors: Aylin Caliskan, Rachel Greenstadt
  • Conference Paper
    Use Fewer Instances of the Letter “i”: Toward Writing Style Anonymization (2012)
    The 12th Privacy Enhancing Technologies Symposium (PETS) Authors: Andrew McDonald, Sadia Afroz, Aylin Caliskan, Ariel Stolerman, Rachel Greenstadt
  • Conference Poster
    ENVOY: Exploration and Navigation Vehicle for geolOgY (2011)
    University of Pennsylvania’s Entry in NASA/NIA RASC-AL Authors: Arunkumar Byravan, Aylin Caliskan, Jonas Cleveland, Daniel Gilles, Jaimeen Kapadia, Theparit Peerasathien, Bharath Sankaran, Alex Tozzo
  • Book, Chapter in Non-Scholarly Book-New
    Social Biases in Word Embeddings and Their Relation to Human Cognition
    The Atlas of Language Analysis in Psychology Authors: Aylin Caliskan, Molly Lewis Editors: Morteza Dehghani, Ryan Boyd

Presentations

  • Algorithmic Measures of Language Mirror Human Biases (2020)
    Georgetown University
  • Algorithmic Measures of Language Mirror Human Biases and Widely Shared Associations (2020)
    Santa Fe Institute
  • Bias and AI Ethics (2020)
    DefCon28 AI Village
  • Bias in AI (2020)
    NIST AI Workshop
  • Bias in AI and Digital Humanities (2020)
    University of Pennsylvania
  • Gender Breakthrough (2020)
    AI for Good Global Summit
  • Gender Equity (2020)
    AI for Good Global Summit
  • Implications of Biased AI on Democracy, Equity, and Justice (2020)
    COLING Workshop on Natural Language Processing for Internet Freedom
  • Promises and Pitfalls of Big Data Approaches to Intersectional Equity in STEM (2020)
    NSF Workshop
  • AI for Social Good, Bias and Ethics Panel (2019)
    WeCNLP Summit at Facebook
  • Algorithmic Measures of Language Mirror Human Biases (2019)
    Symposium on Computer-Resident Language and Naturalistic Conversation as Windows Into Social Cognition
  • Algorithmic Mirrors of Human Biases (2019)
    University of Chicago
  • Algorithmic Mirrors of Human Biases (2019)
    Virginia Tech
  • Algorithmic Mirrors of Society (2019)
    University of Maryland
  • Bias in AI (2019)
    Social Science Foo Camp at Facebook
  • Hands-on Tutorial: AI Fairness 360 (2019)
    ACM Conference on Fairness, Accountability, and Transparency (ACM FAT*)
  • Human-like Bias in Machine Intelligence (2019)
    George Washington University, SEH WOW Talk Series
  • Monitoring Hate Speech in the US Media (2019)
    Workshop on Defining, Monitoring and Countering Hate Speech. George Washington University, School of Media and Public Affairs
  • Neural Networks for NLP (2019)
    George Washington University
  • NSF Workshop: Fairness, Ethics, Accountability, and Transparency (FEAT) (2019)
    NSF Workshop
  • Tutorial on Distributional Semantics via Word Embeddings (2019)
    Department of Psychology, Harvard University
  • AI & Equity (2018)
    MIT Media Lab
  • Bias in Machine Learning (2018)
    ACM & Women in Computer Science at GWU
  • De-anonymizing Programmers from Source Code and Binaries (2018)
    DEFCON
  • The Great Power of AI: Algorithmic Mirrors of Individuals and Society (2018)
    Brown University, Duke University, ETH Zurich, George Washington University, Tufts University, University of Maryland, University of Virginia, and Yale University
  • The Great Power of AI: Algorithmic Mirrors of Society (2018)
    DEFCON
  • Beyond Big Data: What Can We Learn from AI Models? (2017)
    AISec - CCS Workshop
  • A Story of Discrimination and Unfairness: Implicit Bias Embedded in Language Models (2016)
    HotPETS 2016 - PETS
  • De-anonymizing Programmers and Code Stylometry - Large Scale Authorship Attribution from Source Code and Executable Binaries of Compiled Code (2016)
    Princeton University CITP Luncheon Speaker Series
  • Natural Language Processing and Privacy: A Double Edged Sword (2016)
    Infer - PETS Workshop
  • Code Stylometry and Programmer De-anonymization (2015)
    Göttingen University
  • De-anonymizing Programmers (2015)
    32C3 - Chaos Communication Congress
  • De-anonymizing Programmers via Code Stylometry (2015)
    Cornell Systems Lunch
  • Support Vector Machines, Kernel Methods, Random Forests, and Feature Projection (2015)
    CS613-Machine Learning
  • Security Review of Digital Privacy and the Underground: Miscreant Activity in the Internet Guest Lecture (2014)
    CS475-Computer and Network Security
  • Source Code and Cross-Domain Stylometry (2014)
    31st Chaos Communication Congress
  • Stylometry and Online Underground Markets (2012)
    29th Chaos Communication Congress
  • Quantifying the Translator Effect: Identifying authors and machine translation tools in translated text (2011)
    Girl Geek Dinners Philly