I graduated with a Ph.D. in May 2012, when I
joined Symantec Research Labs, now known as
Norton Research Group (NRG).
I have been there ever since and continue to work as a researcher and Senior Technical Director in the group; I live in the Los Angeles area. My research has resulted in many new deployed detection algorithms that identify millions of malicious and benign software files each day on behalf of our customers. I am particularly proud of work we have done to protect survivors of intimate partner violence from mobile apps that abusers use to harass them and to turn survivors' cellphones into sophisticated spying devices. I also do research that considers the human factors that influence consumer security and privacy.
Contact Information:
Email:
(Yes, I still receive mail sent to this account)
Publications:
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, David Freeman, Fabio Pierazzi, Kevin A. Roundy. "Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice IEEE Conference on Secure and Trustworthy Machine Learning (SATML 2023).
Feb 8-10, 2023. Raleigh, NC
[PDF]
In recent years, numerous papers have demonstrated powerful algorithmic
attacks against a wide variety of machine learning (ML) models, and
numerous other papers have proposed defenses that can withstand most
attacks. However, abundant real-world evidence suggests that actual
attackers use simple, effective tactics to subvert ML-driven systems,
and as a result security practitioners largely do not prioritize
adversarial ML defenses. Motivated by the apparent gap between
researchers and practitioners, this position paper aims to bridge the
two domains. We present three real-world case studies from which
we can glean practical insights unknown or neglected in research.
Next we analyze all adversarial ML papers recently published in top
security conferences, highlighting positive trends and blind spots.
Finally, we state positions on precise and cost-driven threat
modeling, collaboration between industry and academia, and
reproducible research. We believe that our positions, if adopted,
will increase the real-world impact of future endeavours in
adversarial ML, bringing both researchers and practitioners closer to
their shared goal of improving the security of ML systems.
Janet X. Chen, Allison McDonald, Yixin Zou, Emily Tseng, Kevin A. Roundy, Acar Tamersoy, Florian Schaub, Thomas Ristenpart, Nicola Dell. Trauma-Informed Computing: Towards Safer Technology Experiences for All ACM CHI Conference on Human Factors in Computing Systems (CHI 2022)
Apr 30 - May 6, 2022. New Orleans, LA.
[PDF]
Trauma is the physical, emotional, or psychological harm caused
by deeply distressing experiences. Research with communities
that may experience high rates of trauma has shown that digital
technologies can create or exacerbate traumatic experiences. We
discuss how considering the possible effects of trauma provides
insights into people’s technology experiences and present a
trauma-informed computing framework consisting of six key
principles: safety, trust, peer support, collaboration,
enablement, and intersectionality. Through specific examples, we
describe how to apply trauma-informed computing in four areas of
computing research and practice: user experience research &
design, security & privacy, artificial intelligence & machine
learning, and organizational culture in tech companies. We
discuss how adopting trauma-informed computing will lead to
benefits for all users, not only those experiencing trauma.
Yufei Han, Kevin A. Roundy, Acar Tamersoy. Towards Stalkerware Detection with Precise Warnings Annual Computer Security Applications Conference (ACSAC 21)
Dec 6-10, 2021.
[PDF]
Android devices are a particularly fertile ground for stalkerware.
Most stalkerware apps spy on a single communication channel,
sensor, or category of private data, though 27% surveil multiple
private data sources. We present Dosmelt, a system that enables
stalkerware warnings that precisely characterize the types of
surveillance conducted by Android stalkerware, so that surveilled
individuals can take appropriate mitigating action. Dosmelt
leverages the observation that stalkerware differs from other
categories of spyware in its open advertising of its surveillance
capabilities, which we detect on the basis of the titles and
self-descriptions that stalkerware apps post on Android app
stores. Dosmelt has detected hundreds of new stalkerware apps that
we have added to the Stalkerware Threat List.
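To make the idea concrete, here is a minimal, hypothetical sketch of a
text-based capability classifier in the spirit of Dosmelt: it labels an
app's likely surveillance capabilities from its title and
self-description. The example apps, capability labels, and model choices
are assumptions made for illustration, not the pipeline described in the
paper.

    # Illustrative sketch only: a multi-label text classifier over app titles
    # and self-descriptions. All training data and labels below are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    train_texts = [
        "Read their SMS messages and call history remotely",
        "Track your child's GPS location in real time",
        "Record surroundings and listen through the microphone",
    ]
    train_capabilities = [{"sms", "call_log"}, {"location"}, {"audio"}]

    mlb = MultiLabelBinarizer()
    y = mlb.fit_transform(train_capabilities)

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    )
    clf.fit(train_texts, y)

    description = "Secretly monitor SMS and track phone location"
    predicted = mlb.inverse_transform(clf.predict([description]))
    print(predicted)  # predicted capability labels (may be empty for toy data)

A real system would of course need far more training data and careful
calibration to keep the resulting warnings precise.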
Yixin Zou, Allison McDonald, Julia Narakornpichit, Nicola Dell, Thomas Ristenpart, Kevin Roundy, Florian Schaub, Acar Tamersoy. The Role of Computer Security Customer Support in Helping Survivors of Intimate Partner Violence USENIX Security 2021
August, 2021. Conference held online.
[PDF]
Survivors of Intimate Partner Violence (IPV) routinely contact
the customer support lines of computer security vendors to ask
for help when targeted by technology-enabled abuse. To
understand the state of customer support in the
computer-security industry and work towards improving it, we
conducted five focus groups with professionals who work with IPV
survivors (n = 17). IPV professionals made numerous suggestions,
such as using trauma-informed language, avoiding promises to
solve problems, and making referrals to resources and support
organizations. To evaluate the practicality of these
suggestions, we conducted four focus groups with customer
support practitioners (n = 11). We developed and disseminated
training materials and a set of recommendations to help improve
the preparedness and response of customer support personnel at
security companies.
Kevin A. Roundy, Paula Bermaimon Mendelberg, Nicola Dell, Damon McCoy, Daniel Nissani, Thomas Ristenpart, Acar Tamersoy.
The Many Kinds of Creepware Used for Interpersonal Attacks IEEE Symposium on Security and Privacy
May 18-20, 2020. San Francisco, California.
[PDF]
Technology increasingly facilitates interpersonal attacks such
as stalking, abuse, and other forms of harassment. While prior
studies have examined the ecosystem of software designed for
stalking, our study uncovers a larger landscape of apps---what
we call creepware---used for interpersonal attacks. We discover
and report on apps used for harassment, impersonation, fraud,
information theft, concealment, hacking, and other attacks, as
well as creative defensive apps that victims use to protect
themselves.
Yixin Zou, Kevin A. Roundy, Acar Tamersoy, Saurabh Shintre, Johann Roturier, Florian Schaub. Examining the Adoption and Abandonment of Security, Privacy, and Identity Theft Protection Practices. ACM CHI Conference on Human Factors in Computing Systems (CHI 2020)
April 26-29, 2020. Honolulu, HI.
[PDF]
Our online survey of 902 individuals studies the reasons why
users struggle to adhere to expert-recommended security,
privacy, and identity-protection practices. We examined 30 of
these practices, finding that gender, education, technical
background, and prior negative experiences correlate with
practice adoption levels. We found that practices were abandoned
when they were perceived as low-value or inconvenient, or when
they were overridden by subjective judgment. We discuss how tools
and expert recommendations can better align with user needs.
Molly Davies, Daniel Marino, Amelia Nash, Kevin A. Roundy, Mahmood Sharif, Acar Tamersoy. Training Older Adults to Resist Scams with Fraud Bingo and Scam Detection Challenges Designing Interactions for the Ageing Populations Workshop at CHI
April 25, 2020. Honolulu, HI.
[PDF]
Older adults are disproportionately affected by scams, many of
which target them specifically. We present two educational
interventions targeted at older adults. The first is Fraud
Bingo, an intervention designed by the WISE & Healthy Aging
Center in Southern California prior to 2012 that has been played
by older adults throughout the United States. The second is the
Scam Defender Obstacle Course (SDOC), an interactive web
application that tests a user’s ability to identify scams and
then teaches them how to recognize those scams.
Saurabh Shintre, Kevin A. Roundy, and Jasjeet Dhaliwal. Making Machine Learning Forget. ENISA Annual Privacy Forum (APF)
June 13-14, 2019. Rome, Italy.
[PDF]
We specifically analyze how the “right-to-be-forgotten” provided by the European Union General Data Protection Regulation can be implemented on current machine learning models and which techniques can be used to build future models that can forget. This document also serves as a call-to-action for researchers and policy-makers to identify other technologies that can be used for this purpose.
Mahmood Sharif, Kevin A. Roundy, Matteo Dell'Amico, Christopher Gates, Daniel Kats, Lujo Bauer, Nicolas Christin. A Field Study of Computer-Security Perceptions Using Anti-Virus Customer-Support Chats. Conference on Human Factors in Computing Systems (CHI)
Glasgow, Scotland, UK. May 4-9, 2019.
[PDF]
To identify needs for improvement in security products, we study security concerns raised in Norton Security customer support chats. We found that many consumers face technical support scams and are susceptible to them. The findings also show the value of customer support centers: 96% of customers who reach out for support in relation to scams have not paid the scammers.
David Silva, Matteo Dell'Amico, Michael Hart, Kevin A. Roundy, Daniel Kats. Hierarchical Incident Clustering for Security Operation Centers. Interactive Data Exploration and Analytics (IDEA) @ KDD.
London, England, UK. 19 August 2018.
[PDF]
We enable security incident responders to dispatch multiple similar security incidents at once through an intuitive user interface. The heart of our algorithm is a visualized hierarchical clustering technique that enables responders to identify the appropriate level of cluster granularity at which to dispatch multiple incidents.
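As a rough illustration of the underlying idea (not the paper's system),
the sketch below clusters a handful of fabricated incident feature vectors
hierarchically with SciPy and shows how sweeping the cut threshold changes
the granularity at which whole clusters of incidents could be dispatched
together.

    # Illustrative sketch: agglomerative clustering of security incidents with
    # an adjustable cut height. The incident feature vectors are fabricated.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    incidents = np.array([
        [1.0, 0.0, 0.2],   # hypothetical per-incident features (e.g. alert-type counts)
        [0.9, 0.1, 0.3],
        [0.0, 1.0, 0.8],
        [0.1, 0.9, 0.9],
    ])

    Z = linkage(incidents, method="average", metric="euclidean")

    # A responder can sweep the threshold to find the granularity at which
    # clusters are homogeneous enough to dispatch as a unit.
    for threshold in (0.3, 1.0):
        labels = fcluster(Z, t=threshold, criterion="distance")
        print(threshold, labels)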
Kevin A. Roundy, Matteo Dell'Amico, Michael Hart, Daniel Kats, Robert Scott, Michael Spertus, Acar Tamersoy. Smoke Detector: Cross-Product Intrusion Detection With Weak Indicators. Annual Computer Security Applications Conference (ACSAC)
Orlando, Florida, USA. December 4-8, 2017.
[PDF]
Smoke Detector significantly expands upon limited collections of hand-labeled security incidents by framing event data as relationships between events and machines, and performing random walks to rank candidate security incidents. Smoke Detector significantly increases incident detection coverage for mature Managed Security Service Providers.
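The toy sketch below illustrates the ranking step with a random walk with
restart over a small, fabricated graph of events and machines, seeded by a
single hand-labeled incident; it shows the general technique rather than
Smoke Detector's implementation.

    # Illustrative sketch: random walk with restart over a small event/machine
    # graph, ranking unlabeled nodes by proximity to a labeled incident seed.
    import numpy as np

    # Fabricated adjacency over 5 nodes (events and machines mixed together).
    A = np.array([
        [0, 1, 1, 0, 0],
        [1, 0, 1, 0, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 1, 0, 1],
        [0, 0, 0, 1, 0],
    ], dtype=float)
    P = A / A.sum(axis=1, keepdims=True)              # row-stochastic transitions

    restart = np.array([1, 0, 0, 0, 0], dtype=float)  # seed: one known incident
    alpha = 0.15                                      # restart probability
    scores = restart.copy()
    for _ in range(100):                              # iterate to convergence
        scores = alpha * restart + (1 - alpha) * scores @ P

    print(np.argsort(-scores))  # nodes ranked as candidate incidents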
Shang-Tse Chen, Yufei Han, Duen Horng Chau, Christopher Gates, Michael Hart, Kevin A. Roundy. Predicting Cyber Threats with Virtual Security Products. Annual Computer Security Applications Conference (ACSAC)
Orlando, Florida, USA. December 4-8, 2017.
[PDF]
We set out to predict which security events and incidents a security product would have detected had it been deployed, based on the events produced by other security products that were in place. We discovered that the problem is tractable, and that some security products are much harder to model than others, which makes them more valuable.
Robert Pienta, Fred Hohman, Alex Endert, Acar Tamersoy, Kevin Roundy, Chris Gates, Shamkant Navathe, Duen Horng (Polo) Chau. VIGOR: Interactive Visual Exploration of Graph Query Results IEEE Transactions on Visualization and Computer Graphics (VAST)
Phoenix, Arizona, USA. 1-6 October, 2017.
[PDF]
We present VIGOR, a novel interactive visual analytics system, for exploring and making sense of graph query results. VIGOR contributes an exemplar-based interaction technique and a feature-aware subgraph result summarization. Through a collaboration with Symantec, we demonstrate how VIGOR helps tackle real-world cybersecurity problems.
Kyle Soska, Chris Gates, Kevin A. Roundy, and Nicolas Christin. Automatic Application Identification from Billions of Files Applied Data Science Paper. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD).
Halifax, Nova Scotia. August 13-17, 2017.
[PDF]
Mapping binary files into software packages enables malware detection and other tasks, but is challenging. By combining installation data with file metadata that we summarize into sketches, from millions of machines and billions of files, we can use efficient approximate clustering techniques to map files to applications automatically and reliably.
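A small, hypothetical sketch of the summarization step: MinHash signatures
computed over sets of file-metadata tokens allow cheap similarity estimates
between files, which is what makes approximate clustering feasible at the
scale of billions of files. The tokens, hash construction, and parameters
below are invented for illustration.

    # Illustrative sketch: MinHash signatures over file-metadata token sets,
    # so files that share metadata get similar signatures and can be grouped.
    import hashlib

    def minhash(tokens, num_hashes=64):
        """Return a MinHash signature for a set of string tokens."""
        signature = []
        for i in range(num_hashes):
            salt = str(i).encode()
            signature.append(min(
                hashlib.md5(salt + t.encode()).hexdigest() for t in tokens
            ))
        return signature

    def estimated_jaccard(sig_a, sig_b):
        return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

    file_a = {"vendor:acme", "dir:Program Files/Acme", "name:acme_update.exe"}
    file_b = {"vendor:acme", "dir:Program Files/Acme", "name:acme_helper.dll"}
    print(estimated_jaccard(minhash(file_a), minhash(file_b)))  # roughly 0.5 here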
Bo Li, Kevin Roundy, Chris Gates, Yevgeniy Vorobeychik. Large-Scale Identification of Malicious Singleton Files Full Paper. 7th ACM Conference on Data and Application Security and Privacy (CODASPY), Acceptance Rate 16%
Scottsdale, AZ. March 22-24, 2017.
[PDF]
94% of the software files that Symantec saw in a 1-year dataset appeared only once, on a single machine. We examine the primary reasons why both benign and malicious software files appear as singletons, and design a classifier to distinguish between these two classes of singleton software files.
Sucheta Soundarajan, Acar Tamersoy, Elias Khalil, Tina Eliassi-Rad, Duen Horng Chau, Brian Gallagher and Kevin Roundy. Generating Graph Snapshots from Streaming Edge Data Poster Paper. 25th International World Wide Web Conference (WWW)
Montreal, Canada. Apr 11-15, 2016.
[PDF].
We study the problem of determining the proper aggregation granularity for a stream of time-stamped edges. To this end, we propose ADAGE and demonstrate its value in automatically finding the appropriate aggregation intervals on edge streams for belief propagation to detect malicious files and machines.
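As a toy illustration of the quantity ADAGE tunes, the sketch below buckets
a fabricated stream of time-stamped file-machine edges into graph snapshots
at a candidate aggregation interval; ADAGE's contribution is choosing that
interval automatically rather than by hand.

    # Illustrative sketch: aggregate a time-stamped edge stream into snapshots.
    from collections import defaultdict

    edge_stream = [                      # (seconds, file, machine), fabricated
        (0, "fileA", "machine1"), (5, "fileA", "machine2"),
        (61, "fileB", "machine1"), (125, "fileB", "machine3"),
    ]

    def snapshots(edges, interval_seconds):
        buckets = defaultdict(list)
        for timestamp, src, dst in edges:
            buckets[timestamp // interval_seconds].append((src, dst))
        return dict(buckets)

    # Each snapshot would then feed a detector such as belief propagation;
    # the open question ADAGE answers is how to pick interval_seconds.
    print(snapshots(edge_stream, interval_seconds=60))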
Acar Tamersoy, Kevin A. Roundy, and Duen Horng (Polo) Chau. Guilt By Association: Large Scale Malware Detection by Mining File-Relation Graphs ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Industrial Track (KDD)
New York City, NY. August 24-27, 2014
[PDF]
We present AESOP, a scalable algorithm that identifies malicious executable files by leveraging a novel combination of locality-sensitive hashing and belief propagation. AESOP attained early labeling of 99% of benign files and 79% of malicious files with a 0.9961 true positive rate at 0.0001 false positive rate.
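To give a flavor of the locality-sensitive hashing step (with fabricated
data and parameters, not AESOP's configuration), the sketch below computes
MinHash signatures over each file's set of machines and links files that
collide in an LSH band; belief propagation would then run over the
resulting file-relation graph to propagate benign and malicious labels.

    # Illustrative sketch of LSH banding over file-to-machine sets. Files that
    # land in the same band bucket become candidate neighbors in the graph.
    import hashlib
    from collections import defaultdict

    def minhash_signature(machines, num_hashes=16):
        return [min(hashlib.sha1(str(i).encode() + m.encode()).hexdigest()
                    for m in machines) for i in range(num_hashes)]

    def lsh_buckets(file_to_machines, bands=4):
        buckets = defaultdict(set)
        for file_id, machines in file_to_machines.items():
            sig = minhash_signature(machines)
            rows = len(sig) // bands
            for b in range(bands):
                key = (b, tuple(sig[b * rows:(b + 1) * rows]))
                buckets[key].add(file_id)
        return [files for files in buckets.values() if len(files) > 1]

    files = {
        "file1": {"m1", "m2", "m3"},
        "file2": {"m1", "m2", "m4"},   # overlapping machine set
        "file3": {"m9"},
    }
    print(lsh_buckets(files))  # candidate neighbor groups (may be empty for toy data)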
Kevin A. Roundy and Barton P. Miller. Binary-Code Obfuscations in Prevalent Packer Tools ACM Computing Surveys (CSUR) Volume 46 Issue 1, October 2013
[PDF]
Kevin A. Roundy. Hybrid Analysis and Control of Malicious Code Ph.D. Dissertation, deposited on May 2nd, 2012
[PDF]
Andrew R. Bernat, Kevin A. Roundy, and Barton P. Miller. Efficient, Sensitivity Resistant Binary Instrumentation International Symposium on Software Testing and Analysis (ISSTA)
Toronto, Canada, July 2011.
[PDF]
Kevin A. Roundy and Barton P. Miller.
Hybrid Analysis and Control of Malware Binaries Recent Advances in Intrusion Detection (RAID)
Ottawa, Canada, September 2010
[PDF]
Professional Service:
ACSAC Steering Committee and Posters / Works in Progress Chair in 2019-2020 and Program Committee Member from 2018-2020,
Deep Learning and Security Workshop Program Committee @ IEEE S&P 2018-2020,
SecDev Practitioners Program Committee 2018-2019,
and Interactive Data Exploration and Analytics Workshop (IDEA) @ KDD from 2016-2018.
Dissertation Topic:
I have a background in machine learning and database research,
but my Ph.D. research focused on building tools that make
challenging malware analysis tasks easier to solve. Suppose you were
a security analyst at a big firm and someone dropped a nasty virus on
you. Your task would be to quickly understand that piece of malware,
to find out how it got into your system and what it did to you.
I have extended the Dyninst Application Programming Interface
so that you can take a program in its final binary version, without
needing source code, even if it is evasive, defensive malware like the
virus in our example, and find its code, analyze it, modify it, and
control its execution.
Dyninst is a well-stocked toolbox that makes it easy for
software engineers and security analysts to quickly build customized
test suites and analysis tools. The most important tools in the
box are control- and data-flow analyses and binary
instrumentation.
If you were the security analyst in my example and needed to
understand a nasty virus, you could build up an understanding of it
from first principles, but this is not an easy task, since most malware
strongly resists analysis. To find the code in the .exe file, you
would have to separate the code bytes from a mixture of junk and data
bytes. To understand the code, you would have to see through all
of the obfuscations that malware authors use to hide its meaning. To
monitor the malware's execution, you would have to circumvent its
evasive techniques. And to observe its nasty hidden behaviors, you
would need mechanisms to help you control and manipulate the
program's execution.
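As a toy illustration of the first of those tasks, finding the code, the
sketch below performs a recursive-traversal disassembly with the Capstone
disassembler: it starts at an entry point, follows direct branch and call
targets, and stops at returns, so that junk bytes wedged between code
regions are never treated as code. The byte string is fabricated, and this
is not Dyninst's parsing algorithm, which handles far harder cases such as
indirect control flow and self-modifying code.

    # Toy recursive-traversal code discovery with Capstone (illustration only).
    from capstone import Cs, CS_ARCH_X86, CS_MODE_32

    code = bytes.fromhex("e802000000c390b8010000005bc3")  # fabricated x86 bytes
    base = 0x1000
    md = Cs(CS_ARCH_X86, CS_MODE_32)

    worklist, seen = [base], set()
    while worklist:
        addr = worklist.pop()
        if addr in seen or not (base <= addr < base + len(code)):
            continue
        for insn in md.disasm(code[addr - base:], addr):
            seen.add(insn.address)
            if insn.mnemonic in ("call", "jmp", "je", "jne"):
                try:
                    worklist.append(int(insn.op_str, 16))  # direct target only
                except ValueError:
                    pass                                   # indirect target: skip
            if insn.mnemonic in ("ret", "jmp"):
                break                                      # end of this code run

    print([hex(a) for a in sorted(seen)])  # addresses identified as code

In this toy example, the padding byte sitting between the two code regions
is correctly left out, because no control flow ever reaches it.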
Today, security analysts do not have tools at their disposal
that can help them accomplish all of these tasks, so they often end
up working out solutions from first principles, which requires
hiring expert analysts. But Dyninst has long been able to find code in
uncooperative binaries, analyze and modify their code, and control
their execution. My primary research contribution has been to extend
Dyninst to make its full capabilities available on malware, even when
it is highly defensive and evasive.
Now, if analyzing a single nasty virus is a daunting task, consider
what it is like for analysts at security companies. One of their
biggest challenges is understanding and categorizing the
thousands of new malware samples that are created each day. Dyninst
provides the tools needed to build malware analysis factories that
automate this process. The analyst tells Dyninst which analyses to
run, which behaviors to log, and how to control the
programs. The analysis factory then executes the malware samples in an
isolated environment and produces the desired reports for each
sample. These factories are easy to build, and we have
implemented an example factory, which security analysts can easily
customize, to serve as a starting point.
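Below is a hypothetical sketch of what such a factory's driver loop might
look like. The configuration fields and helper functions (instrument_and_run,
write_report) are invented for illustration; the real work of instrumenting
and controlling each sample would be done by Dyninst inside an isolated
environment.

    # Hypothetical analysis-factory driver loop (names and config invented).
    import json
    from pathlib import Path

    CONFIG = {
        "analyses": ["control_flow_graph", "unpacked_code_dump"],
        "log_behaviors": ["file_writes", "registry_changes", "network_connections"],
        "timeout_seconds": 300,
    }

    def instrument_and_run(sample_path, config):
        """Placeholder: launch the sample under instrumentation in a sandbox."""
        raise NotImplementedError("drive the instrumentation tool here")

    def write_report(sample_path, results, out_dir):
        report = {"sample": sample_path.name, "results": results}
        (out_dir / f"{sample_path.stem}.json").write_text(json.dumps(report, indent=2))

    def run_factory(sample_dir, out_dir):
        out_dir.mkdir(exist_ok=True)
        for sample in Path(sample_dir).glob("*.exe"):
            try:
                results = instrument_and_run(sample, CONFIG)
            except Exception as err:      # evasive samples may crash or stall
                results = {"error": str(err)}
            write_report(sample, results, out_dir)

    # run_factory("samples/", Path("reports/"))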