Education
Carnegie Mellon University
PhD, HCII, 2001 - 2007
University of Southern California
BA, Psychology, Art History, 1995 - 1999
Work Experience
Københavns Universitet - University of Copenhagen
Current
IT-Universitetet i København
Aug 2009 - Mar 2020
VIRT-EU
Jan 2017 - Dec 2019
UC Irvine
Sep 2007 - Jun 2009
Carnegie Mellon University
Aug 2001 - Jul 2007
Intel Corporation
2004
Summary
I want to understand the world, to distinguish real connections from mere apophenia, to map the questions to the answers. My work lives at the intersection of computer science and social science, with a touch of the humanities thrown in. I started out studying the role of technology in how people maintain relationships, playing around with networks and social structures. I still do that sometimes, but my current interests have shifted towards thinking about privacy, data and ethics in technology design and development.

My current big questions focus on the outcomes of the EU AI Act for technology development. I study issues of data quality, explainability and synthetic data. Other projects consider what happens when technical infrastructures define legal and governance practices (often unintentionally). I am still very much concerned with the issues of privacy, although, if you ask me, privacy as a term is actually quite meaningless (really, how is YOUR privacy doing today? Can even you answer such a question?). It's a red herring that shifts our attention from the issues at stake - that data is a relational quantity, and that its creation and use come with obligations and responsibilities that businesses and governments like to ignore in favor of privacy posturing.

I worry a lot about the ethics and AI debate and the way ethics has been positioned as a "problem" to be "solved" or as a "solution" to the "problem" of AI. Neither is a useful approach, but both are so much simpler than really looking at the issues of computing technologies (really, AI is just software). Yet the rush to make technologists and the technologies they make "more ethical" overlooks the fact that discussions about the ethics of any practice are full of vulnerabilities and can cause what my team and I call "moral stress". None of this is simple. I suppose that's why I love what I do :)