Giovanna Massarotto (University of Pennsylvania)
Regulating Tech Titans: What American Antitrust Can Learn from Europe
Abstract: In 2024, regulating tech giants like Google and Amazon has emerged as a key issue on the U.S. government's agenda, with antitrust law returning to the forefront. Meanwhile, across the Atlantic, Europe has introduced a new law, the Digital Markets Act (DMA), which regulates large online platforms identified as "gatekeepers." The DMA requires gatekeepers to adhere to specific obligations and prohibitions, covering conduct typically subject to case-by-case antitrust scrutiny, in order to ensure fairness and contestability in digital markets. Europe's historical intellectual framework underpins the core features of the DMA, including its legal framework, approach, scope, and purpose. Since 2021, several antitrust bills have proposed a U.S. version of the DMA, aiming to reform antitrust law by adopting a similar legal framework, approach, scope, and purpose. However, this raises critical questions: Does the U.S. antitrust historical intellectual framework support the adoption of the DMA? Would a DMA-type approach be successful in the United States? The conclusion from my comparative historical analysis of the DMA's foundations is no. In making this claim, this article lays out a roadmap for understanding the deep roots of the DMA in European history and tradition. The article makes three important contributions. First, it provides a historical comparative analysis of the U.S. and EU intellectual frameworks by mapping out the roots of two very different antitrust traditions. Second, it unveils the ordoliberal ideology underlying the DMA, which fundamentally differs from the neoclassical way of thinking about and enforcing competition in the United States. Third, it gleans insights that American antitrust could learn from contrasting European approaches to regulating competition. The article concludes by arguing that implementing a DMA-like law in U.S. antitrust would be like forcing a square peg into a round hole. However, Europe does serve as a useful laboratory from which the United States can draw important lessons. As Europe has adapted in a manner consistent with its own framework, so too must the United States.

Bio: Giovanna Massarotto is an Academic Fellow at the Center for Technology, Innovation & Competition (CTIC) at the University of Pennsylvania and an affiliate of the University College London Centre for Blockchain Technologies (UCL CBT). Massarotto's scholarship focuses on how technology affects society and on the intersection of law, economics, and computer science. She is an active scholar and the author of Antitrust Settlements: How a Simple Agreement Can Drive the Economy, published by Wolters Kluwer. In addition to the book, she has published multiple articles that investigate antitrust and regulatory issues related to blockchain, digital markets, and software. Massarotto earned her PhD at Bocconi University in Milan.
Payal Arora (Utrecht University)
Building Inclusive Tech with the Global South
Abstract: What actions and innovations are needed to create an inclusive internet? In the last decade, affordable mobile phones and data plans have brought the next billion users online – mostly young people from the Global South who have come online quickly and engage with the internet in ways that go beyond our common understandings. Today, 90 percent of the world's youth live outside the West, and India and China alone are home to most internet users. Despite having limited resources, these users are increasingly becoming digital creators and innovators in this AI-driven era. It is time we stop underestimating and instead start understanding the creative potential of the Global South. We should seek ways to ethically engage with different cultures, contexts, and conditions to rethink digital opportunities, online safeguards, and creative economies with the world's majority. Inclusion is not an altruistic act. It is an essential element if we are to build a global community that can generate sustainable solutions for how we work, play, love, and live with the planet's limited resources. Join Payal Arora for her talk as she lays out a pathway for inclusive digital futures.

Bio: Payal Arora is a Professor of Inclusive AI Cultures at Utrecht University and co-founder of FemLab, a feminist futures-of-work initiative, and the Inclusive AI Lab, a Global South-centered data-debiasing initiative. She is a leading digital anthropologist with two decades of user-experience research among underrepresented groups, especially in the Global South. She is the author of award-winning books, including 'The Next Billion Users' with Harvard University Press. Forbes called her the 'next billion champion' and the 'right kind of person to reform tech.' About 150 international media outlets have covered her work, including the BBC, the Financial Times, and The Economist. She sits on several advisory boards, including for UN EGOV, LIRNEasia, and UNICEF-UNESCO. Her new book 'From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech' is out with MIT Press. She is Indian, American, and Irish, and currently lives in Amsterdam.
Vaibhav Garg (Comcast Cable)
Beyond Sticks and Carrots: A Vision for AI Swaraj
Abstract: Current approaches to regulating and managing the risks of AI revolve around two themes. The first treats AI as a public good and strives to reduce harm via penalties, either ex ante or ex post. The second treats it as a private good and hopes to engender safer AI via incentives, financial and otherwise. Yet a third way is to approach AI as a common-pool resource, with data, algorithms, and compute as its ecosystem resources. This framing makes possible policy interventions grounded in Ostrom's framework and, philosophically, in a Gandhian school of governance. Such interventions offer three benefits. First, they may be more sustainable, being more capable of evolving at the speed of technological progress because they are driven by inherent self-interest. Second, they are more likely to allow for a more equitable distribution of costs and benefits. Finally, they may lead to the least amount of wasteful compliance, as the underlying goal remains to maximize consumption without diminishing the resource.

Bio: Vaibhav Garg is the Executive Director of Cybersecurity & Privacy Research and Public Policy Research at Comcast Cable. He has a PhD in Security Informatics from Indiana University and an M.S. in Information Security from Purdue University. His research investigates the intersection of cybersecurity, economics, and public policy. He has co-authored over thirty peer-reviewed publications and received the best paper award at the 2011 eCrime Researchers Summit for his work on the economics of cybercrime. He previously served as the Editor in Chief of ACM Computers & Society, where he received the ACM SIGCAS Outstanding Service Award.
Jeffrey Prince (Indiana University)
International Measurements of Data Privacy Preferences, with Implications for Business and Policy
Abstract: Through two separate projects, one published and one under review, we have assembled one of the most internationally expansive collections of privacy preference estimates to date, covering twelve countries that represent approximately one third of the global population. The first project, administered in 2019, examined relative data privacy preferences across six countries, with a heavy focus on Latin America (United States, Germany, Brazil, Colombia, Mexico, and Argentina). These surveys pertained to respondents' wireless carrier, Facebook use, checking account at a bank, and smartphone. The second set of surveys, administered in 2022, also allows for examination of relative data privacy preferences, with additional features that allow for measurement of preferences for data localization, across an even wider range of countries, both geographically and culturally (United States, United Kingdom, Italy, France, South Korea, Japan, and India). These surveys pertained to respondents' financial institution, healthcare app, home smart device, smartphone, and social media. Together, the two sets of surveys cover a wide range of data types, with some overlap across platforms and years. Data categories include financial, health, biometric, social, location, and tastes (e.g., music preferences). During this talk, I will summarize the main findings across these two studies, along with business and policy implications and insights.

Bio: Jeff Prince is Professor and Chair of Business Economics and Public Policy at the Kelley School of Business, Indiana University. He is also the Harold A. Poling Chair in Strategic Management. His specialized fields of research include industrial organization, applied econometrics, strategy, and regulation. He served as Chief Economist at the Federal Communications Commission during 2019 and 2020, where he advised the Commission on economic policy, auction design, data analytics, and antitrust matters. Professor Prince has been recognized for excellence in both his research and his teaching during his time at the Kelley School and while at Cornell. He is an author of multiple textbooks covering a range of core microeconomic and econometric principles in managerial economics and predictive analytics. His research focuses on technology markets and telecommunications; he has published work on dynamic demand for computers, Internet adoption and usage, the inception of online/offline product competition, telecom bundling, the valuation of product features, digital platforms, and data privacy. His research also encompasses topics such as household-level risk aversion, airline quality competition, and regulation in healthcare and real estate markets. His work has appeared in top general-interest journals in both economics and management, including the American Economic Review, the International Economic Review, Management Science, and the Academy of Management Journal. He has also published in top journals in industrial organization, including the Journal of Industrial Economics, the Journal of Economics and Management Strategy, and the International Journal of Industrial Organization. He is currently a co-editor at the Journal of Economics and Management Strategy and is on the board of editors at Information Economics and Policy.
Doyne Farmer (University of Oxford)
The Universality and Predictability of Technology Diffusion
Abstract: Technology diffusion follows S-curves, in which deployment initially accelerates and then levels off. We collect data for 47 technologies ranging from canals to mobile phones and show that the shape of their S-curves is remarkably universal. On average, the Gompertz function explains more than half the variance in the level of technology diffusion at the point of maximum growth, suggesting that while each technology's story is different, the similarities are bigger than the differences. We show that technology S-curve time series suffer from problems of nonstationarity, autocorrelation, heteroscedastic noise, and severe estimation bias. We develop a time series model that takes these problems into account, formulate a method for probabilistically forecasting future deployment, and study how its forecasting accuracy varies as a function of forecasting horizon and stage of development. Application to solar and wind energy indicates that the renewable energy transition will very likely happen quickly, displacing most fossil fuels within 20 years.

Bio: J. Doyne Farmer is Director of the Complexity Economics program at the Institute for New Economic Thinking and Baillie Gifford Professor of Complex Systems Science at the Smith School of Enterprise and the Environment, University of Oxford. He is also an External Professor at the Santa Fe Institute and Chief Scientist at Macrocosm.
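For reference, a standard parameterization of the Gompertz curve named in the abstract above is shown below; this is the textbook form, and the paper's exact specification (including its noise model) may differ.

```latex
% Gompertz S-curve: y(t) is cumulative deployment at time t,
% K the saturation level, b a displacement parameter, c the growth rate.
y(t) = K \exp\!\left(-b\, e^{-c t}\right),
\qquad
\frac{dy}{dt} = c\, y(t)\, \ln\!\frac{K}{y(t)}
```

In this form, growth is fastest when deployment reaches K/e and then levels off toward the saturation level K.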
Klaus Miller (HEC Paris)
Consumers’ Perceived Privacy Violations in Online Advertising
Abstract: In response to privacy concerns about personal data collection and use, the online advertising industry has developed privacy-enhancing technologies (PETs), of which Google's Privacy Sandbox is a prominent example. In this research, we apply dual-privacy theory, which postulates that consumers have intrinsic and instrumental preferences for privacy, to understand perceived privacy violations (PPVs) for current practices and proposals. The key idea is that practices and proposals differ in whether individual data leaves the consumer's machine and in how they track and target consumers; these features affect the intrinsic and instrumental components of privacy preferences, respectively, leading to different PPVs for different practices. We conducted online studies with U.S. consumers to elicit PPVs for various advertising practices. Our findings confirm the intuition that tracking and targeting consumers under the industry status quo of behavioral targeting results in high PPVs. While new technologies that keep data on users' devices reduce PPV compared to behavioral targeting, the reduction is minimal. Group-level targeting does not significantly reduce PPV compared to individual-level targeting. However, contextual targeting, which involves no tracking, significantly lowers PPV. Notably, when tracking is absent, consumers show similar preferences for seeing untargeted ads and no ads. Our results indicate that consumer perceptions of privacy violations may differ from technical definitions. A consumer-centric approach, based, for instance, on dual-privacy theory, is essential for understanding privacy concerns. At a time of significant privacy-related developments, these insights are crucial for industry practitioners and policymakers. (Joint work with Kinshuk Jerath)

Bio: I am an Assistant Professor in the Marketing Department at HEC Paris and a Chairholder at the Hi!PARIS Center on Data Analytics and Artificial Intelligence for Science, Business and Society. My research interests lie at the interface of empirical quantitative marketing, management economics, and information systems; specifically, my research concerns customer management, pricing, advertising, and privacy issues in the digital economy. During my Ph.D., as a post-doctoral scholar, and afterward, I have been a frequent visiting scholar at the Wharton School of the University of Pennsylvania and the Graduate School of Business at Stanford University. My research has been published in top-tier academic journals such as the Journal of Marketing Research and the International Journal of Research in Marketing, as well as in management-oriented journals. In my research projects, I often collaborate with industry to answer research questions at scale. In 2022, I was nominated as an ISMS Early-Career Scholar.
Andrei Hagiu (Boston University/Questrom School of Business)
The Emergence of a Platform Trap
Abstract: On platforms such as marketplaces and social networks, the existence of network effects can mean not only that participating agents benefit when more agents join the platform, but also that agents' outside option gets worse. We show that in such a setting, by pricing dynamically, a monopoly platform can induce rational forward-looking agents to join even though participating agents ultimately end up worse off as a result. Agents face a dynamic collective action problem. We explore the limits of such a platform trap, considering factors such as whether agents can observe other agents' participation decisions and prices, whether the platform can price discriminate, and the relative bargaining power between the platform and individual agents. (Joint work with Julian Wright)

Bio: Andrei Hagiu is an Associate Professor of Information Systems at Boston University's Questrom School of Business. Previously, he was an Associate Professor in the Strategy group at Harvard Business School and in the Technological Innovation, Entrepreneurship, and Strategic Management group at MIT Sloan. Andrei holds a PhD in economics from Princeton University. His research and teaching are entirely focused on platform businesses (e.g., Airbnb, Alibaba, Amazon.com, Google, Grab, Facebook, iPhone, PlayStation, Uber, Upwork) and their unique strategic challenges. He leverages the insights from his research to advise and angel invest in startups attempting to build platforms and marketplaces and to consult with large companies seeking to turn their products into platforms.
Ingmar Weber (Saarland University)
Collected for Profit, Repurposed for Research: Advertising Audience Estimates as a Data Source
Abstract: Facebook, Google, TikTok & Co. generate their revenue from advertising. To provide advertisers with targeting capabilities, these companies collect large amounts of user data to build elaborate profiles. Based on these profiles, an advertiser can then choose to target only, say, female Facebook users living in Norte de Santander, Colombia, who are aged 18-24, who used to live in Venezuela, and who have access to an iOS device. To help advertisers plan their advertising campaigns and the related budget needs, the advertising platforms provide so-called audience estimates of how many of their users match the provided targeting criteria. In the example above, Facebook estimates that there are 1,800 matching users. I will describe how we tap into these audience estimates to (i) monitor international migration, (ii) track digital gender gaps, and (iii) map wealth inequalities. We consistently find that, despite fake profiles, sampling bias, and noise in the inference algorithms, data derived from the advertising platforms provides valuable information that is complementary to other data sources. At the same time, our work shows the risk of identifying vulnerable groups, rather than individuals, which is often not adequately considered in discussions focused on individual privacy.

Bio: Ingmar is an Alexander von Humboldt Professor in AI at Saarland University, where he holds the Chair for Societal Computing. This interdisciplinary area comprises (i) computing of society, i.e. the measurement of different social phenomena, in particular using non-traditional data sources, and (ii) computing for society, i.e. working with partners on implementing solutions to help address societal challenges. Before joining Saarland University, Ingmar was the Research Director for Social Computing at the Qatar Computing Research Institute.
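As a minimal illustration of how such audience estimates can be repurposed for measurement, the sketch below computes a simple digital gender gap indicator from platform audience estimates and census figures. All numbers and variable names are hypothetical placeholders; the actual studies query the platforms' marketing APIs across many countries and apply additional bias corrections.

```python
# Minimal sketch: turning advertising audience estimates into a digital
# gender gap indicator. All numbers are hypothetical placeholders, not
# values from the talk or the underlying studies.

# Platform-reported audience estimates (e.g., users aged 18+ matching a
# gender targeting criterion in one country).
audience = {"female": 1_800_000, "male": 2_400_000}

# Offline population of the same age group, e.g., from census data.
population = {"female": 3_100_000, "male": 3_000_000}

def gender_gap_index(audience, population):
    """Ratio of female to male platform penetration rates.
    Values below 1 indicate that women are underrepresented online."""
    female_rate = audience["female"] / population["female"]
    male_rate = audience["male"] / population["male"]
    return female_rate / male_rate

print(f"Digital gender gap index: {gender_gap_index(audience, population):.2f}")
```

Comparing such indices across countries, and validating them against ground-truth surveys, is the kind of exercise that lets noisy, biased advertising data complement traditional statistics.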
Jeanine Miklós-Thal (University of Rochester)
Digital Hermits
Abstract: When users share multidimensional data about themselves with a firm, the firm learns about the correlations between different dimensions of user data. We incorporate this type of learning into a model of a data market in which a firm acquires data from users with privacy concerns. Each user can share no data, only nonsensitive data, or their full data with the firm. As the firm collects more data and becomes better at drawing inferences about a user's privacy-sensitive data from their nonsensitive data, the share of new users who share no data ("digital hermits") grows. This growth of digital hermits occurs even though the firm offers higher compensation for a user's nonsensitive data and a user's full data as its ability to draw inferences improves. At the same time, the share of new users who share their full data also grows. The model thus predicts a polarization of users' data-sharing choices away from nonsensitive data sharing toward no sharing and full sharing. Our model suggests that recent privacy policies, which focus on control of data rather than inferences, may be misplaced.

Bio: Jeanine Miklós-Thal is the Fred H. Gowen Professor of Economics & Management at the Simon Business School, University of Rochester, and a Research Fellow at CEPR, DIW, and MaCCI. Jeanine's primary research interests lie in industrial organization and digital economics. Her work has been published in leading academic journals in both economics and management, including the Journal of Political Economy, the RAND Journal of Economics, Management Science, and Marketing Science. Jeanine currently serves as Co-Editor at the International Journal of Industrial Organization and as Associate Editor at the RAND Journal of Economics and at Management Science. Jeanine holds a PhD in Economics from the Toulouse School of Economics.
Stefano Puntoni (The Wharton School, University of Pennsylvania)
Offshoring, Automation, and the Legitimacy of Efficiency
Abstract: Collective layoffs can occur for many reasons, often related to a firm's pursuit of greater efficiency and cost reduction, and they tend to trigger negative reactions among the public. Anecdotal evidence suggests that offshoring, one of the most controversial and politicized aspects of globalization, evokes particularly negative reactions. We propose a social contract account of consumer reactions to collective layoffs and demonstrate differential consumer responses to collective layoffs due to offshoring versus other reasons, such as automation. Layoffs due to offshoring are perceived as an especially egregious violation of the normative expectation that firms should support the local community. Data from eleven experimental studies (N = 6,773), public consumer responses to layoffs in a large online community (N = 29,045), and layoff announcements in the European Union (N = 1,261) confirm that consumers react more negatively to collective layoffs due to offshoring than to layoffs for other reasons. Supporting our social contract account, the negative effect of offshoring is stronger when offshoring affects workers in the consumers' home (vs. foreign) country, when the firm is domestic (vs. foreign), and when most customers are domestic (vs. foreign).

Bio: Stefano Puntoni is the Sebastian S. Kresge Professor of Marketing at The Wharton School. He holds a PhD in marketing from London Business School and a degree in Statistics and Economics from the University of Padova, in his native Italy. His research has appeared in several leading journals, including the Journal of Consumer Research, the Journal of Marketing Research, the Journal of Marketing, Nature Human Behaviour, and Management Science. He also writes regularly for managerial outlets such as Harvard Business Review and MIT Sloan Management Review. Most of his ongoing research investigates how new technology is changing consumption and society. He is currently an Associate Editor at the Journal of Consumer Research and at the Journal of Marketing. Stefano teaches in the areas of marketing strategy, new technologies, brand management, and decision making.
Shayne Longpre (MIT)
Consent in Crisis: The Rapid Decline of the AI Data Commons
Abstract: General-purpose artificial intelligence (AI) systems are built on massive swathes of public web data, assembled into corpora such as C4, RefinedWeb, and Dolma. To our knowledge, we conduct the first large-scale, longitudinal audit of the consent protocols for the web domains underlying AI training corpora. Our audit of 14,000 web domains provides an expansive view of crawlable web data and how codified data-use preferences are changing over time. We observe a proliferation of AI-specific clauses to limit use, acute differences in restrictions on AI developers, as well as general inconsistencies between websites' expressed intentions in their Terms of Service and their robots.txt. We diagnose these as symptoms of ineffective web protocols, not designed to cope with the widespread re-purposing of the internet for AI. Our longitudinal analyses show that in a single year (2023-2024) there has been a rapid crescendo of data restrictions from web sources, rendering ~5%+ of all tokens in C4, or 28%+ of the most actively maintained, critical sources in C4, fully restricted from use. For Terms of Service crawling restrictions, a full 45% of C4 is now restricted. If respected or enforced, these restrictions are rapidly biasing the diversity, freshness, and scaling laws for general-purpose AI systems. We hope to illustrate the emerging crises in data consent, for both developers and creators. The foreclosure of much of the open web will impact not only commercial AI, but also non-commercial AI and academic research. Link to paper: https://www.dataprovenance.org/Consent_in_Crisis.pdf (Longpre et al. 2024)

Bio: Shayne Longpre is a PhD candidate at MIT. His research focuses on the data that trains AI models, as well as their societal impact and governance. He leads the Data Provenance Initiative, a research collective of 50+ volunteers passionate about tracing, demystifying, and improving the data used to train AI systems. He also led the open letter encouraging companies to protect independent AI safety research into proprietary models, which was co-signed by 350+ researchers, journalists, and advocates in the field. His work has been covered by the New York Times, the Washington Post, 404 Media, VentureBeat, MIT Tech Review, and IEEE Spectrum.
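Unlike Terms of Service, the robots.txt protocol mentioned above is machine-readable, so the kind of restriction the audit measures can be checked programmatically. A minimal sketch using Python's standard library follows; the directives and crawler names are illustrative examples of AI-specific clauses, not content taken from the paper or from any particular site.

```python
# Minimal sketch: checking whether a robots.txt file restricts an AI crawler.
# The directives below are illustrative; GPTBot is used as an example of an
# AI-specific user agent.
from urllib.robotparser import RobotFileParser

robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(robots_txt)

for agent in ["GPTBot", "Googlebot"]:
    allowed = parser.can_fetch(agent, "https://example.com/article.html")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```

Terms of Service restrictions, by contrast, are expressed in natural language and are not machine-readable, which is one source of the inconsistencies between the two consent signals that the audit documents.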
Annika Stöhr (Ilmenau University of Technology)
Price Effects of Horizontal Mergers - A Retrospective on Retrospectives
Abstract: This comprehensive review of ex-post merger studies assesses the price effects of horizontal transactions to determine whether there are common post-merger price effects, both overall and in specific markets, with the aim of deriving implications for policy makers and competition authorities in terms of effective merger enforcement and competition policy. The review combines and further analyzes the results of 52 retrospective studies covering 82 mergers or horizontal transactions. Overall, it will be shown that the sector in which a transaction takes place is, by itself, not a strong indicator of the direction of price-related merger effects. In contrast, the 'size' or 'importance' of a transaction, as well as market concentration, seem to be correlated with post-transaction price increases, especially in already highly concentrated markets. The review and presentation are intended to demonstrate the overall importance of ex-post evaluations of antitrust decisions for ex-ante competition policy and enforcement.
Bio: After studying media economics, Annika Stöhr completed her PhD on “Economic Evaluation and Reform Implications of German Competition Policy” with a focus on merger control and in particular on non-economic effects and so-called public interests that (should) potentially influence competition regulation. Her research generally operates at the intersection of competition economics, competition law and competition policy, under the premise that innovation and dynamism are both facilitators and goals of functioning competition (regulation). Her current work deals with the regulation of large digital ecosystems, e.g. through Section 19a GWB and the DMA and DSA, as well as with the regulation of algorithmic recommender systems in particular.
Her research benefits from more than two years of professional experience at the German Federal Ministry for Economic Affairs and Climate Action. Since April 2023, she has been working as a postdoctoral researcher at the Chair of Economic Theory at Ilmenau University of Technology.
Christian Peukert (University of Lausanne)
Strategic Behavior and AI Training Data
Abstract: Human-created works represent critical data inputs to artificial intelligence (AI). Strategic behavior can play a major role for AI training datasets, be it in limiting access to existing works or in deciding which types of new works to create, or whether to create new works at all. We examine creators' behavioral change when their works become training data for AI. Specifically, we focus on contributors to Unsplash, a popular stock image platform with about 6 million high-quality photos and illustrations. In the summer of 2020, Unsplash launched an AI research program by releasing a dataset of 25,000 images for commercial use. We study contributors' reactions, comparing contributors whose works were included in this dataset to contributors whose works were not included. Our results suggest that treated contributors left the platform at a higher-than-usual rate and substantially slowed down their rate of new uploads. Professional and more successful photographers react more strongly than amateurs and less successful photographers. We also show that affected users changed the variety and novelty of their contributions to the platform, with long-run implications for the stock of works potentially available for AI training. Taken together, our findings highlight the trade-off between the interests of rightsholders and promoting innovation at the technological frontier. We discuss implications for copyright and AI policy.
Paper available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4807979 (joint work with Florian Abeillon, Jérémie Haese, Franziska Kaiser, and Alexander Staub)
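To make the comparison of treated and untreated contributors concrete, a difference-in-differences-style regression on contributor-level upload counts could look like the sketch below. The data are synthetic and the variable names are placeholders; the paper's actual estimation strategy, controls, and outcome definitions may differ (see the linked working paper).

```python
# Illustrative difference-in-differences sketch: upload activity of
# contributors whose works were (treated=1) or were not (treated=0) included
# in the released training dataset, before and after the release (post).
# Synthetic data; not the paper's actual specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),  # works entered the AI dataset
    "post": rng.integers(0, 2, n),     # observation after the release
})
# Simulate a drop in uploads for treated contributors after the release.
df["uploads"] = (
    5 + 0.2 * df["treated"] - 0.3 * df["post"]
    - 1.5 * df["treated"] * df["post"]
    + rng.normal(0, 1, n)
)

model = smf.ols("uploads ~ treated * post", data=df).fit()
print(model.params["treated:post"])  # estimated effect, here close to -1.5
```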
Christian Peukert is an Associate Professor for Digitization, Innovation and Intellectual Property at the University of Lausanne, Faculty of Business and Economics (HEC Lausanne), Switzerland. He studies how digital technologies and their regulation affect firms, consumers and markets with a focus on the economics of data and artificial intelligence, and intellectual property. His work has been published in Management Science, Marketing Science, Information Systems Research, Strategic Management Journal, Research Policy, and other journals.
Tamar Meshulam (Ben-Gurion University)
What happens when green technology meets reality: On the environmental impacts of the digital sharing economy
Abstract: The digital sharing economy is commonly seen as a promising circular consumption model that could potentially deliver environmental benefits through more efficient use of existing product stocks. Yet whether sharing is indeed more environmentally benign than prevalent consumption models remains unclear. First, sharing might not displace the product it is expected to. For example, Uber might displace walking rather than private cars. Second, sharing might necessitate additional products and services to support the sharing operation. Finally, economic incentives to participate in the sharing economy may raise demand for durable products. Our research suggests that the environmental impact of the sharing economy is more nuanced than previously thought.
Tamar Meshulam is a Ph.D. student at Ben-Gurion University, researching the environmental impacts of technology, particularly within the sharing economy, using data science and industrial ecology methodologies. With a background in both Environmental Management (M.Sc.) and Computer Science (B.Sc.) from Tel Aviv University, Tamar brings a multidisciplinary approach to her research. Prior to her academic pursuits, Tamar accumulated over 15 years of experience in the IT industry. Notably, she has received recognition for her contributions, including the PLATE Best Student Paper Award and 3rd place in the ISIE Best Poster Award in 2021. Tamar's research is generously supported by the Ben Gurion School for Sustainability and Climate Change, the Kreitman School of Advanced Graduate Studies, and the Israel Science Foundation (ISF).
Georgios Petropoulos (MIT Sloan/Stanford University)
Industrial Data Sharing: The Unintended Consequences of the EU's Data Act
Abstract: The Data Act is a new law forthcoming in the European Union that regulates access to the data produced by IoT devices, especially in industrial contexts such as smart manufacturing or smart farming. It aims to facilitate the emergence of new, innovative data-driven services that ultimately yield more efficient market outcomes and higher consumer surplus.
We offer a first analytical study of the economic consequences of the Data Act. Our analysis suggests that, due to its broad scope of application, in many situations the Data Act may well reduce, rather than increase, market efficiency. In particular, the Data Act runs potentially contrary to its policy objective when new data-driven services are substitutes for the IoT device manufacturer's own service and the IoT manufacturer has only limited market power, or when the new service is a complement to the IoT device manufacturer's own service, irrespective of market power. Our analysis suggests that the Data Act should adopt a more targeted approach, conditioning data-access obligations on the type of data-driven service seeking access and on the market power of the IoT manufacturer required to provide that access.
The paper is joint work with Jan Krämer.
Georgios Petropoulos is a research associate at the Initiative on the Digital Economy of the MIT Sloan School of Management and a digital fellow at the Digital Economy Lab of Stanford University. In the summer of 2024, he will become an Assistant Professor at the University of Southern California’s Marshall School of Business.
His research focuses on the implications of digital technologies for innovation, competition policy, and labor markets. He studies how we should regulate big digital platforms, as well as how the adoption of robots and artificial intelligence affects labor productivity and work.
Previously, Georgios was a post-doctoral researcher at MIT Sloan. He holds a B.Sc. in Physics from Aristotle University of Thessaloniki, an M.Sc. in Mathematical Economics and Econometrics from Tilburg University, and a PhD in Economics.
Elizaveta Kuznetsova (Weizenbaum Institute)
Tackling Online Misinformation with Generative AI: A Comparison of ChatGPT and Microsoft Copilot
Abstract: The talk will cover a recent study on the ability of two large language model (LLM)-based chatbots, ChatGPT and Bing Chat (since rebranded as Microsoft Copilot), to detect the veracity of political information. The study uses an AI auditing methodology to investigate how the chatbots evaluate true, false, and borderline statements on five topics: COVID-19, Russian aggression against Ukraine, the Holocaust, climate change, and LGBTQ+ related debates. It compares how the chatbots perform in high- and low-resource languages by using prompts in English, Russian, and Ukrainian. Furthermore, it explores the ability of the chatbots to evaluate statements according to the political communication concepts of disinformation, misinformation, and conspiracy theory, using definition-oriented prompts. The discussion will focus on the potential of LLM-based chatbots for tackling different forms of false information in online environments, pointing to the substantial variation in how this potential is realized depending on specific factors, such as the language of the prompt or the topic. It will also provide an outlook on future studies that apply a similar methodology to a larger set of misinformation items in more languages.
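A bare-bones version of such an audit loop is sketched below. The model name, prompt wording, and statements are placeholders rather than the study's actual protocol, which also covered Microsoft Copilot and prompts in Russian and Ukrainian, and which used definition-oriented prompts for disinformation, misinformation, and conspiracy theory.

```python
# Minimal sketch of an AI-audit loop that asks an LLM-based chatbot to rate
# the veracity of statements. Model name, prompt, and statements are
# illustrative placeholders, not the study's materials.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

STATEMENTS = [
    "COVID-19 vaccines alter human DNA.",             # false claim
    "The Holocaust took place during World War II.",  # true statement
]

PROMPT = (
    "Classify the following statement as TRUE, FALSE, or UNCERTAIN, "
    "and briefly justify your answer.\n\nStatement: {statement}"
)

for statement in STATEMENTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(statement=statement)}],
    )
    print(statement, "->", response.choices[0].message.content)
```

Repeating such prompts across languages and topics, and coding the responses against expert labels, is the essence of the auditing approach described above.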
Elizaveta Kuznetsova is a senior researcher working at the intersection of Communication Studies and International Relations. She leads the research group 'Platform Algorithms and Digital Propaganda' at the Weizenbaum Institute in Berlin. Her research focuses on digital propaganda, social media platforms, and international media. Elizaveta holds a PhD in International Politics from City, University of London. She is a former fellow at the Davis Center, Harvard University, and at the Center for European Studies at Boston University.
Giulio Matarazzi & Germán Oscar Johannsen (Max Planck Institute for Innovation and Competition)
Position Statement of the Max Planck Institute for Innovation and Competition on the Implementation of the Digital Markets Act
Abstract: The Max Planck Institute for Innovation and Competition has published a position statement on the implementation of the Digital Markets Act (DMA), the regulation laying down harmonised rules for core platform services provided or offered by gatekeepers. The Institute raises awareness about the possibly overly broad blocking effects of the DMA on national rules, which may have the unintended consequence of privileging gatekeepers by jeopardizing future national legislative initiatives. This would ultimately obstruct the achievement of contestability and fairness in digital markets. A complementary application of competition rules and effective enforcement of the DMA are, against this backdrop, crucial. Yet there is uncertainty over administrative enforcement mechanisms, and it is unclear what role private enforcement plays in the current legal design of the DMA. The position statement identifies and examines challenges in the implementation of the DMA and offers recommendations for overcoming them.
Link to the position statement: https://doi.org/10.1093/grurint/ikad067
Joint work with: Josef Drexl, Beatriz Conde Gallego, Begoña González Otero, Liza Herrmann, Jörg Hoffmann, Lukas Kestler
Giulio Matarazzi is a Research Fellow at the Max Planck Institute for Innovation and Competition. His research focuses on competition law and the regulation of digital platforms, the internet, and telecommunications, with particular attention to the Digital Markets Act and the European Electronic Communications Regulatory Framework. His professional background includes a period as an associate in the Antitrust Department of the law firm BonelliErede, where he dealt with competition law and unfair commercial practices cases.
Germán Oscar Johannsen is a PhD student at the University of Munich and a research fellow at the Max Planck Institute for Innovation and Competition. His research centers on competition law and policy in digital markets. As a research fellow, he has also developed lines of research on big-data merger control, Internet regulation, and data governance for achieving the Sustainable Development Goals. Germán is also a visiting lecturer in competition law at the Universidad Católica de Chile and an active blogger on tech and competition issues in Latin America.
Carlo Reggiani (European Commission, Joint Research Centre)
Data sharing or algorithm sharing?
Abstract: Data combination and analytics can generate valuable insights for firms and society as a whole. Multiple firms can achieve this by means of new technologies that bring the algorithm to the data ("algorithm sharing") or, more conventionally, by sharing the data itself ("data sharing"). Algorithm-sharing technologies are gaining traction because of their advantages in terms of privacy, security, and environmental impact. We present a model that allows us to study the economic incentives these technologies generate for both the firms and a platform facilitating data combination. We find, first, that the platform chooses data sharing unless the analytics associated with algorithm sharing are sufficiently superior to those associated with data sharing. Second, we identify the properties of the analytics benefit function that ensure that algorithm sharing results in a higher total data contribution. Third, we highlight scenarios in which, in the presence of data externalities, there can be a mismatch between the choice of the platform and the preference of a social planner.
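To illustrate the distinction between the two technologies compared in the abstract, the toy sketch below contrasts pooling raw data centrally ("data sharing") with sending the computation to each data holder and combining only aggregate outputs ("algorithm sharing"). It is a conceptual illustration with made-up numbers, not the economic model analyzed in the paper.

```python
# Toy contrast between "data sharing" (pool raw data centrally) and
# "algorithm sharing" (send the computation to the data, return aggregates).
# Conceptual illustration only; numbers are made up.

firm_data = {
    "firm_A": [12.0, 15.5, 14.2],
    "firm_B": [9.8, 11.1],
    "firm_C": [20.3, 18.7, 19.9, 21.0],
}

def data_sharing(datasets):
    """Each firm uploads its raw records; the platform computes on the pool."""
    pooled = [x for records in datasets.values() for x in records]
    return sum(pooled) / len(pooled)

def algorithm_sharing(datasets):
    """The algorithm travels to each firm; only sums and counts leave."""
    partials = [(sum(records), len(records)) for records in datasets.values()]
    total, count = map(sum, zip(*partials))
    return total / count

# Both routes yield the same insight (here, the pooled mean), but only the
# first requires raw records to leave the firms.
assert abs(data_sharing(firm_data) - algorithm_sharing(firm_data)) < 1e-9
print(round(algorithm_sharing(firm_data), 3))
```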
Carlo Reggiani is a Research Fellow at the European Commission's Joint Research Centre in Seville and a Lecturer in Microeconomics at the University of Manchester. His research focuses on industrial organization and the digital economy, with particular attention to the economic impact of data and platforms. His work has been published in journals including the European Economic Review, the Journal of Economics and Management Strategy, and the International Journal of Industrial Organization, among others.
Jon McLoone (Wolfram)
Synergy of Minds: Balancing Generative AI, Symbolic AI, and Human Intelligence in the Future of Education
Abstract: While the arrival of Generative AI has certainly changed the world, it does not, and will not, provide for all of the world's intelligence needs. This talk will discuss the intrinsic limitations of Generative AI in comparison to Symbolic AI (computation) and human intelligence, and how future technologies must leverage all three to be most effective.
The talk will then discuss how our current educational system is teaching the wrong content and skills to prepare students for the AI age. With particular focus on computational thinking, a roadmap for a future curriculum will be introduced.
Jon McLoone, Director of Technical Communication and Strategy at Wolfram, is central to driving the company's technical business strategy and leading the consulting solutions team. With over 25 years of experience working with Wolfram technologies, Jon has helped direct software development, system design, technical marketing, corporate policy, business strategy, and much more. He makes regular keynote appearances and gives media interviews on topics such as the Future of AI, Enterprise Computation Strategies and Education Reform, across multiple fields including healthcare, fintech, and data science. He holds a degree in mathematics from the University of Durham. Jon is also Co-founder and Director of Development for computerbasedmath.org, an organisation dedicated to the fundamental reform of maths education and the introduction of computational thinking. The movement is now a worldwide force in re-engineering the STEM curriculum, with early projects in Estonia, Sweden, and Africa.
Paul Nemitz (European Commission)
How can AI and its creators serve democracy?
Abstract: Plurality and homogeneity, being and ought, yesterday's data and the imagination of the (as yet) non-existent, structural conservatism and the inertia of technology v. the human drive for political reform, centralisation of power v. division of powers with checks and balances: these are just a few themes on which the culture of global platform technology and AI, on the one hand, and human visions of freedom and a democratic future, on the other, clash. But why are these clashes important? Is it possible that tech platforms, AI, and the ideology of technological solutions strengthen, as collateral damage, populist and authoritarian rule? And that democracy is caught in a pincer movement between tech platforms, AI, and authoritarianism?
While China is a dictatorship and U.S. democracy is in deep crisis, these two powers are held out as models of technological leadership. But do we want to live in a world in which either global corporations or authoritarian political leaders rule, and in which the freedom of individuals as well as democracy have no primacy over technology, business models, and absolutist ideologies?
In his talk, Paul Nemitz discusses how engineers and programmers can re-engage with democracy and steer clear in their work of both the neoliberal wet dream of a world in which technology and technological competition alone determine the rules of how we live and how power, opportunity, and wealth are distributed in society, and of a world view which degrades technology to a tool of totalitarian governments' absolute rule over people.
What the world needs today are "Engineers for democracy": people who design platforms and AI that strengthen and support democracy rather than undermine and destroy it. At the beginning of any such project stand an intention and an understanding of why democracy is worth developing for.