Showing posts with label algorithms. Show all posts

Monday, December 18, 2023

Algorithmic Mechanism Design With Investment, by Akbarpour, Kominers, Li, Li, and Milgrom

 Mechanisms that are computationally complex may require approximation in implementation, which can change the incentive properties of the exact mechanism.   But progress can be made...

Algorithmic Mechanism Design With Investment, by Mohammad Akbarpour, Scott Duke Kominers, Kevin Michael Li, Shengwu Li, Paul Milgrom, Econometrica, First published: 07 December 2023, https://doi.org/10.3982/ECTA19559

Abstract: We study the investment incentives created by truthful mechanisms that allocate resources using approximation algorithms. Some approximation algorithms guarantee nearly 100% of the optimal welfare in the allocation problem but guarantee nothing when accounting for investment incentives. An algorithm's allocative and investment guarantees coincide if and only if its confirming negative externalities are sufficiently small. We introduce fast approximation algorithms for the knapsack problem that have no confirming negative externalities and guarantees close to 100% for both allocation and investment.

From the introduction:

"Approximation algorithms can be combined with pricing rules to produce truthful mechanisms, provided that the algorithm is “monotone” (Lavi, Mu'alem, and Nisan (2003)). In this paper, we study the ex ante investment incentives created by such mechanisms.

"Suppose that one bidder can make a costly investment to change its value before participating in a truthful mechanism. As an initial result, we show that all truthful mechanisms using the same allocation algorithm entail the same investment incentives, so we can regard the investment incentives as properties of the algorithm itself.

"If an allocation algorithm exactly maximizes total welfare, then the corresponding truthful mechanism is a Vickrey–Clarke–Groves (VCG) mechanism. For VCG mechanisms, any single bidder's investment is profitable if and only if it improves total welfare (Rogerson (1992)). In this respect, the VCG mechanisms are essentially unique. We find that a truthful mechanism aligns a bidder's investment incentives with welfare maximization only if there is some set of allocations such that, for generic valuation profiles, its allocation algorithm exactly maximizes welfare over that set. Many practical approximation algorithms do not have this structure and, as a result, lack efficient investment incentives.

"One might also hope that if an allocation algorithm approximately maximizes total welfare, then it generates approximately efficient investment incentives—but we show to the contrary that arbitrarily good approximations can have arbitrarily bad investment guarantees. To make this statement precise, we evaluate an algorithm's performance on any particular instance by the welfare it achieves divided by the maximum welfare. We refer to the worst-case ratio over all instances when values are exogenous as the allocative guarantee, and the worst-case ratio when one bidder's ex ante investment endogenously determines its value as the investment guarantee.1 (The investment guarantee measures welfare net of investment costs.)

"Because the investment guarantee is a worst case over instances and over investment technologies, it is never more than the allocative guarantee. We characterize the algorithms for which the allocative and investment guarantees are equal, and apply those results to evaluate and improve upon standard approximation algorithms."
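For readers curious about the mechanics: the property that lets an approximation algorithm be paired with prices to form a truthful mechanism is monotonicity. Here is a minimal illustrative sketch (not the authors' algorithm) of the standard monotone greedy heuristic for the knapsack allocation problem:

```python
def greedy_knapsack(bids, sizes, capacity):
    """Greedy-by-density knapsack: a monotone approximation algorithm.

    Items are considered in decreasing order of bid per unit of size.
    A bidder who raises her bid only moves earlier in this order, so if
    she was selected before, she is still selected: that monotonicity is
    what allows critical-value payments to make truthful bidding a
    dominant strategy (Lavi, Mu'alem, and Nisan 2003).
    """
    order = sorted(range(len(bids)),
                   key=lambda i: bids[i] / sizes[i], reverse=True)
    chosen, used = [], 0
    for i in order:
        if used + sizes[i] <= capacity:
            chosen.append(i)
            used += sizes[i]
    return chosen

# Bidder 2's large item doesn't fit after the denser items are packed.
print(greedy_knapsack([60, 100, 120], [10, 20, 30], 50))  # [0, 1]
```

Note that this plain greedy rule can have a poor worst-case allocative guarantee (a single large, valuable item can be skipped); the algorithms the paper introduces are more refined, and additionally control the "confirming negative externalities" that drive the investment guarantee.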

Friday, December 1, 2023

Fairness in algorithms: Hans Sigrist Prize to Aaron Roth

 The University of Bern's Hans Sigrist Prize has been awarded to Penn computer scientist Aaron Roth, and will be celebrated today.

Here are today's symposium details and schedule:

Here's an interview:

Aaron Roth: Pioneer of fair algorithms, by Ivo Schmucki. "In December 2023, the most highly endowed prize of the University of Bern will go to the US computer scientist Aaron Roth. His research aims to incorporate social norms into algorithms and to better protect privacy."

"There are researchers who sit down and take on long-standing problems and just solve them, but I am not smart enough to do that," says Aaron Roth. "So, I have to be the other kind of researcher. I try to define a new problem that no one has worked on yet but that might be interesting."

"Aaron Roth's own modesty may stand in the way of understanding the depth of his contributions. In fact, when he authored his doctoral thesis on differential privacy about 15 years ago and then wrote on the fairness of algorithms a few years later, terms like “Artificial Intelligence” and “Machine Learning” were far from being as firmly anchored in our everyday lives as they are today. Aaron Roth was thus a pioneer, laying the foundation for a new branch of research.

"I am interested in real problems. Issues like data protection are becoming increasingly important as more and more data is generated and collected about all of us," says Aaron Roth about his research during the Hans Sigrist Foundation’s traditional interview with the prize winner. He focuses on algorithmic fairness, differential privacy, and their applications in machine learning and data analysis.
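For readers unfamiliar with differential privacy, the canonical building block is the Laplace mechanism. Here is a minimal illustrative sketch (a textbook example, not tied to any of Roth's specific results):

```python
import random

def laplace_mechanism(true_count, epsilon):
    """Release a count under epsilon-differential privacy.

    A counting query changes by at most 1 when one person's record is
    added or removed (sensitivity 1), so adding Laplace(1/epsilon) noise
    masks any individual's presence.  The difference of two independent
    Exponential(epsilon) draws is exactly Laplace(0, 1/epsilon).
    """
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

Smaller epsilon means more noise and stronger privacy; the released count is accurate on average but any single answer reveals little about any one person in the data.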

...

"It is important that more attention is paid to these topics," says Mathematics Professor Christiane Tretter, chair of this year's Hans Sigrist Prize Committee. Tretter says that many people perceive fairness and algorithms as two completely different poles, situated in different disciplines and incompatible with each other. "It is fascinating that Aaron Roth’s work shows that this is not a contradiction."

...

The first step to improving the analysis of large data sets is to be aware of the problem: "We need to realize that data analysis can be problematic. Once we agree on this, we can consider how we can solve the problems," says Aaron Roth.





Monday, October 16, 2023

Refugee resettlement and the top trading cycles algorithm, by Farajzadeh, Killea, Teytelboym, and Trapp

 Here's a recent paper that (among other things) considers using the top trading cycles algorithm for matching refugees to sponsors (under a special program for Ukraine), to satisfy the location preferences of refugees.

Optimizing Sponsored Humanitarian Parole by Fatemeh Farajzadeh, Ryan B. Killea, Alexander Teytelboym, Andrew C. Trapp, working paper, 2023

Abstract: The United States has introduced a special humanitarian parole process for Ukrainian citizens in response to Russia’s 2022 invasion of Ukraine. To qualify for parole, Ukrainian applicants must have a sponsor in the United States. In collaboration with HIAS, a refugee resettlement agency involved in the parole process, we deployed RUTH (Refugees Uniting Through HIAS), a novel algorithmic matching system that is driven by the relocation preferences of refugees and the priorities of US sponsors. RUTH adapts Thakral's [2016] Multiple-Waitlist Procedure (MWP), which combines the main First-In/First-Out (FIFO) queue with location-specific FIFO queues in order to effectively manage the preferences of refugees and the supply of community sponsors. In addition to refugee preferences and sponsor priorities, RUTH incorporates various feasibility considerations such as community capacity and religious and medical needs. The adapted mechanism is envy-free, efficient, and strategy-proof for refugees. Our analysis reveals that refugee preferences over locations are diverse, even controlling for observables, by demonstrating the difficulty of solving a much simpler problem than modeling preferences directly from observables. We use our data for two counterfactual simulations. First, we consider the effects of increased waiting times for refugees on the quality of their matches. We find that with a periodic Top Trading Cycles algorithm, increasing the period length from 24 days to 80 days improves the average rank of a refugee’s match from 3.20 to 2.44. On the other hand, using the available preference data, RUTH achieved an average rank of 4.07 with a waiting time of 20 days. Second, we estimate the arrival rates of sponsors in each location that would be consistent with a long-run steady state. We find that more desirable locations (in terms of refugee preferences) require the highest arrival rates, suggesting that preferences might be a useful indicator for investments in sponsorship capacity. Our study highlights the potential for preference-based algorithms such as RUTH to improve the efficiency and fairness of other rapidly-deployed humanitarian parole processes.
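For readers who haven't seen it, the Top Trading Cycles algorithm used in the counterfactual works by having each agent point at the owner of her favorite remaining object and trading along the cycles that form. A minimal sketch of the classic Shapley–Scarf version (the paper's periodic variant, with sponsor priorities and feasibility constraints, is more elaborate):

```python
def top_trading_cycles(prefs, endowment):
    """Classic Shapley-Scarf TTC.

    prefs[a] is agent a's ranked list of objects; endowment[a] is the
    object a starts with.  Each round, every remaining agent points at
    the owner of her favourite remaining object; agents on a cycle trade
    and leave.  The result is efficient and strategy-proof.
    """
    owner = {obj: a for a, obj in endowment.items()}
    remaining = set(prefs)
    assignment = {}
    while remaining:
        # Each remaining agent points at the owner of her top remaining object.
        points_to = {}
        for a in remaining:
            top = next(o for o in prefs[a] if owner[o] in remaining)
            points_to[a] = (owner[top], top)
        # Follow pointers from any agent; a cycle must eventually repeat.
        path, seen, a = [], {}, next(iter(remaining))
        while a not in seen:
            seen[a] = len(path)
            path.append(a)
            a = points_to[a][0]
        cycle = path[seen[a]:]
        for b in cycle:
            assignment[b] = points_to[b][1]  # b gets the object she pointed at
        remaining -= set(cycle)
    return assignment
```

For example, two agents who each prefer the other's endowment form a single trading cycle and simply swap.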

#######



Wednesday, August 30, 2023

Minimal envy mechanisms, by Julien Combe

 Here's an article I missed when it came out online:

Reallocation with priorities and minimal envy mechanisms, by Julien Combe, Economic Theory volume 76, 551–584 (2023)

"Abstract: We investigate the problem of reallocation with priorities where one has to assign objects or positions to individuals. Agents can have an initial ownership over an object. Each object has a priority ordering over the agents. In this framework, there is no mechanism that is both individually rational (IR) and stable, i.e. has no blocking pairs. Given this impossibility, an alternative approach is to compare mechanisms based on the blocking pairs they generate. A mechanism has minimal envy within a set of mechanisms if there is no other mechanism in the set that always leads to a set of blocking pairs included in the one of the former mechanism. Our main result shows that the modified Deferred Acceptance mechanism (Guillen and Kesten in Int Econ Rev 53(3):1027–1046, 2012), is a minimal envy mechanism in the set of IR and strategy-proof mechanisms. We also show that an extension of the Top Trading Cycle (Karakaya et al. in J Econ Theory 184:104948, 2019) mechanism is a minimal envy mechanism in the set of IR, strategy-proof and Pareto-efficient mechanisms. These two results extend the existing ones in school choice."

Thursday, July 27, 2023

Kidney brouhaha in Israel: is a good deed still good when performed by a shmuck?

 Recently a three way kidney exchange was performed in Israel. This would have been unremarkable under most circumstances: Israel has an active kidney exchange system.  But it caused a strong reaction in the Israeli press, because one of the donors, a  well-known rightwing activist who wanted to donate a kidney so that his brother could receive one, announced that he wanted his kidney to go only to a Jew.

Here's the Ynet story (you can click to render it in English):

A kidney in a transplant marathon: "The condition was - only for a Jew"

Here's the Times of Israel (already in English):

Right-wing journalist causes stir by announcing his kidney would go only to a Jew

There were many more, but you get the idea.  Some of the stories point out that the Israeli National Transplantation Center uses an algorithm* that doesn't see the religion of the recipient, so it's not clear that this was a declaration with consequences.  It was meant to provoke, and it did.

But it's a complicated issue.  In the U.S. (and in Israel), donations can be made to a specific individual, but not to a class of individuals.  With living donation, it means that the donor can choose a specific person to donate to, and it isn't an issue how they choose: no one has to donate an organ to anyone, and every donation saves a life (and maybe more than one, particularly since  living donation reduces competition for scarce deceased-donor kidneys). So if this donor had been able to donate to his brother, no one would have thought twice that he was glad to be donating to a fellow Jew.  What made his announcement provocative was that his kidney wasn't going to his brother: his brother was getting a kidney from an anonymous other donor. [Update clarification/correction: this donation was apparently an undirected (except for the 'only' condition) altruistic donation, not part of an exchange involving the donor's brother.]

Among the people I corresponded with about this is Martha Gershun, a kidney donor who thinks and writes clearly, and has given me permission to quote some of what she said.

"I’m wondering if we find the presentation of the story troubling:  “Right-wing journalist and Temple Mount activist causes stir by announcing his kidney would go only to a Jew.”  We would react badly to a story that said:  “Right-wing Trump supporter says he will only give his kidney to a white man.”

"What if instead the stories were:  “Observant Jewish father of 8 wants to donate to a fellow Jew” and “Rural man from West Virginia seeks to help another in his community”?  Would we find those stories more acceptable?"

Part of the feeling that this is a bit complicated has to do with the fact that we don't (and maybe shouldn't) look gift horses in the mouth, i.e. we don't and maybe shouldn't delve deeply into the motivation of altruistic acts that do a lot of good. We should applaud good deeds even if they aren't performed by saints. (I blogged yesterday, about paying it forward, an umbrella term for doing good deeds in a spirit of gratitude for having ourselves benefited  from past good deeds performed by others. We generally don't find it necessary to condition our approval on precisely who receives the forward-paid gifts.)

So, while I'm not sorry to see that this statement by a kidney donor is a much discussed provocation, I'm inclined to think that a good deed remains a mitzvah even if not performed by a tzadik, as we might have said in our New York English when I was growing up.

I'll give the last word to a Haaretz op-ed, also in English:

 Is It Kosher to Donate Kidneys Only to Other Jews? A well-known religious journalist in Israel declared the "Jews-only" donation of his kidney. His act is imperfect, but not immoral, by Robby Berman

+++++++++++++++

*On the algorithm used in Israel and elsewhere, see e.g.

Wednesday, January 15, 2020 Kidney Exchange in Israel (supported by Itai Ashlagi)




************
Update: related subsequent post 


Monday, May 29, 2023

Further progress on course allocation, by Budish, Gao, Othman, Rubinstein and Zhang

 Here are some new developments in the course allocation mechanism used initially at Wharton and now elsewhere.  It turns out that strategyproofness in the (very) large doesn't imply strategyproofness in samples of realistic size, but this seems to be fixable (and profitable manipulations were not easy to find). The paper concludes with some far-ranging thoughts on the econ-CS interface.

Practical algorithms and experimentally validated incentives for equilibrium-based fair division (A-CEEI), by Eric Budish, Ruiquan Gao, Abraham Othman, Aviad Rubinstein, and Qianfan Zhang

Abstract: "Approximate Competitive Equilibrium from Equal Incomes (A-CEEI) is an equilibrium-based solution concept for fair division of discrete items to agents with combinatorial demands. In theory, it is known that in asymptotically large markets:

•For incentives, the A-CEEI mechanism is Envy-Free-but-for-Tie-Breaking (EF-TB), which implies that it is Strategyproof-in-the-Large (SP-L).

•From a computational perspective, computing the equilibrium solution is unfortunately a computationally intractable problem (in the worst-case, assuming PPAD≠FP).

We develop a new heuristic algorithm that outperforms the previous state-of-the-art by multiple orders of magnitude. This new, faster algorithm lets us perform experiments on real-world inputs for the first time. We discover that with real-world preferences, even in a realistic implementation that satisfies the EF-TB and SP-L properties, agents may have surprisingly simple and plausible deviations from truthful reporting of preferences. To this end, we propose a novel strengthening of EF-TB, which dramatically reduces the potential for strategic deviations from truthful reporting in our experiments. A (variant of) our algorithm is now in production: on real course allocation problems it is much faster, has zero clearing error, and has stronger incentive properties than the prior state-of-the-art implementation."

Here's an intriguing passage:

"In Section 6 we use our manipulation-finding algorithm in combination with our fast A-CEEI finding algorithm to explore the plausibility of effective manipulations for students bidding in A-CEEI. Originally, we had expected that since our mechanism satisfies the EF-TB and SP-L properties, it would at least be practically strategyproof — if even we don’t really understand the way our algorithm chooses among the many possible equilibria, how can a student with limited information learn to strategically bid in such a complex environment?

"Indeed, in 2 out of 3 schools that we tested, our manipulation-finding algorithm finds very few or no statistically significant manipulations at all. However, when analyzing the 3rd school, we stumbled upon a simple and effective manipulation for (the first iteration of) our mechanism. We emphasize that although the manipulation is simple in hindsight, in over a year of working on this project we failed to predict it by analyzing the algorithm — the manipulation was discovered by the algorithm.

"Inspired by this manipulation, we propose a natural strengthening of envy-free (discussed below), which we call contested-envy free. We encode the analogous contested EF-TB as a new constraint in our algorithm (specifically, the integer program for finding optimal budget perturbations). Fortunately, our algorithm is still very fast even with this more elaborate constraint. And, when we re-run our manipulation-finding experiments, we observe that contested EF-TB significantly reduces the potential for manipulations in practice."

...

"Conclusion:  In this work, we give a significantly faster algorithm for computing A-CEEI. Kamal Jain’s famous formulation “if your laptop cannot find it then neither can the market” [Papadimitriou 2007] was originally intended as a negative result, casting doubt on the practical implications of many famous economic concepts because of their worst-case computational complexity results. Even for course allocation, where a heuristic algorithm existed and worked in practice, Jain’s formulation seemed to still bind, as solving A-CEEI involved an intense day-long process with a fleet of high-powered cloud servers operating in parallel. The work detailed in this paper has significantly progressed what laptops can find: even the largest and most challenging real course allocation problems we have access to can now be solved in under an hour on a commodity laptop. 

"This significant practical improvement suggests that the relationship between prices and demand for the course allocation problem—and potentially other problems of economic interest with complex agent preferences and heterogeneous goods—may be much simpler than has been previously believed and may be far more tractable in practice than the worst-case theoretical bounds. Recalling Jain’s dictum, perhaps many more market equilibria can be found by laptops—or, perhaps, Walras’s original and seemingly naive description of how prices iterate in the real world may in fact typically produce approximate equilibria. 

"Our fast algorithm also opens the door for empirical research on A-CEEI, because we can now solve many instances and see how the solution changes for different inputs. We took it in one direction: empirically investigating the incentives properties of A-CEEI for the first time. For course allocation specifically, this faster algorithm opens up new avenues for improving student outcomes through experimentation. For instance, university administrators often want to subsidize some group of students (e.g., second-year MBA students over first-year MBA students), but are unsure how large of a budget subsidy to grant those students to balance equity against their expectations. Being able to run more assignments with different subsidies can help to resolve this issue."

*************

Earlier related posts:

Thursday, April 23, 2009

Sunday, October 4, 2009

Thursday, May 30, 2013

Monday, August 3, 2015

Tuesday, June 9, 2020

Thursday, December 8, 2022

Three way liver exchange in Pakistan, reported in JAMA Surgery by Salman, Arsalan, and Dar, in collaboration with economist Alex Chan

 Here's an exciting account, just published in JAMA Surgery, of a three way liver exchange in Pakistan, achieved in part by collaboration with economist and market designer Alex Chan (who is on the job market this year).

Launching Liver Exchange and the First 3-Way Liver Paired Donation by Saad Salman, MD, MPH; Muhammad Arsalan, MBBS; Faisal Saud Dar, MBBS, JAMA Surg. Published online December 7, 2022. doi:10.1001/jamasurg.2022.5440 (pdf)

Here are the first paragraphs:

"There is a shortage of transplantable organs almost everywhere in the world. In the US, about 6000 transplant candidates die waiting each year.1 In Pakistan, 30% to 50% of patients who needed a liver transplant are unable to secure a compatible donor, and about 10 000 people die each year waiting for a liver.2 Kidney paired donations, supported by Nobel Prize–winning kidney exchange (KE) algorithms,3 have enabled living donor kidneys to become an important source of kidneys. Exchanges supported by algorithms that systematically identify the optimal set of paired donations have yet to take hold for liver transplant.

"The innovation reported here is the successful implementation of a liver exchange mechanism4 that also led to 3 liver allotransplants and 3 hepatectomies between 3 incompatible patient-donor pairs with living donor–patient ABO/size incompatibilities. These were facilitated by one of the world’s first documented 3-way liver paired donations (LPD) between patient-donor pairs.

"Since 2018 and 2019, we have explored LPD as a strategy to overcome barriers for liver failure patients in Pakistan in collaboration with economist Alex Chan, MPH.2 With LPD, the incompatibility issues with relative donors can be solved by exchanging donors. The Pakistan Kidney and Liver Institute (PKLI) adopted a liver exchange algorithm developed by Chan4 to evaluate LPD opportunities that prioritizes clinical urgency (Model for End-stage Liver Disease [MELD] scores) while maximizing transplant-enabling 2-way or 3-way swaps that ensure that hepatectomies for every donor within each swap have comparable ex ante risk (to ensure fairness). As of March 2022, 20 PKLI liver transplant candidates had actively coregistered living and related but incompatible liver donors. Evaluating these 20 incompatible patient-donor pairs with the algorithm,4 we found 7 potential transplants by two 2-way swaps and the 3-way swap reported. In contrast to ad hoc manual identification of organ exchange opportunities, the hallmark of a scalable organ exchange program is the regular deployment of algorithms to systematically identify possible exchanges. Regular deployment of LPD algorithms is novel.

"A total of 6 procedures took place on March 17, 2022. Patient 1, a 57-year-old man, received a right liver lobe from donor 2, a 28-year-old coregistered donor of patient 2 (56-year-old man), who in turn received a right liver lobe from donor 3, a 35-year-old woman who was a coregistered donor of patient 3. Patient 3, a 46-year-old man, received a right liver lobe from donor 1, a 22-year-old woman who was a coregistered donor of patient 1, completing the cycle (Figure). Five PKLI consultant surgeons and 7 senior registrars led the hepatectomies and liver allotransplants; 6 operating rooms were used simultaneously. One month postsurgery, all patients and donors are robust with no graft rejection. All the donors are doing well in the follow-up visits and have shown no psychological issues."
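The 3-way swap described above is, structurally, a cycle in a directed compatibility graph among incompatible patient-donor pairs. Here is a minimal illustrative sketch of enumerating 2- and 3-way exchange cycles (Chan's actual algorithm additionally prioritizes MELD urgency and equalizes donor risk within each swap):

```python
from itertools import permutations

def find_swap_cycles(compatible, max_len=3):
    """Enumerate 2- and 3-way exchange cycles among incompatible pairs.

    compatible[i][j] is True if pair i's donor can donate to pair j's
    patient.  A cycle (i, j, k) means i's donor gives to j's patient,
    j's donor to k's patient, and k's donor to i's patient.
    """
    n = len(compatible)
    cycles = []
    for size in range(2, max_len + 1):
        for cyc in permutations(range(n), size):
            if cyc[0] != min(cyc):  # one canonical rotation per cycle
                continue
            if all(compatible[cyc[t]][cyc[(t + 1) % size]]
                   for t in range(size)):
                cycles.append(cyc)
    return cycles

# Three pairs whose compatibilities only line up as 0 -> 1 -> 2 -> 0:
donates = [[False, True, False],
           [False, False, True],
           [True, False, False]]
print(find_swap_cycles(donates))  # [(0, 1, 2)]
```

Brute-force enumeration like this only scales to small pools; real exchange programs select an optimal disjoint set of cycles with integer programming, as the RUTH and A-CEEI papers above also do for their constraints.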



Here's a sentence in the acknowledgements:

"We thank Alex Chan, MPH (Stanford University, Palo Alto, California), whose initiative and expertise in economics were the key driving forces for launching liver exchange."

*********
NB: this is a "Surgical Innovation" article, for which the journal requires that there be no more than three authors.

And here are the references cited:

1. Chan A, Roth AE. Regulation of organ transplantation and procurement: a market design lab experiment. Accessed April 28, 2022. https://www.alexchan.net/_files/ugd/a47645_99b1d4843f2f42beb95b94e43547083b.pdf
2. Salman S, Gurev S, Arsalan M, Dar F, Chan A. Liver exchange: a pathway to increase access to transplantation. Accessed April 1, 2022. http://www.hhpronline.org/articles/2021/1/14/liver-exchange-a-pathway-to-increase-access-to-transplantation
3. Henderson D. On marriage, kidneys and the Economics Nobel. Wall Street Journal. October 15, 2012. Accessed March 5, 2022. https://www.wsj.com/articles/SB10000872396390443675404578058773182478536
4. Chan A. Optimal liver exchange with equipoise. Accessed April 23, 2022. https://www.alexchan.net/_files/ugd/a47645_36e252f4df0c4707b6431b0559b03143.pdf
5. Hwang S, Lee SG, Moon DB, et al. Exchange living donor liver transplantation to overcome ABO incompatibility in adult patients. Liver Transpl. 2010;16(4):482-490. doi:10.1002/lt.22017
6. Patel MS, Mohamed Z, Ghanekar A, et al. Living donor liver paired exchange: a North American first. Am J Transplant. 2021;21(1):400-404. doi:10.1111/ajt.16137
7. Braun HJ, Torres AM, Louie F, et al. Expanding living donor liver transplantation: report of first US living donor liver transplant chain. Am J Transplant. 2021;21(4):1633-1636. doi:10.1111/ajt.16396

 ********

Here's a Stanford story on this collaboration:

Stanford student devises liver exchange, easing shortage of organs. A rare three-way exchange of liver transplants in Pakistan was made possible with a new algorithm developed by a Stanford Medicine student.  by Nina Bai

"The liver exchange idea actually came out of a term paper in a first-year market design class at Stanford," Chan said.

"As he learned more about liver transplants, Chan realized there were important biological and ethical differences from kidney transplants. 

...

"Instead of just finding compatible swaps, we want to find swaps that prioritize the most urgent patients first in order to prevent the most deaths," Chan said.

*******

Here are some contemporaneous stories from March in the newspaper Dawn (now that the JAMA embargo on the story is lifted):

Mar 18, 2022 — A highly-trained team of the surgeons headed by PKLI Dean Prof Faisal Dar had performed liver transplants at the institute and other members ...

Monday, October 3, 2022

Choosing (as if) from a menu (by Gonczarowski, Heffetz and Thomas; and by Bó, and Hakimov)

What makes serial dictatorship so obviously strategy-proof is that it gives each participant the opportunity to choose from a menu, and get what he/she picks. So the dominant strategy is to pick what you want (and if you have to delegate the decision by submitting a list of preferences, it is a dominant strategy to state your true preferences).
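The menu logic of serial dictatorship is easy to see in code. A minimal sketch:

```python
def serial_dictatorship(prefs, objects, order):
    """Each agent, in the given order, picks her favourite remaining
    object -- literally choosing from a menu.  That is why truth-telling
    is a dominant strategy even when the choice is delegated by
    submitting a ranked preference list.
    """
    menu = set(objects)
    assignment = {}
    for agent in order:
        pick = next(o for o in prefs[agent] if o in menu)
        assignment[agent] = pick
        menu.remove(pick)
    return assignment

prefs = {'x': ['a', 'b'], 'y': ['a', 'b']}
print(serial_dictatorship(prefs, ['a', 'b'], ['x', 'y']))
# {'x': 'a', 'y': 'b'}
```

Each agent's menu (the remaining objects) is determined entirely by the earlier agents' reports, which is exactly the two-step structure the Gonczarowski, Heffetz, and Thomas paper formalizes as a "menu description."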

Here are two papers differently inspired by that thought, which seek to reformulate matching mechanisms so that they look to each player like choice from a menu.

Strategyproofness-Exposing Mechanism Descriptions by Yannai A. Gonczarowski, Ori Heffetz, Clayton Thomas

Abstract: "A menu description defines a mechanism to player i in two steps. Step (1) uses the reports of other players to describe i's menu: the set of i's potential outcomes. Step (2) uses i's report to select i's favorite outcome from her menu. Can menu descriptions better expose strategyproofness, without sacrificing simplicity? We propose a new, simple menu description of Deferred Acceptance. We prove that -- in contrast with other common matching mechanisms -- this menu description must differ substantially from the corresponding traditional description. We demonstrate, with a lab experiment on two simple mechanisms, the promise and challenges of menu descriptions."

***************

Pick-an-object Mechanisms by Inácio Bó, Rustamdjan Hakimov

Abstract: "We introduce a new family of mechanisms for one-sided matching markets, denoted pick-an-object (PAO) mechanisms. When implementing an allocation rule via PAO, agents are asked to pick an object from individualized menus. These choices may be rejected later on, and these agents are presented with new menus. When the procedure ends, agents are assigned the last object they picked. We characterize the allocation rules that can be sequentialized by PAO mechanisms, as well as the ones that can be implemented in a robust truthful equilibrium. We justify the use of PAO as opposed to direct mechanisms by showing that its equilibrium behavior is closely related to the one in obviously strategy-proof (OSP) mechanisms, but implements commonly used rules, such as Gale-Shapley DA and top trading cycles, which are not OSP-implementable. We run laboratory experiments comparing truthful behavior when using PAO, OSP, and direct mechanisms to implement different rules. These indicate that agents are more likely to behave in line with the theoretical prediction under PAO and OSP implementations than their direct counterparts."

Thursday, September 29, 2022

What is needed to gain support for effective algorithms in hiring, etc?

 Here's an experiment motivated in part by European regulations on transparency of algorithms.

Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence  by Marie-Pierre Dargnies, Rustamdjan Hakimov and Dorothea Kübler

Abstract: "We run an online experiment to study the origins of algorithm aversion. Participants are either in the role of workers or of managers. Workers perform three real-effort tasks: task 1, task 2, and the job task which is a combination of tasks 1 and 2. They choose whether the hiring decision between themselves and another worker is made either by a participant in the role of a manager or by an algorithm. In a second set of experiments, managers choose whether they want to delegate their hiring decisions to the algorithm. In the baseline treatments, we observe that workers choose the manager more often than the algorithm, and managers also prefer to make the hiring decisions themselves rather than delegate them to the algorithm. When the algorithm does not use workers’ gender to predict their job task performance and workers know this, they choose the algorithm more often. Providing details on how the algorithm works does not increase the preference for the algorithm, neither for workers nor for managers. Providing feedback to managers about their performance in hiring the best workers increases their preference for the algorithm, as managers are, on average, overconfident."

"Our experiments are motivated by the recent debates in the EU over the legal requirements for algorithmic decisions. Paragraph 71 of the preamble to the General Data Protection Regulation (GDPR) requires data controllers to prevent discriminatory effects of algorithms processing sensitive personal data. Articles 13 and 14 of the GDPR state that, when profiling takes place, people have the right to “meaningful information about the logic involved” (Goodman and Flaxman 2017). While the GDPR led to some expected effects, e.g., privacy-oriented consumers opting out of the use of cookies (Aridor et al. 2020), the discussion over the transparency requirements and the constraints on profiling is still ongoing. Recently, the European Parliament came up with the Digital Services Act (DSA), which proposes further increasing the requirements for algorithm disclosure and which explicitly requires providing a profiling-free option to users, together with a complete ban on the profiling of minors. Our first treatment that focuses on the workers aims at identifying whether making the algorithm gender-blind and therefore unable to use gender to discriminate, as advised in the preamble of the GDPR and further strengthened in the proposed DSA, increases its acceptance by the workers. The second treatment is a direct test of the importance of the transparency of the algorithm for the workers. When the algorithm is made transparent in our setup, it becomes evident which gender is favored. This can impact algorithm aversion differently for women and men, for example if workers’ preferences are mainly driven by payoff maximization.

"The treatments focusing on the managers’ preferences aim at understanding why some firms are more reluctant than others to make use of hiring algorithms. One possible explanation for not adopting such algorithms is managerial overconfidence. Overconfidence is a common bias, and its effect on several economic behaviors has been demonstrated (Camerer et al. 1999, Dunning et al. 2004, Malmendier and Tate 2005, Dargnies et al. 2019). In our context, overconfidence is likely to induce managers to delegate the hiring decisions to the algorithm too seldom. Managers who believe they make better hiring decisions than they actually do, may prefer to make the hiring decisions themselves. Our paper will provide insights about the effect of overconfidence on the delegation of hiring decisions to algorithms. Similar to the treatments about the preferences of workers, we are also interested in the effect of the transparency of the algorithm on the managers’ willingness to delegate the hiring decisions. Disclosing the details of the algorithm can increase the managers’ trust in the algorithm."

Wednesday, December 1, 2021

School choice using deferred acceptance algorithms increases competition for selective schools, by Terrier, Pathak and Ren

 Here's a working paper from the LSE which concludes that making it safe for parents to truthfully report their preferences increases the competition for selective schools (called grammar schools, which prioritize students based on admission tests), with the unintended consequence of disadvantaging poorer families in England. The paper contains a good description of past and present school assignment regimes in England.

From immediate acceptance to deferred acceptance: effects on school admissions and achievement in England by Camille Terrier, Parag A. Pathak, and Kevin Ren, Centre for Economic Performance Discussion Paper No.1815, November 2021


"Abstract: Countries and cities around the world increasingly rely on centralized systems to assign students to schools. Two algorithms, deferred acceptance (DA) and immediate acceptance (IA), are widespread. The latter is often criticized for harming disadvantaged families who fail to get access to popular schools. This paper investigates the effect of the national ban of the IA mechanism in England in 2008. Before the ban, 49 English local authorities used DA and 16 used IA. All IA local authorities switched to DA afterwards, giving rise to a cross-market difference-in-differences research design. Our results show that the elimination of IA reduces measures of school quality for low-SES students more than high-SES students. After the ban, low-SES students attend schools with lower value-added and more disadvantaged and low-achieving peers. This effect is primarily driven by a decrease in low-SES admissions at selective schools. Our findings point to an unintended consequence of the IA to DA transition: by encouraging high-SES parents to report their preferences truthfully, DA increases competition for top schools, which crowds out low-SES students."


And here are the paper's concluding sentences:

" In England, selective schools pick students based on test scores, which favors high-SES parents. After the transition to DA, high-SES parents enroll at these schools at higher rates. Selective admissions are widespread throughout education, so our results provide an important caution to equity rationales for DA over IA in settings where selective schools have large market share."

Wednesday, September 15, 2021

School choice in Latin America, in BBC News Mundo

 A news story with an unusually detailed account of school choice algorithms discusses some of the issues in Latin America, in Spanish, in BBC News Mundo. (Google translate does an adequate job...).  One of the issues discussed is that geographic priorities for schools give wealthier families an advantage, and perpetuate geographic segregation.

Qué es el Mecanismo de Boston y por qué expertos denuncian que hay segregación en las asignaciones de escuelas en América Latina  Analía Llorente  BBC News Mundo

[G translate: What is the Boston Mechanism and why experts denounce that there is segregation in school assignments in Latin America]

Some snippets:

"The Boston mechanism allows for a lot of parenting strategy and that means that it generates a lot of inequalities " [says] Paula Jaramillo, Associate Professor in Economics at the Universidad de los Andes in Colombia

...

"The criticism is against the Boston Mechanism because it continues to be applied, but it is also against the deferred acceptance method because it is generating segregation at the neighborhood level," says Caterina Calsamiglia, a leader in the investigation of these methods.

"The specialist refers to the fact that a student obtains more points for his proximity to the preferred school, therefore he has a greater chance of being admitted to it.

"This creates one of the main strategies is the moving neighborhood, decision can only carry out families with middle income to high, creating inequality."

...

"In many places in Latin America the selection method is not even regulated, and where it is, they are not very transparent with parents in communicating the methods.

"We know a lot about what is done by cities in the United States, in Europe, in Asia, we don't know so much about Latin America," says Paula Jaramillo.

...

"In conclusion, the experts believe that there is no magic method that can be applied uniformly in the countries of the region to avoid segregation and inequality in school selection.

"They agree that the deferred acceptance method is the "fairest" but not perfect. There are many factors to take into account from the quality of the schools to their location."

Sunday, August 15, 2021

Fair algorithms for selecting citizen assemblies, in Nature

 Here's a paper that grapples with the problem that not every group in society is equally likely to accept an appointment for which they have been selected, which complicates the problem of selecting representative committees while also giving each potential member approximately the same chance of being selected.

Fair algorithms for selecting citizens’ assemblies. by Bailey Flanigan, Paul Gölz, Anupam Gupta, Brett Hennig & Ariel D. Procaccia, Nature (2021). https://doi.org/10.1038/s41586-021-03788-6

Abstract: Globally, there has been a recent surge in ‘citizens’ assemblies’1, which are a form of civic participation in which a panel of randomly selected constituents contributes to questions of policy. The random process for selecting this panel should satisfy two properties. First, it must produce a panel that is representative of the population. Second, in the spirit of democratic equality, individuals would ideally be selected to serve on this panel with equal probability2,3. However, in practice these desiderata are in tension owing to differential participation rates across subpopulations4,5. Here we apply ideas from fair division to develop selection algorithms that satisfy the two desiderata simultaneously to the greatest possible extent: our selection algorithms choose representative panels while selecting individuals with probabilities as close to equal as mathematically possible, for many metrics of ‘closeness to equality’. Our implementation of one such algorithm has already been used to select more than 40 citizens’ assemblies around the world. As we demonstrate using data from ten citizens’ assemblies, adopting our algorithm over a benchmark representing the previous state of the art leads to substantially fairer selection probabilities. By contributing a fairer, more principled and deployable algorithm, our work puts the practice of sortition on firmer foundations. Moreover, our work establishes citizens’ assemblies as a domain in which insights from the field of fair division can lead to high-impact applications.

...

"To our knowledge, all of the selection algorithms previously used in practice (Supplementary Information section 12) aim to satisfy one particular property, known as ‘descriptive representation’ (that the panel should reflect the composition of the population)16. Unfortunately, the pool from which the panel is chosen tends to be far from representative. Specifically, the pool tends to overrepresent groups with members who are on average more likely to accept an invitation to participate, such as the group ‘college graduates’.  

...

"Selection algorithms that pre-date this work focused only on satisfying quotas, leaving unaddressed a second property that is also central to sortition: that all individuals should have an equal chance of being chosen for the panel.

...

"Although it is generally impossible to achieve perfectly equal probabilities, the reasons to strive for equality also motivate a more gradual version of this goal: making probabilities as equal as possible, subject to the quotas. We refer to this goal as ‘maximal fairness’

...

"the algorithms in our framework (1) explicitly compute a maximally fair output distribution and then (2) sample from that distribution to select the final panel (Fig. 1). Crucially, the maximal fairness of the output distribution found in the first step makes our algorithms optimal. To see why, note that the behaviour of any selection algorithm on a given instance is described by some output distribution; thus, as our algorithm finds the fairest possible output distribution, it is always at least as fair as any other algorithm."



Friday, August 13, 2021

Generalizing deferred acceptance in many-to-one matching with contracts, by Hatfield, Kominers and Westkamp in RESTUD

 Stability, Strategy-Proofness, and Cumulative Offer Mechanisms, by John William Hatfield, Scott Duke Kominers, Alexander Westkamp, The Review of Economic Studies, Volume 88, Issue 3, May 2021, Pages 1457–1502, https://doi.org/10.1093/restud/rdaa052

Abstract: We characterize when a stable and strategy-proof mechanism is guaranteed to exist in the setting of many-to-one matching with contracts. We introduce three novel conditions—observable substitutability, observable size monotonicity, and non-manipulability via contractual terms—and show that when these conditions are satisfied, the cumulative offer mechanism is the unique mechanism that is stable and strategy-proof (for workers). Moreover, we show that our three conditions are, in a sense, necessary: if the choice function of some firm fails any of our three conditions, we can construct unit-demand choice functions for the other firms such that no stable and strategy-proof mechanism exists. Thus, our results provide a rationale for the ubiquity of cumulative offer mechanisms in practice.


Tuesday, February 18, 2020

Market design and algorithmic criminal justice--by Jung, Kannan, Lee, Pai, Roth and Vohra

When fairness isn't your only goal, your other goals may help you choose among competing definitions of fairness.

Fair Prediction with Endogenous Behavior
Christopher Jung, Sampath Kannan, Changhwa Lee, Mallesh M. Pai, Aaron Roth,and Rakesh Vohra
February 17, 2020

Abstract: There is increasing regulatory interest in whether machine learning algorithms deployed in consequential domains (e.g. in criminal justice) treat different demographic groups "fairly." However, there are several proposed notions of fairness, typically mutually incompatible. Using criminal justice as an example, we study a model in which society chooses an incarceration rule. Agents of different demographic groups differ in their outside options (e.g. opportunity for legal employment) and decide whether to commit crimes. We show that equalizing type I and type II errors across groups is consistent with the goal of minimizing the overall crime rate; other popular notions of fairness are not.
*********

And here's a blog post about the paper by one of the authors:

Fair Prediction with Endogenous Behavior
Can Game Theory Help Us Choose Among Fairness Constraints?

"...The crime-minimizing solution is the one that sets different thresholds on posterior probabilities (i.e. uniform thresholds on signals) so as to equalize false positive rates and false negative rates. In other words, to minimize crime, society should explicitly commit to not conditioning on group membership, even when group membership is statistically informative for the goal of predicting crime.

"Why? Its because although using demographic information is statistically informative for the goal of predicting crime when base rates differ, it is not something that is under the control of individuals --- they can control their own choices, but not what group they were born into. And making decisions about individuals using information that is not under their control has the effect of distorting their dis-incentive to commit crime --- it ends up providing less of a dis-incentive to individuals from the higher crime group (since they are more likely to be wrongly incarcerated even if they don't commit a crime). And because in our model people are rational actors, minimizing crime is all about managing incentives. "

Tuesday, September 24, 2019

Algorithms and intelligence at Penn

From Penn Today:
The human driver
As the ability to harness the power of artificial intelligence grows, so does the need to consider the difficult decisions and trade-offs humans make about privacy, bias, ethics, and safety.

"Already, some AI-enabled practices have raised serious concerns, like the ability to create deepfake videos to put words in someone’s mouth, or the growing use of facial recognition technology in public places. Automated results that turned out to reflect racial or gender bias has prompted some to say the programs themselves are racist.

"But the problem is more accidental than malicious, says Penn computer scientist Aaron Roth. An algorithm is a tool, like a hammer—but while it would make no sense to talk about an “ethical” hammer, it’s possible to make an algorithm better through more thoughtful design.

“It wouldn’t be a moral failure of the hammer if I used it to hit someone. The ethical lapse would be my own,” he says. “But the harms that algorithms ultimately do are several degrees removed from the human beings, the engineers, who are designing them.”

"Roth and other experts acknowledge it’s a huge challenge to push humans to train the machines to emphasize fairness, privacy, and safety. Already, experts across disciplines, from engineering and computer science to philosophy and sociology, are working to translate vague social norms about fairness, privacy, and more into practical instructions for the computer programs. That means asking some hard questions, Roth says.

“Of course, regulation and legal approaches have an important role to play, but I think that by themselves they are woefully insufficient,” says Roth, whose book, “The Ethical Algorithm,” with Penn colleague Michael Kearns will be published in November.

"The sheer size of the data sets can make transparency difficult, he adds, while at the same time revealing errors more easily."
*************

Listen also to
The programming ethos
In a podcast conversation, Penn professors Michael Kearns, Aaron Roth, and Lisa Miracchi discuss the ethics of artificial intelligence.

Tuesday, September 3, 2019

Efficiency and Stability in Large Matching Markets, by Che and Tercieux in the JPE

Abstract: "We study efficient and stable mechanisms in matching markets when the number of agents is large and individuals’ preferences and priorities are drawn randomly. When agents’ preferences are uncorrelated, then both efficiency and stability can be achieved in an asymptotic sense via standard mechanisms such as deferred acceptance and top trading cycles. When agents’ preferences are correlated over objects, however, these mechanisms are either inefficient or unstable, even in an asymptotic sense. We propose a variant of deferred acceptance that is asymptotically efficient, asymptotically stable, and asymptotically incentive compatible. This new mechanism performs well in a counterfactual calibration based on New York City school choice data."
"...we develop a new mechanism, called DA with circuit breaker (DACB), that is both asymptotically efficient and asymptotically stable. This mechanism modifies DA to prevent participants from competing excessively. Specifically, all agents are ordered in some manner (for instance, at random), and following that order, each agent applies one at a time to the best object that has not yet rejected him.5 The proposed object then accepts or rejects the applicant, much as in standard DA. If, at any point, an agent applies to an object that holds an application, one agent is rejected, and the rejected agent in turn applies to the best object among those that have not rejected him. This process continues until an agent makes a certain “threshold” number κ of offers for the first time. The stage is terminated at that point, and all tentative assignments up to that point become final. The next stage then begins with the agent who was rejected at the end of the last stage applying to the best remaining object and the number of proposals for that agent being reset to zero. The stages proceed in this fashion until no rejection occurs.

"This “staged” version of DA resembles standard DA except for one crucial difference: the mechanism periodically terminates a stage and finalizes the tentative assignment up to that point. The event triggering the termination of a stage is an agent reaching a threshold number of offers. Intuitively, the mechanism activates a “circuit breaker” whenever the competition “overheats” to such an extent that an agent finds himself at the risk of losing an object he ranks highly to an agent who ranks it relatively lowly (more precisely, above the threshold rank). This feature ensures that each object assigned at each stage goes to an agent who ranks it relatively highly among the objects available at that stage."

Wednesday, August 28, 2019

Matching in Google's internal labor market

Bo Cowgill and Rembrand Koning have written a Harvard Business School case study called Matching Markets for Googlers

Abstract: "This case describes how Google designed and launched an internal matching market to assign individual workers with projects and managers. The case evaluates how marketplace design considerations—and several alternative staffing models—could affect the company’s goals and workers’ well-being. It discusses the details of implementation as well as the intended (and unintended) consequences of the internal match system. The case concludes with a debate about how the Chameleon marketplace could expand to include more Googlers and illustrates what to consider when thinking about launching new matching markets in organizations."

"Kuehn and her team launched the Chameleon program at the end of 2015 to optimize employees’ careers and Google’s business needs. Unlike most internal staffing paradigms, Chameleon did not rely on a central HR coordinator to assign the unit’s hundreds of individual contributors (ICs) to roles in its dozens of teams. Nor did Chameleon rely on self-initiated transfers, nor ad hoc, centrally planned reorganizations.

"Instead, under Chameleon, a staffing marketplace would open up three times during the year. At the start of each round, ICs would use Chameleon’s online platform to submit private rankings of the available roles. In turn, the ICs would be ranked by the manager responsible for each open role. The Chameleon platform would then turn these rankings into matches using a simple but robust marketplace algorithm, assigning ICs to roles for the next 6–18 months."
*******
Not a spoiler: It's a deferred acceptance algorithm...

A big role is played by a pseudonymous Googler whom the case calls Bradford Preston, who was familiar with the market design literature and who "moved to a part-time role so that he could begin a PhD in economics."

There's much more, about getting this kind of internal marketplace adopted. And apparently new Googlers are called Nooglers.

Tuesday, July 30, 2019

Speed bumps for high frequency trading

From the WSJ:

More Exchanges Add ‘Speed Bumps,’ Defying High-Frequency Traders
Over a dozen financial markets plan mechanisms that impose a split-second delay before executing trades, by Alexander Osipovich.

"By 2020, more than a dozen markets in stocks, futures and currencies from Toronto to New York to Moscow will slow down trading via speed bumps or similar features, if all the current planned launches are carried out. Five years ago, only a few markets had speed bumps.
...
"Among the exchanges set to debut their first speed bumps are the London Metal Exchange, which plans to add an eight-millisecond delay to gold and silver futures later this year. Chicago-based Cboe Global Markets hopes to add a speed bump on its EDGA stock exchange in 2020, if it wins regulatory approval.

"LME, Cboe and other markets adopting speed bumps say they want to neutralize “latency arbitrage,” a strategy in which a fast trader takes advantage of a moving price before other players can react.
...
"Cboe’s proposal would force the HFT firm to wait four milliseconds before buying the Ford shares. But the same delay wouldn’t apply if the investor sent Cboe an electronic message canceling his or her $10.00 sell order. That gives the investor a brief window of time to avoid being picked off by the faster trade.

"Most of the latest speed-bump plans have a similar, “asymmetrical” design, meaning they don’t apply equally to all trades. Such speed bumps are typically meant to favor players who publicly quote prices on an exchange, rather than those who attempt to buy or sell using those prices."


Thursday, April 25, 2019

Should NYC school choice diversify school assignments to match applicant demographics?

Some commentators are concerned that features closely correlated with race, for example, can be used in computerized algorithms that don't explicitly use race (see previous two posts here and here). But below is a proposal that sees using features correlated with race as an advantage for achieving diversity in NYC schools, with a view towards making admissions look as diverse as applications.

Here's a report from FastCompany:

How to fix segregation in NYC schools? Let students hack the algorithm
A Nobel Prize winner’s algorithm helps decide which students are placed in which New York schools. A team of students is trying to upgrade it.

"Many of the most desirable, highest-performing schools have a gross disparity between the racial breakdown of who applies versus who eventually attends those schools.

"Data acquired from the Department of Education by IntegrateNYC through a freedom of information request and provided to Fast Company bleakly demonstrates this point. For instance, while white students accounted for one quarter of students who applied in 2017 to Beacon High School, 42% of that Fall’s freshman class was white. At Eleanor Roosevelt High School, 16% of applicants that year were black, yet less than 1% of admitted students were black.
"Part of the problem is that the education children receive from kindergarten to eighth grade is not equal. Students who live in more affluent, largely white neighborhoods have better middle schools, which better prepare students for high school entrance exams. Students from wealthier families are also more likely to be able to afford private test prep for their students. But the city’s current admissions process does nothing to correct this.
...
"The solution students came up with was to create a new matchmaking algorithm that prioritizes factors highly correlated with race such as a student’s census tract, whether they receive free or reduced-price lunch, and whether English is their second language. Such an algorithm would boost disadvantaged students higher up in the matchmaking process, provided they have already passed a school’s screening process."
***********

In NYC, school principals have a lot of agency in determining the input of the school matching algorithm, in the form of preference lists for their schools. The city (i.e. the NYCDOE) provides guidelines for schools. So another approach to achieving more and different diversity would be to provide different guidelines and requirements for schools, that would change the inputs to the matching algorithm (the schools' rank order lists of students), rather than trying to modify the algorithm. My guess is that this would be a more effective, nuanced, and flexible approach.
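The difference between the two approaches is that the students' proposal edits the matching algorithm, whereas adjusting the guidelines changes only the schools' rank order lists and leaves the algorithm untouched. A minimal sketch of the input-side version (the boost scheme, names, and numbers are invented for illustration):

```python
def boosted_rank_order(scores, disadvantaged, boost=10):
    """Turn a school's raw screening scores into its rank order list, adding a
    fixed boost for students flagged as disadvantaged (e.g., free-lunch eligible,
    ESL, or from a target census tract)."""
    return sorted(scores,
                  key=lambda s: scores[s] + (boost if s in disadvantaged else 0),
                  reverse=True)

scores = {"ana": 88, "ben": 92, "carla": 85}
print(boosted_rank_order(scores, disadvantaged={"carla"}))  # carla's 95 now tops the list
print(boosted_rank_order(scores, disadvantaged=set()))      # unboosted ranking
```

The boosted lists then feed into the existing deferred acceptance match unchanged, which is what makes this route flexible: the city can tune the boost, or the flags that trigger it, school by school without touching the algorithm.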