Newswise — LinkedIn Recruiter – a search tool used by professional job recruiters to find candidates for open positions – would function better if recruiters knew exactly how LinkedIn generates its search query responses, a goal made achievable through a framework called “contextual transparency.”

That is what a team of researchers led by NYU Tandon School of Engineering’s Mona Sloane, a Senior Research Scientist at the NYU Center for Responsible AI and a Research Assistant Professor in the Technology, Culture and Society Department, advances in a provocative new study published in Nature Machine Intelligence.

The study is a collaboration with Julia Stoyanovich, Institute Associate Professor of Computer Science and Engineering, Associate Professor of Data Science, and Director of the Center for Responsible AI at New York University, as well as Ian René Solano-Kamaiko, PhD student at Cornell Tech; Aritra Dasgupta, Assistant Professor of Data Science at New Jersey Institute of Technology; and Jun Yuan, PhD Candidate at New Jersey Institute of Technology.  

It introduces the concept of contextual transparency, essentially a “nutritional label” that would accompany results delivered by any Automated Decision System (ADS), a computer system or machine that uses algorithms, data, and rules to make decisions without human intervention. The label would lay bare the explicit and hidden criteria – the ingredients and the recipe – within the algorithms or other technological processes the ADS uses in specific situations. 

LinkedIn Recruiter is a real-world ADS example – it “decides” which candidates best fit the criteria the recruiter wants – but different professions use ADS tools in different ways. The researchers propose a flexible model of building contextual transparency – the nutritional label – so it is highly specific to the context. To do this, they recommend three “contextual transparency principles” (CTP) as the basis for building contextual transparency, each grounded in an approach from a different academic discipline.

  • CTP 1: Social Science for Stakeholder Specificity: This aims to identify the professionals who rely on a particular ADS, how exactly they use it, and what information they need to know about the system to do their jobs better. This can be accomplished through surveys or interviews.
  • CTP 2: Engineering for ADS Specificity: This aims to understand the technical context of the ADS used by the relevant stakeholders. Different types of ADS operate with different assumptions, mechanisms and technical constraints. This principle requires an understanding of both the input (the data used in decision-making) and the output (how the decision is delivered back).
  • CTP 3: Design for Transparency- and Outcome-Specificity: This aims to understand the link between process transparency and the specific outcomes the ADS would ideally deliver. In recruiting, for example, the outcome could be a more diverse pool of candidates facilitated by an explainable ranking model.

Researchers looked at how contextual transparency would work with LinkedIn Recruiter, in which recruiters use Boolean searches – written queries built from AND, OR and NOT operators – to receive ranked results. Researchers found that recruiters do not blindly trust ADS-derived rankings and typically double-check ranking outputs for accuracy, often going back and tweaking keywords. Recruiters told researchers that the lack of ADS transparency challenges efforts to recruit for diversity.
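To make the mechanics concrete, the kind of Boolean query recruiters write can be evaluated against candidate keyword sets roughly as follows. This is a minimal illustrative sketch, not LinkedIn Recruiter's actual implementation; the data and the nested-tuple query representation are assumptions made for the example.

```python
# Hypothetical sketch: evaluating a recruiter's Boolean query (AND/OR/NOT)
# against candidate keyword sets. Illustrative only -- not LinkedIn's code.

def matches(query, keywords):
    """Recursively evaluate a nested Boolean query against a keyword set.

    A query is either a keyword string, or a tuple such as
    ("AND", q1, q2, ...), ("OR", q1, q2, ...), or ("NOT", q).
    """
    if isinstance(query, str):
        return query.lower() in keywords
    op, *args = query
    if op == "AND":
        return all(matches(q, keywords) for q in args)
    if op == "OR":
        return any(matches(q, keywords) for q in args)
    if op == "NOT":
        return not matches(args[0], keywords)
    raise ValueError(f"unknown operator: {op}")

# Toy candidate pool keyed by candidate name (invented data).
candidates = {
    "A": {"python", "statistics", "sql"},
    "B": {"java", "sql"},
    "C": {"python", "java"},
}

# Equivalent of the written query: sql AND (python OR java) NOT cobol
query = ("AND", "sql", ("OR", "python", "java"), ("NOT", "cobol"))
shortlist = [name for name, kw in candidates.items() if matches(query, kw)]
print(shortlist)  # -> ['A', 'B']
```

In practice, recruiters refine results by editing exactly this kind of query – adding or removing keywords – which is why the study's proposed label ties its query-specific information to the search string itself.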

To address the transparency needs of recruiters, researchers suggest that the nutritional label of contextual transparency include passive and active factors. Passive factors comprise information relevant to the general functioning of the ADS and to the professional practice of recruiting overall, while active factors comprise information specific to the Boolean search string, which therefore changes with each query.

The nutritional label would be inserted into the typical workflow of LinkedIn Recruiter users, providing them information that would allow them to both assess the degree to which the ranked results satisfy the intent of their original search, and to refine the Boolean search string accordingly to generate better results.

To evaluate whether this ADS transparency intervention achieves the change that can reasonably be expected of it, researchers suggest using stakeholder interviews about potential changes in the use and perception of ADS, alongside participant diaries documenting professional practice and, where possible, A/B testing.

Contextual transparency is an approach that can help meet the AI transparency requirements mandated in new and forthcoming AI regulation in the US and Europe, such as NYC Local Law 144 of 2021 or the EU AI Act.


About the New York University Tandon School of Engineering

The NYU Tandon School of Engineering is home to a community of renowned faculty and undergraduate and graduate students united in a mission to understand and create technology that powers cities, enables worldwide communication, fights climate change, and builds healthier, safer, and more equitable real and digital worlds. The school’s culture centers on encouraging rigorous, interdisciplinary collaboration and research; fostering inclusivity, entrepreneurial thinking, and diverse perspectives; and creating innovative and accessible pathways for lifelong learning in STEM, from K12 through executive education and new advances in digital learning.

NYU Tandon dates back to 1854, the founding year of both the New York University School of Civil Engineering and Architecture and the Brooklyn Collegiate and Polytechnic Institute. Those institutions evolved independently before merging in 2014 to create what is now known as NYU Tandon. Located in the heart of Brooklyn, NYU Tandon is a vital part of NYU's New York campus and unparalleled global network.

Journal Link: Nature Machine Intelligence