You can now apply to the fourth round of AlgorithmWatch’s Algorithmic Accountability Reporting Fellowship. Running since January 2023, the program has connected journalists and researchers from across Europe and unveiled new stories about automated discrimination on the continent.
Starting in October 2024, the new cohort of fellows will have six months to work on their research. Candidates from fields other than journalism are welcome to apply. We expect each proposal's outcome to include at least one journalistic story, research report, audio or video feature, or similar. The application deadline is 15 September 2024 at 23:59 CET.
In this round, we will focus on the political economy of Artificial Intelligence (AI), such as generative AI or recommender systems. Our goal is to understand how the AI value chain is constructed and how it impacts society, both broadly and with regard to specific population groups. Possible areas of research include:
Conflicts around infrastructure: Data centers and communication cables are causing conflicts with local communities over water and electricity consumption, space, and pollution. Who is impacted, and how?
Data extractivism: AI models require massive amounts of data. How are European companies involved in rights violations committed for the sake of data extractivism along the value chain? Where is the data taken from? Who labels and polishes it?
Rights violations along the value chain: How are the rights of people working to build AI (e.g. click workers) affected? What about people working on the other side of the AI value chain (i.e. platform workers)? Are certain groups disproportionately likely to do this work and to have their rights violated as a result (e.g. young people or migrants)?
The TESCREAL ideology: TESCREAL is a bundle of ideologies linked to the far-right and pervasive in the corporate world of AI (see Gebru and Torres, 2024). How are followers of the TESCREAL ideology organized in Europe? Who are their champions in politics and in the administration? And who is fighting them? In other words, how is the far right linked to AI policy in Europe?
AI as national security: Governments increasingly see themselves as running a new arms race, this time with AI. How does this thinking impact society at large? Can data centers or other technological infrastructure be militarized? How is nationalism creeping into tech?
We encourage applicants to propose stories or research plans based on real-life cases in Europe related to any of these topics (we are not looking for theoretical approaches). This call for applications differs from the automated decision-making angle we have focused on in previous reporting fellowships, but we believe it is closely related to algorithmic accountability: the power structures, be they physical or ideological, underpinning the development of AI need to be scrutinized.
During the selection process, we will evaluate potential connections between the proposals and may suggest that shortlisted candidates work jointly with other fellows. This is a suggestion, not a requirement.
