
Practicing consent

Technologies and culture to resist techno-fascism

a story by
Riccardo Silvi

At AI Festival 2025, amidst innovation and technology, reflections emerged on ethics and rights related to the use of artificial intelligence. From the expansion of biometric recognition to the defence of digital human rights, and the role of design as a tool of resistance: this is the moment to reclaim our physical and digital identity and to practise our consent.

What does it mean to lose control over one’s own consent? The concept of techno-fascism, introduced 15 years ago by American historian Janis Mimura1, in her book Planning for Empire: Reform Bureaucrats and the Japanese Wartime State2, is finding new relevance today, particularly in the United States.

The New Yorker columnist Kyle Chayka recently used the term to define what we are experiencing: «a new convergence between Internet entrepreneurs and everyday government operations». In his article3 published in February 2025 in the American magazine, Chayka explains that «American techno-fascism is no longer a philosophical abstraction that Silicon Valley can tinker with».

«It is a political programme whose constitutional limits are being tested».

The rise of AI is accelerating this deterioration in the relationship between citizens, governments, and corporations. In the paper Democratisation in the Age of Artificial Intelligence4, Cupać, Schopmans, and Tuncer-Ebetürk clearly describe what they call “digital authoritarianism”, or «the use of digital information technology by authoritarian regimes to monitor, suppress, and manipulate populations». Referring to large technology corporations, the researchers warn of the emergence of «a new quasi-governmental class that holds political power without democratic legitimacy or accountability». In other words, without any consent.

How can we reclaim the ability to choose and actively negotiate our relationship with technology in increasingly monitored and tracked physical and digital spaces?

Design that enables consent

«The use of technology without awareness or counterbalances is a problem. That is why we have created what can be defined as tools that enable people to make choices and regain critical thinking, rather than being subjected to technology without consent». Rachele Didero, designer and entrepreneur, is the founder and CEO of CAP_ABLE, a fashion-tech startup and design studio developing projects and products at the intersection of design and technology. CAP_ABLE has patented and sold worldwide what they call “AI Clothing for data privacy”: garments designed using AI but intended to deceive AI, protecting individuals from biometric recognition5 systems such as facial recognition.

«These clothes safeguard identity, confuse algorithms, and give people back the power of consent».

«It all started a few years ago in New York, where biometric recognition systems have long been widespread in public spaces. Along with a group of friends, we imagined creating physical protection against cameras that infringed on citizens’ freedom of movement».

Rachele Didero is a fashion and textile designer. While studying between Milan, New York, and Tel Aviv, she developed and patented a fabric that protects against facial recognition. This research continues through her PhD between the Politecnico di Milano and MIT in Boston, and through the fashion-tech startup she founded, Cap_able.

CAP_ABLE’s garments6 are reversible—wearers can choose to be recognised as a “person” or to confuse AI with images of dogs, zebras, giraffes, or human figures embedded in the fabric’s design. Didero speaks of “critical design tech products” and the “designer as a sense maker”. «Ours is, first and foremost, critical design, but at the same time, it is a product that must meet the requirements of scalability and reproducibility, as well as being tech-driven. It involves continuous research to keep up with new technologies, alongside high-quality raw materials and production processes».

On the market since 2023, CAP_ABLE’s products have customers worldwide. The primary market is the United States, particularly California, but they are also sold in Italy, New Zealand, Hong Kong, and many other countries.

Clothes by CAP_ABLE, a project by Rachele Didero. Credits: Marcello Chiesa, Marta Morini, Eugenio Chironna, Ana Alice Alves Santos, Katja Wellnitz. All rights reserved. Reproduced with the consent of the authors.

The biometric recognition market

For CAP_ABLE, consent is not a formal or isolated act but a dynamic process that permeates social interactions and the ways in which public spaces are experienced and regulated. Practising consent means being able to decide who can collect, use, and store biometric data, as well as determining the conditions under which we are visible, tracked, and monitored.

Under the pretext of public security, crime prevention, or, more broadly, the efficiency of urban administration, biometric recognition is rapidly expanding across squares, streets, and national borders worldwide.

According to consulting firm Mordor Intelligence7, «the FBI already conducts an average of 4,055 searches per month to identify individuals» using these systems and has a facial recognition database containing images of over 117 million Americans. This use is growing so significantly that, according to the same analysis, it is expected to increase by 14.3% in the US alone by 2030.

Another study8 by analytics firm Global Market Insights on the “security camera market size” also highlights how AI is accelerating the deployment of advanced surveillance systems in public spaces. «Companies are collaborating to integrate AI more seamlessly into security cameras, primarily used in public spaces such as academic campuses, airports, sports centres, commercial areas, and tourist hotspots. These public spaces are crucial for security, and AI solutions enable real-time tracking». Leading this evolution are often nations with authoritarian political systems, such as China: the “Sharp Eyes Project”9 represents one of the most extreme examples of technology-driven social control. Its declared goal by 2020 was to establish a widespread biometric recognition system, leveraging a network of hundreds of thousands of surveillance cameras across the country. The ambition? To remove any “blind spot” in the surveillance of key roads, ensuring total coverage in densely populated areas and monitoring all strategic sites.

Moreover, initiatives such as the “Digital Silk Road”10, launched to expand China’s influence in the global technology sector, have made China a major provider of digital infrastructure to developing nations. The result is clear: of the 64 countries employing Chinese technology for “safe and smart cities”, 41 have been classified by Freedom House, an international non-governmental organisation, as “not free” or “partially free”11.

Defending digital human rights

In Europe, efforts are being made to take the opposite approach: one focused on defending digital rights and informed consent. Hermes Center is a concrete example of a movement advocating for consent as practice, not only in physical environments but also in digital spaces. «Every day we are presented with a choice», explains Alessandra Bormioli, digital rights activist at Hermes Center.

«When we use digital tools—whether it’s a browser, an app, or communication software—we are deciding whether or not to protect our privacy».

«But the problem is that we often are not even aware of this choice, because the system is designed to obtain our consent without us realising it».

Hermes Center is an association founded in Milan in 2012 that advocates for a society in which technology is an enabling tool for freedom, not surveillance. The association is made up of people who share universal values: freedom of expression and movement, protection of human rights, transparency and openness of software and algorithms, defence of vulnerable groups, and accountability for those in power. For almost two years, Alessandra Bormioli has worked at Hermes Center as a Digital Rights Activist.

Hermes Center operates in Italy and Europe through coordinated efforts in research, analysis, and education on digital human rights. It also develops free software aimed at enhancing online freedom of expression, from whistleblowing to investigative journalism. «We are the watchdogs of digital human rights». Hermes recently played a key role at AI Festival 2025, an event that, in its second edition, attracted over 10,000 attendees and more than 200 speakers. During a panel titled Artificial Intelligence: between progress and human rights protection, Alessandra Bormioli discussed the state of human rights in digital environments with journalists Kevin Carboni and Walter Ferri, following the implementation of the European Artificial Intelligence Act. «We often have an overly optimistic view of AI advancements without considering the associated risks. During the panel, we explored this issue, addressing the relationship between security, technology, and social control».

Alessandra Bormioli, Kevin Carboni, and Walter Ferri during the AI Festival on the panel “Artificial Intelligence between progress and protection of human rights”. All rights reserved. Reproduced with the consent of the authors.

«As Hermes, we are constantly working to help society develop awareness of these issues, inviting people to inform themselves about the risks and opportunities of technology, but above all to believe in and defend their rights, also and especially in the digital sphere. We also use art to do this, as with the exhibition When they see us, held in Bologna in September 2024».

A crucial and ongoing effort. «We are currently working12 on the AI Act’s legislative process in Italy, particularly through the Digital Human Rights Network, advocating for the government to establish an independent authority, separate from politics, to oversee AI use in Italy and ensure proper enforcement of European regulations».

«Change always happens at different speeds», concludes Bormioli. «The change that starts from the bottom, from communities, is inevitably slower than the change driven by business, as in big-tech systems. But we can say with certainty that awareness of digital rights is growing. AI does not exist without data. And if citizens cannot choose, losing the power of informed consent, AI inevitably becomes a mechanism of control».


  1. The essay aims to reveal the modern roots of the interaction between technology and ideology by starting with the account of the establishment of Japanese fascism in the 1930s. ↩︎
  2. Mimura, J. (2011). Planning for empire: reform bureaucrats and the Japanese wartime state. Choice Reviews Online, 49(04), 49–2235. https://doi.org/10.5860/choice.49-2235 ↩︎
  3. Chayka, K. (2025, February 26). Elon Musk, and how Techno-Fascism has come to America. The New Yorker. https://www.newyorker.com/culture/infinite-scroll/techno-fascism-comes-to-america-elon-musk ↩︎
  4. Cupać, J., Schopmans, H., & Tuncer-Ebetürk, İ. (2024). Democratization in the age of artificial intelligence: introduction to the special issue. Democratization, 31(5), 899–921. https://www.tandfonline.com/doi/full/10.1080/13510347.2024.2338852 ↩︎
  5. Technologies developed for the identification and authentication of people based on unique physical or behavioural characteristics. In addition to facial recognition, examples include fingerprinting, iris and retina scanning, voice recognition, hand geometry, and gait analysis. ↩︎
  6. To learn more about the project: https://www.capable.design/pages/chi-siamo ↩︎
  7. Dimensioni del mercato del riconoscimento facciale negli Stati Uniti | Mordor Intelligence. (n.d.). https://www.mordorintelligence.it/industry-reports/united-states-facial-recognition-market ↩︎
  8. Rapporto sulle dimensioni e sulle quote del mercato delle telecamere di sicurezza, 2025-2034. (n.d.). Global Market Insights Inc. https://www.gminsights.com/it/industry-analysis/security-cameras-market ↩︎
  9. Thompson, A. (2021, March 2). China’s ‘Sharp Eyes’ program aims to surveil 100% of public space. Center for Security and Emerging Technology. https://cset.georgetown.edu/article/chinas-sharp-eyes-program-aims-to-surveil-100-of-public-space/ ↩︎
  10. Fava, C. A. (2024, January 11). La Geopolitica delle Connessioni Globali: la Cina e la Nuova via della Seta digitale. Istituto Analisi Relazioni Internazionali. https://iari.site/2024/01/11/la-geopolitica-delle-connessioni-globali-la-cina-e-la-nuova-via-della-seta-digitale/#google_vignette ↩︎
  11. Cupać, J., Schopmans, H., & Tuncer-Ebetürk, İ. (2024). Democratization in the age of artificial intelligence: introduction to the special issue. Democratization, 31(5), 899–921. https://www.tandfonline.com/doi/full/10.1080/13510347.2024.2338852 ↩︎
  12. Center, H. (2025, February 11). Le nostre raccomandazioni sull’Autorità Nazionale per l’intelligenza artificiale. Hermes Center. https://hermescenter.org/le-nostre-raccomandazioni-per-listituzione-di-unautorita-nazionale-per-lintelligenza-artificiale/ ↩︎
