The U.S. National Security State is Here to Make AI Even Less Transparent and Accountable

The Biden White House has released a memorandum on “Advancing United States’ Leadership in Artificial Intelligence” which includes, among other things, a directive for the National Security apparatus to become a world leader in the use of AI. Under direction from the White House, the national security state is expected to take up this leadership position by poaching great minds from academia and the private sector and, most disturbingly, leveraging already functioning private AI models for national security objectives.

Private AI systems like those operated by tech companies are incredibly opaque. People are uncomfortable—and rightly so—with companies that use AI to decide all sorts of things about their lives, from how likely they are to commit a crime, to their eligibility for a job, to issues involving immigration, insurance, and housing. Right now, as you read this, for-profit companies are leasing their automated decision-making services to all manner of businesses and employers, and most of those affected will never know that a computer made a choice about them, will never be able to appeal that decision, and will never understand how it was made.

But it can get worse: combining private AI with national security secrecy threatens to make an already secretive system even less accountable and less transparent. The constellation of organizations and agencies that make up the national security apparatus is notoriously secretive. EFF has had to fight in court a number of times to make public even the most basic frameworks of global dragnet surveillance and the rules that govern it. Combining the two will create a Frankenstein’s monster of secrecy, unaccountability, and decision-making power.

While the Executive Branch pushes agencies to leverage private AI expertise, our concern is that more and more information on how those AI models work will be cloaked in the nigh-impenetrable veil of government secrecy. Because AI operates by collecting and processing tremendous amounts of data, understanding what information it retains and how it arrives at conclusions will become central to how the national security state thinks about issues. This means the state will likely argue not only that the AI’s training data may need to be classified, but also that companies must, under penalty of law, keep the governing algorithms secret as well.

As the memo says, “AI has emerged as an era-defining technology and has demonstrated significant and growing relevance to national security. The United States must lead the world in the responsible application of AI to appropriate national security functions.” As the U.S. national security state attempts to leverage powerful commercial AI to give it an edge, a number of questions remain unanswered about how much that ever-tightening relationship will impact much-needed transparency and accountability for private AI and for-profit automated decision-making systems.

Triumphs, Trials, and Tangles From California's 2024 Legislative Session

California’s 2024 legislative session has officially adjourned, and it’s time to reflect on the wins and losses that have shaped Californians’ digital rights landscape this year.

EFF monitored nearly 100 bills in the state this session alone, addressing a broad range of issues related to privacy, free speech, and innovation. These include proposed standards for Artificial Intelligence (AI) systems used by state agencies, the intersection of AI and copyright, police surveillance practices, and various privacy concerns. While we have seen some significant victories, there are also alarming developments that raise concerns about the future of privacy protection in the state.

Celebrating Our Victories

This legislative session brought some wins for privacy advocates—most notably the defeat of four dangerous bills: A.B. 3080, A.B. 1814, S.B. 1076, and S.B. 1047. These bills posed serious threats to consumer privacy and would have undermined the progress we’ve made in previous years.

First, we commend the California Legislature for not advancing A.B. 3080, “The Parent’s Accountability and Child Protection Act,” authored by Assemblymember Juan Alanis (Modesto). The bill would have created powerful incentives for “pornographic internet websites” to use age-verification mechanisms, yet it was not clear on what counts as “sexually explicit content.” Without clear guidelines, the bill would have further harmed the ability of all youth—particularly LGBTQ+ youth—to access legitimate content online. Different versions of bills requiring age verification have appeared in more than a dozen states. We understand Asm. Alanis’ concerns, but A.B. 3080 would have required broad, privacy-invasive data collection from internet users of all ages. We are grateful that it did not make it to the finish line.

Second, EFF worked with dozens of organizations to defeat A.B. 1814, a facial recognition bill authored by Assemblymember Phil Ting (San Francisco). The bill attempted to expand police use of facial recognition software to “match” images from surveillance databases to possible suspects. Those images could then be used to issue arrest warrants or search warrants. The bill merely said that these matches can't be the sole reason for a warrant to be issued—a standard that has already failed to stop false arrests in other states. Police departments and facial recognition companies alike currently maintain that police cannot justify an arrest using only algorithmic matches–so what would this bill really change? The bill only gave the appearance of doing something to address face recognition technology's harms, while allowing the practice to continue. California should not give law enforcement the green light to mine databases, particularly those where people contributed information without knowing it would be accessed by law enforcement. You can read more about this bill here, and we are glad to see the California Legislature reject this dangerous bill.

EFF also worked to oppose and defeat S.B. 1076, by Senator Scott Wilk (Lancaster). This bill would have weakened the California Delete Act (S.B. 362). Enacted last year, the Delete Act provides consumers with an easy “one-click” button to request the removal of their personal information held by data brokers registered in California by January 1, 2026. S.B. 1076 would have opened loopholes for data brokers to duck compliance. This would have hurt consumer rights and undone oversight of an opaque ecosystem of entities that collect and then sell the personal information they’ve amassed on individuals. S.B. 1076 would also have likely created significant confusion around the development, implementation, and long-term usability of the delete mechanism established in the California Delete Act, particularly as the California Privacy Protection Agency works on regulations for it.

Lastly, EFF opposed S.B. 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” authored by Senator Scott Wiener (San Francisco). This bill aimed to regulate AI models that might have “catastrophic” effects, such as attacks on critical infrastructure. Ultimately, we believe that focusing on speculative, long-term, catastrophic outcomes from AI (like machines going rogue and taking over the world) pulls attention away from AI-enabled harms that are directly before us. EFF supported parts of the bill, like the creation of a public cloud-computing cluster (CalCompute). However, we also had concerns from the beginning that the bill set out an abstract and confusing set of regulations for those developing AI systems and was built on a shaky self-certification mechanism. Those concerns remained with the final version of the bill as it passed the Legislature.

Governor Newsom vetoed S.B. 1047; we encourage lawmakers concerned about the threats unchecked AI may pose to instead consider regulation that focuses on real-world harms.  

Of course, this session wasn’t all sunshine and rainbows, and we had some big setbacks. Here are a few:

The Lost Promise of A.B. 3048

Throughout this session, EFF and our partners supported A.B. 3048, common-sense legislation that would have required browsers to let consumers exercise their protections under the California Consumer Privacy Act (CCPA). California is currently one of approximately a dozen states requiring businesses to honor consumer privacy requests made through opt-out preference signals in their browsers and devices. Yet large companies have often made it difficult for consumers to exercise those rights on their own. The bill struck the right balance: providing consumers with ways to exercise their privacy rights without creating burdensome requirements for developers or hindering innovation.
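
To make the mechanism concrete: an opt-out preference signal such as Global Privacy Control is simply a header (Sec-GPC: 1) that a participating browser attaches to its requests, which a business's web backend can read and treat as a do-not-sell/share request. The sketch below is a minimal illustration under that assumption; the Flask route and the record_opt_out helper are hypothetical stand-ins, not anything specified by A.B. 3048 or the CCPA.

```python
# Minimal sketch of a backend honoring a browser opt-out preference signal
# (Global Privacy Control sends "Sec-GPC: 1" on each request). The route and
# record_opt_out() are hypothetical stand-ins, not a prescribed implementation.
from flask import Flask, request

app = Flask(__name__)

def record_opt_out(user_id: str) -> None:
    # Hypothetical helper: persist the user's opt-out of sale/sharing.
    print(f"Recorded opt-out of sale/sharing for {user_id}")

@app.route("/")
def index():
    user_id = request.cookies.get("session_id", "anonymous")
    if request.headers.get("Sec-GPC") == "1":
        record_opt_out(user_id)
        return "Opt-out preference signal received and honored."
    return "No opt-out preference signal sent by this browser."

if __name__ == "__main__":
    app.run()
```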

Unfortunately, Governor Newsom chose to veto A.B. 3048. His veto letter cited the lack of support from mobile operators, arguing that because “No major mobile OS incorporates an option for an opt-out signal,” it is “best if design questions are first addressed by developers, rather than by regulators.” EFF believes technologists should be involved in the regulatory process and hopes to assist in that process. But Governor Newsom is wrong: we cannot wait for industry players to voluntarily support regulations that protect consumers. Proactive measures are essential to safeguard privacy rights.

This bill would have moved California in the right direction, making California the first state to require browsers to offer consumers the ability to exercise their rights. 

Wrong Solutions to Real Problems

A big theme we saw this legislative session was a set of proposals that claimed to address real problems but would have been ineffective or failed to respect privacy. These included bills intended to address young people’s safety online and deepfakes in elections.

While we defeated many misguided bills that were introduced to address young people’s access to the internet, S.B. 976, authored by Senator Nancy Skinner (Oakland), received Governor Newsom’s signature and takes effect on January 1, 2027. This proposal aims to regulate the “addictive” features of social media platforms, but instead compromises the privacy of consumers in the state. The bill is also likely preempted by federal law and raises considerable First Amendment and privacy concerns. S.B. 976 is unlikely to protect children online; instead, it will harm all online speakers by burdening free speech and will diminish online privacy by incentivizing companies to collect more personal information.

It is no secret that deepfakes can be incredibly convincing, and that can have scary consequences, especially during an election year. Two bills that attempted to address this issue are A.B. 2655 and A.B. 2839. Authored by Assemblymember Marc Berman (Palo Alto), A.B. 2655 requires online platforms to develop and implement procedures to block and take down, as well as separately label, digitally manipulated content about candidates and other elections-related subjects that creates a false portrayal of those subjects. We believe A.B. 2655 likely violates the First Amendment and will lead to over-censorship of online speech. The bill is also preempted by Section 230, a federal law that provides partial immunity to online intermediaries for causes of action based on the user-generated content published on their platforms.

Similarly, A.B. 2839, authored by Assemblymember Gail Pellerin (Santa Cruz), not only bans the distribution of materially deceptive or altered election-related content, but also burdens mere distributors (internet websites, newspapers, etc.) who are unconnected to the creation of the content—regardless of whether they know of the prohibited manipulation. By extending beyond the direct publishers and toward republishers, A.B. 2839 burdens and holds liable republishers of content in a manner that has been found unconstitutional.

There are ways to address the harms of deepfakes without stifling innovation and free speech. We recognize the complex issues raised by potentially harmful, artificially generated election content. But A.B. 2655 and A.B. 2839, as written and passed, likely violate the First Amendment and run afoul of federal law. In fact, less than a month after they were signed, a federal judge put A.B. 2839’s enforcement on pause (via a preliminary injunction) on First Amendment grounds.

Privacy Risks in State Databases

We also saw a troubling trend in the Legislature this year that we will be making a priority as we look to 2025. Several bills emerged this session that, in different ways, threatened to weaken privacy protections within state databases. Specifically, A.B. 518 and A.B. 2723, both of which received Governor Newsom’s signature, are a step backward for data privacy.

A.B. 518 authorizes numerous agencies in California to share, without restriction or consent, personal information with the state Department of Social Services (DSS), exempting this sharing from all state privacy laws. This includes county-level agencies, and people whose information is shared would have no way of knowing or opting out. A.B. 518 is incredibly broad, allowing the sharing of health information, immigration status, education records, employment records, tax records, utility information, children’s information, and even sealed juvenile records—with no requirement that DSS keep this personal information confidential, and no restrictions on what DSS can do with the information.

On the other hand, A.B. 2723 assigns a governing board to the new “Cradle to Career (CTC)” longitudinal education database intended to synthesize student information collected from across the state to enable comprehensive research and analysis. Parents and children provide this information to their schools, but this project means that their information will be used in ways they never expected or consented to. Even worse, as written, this project would be exempt from the following privacy safeguards of the Information Practices Act of 1977 (IPA), which, with respect to state agencies, would otherwise guarantee California parents and students:

  1. the right for subjects whose information is kept in the data system to receive notice that their data is in the system;
  2. the right to consent or, more meaningfully, to withhold consent;
  3. and the right to request correction of erroneous information.

By signing A.B. 2723, Gov. Newsom stripped California parents and students of the right to even know that this is happening, let alone to agree to this data processing in the first place.

Moreover, while both of these bills allowed state agencies to trample on Californians’ IPA rights, those IPA rights do not even apply to the county-level agencies affected by A.B. 518 or the local public schools and school districts affected by A.B. 2723—pointing to the need for more guardrails around unfettered data sharing on the local level.

A Call for Comprehensive Local Protections

A.B. 2723 and A.B. 518 reveal a crucial gap in Californians’ privacy protections: the rights guaranteed to individuals through California’s IPA do not protect them from the ways local agencies collect, share, and process data. The absence of robust privacy protections at the local government level is an ongoing issue that must be addressed.

Now is the time to push for stronger privacy protections, hold our lawmakers accountable, and ensure that California remains a leader in the fight for digital privacy. As always, we want to acknowledge how much your support has helped our advocacy in California this year. Your voices are invaluable, and they truly make a difference.

Let’s not settle for half-measures or weak solutions. Our privacy is worth the fight.

Prosecutors in Washington State Warn Police: Don’t Use Gen AI to Write Reports

The King County Prosecuting Attorney’s Office, which handles all prosecutions in the Seattle area, has instructed police in no uncertain terms: do not use AI to write police reports...for now. This is a good development. We hope prosecutors across the country will exercise such caution as companies continue to peddle technology – generative artificial intelligence (genAI) to help write police reports – that could harm people who come into contact with the criminal justice system.

Chief Deputy Prosecutor Daniel J. Clark said in a memo about AI-based tools that write narrative police reports from body camera audio that the technology, as it exists, is “one we are not ready to accept.”

The memo continues, “We do not fear advances in technology – but we do have legitimate concerns about some of the products on the market now... AI continues to develop and we are hopeful that we will reach a point in the near future where these reports can be relied on. For now, our office has made the decision not to accept any police narratives that were produced with the assistance of AI.” We would add that, while EFF embraces advances in technology, we doubt genAI will be able to help police write reliable reports any time soon.

We agree with Chief Deputy Clark that: “While an officer is required to edit the narrative and assert under penalty of perjury that it is accurate, some of the [genAI] errors are so small that they will be missed in review.”

This is a well-reasoned and cautious approach. Some police want to cut the time they spend writing reports, and Axon’s new product Draft One claims to do so by exporting the labor to machines. But the public, and other local agencies, should be skeptical of this tech. After all, these documents are often essential for prosecutors to build their case, for district attorneys to recommend charges, and for defenders to cross-examine arresting officers.

To read more on generative AI and police reports, click here.

Civil Rights Commission Pans Face Recognition Technology

In its recent report, Civil Rights Implications of Face Recognition Technology (FRT), the U.S. Commission on Civil Rights identified serious problems with the federal government’s use of face recognition technology, and in doing so recognized EFF’s expertise on this issue. The Commission focused its investigation on the Department of Justice (DOJ), the Department of Homeland Security (DHS), and the Department of Housing and Urban Development (HUD).

According to the report, the DOJ primarily uses FRT within the Federal Bureau of Investigation and U.S. Marshals Service to generate leads in criminal investigations. DHS uses it in cross-border criminal investigations and to identify travelers. And HUD implements FRT with surveillance cameras in some federally funded public housing. The report explores how federal training on FRT use in these departments is inadequate, identifies threats that FRT poses to civil rights, and proposes ways to mitigate those threats.

EFF supports a ban on government use of FRT and strict regulation of private use. In April of this year, we submitted comments to the Commission to voice these views. The Commission’s report quotes our comments explaining how FRT works, including the steps by which FRT uses a probe photo (the photo of the face that will be identified) to run an algorithmic search that matches the face within the probe photo to those in the comparison data set. Although EFF aims to promote a broader understanding of the technology behind FRT, our main purpose in submitting the comments was to sound the alarm about the many dangers the technology poses.
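
For readers unfamiliar with the matching step our comments describe, here is a simplified sketch: the probe photo is reduced to a numeric embedding and compared against the embeddings of every face in the comparison data set, and the closest candidates are returned. The embed_face function is a hypothetical stand-in for a real face-embedding model, and the toy gallery uses random vectors, so this illustrates only the search logic, not any vendor's system.

```python
# Simplified sketch of the FRT matching step: compare a probe embedding to a
# gallery of embeddings and return the closest candidates. embed_face() is a
# hypothetical stand-in for a real model; the gallery here is random data.
import numpy as np

def embed_face(image) -> np.ndarray:
    raise NotImplementedError("stand-in for a real face-embedding model")

def top_matches(probe: np.ndarray, gallery: dict, k: int = 3):
    """Rank gallery identities by cosine similarity to the probe embedding."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {name: cosine(probe, vec) for name, vec in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

rng = np.random.default_rng(0)
gallery = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = gallery["person_42"] + rng.normal(scale=0.1, size=128)  # noisy probe
print(top_matches(probe, gallery))
```

Note that a search like this always returns someone as the “closest” face, whether or not the person in the probe photo is in the data set at all, which is one reason treating a “match” as an identification is so dangerous.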

The government should not use face recognition because it is too inaccurate to determine people’s rights and benefits, its inaccuracies impact people of color and members of the LGBTQ+ community at far higher rates, it threatens privacy, it chills expression, and it introduces information security risks. The report highlights many of the concerns that we’ve raised about privacy, accuracy (especially in the context of criminal investigations), and use by “inexperienced and inadequately trained operators.” The Commission also included data showing that face recognition is much more likely to reach a false positive (inaccurately matching two photos of different people) than a false negative (inaccurately failing to match two photos of the same person). According to the report, false positives are even more prevalent for Black people, people of East Asian descent, women, and older adults, thereby posing equal protection issues. These disparities in accuracy are due in part to algorithmic bias. Relatedly, photographs are often unable to accurately capture dark-skinned people’s faces, which means that the initial inputs to the algorithm can themselves be unreliable. This poses serious problems in many contexts, but especially in criminal investigations, in which the stakes of an FRT misidentification are people’s lives and liberty.
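
To illustrate what those two error types mean in practice, the short example below computes a false positive rate and a false negative rate from hypothetical match counts for two groups. The numbers are invented for illustration and are not taken from the Commission's report or any benchmark.

```python
# Illustration only: how false positive and false negative rates are computed,
# and how the same system can produce very different error rates by group.
# The counts below are invented, not data from the Commission's report.
def error_rates(false_pos, true_neg, false_neg, true_pos):
    fpr = false_pos / (false_pos + true_neg)  # wrongly "matched" to someone else
    fnr = false_neg / (false_neg + true_pos)  # wrongly failed to match
    return fpr, fnr

hypothetical_counts = {
    "group_A": (2, 998, 5, 995),
    "group_B": (20, 980, 5, 995),
}
for group, counts in hypothetical_counts.items():
    fpr, fnr = error_rates(*counts)
    print(f"{group}: false positive rate {fpr:.1%}, false negative rate {fnr:.1%}")
# A tenfold gap in false positive rates means far more innocent people in
# group_B risk being flagged as investigative "matches."
```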

The Commission recommends that Congress and agency chiefs enact better oversight and transparency rules. While EFF agrees with many of the Commission’s critiques, the technology poses grave threats to civil liberties, privacy, and security that require a more aggressive response. We will continue fighting to ban face recognition use by governments and to strictly regulate private use. You can join our About Face project to stop the technology from entering your community and encourage your representatives to ban federal use of FRT.

New EFF Report Provides Guidance to Ensure Human Rights are Protected Amid Government Use of AI in Latin America

Governments increasingly rely on algorithmic systems to support consequential assessments and determinations about people’s lives, from judging eligibility for social assistance to trying to predict crime and criminals. Latin America is no exception. With the use of artificial intelligence (AI) posing human rights challenges in the region, EFF released today the report Inter-American Standards and State Use of AI for Rights-Affecting Determinations in Latin America: Human Rights Implications and Operational Framework.

This report draws on international human rights law, particularly standards from the Inter-American Human Rights System, to provide guidance on what state institutions must look out for when assessing whether and how to adopt artificial intelligence (AI) and automated decision-making (ADM) systems for determinations that can affect people’s rights.

We have organized the report’s content, along with testimonies from civil society experts on the ground about current challenges, on our project landing page.

AI-based Systems Implicate Human Rights

The report comes amid the deployment of AI/ADM-based systems by Latin American state institutions for services and decision-making that affect human rights. Colombians must undergo classification by Sisbén, which measures their degree of poverty and vulnerability, if they want to access social protection programs. News reports in Brazil have once again flagged the problems and perils of Córtex, an algorithm-powered surveillance system that cross-references various state databases with wide reach and poor controls. Risk-assessment systems seeking to predict school dropout, children’s rights violations, or teenage pregnancy have been integrated into government programs in countries like México, Chile, and Argentina. Different courts in the region have also implemented AI-based tools for a varied range of tasks.

EFF’s report aims to address two primary concerns: opacity and lack of human rights protections in state AI-based decision-making. Algorithmic systems are often deployed by state bodies in ways that obscure how decisions are made, leaving affected individuals with little understanding or recourse.

Additionally, these systems can exacerbate existing inequalities, disproportionately impacting marginalized communities without providing adequate avenues for redress. The lack of public participation in the development and implementation of these systems further undermines democratic governance, as affected groups are often excluded from meaningful decision-making processes relating to government adoption and use of these technologies.

This is at odds with the human rights protections most Latin American countries are required to uphold. A majority of states have committed to comply with the American Convention on Human Rights and the Protocol of San Salvador. Under these international instruments, they have the duty to respect human rights and prevent violations from occurring. States’ responsibilities under international human rights law as guarantors of rights, and people and social groups as rights holders—entitled to claim those rights and to participate—are two basic tenets that must guide any legitimate use of AI/ADM systems by state institutions for consequential decision-making, as we underscore in the report.

Inter-American Human Rights Framework

Building on extensive research into the Inter-American Commission on Human Rights’ reports and the Inter-American Court of Human Rights’ decisions and advisory opinions, we identify human rights implications and devise an operational framework for their due consideration in government use of algorithmic systems.

We detail what states’ commitments under the Inter-American System mean when state bodies decide to implement AI/ADM technologies for rights-based determinations. We explain why this adoption must fulfill necessary and proportionate principles, and what this entails. We underscore what it means to have a human rights approach to state AI-based policies, including crucial redlines for not moving ahead with their deployment.

We elaborate on what states must observe to ensure critical rights in line with Inter-American standards. We look particularly at political participation, access to information, equality and non-discrimination, due process, privacy and data protection, freedoms of expression, association and assembly, and the right to a dignified life in connection to social, economic, and cultural rights.

Some of these rights embody principles that must cut across the different stages of AI-based policies or initiatives—from scoping the problem state bodies seek to address and assessing whether algorithmic systems can reliably and effectively contribute to achieving their goals, to continuously monitoring and evaluating their implementation.

These cross-cutting principles integrate the comprehensive operational framework we provide in the report for governments and civil society advocates in the region.

Transparency, Due Process, and Data Privacy Are Vital

Our report’s recommendations reinforce that states must ensure transparency at every stage of AI deployment. Governments must provide clear information about how these systems function, including the categories of data processed, performance metrics, and details of the decision-making flow, including human and machine interaction.

It is also essential to disclose important aspects of how they were designed, such as details on the model’s training and testing datasets. Moreover, decisions based on AI/ADM systems must have a clear, reasoned, and coherent justification. Without such transparency, people cannot effectively understand or challenge the decisions being made about them, and the risk of unchecked rights violations increases.
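
As a rough sketch of what such a disclosure could look like when written down, the record below collects the elements named above: data categories, training and testing data provenance, performance metrics, and the decision-making flow with its human and machine steps. The structure and field names are our own illustration, not a format prescribed by the report or by Inter-American standards.

```python
# Our own illustrative structure for the transparency elements discussed above;
# not a format prescribed by the report or by Inter-American standards.
disclosure = {
    "system": "hypothetical benefits-eligibility scoring model",
    "purpose": "prioritize review of social assistance applications",
    "data_categories_processed": [
        "income records", "household composition", "employment history",
    ],
    "training_and_testing_data": {
        "source": "agency case records, 2015-2022",
        "testing_split": "20% held out for evaluation",
    },
    "performance_metrics": {"accuracy": 0.87, "false_positive_rate": 0.06},
    "decision_flow": [
        "model scores each application",
        "caseworker reviews the score and the underlying evidence",
        "caseworker issues a reasoned, written decision that can be appealed",
    ],
}
for field, value in disclosure.items():
    print(f"{field}: {value}")
```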

Leveraging due process guarantees is also covered. The report highlights that decisions made by AI systems often lack the transparency needed for individuals to challenge them. The lack of human oversight in these processes can lead to arbitrary or unjust outcomes. Ensuring that affected individuals have the right to challenge AI-driven decisions through accessible legal mechanisms and meaningful human review is a critical step in aligning AI use with human rights standards.

Transparency and due process relate to ensuring people can fully enjoy the rights that flow from informational self-determination, including the right to know what data about them is contained in state records, where the data came from, and how it is being processed.

The Inter-American Court recently recognized informational self-determination as an autonomous right protected by the American Convention. It grants individuals the power to decide when and to what extent aspects of their private life can be revealed, including their personal information. It is intrinsically connected to the free development of one’s personality, and any limitations must be legally established, and necessary and proportionate to achieve a legitimate goal.

Ensuring Meaningful Public Participation

Social participation is another cornerstone of the report’s recommendations. We emphasize that marginalized groups, who are most likely to be negatively affected by AI and ADM systems, must have a voice in how these systems are developed and used. Participatory mechanisms must not be mere box-checking exercises and are vital for ensuring that algorithmic-based initiatives do not reinforce discrimination or violate rights. Human Rights Impact Assessments and independent auditing are important vectors for meaningful participation and should be used during all stages of planning and deployment. 

Robust legal safeguards, appropriate institutional structures, and effective oversight, often neglected, are underlying conditions for any legitimate government use of AI for rights-based determinations. As AI continues to play an increasingly significant role in public life, the findings and recommendations of this report are crucial. Our aim is to make a timely and compelling contribution for a human rights-centric approach to the use of AI/ADM in public decision-making.

We’d like to thank the consultant Rafaela Cavalcanti de Alcântara for her work on this report, and Clarice Tavares, Jamila Venturini, Joan López Solano, Patricia Díaz Charquero, Priscilla Ruiz Guillén, Raquel Rachid, and Tomás Pomar for their insights and feedback to the report.

The full report is here.

EFF & 140 Other Organizations Call for an End to AI Use in Immigration Decisions

EFF, Just Futures Law, and 140 other groups have sent a letter to Secretary Alejandro Mayorkas demanding that the Department of Homeland Security (DHS) stop using artificial intelligence (AI) tools in the immigration system. For years, EFF has been monitoring and warning about the dangers of automated and so-called “AI-enhanced” surveillance at the U.S.-Mexico border. As we’ve made clear, algorithmic decision-making should never get the final say on whether a person should be policed, arrested, denied freedom, or, in this case, deemed worthy of a safe haven in the United States.

The letter is signed by a wide range of organizations, from civil liberties nonprofits to immigrant rights groups, to government accountability watchdogs, to civil society organizations. Together, we declared that DHS’s use of AI, defined by the White House as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments,” appeared to violate federal policies governing its responsible use, especially when it’s used as part of the decision-making regarding immigration enforcement and adjudications.

Read the letter here. 

The letter highlighted the findings of a bombshell report published by Mijente and Just Futures Law on the use of AI and automated decision-making by DHS and its sub-agencies, U.S. Citizenship and Immigration Services (USCIS), Immigration and Customs Enforcement (ICE), and Customs and Border Protection (CBP). Despite laws, executive orders, and other directives to establish standards and processes for the evaluation, adoption, and use of AI by DHS—as well as DHS’s pledge that it “will not use AI technology to enable improper systemic, indiscriminate, or large-scale monitoring, surveillance or tracking of individuals”—the agency has seemingly relied on loopholes for national security, intelligence gathering, and law enforcement to avoid compliance with those requirements. This completely undermines any supposed attempt on the part of the federal government to use AI responsibly and contain the technology’s habit of merely digitizing and accelerating decisions based on preexisting biases and prejudices.

Even though AI is unproven in its efficacy, DHS has frenetically incorporated AI into many of its functions. These products are often the result of partnerships with vendors who have aggressively pushed the idea that AI will make immigration processing more efficient, more objective, and less biased.

Yet the evidence begs to differ, or, at best, is mixed.  

As the report notes, studies, including those conducted by the government, have recognized that AI has often worsened discrimination due to the reality of “garbage in, garbage out.” This phenomenon was visible in Amazon’s use—and subsequent scrapping—of AI to screen résumés, which favored male applicants because the data on which the program had been trained included more applications from men. The same pitfall arises in predictive policing products, something EFF categorically opposes, which often “predict” that crimes are more likely to occur in Black and Brown neighborhoods due to the prejudices embedded in the historical crime data used to design that software. Furthermore, AI tools are often deficient when used in complex contexts, such as the morass that is immigration law.

In spite of these grave concerns, DHS has incorporated AI decision-making into many levels of its operations without taking the necessary steps to properly vet the technology. According to the report, AI technology is part of USCIS’s process for determining eligibility for immigration benefits or relief, credibility in asylum applications, and the public safety or national security threat level of an individual. ICE uses AI to automate its decision-making on electronic monitoring, detention, and deportation.

At the same time, there is a disturbing lack of transparency regarding those tools. We urgently need DHS to be held accountable for its adoption of opaque and untested AI programs promulgated by those with a financial interest in the proliferation of the technology. Until DHS adequately addresses the concerns raised in the letter and report, the Department should be prohibited from using AI tools. 

NO FAKES – A Dream for Lawyers, a Nightmare for Everyone Else

Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.  

Under NO FAKES, any human person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections – a crucial liability shield for platforms and anyone else that hosts or shares user-generated content – will not apply. And that legal risk begins the moment a person gets a notice that the content is unlawful, even if they didn't create the replica and have no way to confirm whether it was authorized or to verify the claim. NO FAKES thereby creates a classic “hecklers’ veto”: anyone can use a specious accusation to get speech they don’t like taken down.

The bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica for a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?  

These are just some of the many open questions, all of which will lead to full employment for lawyers, but likely no one else, particularly not those whose livelihood depends on the freedom to create journalism or art about famous people. 

The bill also includes a safe harbor scheme modeled on the DMCA notice-and-takedown process. To stay within the NO FAKES safe harbors, a platform that receives a notice of illegality must remove “all instances” of the allegedly unlawful content—a broad requirement that will encourage platforms to adopt “replica filters” similar to deeply flawed copyright filters like YouTube’s Content ID. Platforms that ignore such a notice can be on the hook just for linking to unauthorized replicas. And every single copy made, transmitted, or displayed is a separate violation incurring a $5,000 penalty – which will add up fast. The bill does throw platforms a not-very-helpful bone: if they can show they had an objectively reasonable belief that the content was lawful, they only have to cough up $1 million if they guess wrong.

All of this is a recipe for private censorship. For decades, the DMCA process has been regularly abused to target lawful speech, and there’s every reason to suppose NO FAKES will lead to the same result.  

What is worse, NO FAKES offers even fewer safeguards for lawful speech than the DMCA. For example, the DMCA includes a relatively simple counter-notice process that a speaker can use to get their work restored. NO FAKES does not. Instead, NO FAKES puts the burden on the speaker to run to court within 14 days to defend their rights. The powerful have lawyers on retainer who can do that, but most creators, activists, and citizen journalists do not.  

NO FAKES does include a provision that, in theory, would allow improperly targeted speakers to hold notice senders accountable. But they must prove that the lie was “knowing,” which can be interpreted to mean that the sender gets off scot-free as long as they subjectively believe the lie to be true, no matter how unreasonable that belief. Given the multiple open questions about how to interpret the various exemptions (not to mention the common confusion about the limits of IP protection that we’ve already seen), that’s pretty cold comfort.

These significant flaws should doom the bill, and that’s a shame. Deceptive AI-generated replicas can cause real harms, and performers have a right to fair compensation for the use of their likenesses, should they choose to allow that use. Existing laws can address most of this, but Congress should be considering narrowly-targeted and proportionate proposals to fill in the gaps.  

The NO FAKES Act is neither targeted nor proportionate. It’s also a significant Congressional overreach—the Constitution forbids granting a property right in (and therefore a monopoly over) facts, including a person’s name or likeness.  

The best we can say about NO FAKES is that it has provisions protecting individuals with unequal bargaining power in negotiations around use of their likeness. For example, the new right can’t be completely transferred to someone else (like a film studio or advertising agency) while the person is alive, so a person can’t be pressured or tricked into handing over total control of their public identity (their heirs still can, but the dead celebrity presumably won’t care). And minors have some additional protections, such as a limit on how long their rights can be licensed before they are adults.   

But the costs of the bill far outweigh the benefits. NO FAKES creates an expansive and confusing new intellectual property right that lasts far longer than is reasonable or prudent, and has far too few safeguards for lawful speech. The Senate should throw it out and start over. 

How the FTC Can Make the Internet Safe for Chatbots

No points for guessing the subject of the first question the Wall Street Journal asked FTC Chair Lina Khan: of course it was about AI.

Between the hype, the lawmaking, the saber-rattling, the trillion-dollar market caps, and the predictions of impending civilizational collapse, the AI discussion has become as inevitable, as pro forma, and as content-free as asking how someone is or wishing them a nice day.

But Chair Khan didn’t treat the question as an excuse to launch into the policymaker’s verbal equivalent of a compulsory gymnastics exhibition.

Instead, she injected something genuinely new and exciting into the discussion, by proposing that the labor and privacy controversies in AI could be tackled using her existing regulatory authority under Section 5 of the Federal Trade Commission Act (FTCA5).

Section 5 gives the FTC a broad mandate to prevent “unfair methods of competition” and “unfair or deceptive acts or practices.” Chair Khan has made extensive use of these powers during her first term as chair, for example, by banning noncompetes and taking action on online privacy.

At EFF, we share many of the widespread concerns over privacy, fairness, and labor rights raised by AI. We think that copyright law is the wrong tool to address those concerns, both because of what copyright law does and doesn’t permit, and because establishing copyright as the framework for AI model-training will not address the real privacy and labor issues posed by generative AI. We think that privacy problems should be addressed with privacy policy and that labor issues should be addressed with labor policy.

That’s what made Chair Khan’s remarks so exciting to us: in proposing that Section 5 could be used to regulate AI training, Chair Khan is opening the door to addressing these issues head on. The FTC Act gives the FTC the power to craft specific, fit-for-purpose rules and guidance that can protect Americans’ consumer, privacy, labor and other rights.

Take the problem of AI “hallucinations,” which is the industry’s term for the seemingly irrepressible propensity of chatbots to answer questions with incorrect answers, delivered with the blithe confidence of a “bullshitter.”

The question of whether chatbots can be taught not to “hallucinate” is far from settled. Some industry leaders think the problem can never be solved, even as startups publish (technically impressive-sounding, but non-peer reviewed) papers claiming to have solved the problem.

Whether or not the problem can be solved, it’s clear that for the commercial chatbot offerings on the market today, “hallucinations” come with the package. Or, put more simply: today’s chatbots lie, and no one can stop them.

That’s a problem, because companies are already replacing human customer service workers with chatbots that lie to their customers, causing those customers real harm. It’s hard enough to attend your grandmother’s funeral without the added pain of your airline’s chatbot lying to you about the bereavement fare.

Here’s where the FTC’s powers can help the American public:

The FTC should issue guidance declaring that any company that deploys a chatbot that lies to a customer has engaged in an “unfair and deceptive practice” that violates Section 5 of the Federal Trade Commission Act, with all the fines and other penalties that entails.

After all, if a company doesn’t get in trouble when its chatbot lies to a customer, why would they pay extra for a chatbot that has been designed not to lie? And if there’s no reason to pay extra for a chatbot that doesn’t lie, why would anyone invest in solving the “hallucination” problem?

Guidance that promises to punish companies that replace their human workers with lying chatbots will give new companies that invent truthful chatbots an advantage in the marketplace. If you can prove that your chatbot won’t lie to your customers’ users, you can also get an insurance company to write you a policy that will allow you to indemnify your customers against claims arising from your chatbot’s output.

But until someone does figure out how to make a “hallucination”-free chatbot, guidance promising serious consequences for chatbots that deceive users with “hallucinated” lies will push companies to limit the use of chatbots to low-stakes environments, leaving human workers to do their jobs.

The FTC has already started down this path. Earlier this month, FTC Senior Staff Attorney Michael Atleson published an excellent backgrounder laying out some of the agency’s thinking on how companies should present their chatbots to users.

We think that more formal guidance about the consequences for companies that save a buck by putting untrustworthy chatbots on the front line will do a lot to protect the public from irresponsible business decisions – especially if that guidance is backed up with muscular enforcement.

What Can Go Wrong When Police Use AI to Write Reports?

Axon—the maker of widely used police body cameras and tasers (and a company that keeps trying to arm drones)—has a new product: AI that will write police reports for officers. Draft One is a generative large language model system that reportedly takes audio from body-worn cameras and converts it into a narrative police report that police can then edit and submit after an incident. Axon bills this product as the ultimate time-saver for police departments hoping to get officers out from behind their desks. But this technology could present new issues for those who encounter police, especially members of marginalized communities who are already subject to a disproportionate share of police interactions in the United States.
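
To ground the discussion that follows, here is a deliberately generic sketch of the kind of pipeline being described: body-camera audio is transcribed, the transcript is handed to a language model prompted to draft a narrative report, and the officer is expected to review and edit the result. The helper functions are hypothetical placeholders; nothing here reflects Axon's actual implementation or API.

```python
# Deliberately generic sketch of an audio-to-report pipeline like the one
# described above. The helpers are hypothetical placeholders; this is not
# Axon's implementation or API.
def transcribe_audio(audio_path: str) -> str:
    # Hypothetical: a speech-to-text model would run here.
    raise NotImplementedError("stand-in for a speech-to-text system")

def draft_report_from_transcript(transcript: str) -> str:
    # Hypothetical: a large language model, prompted to turn the transcript
    # into a narrative police report, would run here.
    raise NotImplementedError("stand-in for an LLM drafting step")

def generate_draft(audio_path: str) -> str:
    transcript = transcribe_audio(audio_path)
    draft = draft_report_from_transcript(transcript)
    # The officer is expected to review and edit this draft before signing it
    # under penalty of perjury -- the review step the concerns below focus on.
    return draft
```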

Responsibility and the Codification of (Intended or Otherwise) Inaccuracies

We’ve seen it before. Grainy and shaky police body-worn camera video in which an arresting officer shouts, “Stop resisting!” This phrase can lead to greater use of force by officers or come with enhanced criminal charges. Sometimes, these shouts may be justified. But as we’ve seen time and again, the narrative of someone resisting arrest may be a misrepresentation. Integrating AI into narratives of police encounters might make an already complicated system even more ripe for abuse.

The public should be skeptical of a language algorithm’s ability to accurately process and distinguish between the wide range of languages, dialects, vernacular, idioms, and slang people use. As we’ve learned from watching content moderation develop online, software may have a passable ability to capture words, but it often struggles with context and meaning. In an often tense setting such as a traffic stop, AI mistaking a metaphorical statement for a literal claim could fundamentally change how a police report is interpreted.

Moreover, as with all so-called artificial intelligence taking over consequential tasks and decision-making, the technology has the power to obscure human agency. Police officers who deliberately speak with mistruths or exaggerations to shape the narrative available in body camera footage now have even more of a veneer of plausible deniability with AI-generated police reports. If police were to be caught in a lie concerning what’s in the report, an officer might be able to say that they did not lie: the AI simply mistranscribed what was happening in the chaotic video.

It’s also unclear how this technology will work in action. If the officer says aloud in a body camera video, “the suspect has a gun” how would that translate into the software’s narrative final product? Would it interpret that by saying “I [the officer] saw the suspect produce a weapon” or “The suspect was armed”? Or would it just report what the officer said: “I [the officer] said aloud that the suspect has a gun”? Interpretation matters, and the differences between them could have catastrophic consequences for defendants in court.

Review, Transparency, and Audits

The issue of review, auditing, and transparency raises a number of questions. Although Draft One allows officers to edit reports, how will it ensure that officers are adequately reviewing for accuracy rather than rubber-stamping the AI-generated version? After all, police have been known to arrest people based on the results of a match by face recognition technology without any followup investigation—contrary to vendors’ insistence that such results should be used as an investigative lead and not a positive identification.

Moreover, if the AI-generated report is incorrect, can we trust police will contradict that version of events if it's in their interest to maintain inaccuracies? On the flip side, might AI report writing go the way of AI-enhanced body cameras? In other words, if the report consistently produces a narrative from audio that police do not like, will they edit it, scrap it, or discontinue using the software altogether?

And what of external reviewers’ ability to access these reports? Given police departments’ overly intense secrecy, combined with a frequent failure to comply with public records laws, how can the public, or any external agency, be able to independently verify or audit these AI-assisted reports? And how will external reviewers know which portions of the report are generated by AI vs. a human?

Police reports, skewed and biased as they often are, codify the police department’s memory. They reveal not necessarily what happened during a specific incident, but what police imagined to have happened, in good faith or not. Policing, with its legal power to kill, detain, or ultimately deny people’s freedom, is too powerful an institution to outsource its memory-making to technologies in a way that makes officers immune to critique, transparency, or accountability.

Ubix Linux, the pocket datalab

Ubix Linux is a free and open-source Linux distribution derived from Debian.

The name “Ubix” is the contracted form of “Ubics”, an acronym for “Universal business intelligence computer system”. Accordingly, the main goal of Ubix Linux is to provide a universal platform dedicated to business intelligence and data analysis.

It is a ready-to-use vertical solution dedicated to data manipulation and decision support. Lightweight by design, it ships with only a limited set of tools specialized in this field. These are nonetheless enough to cover the full range of needs: data acquisition, transformation, analysis, and presentation.

Ubix Linux - Overview

Origins of the distribution

The distribution's creator originally wanted to have at hand, at any time and in any circumstances, the tools needed to analyze data and present the results on the spot. This data-manipulation “Swiss Army knife” was also meant to spare him from having to justify, find, acquire, and install the necessary software ecosystem every time such a task came up.

The specification therefore called for the smallest possible disk footprint without compromising on functionality. The distribution had to be portable and immediately runnable in a variety of contexts, with no need for investment, installation, or special access rights.

As a result, Ubix Linux does not stand out for its “system” aspects, but rather for its purpose and its use cases.

Beyond the initial need

At a time when many data-related concepts such as “Big Data”, “Data Science”, and “Machine Learning” are making headlines, they largely remain black boxes, the preserve of specialists and of organizations with the means to put them into practice.

While the general public has an increasingly good grasp of the broad outlines, it still has little insight into how its data can be used, or into the wealth of opportunities that use opens up.

At the same time, many data sources within everyone's reach remain untapped for lack of easily accessible skills or resources.

Ubix Linux can help overcome this difficulty by giving everyone the means to take (or take back) ownership of the data available to them and to put that data to use.

Philosophy

Out of necessity, Ubix Linux was built exclusively from free and open-source products. Although the distribution can be useful to anyone who needs to work with data, it is committed to preserving and defending an educational and universalist approach.

Its ambition is to put data science within everyone's reach. The distribution itself is only a basic technical foundation meant to encourage learning by doing. It is intended to be accompanied by a set of progressive tutorials.

The low-code/no-code tools included in the distribution make it possible to start working with data without first having to master programming. More advanced tools then allow users to move on to the principles of machine-learning algorithms.

Summary

Ubix Linux follows the philosophy of free software, and more specifically that of the GNU and Debian projects.

It aims to:

  • remain accessible to everyone;
  • run on relatively modest hardware configurations, or even be installed solely on a portable USB device;
  • provide an educational tool for a hands-on approach to data science and machine learning;
  • let anyone discover, experiment with, and become proficient in the main data-manipulation tools;
  • offer a lightweight, agile toolbox that is nonetheless complete and useful for a knowledgeable professional audience.

What's next…

We are open to any suggestions. However, our resources being what they are (at the back of the garage), our responsiveness in taking them on board may prove inversely proportional.

We would like this educational tool to benefit as many people as possible: if you would like to help translate the official website's content into Spanish, Portuguese, or German, you are most welcome.

The Tech Apocalypse Panic is Driven by AI Boosters, Military Tacticians, and Movies

There has been a tremendous amount of hand wringing and nervousness about how so-called artificial intelligence might end up destroying the world. The fretting has only gotten worse as a result of a U.S. State Department-commissioned report on the security risk of weaponized AI.

Whether these messages come from popular films like WarGames or The Terminator, reports that in digital simulations AI supposedly favors the nuclear option more than it should, or the idea that AI could assess nuclear threats quicker than humans—all of these scenarios have one thing in common: they end with nukes (almost) being launched because a computer either had the ability to pull the trigger or convinced humans to do so by simulating an imminent nuclear threat. The purported risk of AI comes not just from yielding “control” to computers, but also from the ability of advanced algorithmic systems to breach cybersecurity measures or to manipulate and socially engineer people with realistic voice, text, images, video, or digital impersonations.

But there is one easy way to avoid a lot of this and prevent a self-inflicted doomsday: don’t give computers the capability to launch devastating weapons. This means denying algorithms ultimate decision-making power, but it also means building in protocols and safeguards so that some kind of generative AI cannot be used to impersonate or simulate the orders capable of launching attacks. It’s really simple, and we’re far from the only (or the first) people to suggest the radical idea that we just not integrate computer decision-making into many important decisions–from deciding a person’s freedom to launching first or retaliatory strikes with nuclear weapons.


First, let's define terms. I am using "Artificial Intelligence" purely for expediency, and because it is the term most commonly used by vendors and government agencies to describe automated algorithmic decision making, despite the fact that it is a problematic term that shields human agency from criticism. What we are talking about here is an algorithmic system, fed a tremendous amount of historical or hypothetical information, that leverages probability and context in order to choose what outcomes are expected based on the data it has been fed. It's how training algorithmic chatbots on posts from social media resulted in the chatbot regurgitating the racist rhetoric it was trained on. It's also how predictive policing algorithms reaffirm racially biased policing by sending police to neighborhoods where the police already patrol and where they make a majority of their arrests. From the vantage point of the data, it looks as if those are the only neighborhoods with crime, because police don't typically arrest people in other neighborhoods. As AI expert and technologist Joy Buolamwini has said, "With the adoption of AI systems, at first I thought we were looking at a mirror, but now I believe we're looking into a kaleidoscope of distortion... Because the technologies we believe to be bringing us into the future are actually taking us back from the progress already made."

Military Tactics Shouldn’t Drive AI Use

As EFF wrote in 2018, "Militaries must make sure they don't buy into the machine learning hype while missing the warning label. There's much to be done with machine learning, but plenty of reasons to keep it away from things like target selection, fire control, and most command, control, and intelligence (C2I) roles in the near future, and perhaps beyond that too." (You can read EFF's whole 2018 white paper, The Cautious Path to Advantage: How Militaries Should Plan for AI, here.)

Just like in policing, the military faces constant pressure (not to mention the marketing from eager companies hoping to get rich off defense contracts) to keep innovating in order to claim technical superiority. But integrating technology for innovation's sake alone creates a great risk of unforeseen danger. AI-enhanced targeting is liable to get things wrong. AI can be fooled or tricked. It can be hacked. And giving AI the power to escalate armed conflicts, especially on a global or nuclear scale, might just bring about the much-feared AI apocalypse that can be avoided simply by keeping a human finger on the button.


We've written before about how necessary it is to ban attempts by police to arm robots (either remote controlled or autonomous) in a domestic context for the same reasons. The idea of so-called autonomy among machines and robots creates a false sense of agency–the notion that only the computer is to blame for falsely targeting the wrong person, or for misreading signs of incoming missiles and launching a nuclear weapon in response–and obscures who is really at fault. Humans put computers in charge of making the decisions, but humans also train the programs that make the decisions.

AI Does What We Tell It To

In the words of linguist Emily Bender, "AI," and especially its text-based applications, is a "stochastic parrot," meaning that it echoes back to us the things we taught it, as "determined by random, probabilistic distribution." In short, we give it the material it learns, it learns it, and then it draws conclusions and makes decisions based on that historical dataset. If you teach an algorithmic model that 9 times out of 10 a nation will launch a retaliatory strike when missiles are fired at it, then the first time that model mistakes a flock of birds for inbound missiles, that is exactly what it will do.

To that end, AI scholar Kate Crawford argues, "AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes for, AI systems are ultimately designed to serve existing dominant interests."

AI does what we teach it to do. It mimics the decisions it is taught to make, either through hypotheticals or through historical data. This means that, yet again, we are not powerless against a coming AI doomsday. We teach AI how to operate. We give it control of escalation, weaponry, and military response. We could just not.

Governing AI Doesn’t Mean Making it More Secret–It Means Regulating Use 

The recent report commissioned by the U.S. Department of State on the weaponization of AI included one troubling recommendation: making the inner workings of AI more secret. In order to keep algorithms from being tampered with or manipulated, the full report (as summarized by Time) suggests that a new governmental regulatory agency responsible for AI should criminalize publishing the inner workings of AI, potentially making it punishable by jail time. This means that how AI functions in our daily lives, and how the government uses it, could never be open source and would always live inside a black box where we could never learn the datasets informing its decision making. So much of our lives is already governed by automated decision making, from the criminal justice system to employment; criminalizing the only route for people to learn how those systems are being trained seems counterproductive and wrong.

Opening up the inner workings of AI puts more eyes on how a system functions and makes it easier, not harder, to spot manipulation and tampering… not to mention that it might mitigate the biases and harms that skewed training datasets create in the first place.

Conclusion

Machine learning and algorithmic systems are useful tools whose potential we are only just beginning to grapple with, but we have to understand what these technologies are and what they are not. They are neither "artificial" nor "intelligent"; they do not represent an alternate and spontaneously occurring way of knowing independent of the human mind. People build these systems and train them to get a desired outcome. Even when outcomes from AI are unexpected, one can usually find their origins somewhere in the data systems they were trained on. Understanding this will go a long way toward responsibly shaping how and when AI is deployed, especially in a defense context, and will hopefully alleviate some of our collective sci-fi panic.

This doesn't mean that people won't weaponize AI–and they already are, in the form of political disinformation and realistic impersonation. But the solution to that is not to outlaw AI entirely, nor is it handing over the keys to a nuclear arsenal to computers. We need a common-sense system that respects innovation, regulates uses rather than the technology itself, and does not let panic, AI boosters, or military tacticians dictate how and when important systems are put under autonomous control.

Worried About AI Voice Clone Scams? Create a Family Password

Your grandfather receives a call late at night from a person pretending to be you. The caller says that you are in jail or have been kidnapped and that they need money urgently to get you out of trouble. Perhaps they then bring on a fake police officer or kidnapper to heighten the tension. The money, of course, should be wired right away to an unfamiliar account at an unfamiliar bank. 

It’s a classic and common scam, and like many scams it relies on a scary, urgent scenario to override the victim’s common sense and make them more likely to send money. Now, scammers are reportedly experimenting with a way to further heighten that panic by playing a simulated recording of “your” voice. Fortunately, there’s an easy and old-school trick you can use to preempt the scammers: creating a shared verbal password with your family.

The ability to create audio deepfakes of people's voices using machine learning and just minutes of them speaking has become relatively cheap and easy to acquire. There are myriad websites that will let you make voice clones. Some will let you use a variety of celebrity voices to say anything you want, while others will let you upload a new person's voice to create a voice clone of anyone you have a recording of. Scammers have figured out that they can use this to clone the voices of regular people. Suddenly your relative isn't talking to someone who sounds like a complete stranger; they are hearing your own voice. This makes the scam much more concerning.

Voice generation scams aren't widespread yet, but they do seem to be happening. There have been news stories and even congressional testimony from people who have been the targets of voice impersonation scams, and voice cloning is also being used in political disinformation campaigns. It's impossible for us to know what kind of technology these scammers used, or whether they were just really good impersonators. But it is likely that such scams will grow more prevalent as the technology gets cheaper and more ubiquitous. For now, the novelty of these scams, and the use of machine learning and deepfakes (technologies that are raising concerns across many sectors of society), seems to be driving a lot of the coverage.

The family password is a decades-old, low tech solution to this modern high tech problem. 

The first step is to agree with your family on a password you can all remember and use. The most important thing is that it should be easy to remember in a panic, hard to forget, and not public information. You could use the name of a well-known person or object in your family, an inside joke, a family meme, or any word that you can all remember easily. Despite the name, this doesn't need to be limited to your family; it can be a chosen family, workplace, anarchist witch coven, etc. Any group of people with which you associate can benefit from having a password.

Then, when someone calls you or someone who trusts you (or emails or texts) with an urgent request for money (or iTunes gift cards), you simply ask them for the password. If they can't tell it to you, they might be a fake. You could of course further verify this with other questions, like "What is my cat's name?" or "When was the last time we saw each other?" These sorts of questions work even if you haven't previously set up a passphrase in your family or friend group. But keep in mind that people tend to forget basic things when they have experienced trauma or are in a panic. It might be helpful, especially for people with less robust memories, to write down the password in case you forget it. After all, it's not likely that the scammer will break into your house to find the family password.

These techniques can be useful against other scams which haven’t been invented yet, but which may come around as deepfakes become more prevalent, such as machine-generated video or photo avatars for “proof.” Or should you ever find yourself in a hackneyed sci-fi situation where there are two identical copies of your friend and you aren’t sure which one is the evil clone and which one is the original. 

[Image: the classic meme of Spider-Man pointing at another Spider-Man.]

Spider-Man hopes The Avengers haven't forgotten their secret password!

The added benefit of this technique is that it gives you a minute to step back, breathe, and engage in some critical thinking. Many scams of this nature rely on panic and on keeping you in your lower brain; by asking for the passphrase, you also give yourself a minute to think. Is your kid really in Mexico right now? Can you call them back at their phone number to be sure it's them?

So, go make a family password and a friend password to keep your family and friends from getting scammed by AI impostors (or evil clones).

The No AI Fraud Act Creates Way More Problems Than It Solves

Creators have reason to be wary of the generative AI future. For one thing, while GenAI can be a valuable tool for creativity, it may also be used to deceive the public and disrupt existing markets for creative labor. Performers, in particular, worry that AI-generated images and music will become deceptive substitutes for human models, actors, or musicians.

Existing laws offer multiple ways for performers to address this issue. In the U.S., a majority of states recognize a "right of publicity," meaning the right to control if and how your likeness is used for commercial purposes. A limited version of this right makes sense–you should be able to prevent a company from running an advertisement that falsely claims that you endorse its products–but the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that "evokes" a person's identity.

In addition, every state prohibits defamation, harmful false representations, and unfair competition, though the parameters may vary. These laws provide time-tested methods to mitigate economic and emotional harms from identity misuse while protecting online expression rights.

But some performers want more. They argue that your right to control use of your image shouldn't vary depending on what state you live in. They'd also like to be able to go after the companies that offer generative AI tools and/or host AI-generated "deceptive" content. Ordinary liability rules, including copyright, can't be used against a company that has simply provided a tool for others' expression. After all, we don't hold Adobe liable when someone uses Photoshop to suggest that a president can't read, or even for more serious deceptions. And Section 230 immunizes intermediaries from liability for defamatory content posted by users and, in some parts of the country, publicity rights violations as well. Again, that's a feature, not a bug; immunity means it's easier to stick up for users' speech, rather than taking down or preemptively blocking any user-generated content that might lead to litigation. It's a crucial protection not just for big players like Facebook and YouTube, but also for small sites, news outlets, email hosts, libraries, and many others.

Balancing these competing interests won’t be easy. Sadly, so far Congress isn’t trying very hard. Instead, it’s proposing “fixes” that will only create new problems.

Last fall, several Senators circulated a “discussion draft” bill, the NO FAKES Act. Professor Jennifer Rothman has an excellent analysis of the bill, including its most dangerous aspect: creating a new, and transferable, federal publicity right that would extend for 70 years past the death of the person whose image is purportedly replicated. As Rothman notes, under the law:

record companies get (and can enforce) rights to performers’ digital replicas, not just the performers themselves. This opens the door for record labels to cheaply create AI-generated performances, including by dead celebrities, and exploit this lucrative option over more costly performances by living humans, as discussed above.

In other words, rather than protecting performers in the long run, this would just make it easier for record labels (for example) to acquire voice rights that they can use to avoid paying human performers for decades to come.

NO FAKES hasn’t gotten much traction so far, in part because the Motion Picture Association hasn’t supported it. But now there’s a new proposal: the “No AI FRAUD Act.” Unfortunately, Congress is still getting it wrong.

First, the Act purports to target abuse of generative AI to misappropriate a person's image or voice, but the right it creates applies to an incredibly broad amount of digital content: any "likeness" and/or "voice replica" that is created or altered using digital technology, software, an algorithm, etc. There's not much that wouldn't fall into that category–from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. If it involves recording or portraying a human, it's probably covered. Even more absurdly, it characterizes any tool that has a primary purpose of producing digital depictions of particular people as a "personalized cloning service." Our iPhones are many things, but even Tim Cook would likely be surprised to learn he's selling a "cloning service."

Second, it characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs. Section 230 immunity does not apply to federal IP claims, so performers (and anyone else who falls under the statute) will have free rein to sue anyone that hosts or transmits AI-generated content.

That, in turn, is bad news for almost everyone–including performers. If this law were enacted, all kinds of platforms and services could very well fear reprisal simply for hosting images or depictions of people—or any of the rest of the broad types of "likenesses" this law covers. Keep in mind that many of these services won't be in a good position to know whether AI was involved in the generation of a video clip, song, etc., nor will they have the resources to pay lawyers to fight back against improper claims. The best way for them to avoid that liability would be to aggressively filter user-generated content, or to refuse to support it at all.

Third, while the term of the new right is limited to ten years after death (still quite a long time), it’s combined with very confusing language suggesting that the right could extend well beyond that date if the heirs so choose. Notably, the legislation doesn’t preempt existing state publicity rights laws, so the terms could vary even more wildly depending on where the individual (or their heirs) reside.

Lastly, while the defenders of the bill incorrectly claim it will protect free expression, the text of the bill suggests otherwise. True, the bill recognizes a "First Amendment defense." But every law that affects speech is limited by the First Amendment–that's how the Constitution works. And the bill actually tries to limit those important First Amendment protections by requiring courts to balance any First Amendment interests "against the intellectual property interest in the voice or likeness." That balancing test must consider whether the use is commercial, necessary for a "primary expressive purpose," and harms the individual's licensing market. This seems to be an effort to import a cramped version of copyright's fair use doctrine as a substitute for the rigorous scrutiny and analysis the First Amendment (and even the Copyright Act) requires.

We could go on, and we will if Congress decides to take this bill seriously. But it shouldn't. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voices, it should take a precise, careful, and practical approach that avoids potential collateral damage to free expression, competition, and innovation. The No AI FRAUD Act comes nowhere near the mark.

AI Watermarking Won't Curb Disinformation

Generative AI allows people to produce piles upon piles of images and words very quickly. It would be nice if there were some way to reliably distinguish AI-generated content from human-generated content. It would help people avoid endlessly arguing with bots online, or believing what a fake image purports to show. One common proposal is that big companies should incorporate watermarks into the outputs of their AIs. For instance, this could involve taking an image and subtly changing many pixels in a way that’s undetectable to the eye but detectable to a computer program. Or it could involve swapping words for synonyms in a predictable way so that the meaning is unchanged, but a program could readily determine the text was generated by an AI.

Unfortunately, watermarking schemes are unlikely to work. So far most have proven easy to remove, and it’s likely that future schemes will have similar problems.

One kind of watermark is already common for digital images. Stock image sites often overlay text on an image that renders it mostly useless for publication. This kind of watermark is visible and is slightly challenging to remove since it requires some photo editing skills.

Images can also have metadata attached by a camera or image processing program, including information like the date, time, and location a photograph was taken, the camera settings, or the creator of an image. This metadata is unobtrusive but can be readily viewed with common programs. It’s also easily removed from a file. For instance, social media sites often automatically remove metadata when people upload images, both to prevent people from accidentally revealing their location and simply to save storage space.
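As a rough illustration of how little effort this takes, here is a minimal sketch, assuming the Pillow imaging library is installed and using hypothetical file names, that copies only the pixel data into a new file and leaves the metadata behind:

from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    # Copy only the pixel data into a fresh image; EXIF tags such as the
    # date, GPS location, and camera model are simply not carried over.
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

# Hypothetical file names, for illustration only.
strip_metadata("photo.jpg", "photo_no_metadata.jpg")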

A useful watermark for AI images would need two properties: 

  • It would need to continue to be detectable after an image is cropped, rotated, or edited in various ways (robustness). 
  • It couldn’t be conspicuous like the watermark on stock image samples, because the resulting images wouldn’t be of much use to anybody.

One simple technique is to manipulate the least perceptible bits of an image. For instance, to a human viewer these two squares are the same shade:

But to a computer it's obvious that they are different by a single bit: #93c47d vs. #93c57d. Each pixel of an image is represented by a certain number of bits, and some of them make more of a perceptual difference than others. By manipulating those least-important bits, a watermarking program can create a pattern that viewers won't see, but that a watermark-detecting program will. If that pattern repeats across the whole image, the watermark is even robust to cropping. However, this method has one clear flaw: rotating or resizing the image is likely to accidentally destroy the watermark.
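To make that concrete, here is a toy sketch (purely illustrative, not any vendor's actual scheme) that confirms the two shades differ in exactly one bit and shows how a single watermark bit could be hidden in a pixel's green channel:

# Purely illustrative; not a real watermarking product.
a, b = 0x93C47D, 0x93C57D
print(bin(a ^ b))  # 0b100000000 -> the two colors differ in exactly one bit

def embed_bit(rgb, bit):
    # Overwrite the least significant bit of the green channel with `bit`.
    r, g, b_ = rgb
    return (r, (g & ~1) | bit, b_)

print(embed_bit((0x93, 0xC4, 0x7D), 1))  # (147, 197, 125), i.e. #93c57d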

There are more sophisticated watermarking proposals that are robust to a wider variety of common edits. However, proposals for AI watermarking must pass a tougher challenge. They must be robust against someone who knows about the watermark and wants to eliminate it. The person who wants to remove a watermark isn’t limited to common edits, but can directly manipulate the image file. For instance, if a watermark is encoded in the least important bits of an image, someone could remove it by simply setting all the least important bits to 0, or to a random value (1 or 0), or to a value automatically predicted based on neighboring pixels. Just like adding a watermark, removing a watermark this way gives an image that looks basically identical to the original, at least to a human eye.
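A rough sketch of that attack, again assuming the Pillow library and a hypothetical watermarked file, takes only a few lines: overwrite every least significant bit and save the visually identical result.

from PIL import Image

def strip_lsb_watermark(src_path: str, dst_path: str) -> None:
    # Zero the least significant bit of every channel of every pixel,
    # wiping out any watermark hidden there while leaving the image
    # visually unchanged to a human eye.
    with Image.open(src_path) as img:
        img = img.convert("RGB")
        pixels = [(r & ~1, g & ~1, b & ~1) for (r, g, b) in img.getdata()]
        clean = Image.new("RGB", img.size)
        clean.putdata(pixels)
        clean.save(dst_path)

# Hypothetical file names, for illustration only.
strip_lsb_watermark("watermarked.png", "scrubbed.png")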

Coming at the problem from the opposite direction, some companies are working on ways to prove that an image came from a camera (“content authenticity”). Rather than marking AI generated images, they add metadata to camera-generated images, and use cryptographic signatures to prove the metadata is genuine. This approach is more workable than watermarking AI generated images, since there’s no incentive to remove the mark. In fact, there’s the opposite incentive: publishers would want to keep this metadata around because it helps establish that their images are “real.” But it’s still a fiendishly complicated scheme, since the chain of verifiability has to be preserved through all software used to edit photos. And most cameras will never produce this metadata, meaning that its absence can’t be used to prove a photograph is fake.

Comparing watermarking vs content authenticity, watermarking aims to identify or mark (some) fake images; content authenticity aims to identify or mark (some) real images. Neither approach is comprehensive, since most of the images on the Internet will have neither a watermark nor content authenticity metadata.

                        Watermarking    Content authenticity
AI images               Marked          Unmarked
(Some) camera images    Unmarked        Marked
Everything else         Unmarked        Unmarked

Text-based Watermarks

The watermarking problem is even harder for text-based generative AI. Similar techniques can be devised. For instance, an AI could boost the probability of certain words, giving itself a subtle textual style that would go unnoticed most of the time, but could be recognized by a program with access to the list of words. This would effectively be a computer version of determining the authorship of the twelve disputed essays in The Federalist Papers by analyzing Madison’s and Hamilton’s habitual word choices.
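As a toy illustration of the idea (not a description of any deployed system), a detector that knows the secret list of boosted words could simply measure how often they appear; the word list below is entirely made up:

import re

# Hypothetical "boosted" word list; a real scheme would keep this secret
# and choose its words far more subtly.
GREEN_LIST = {"moreover", "notably", "consequently", "albeit"}

def green_fraction(text: str) -> float:
    # Fraction of words drawn from the boosted list. Text produced by a
    # generator that favors these words should score noticeably higher
    # than ordinary human writing.
    words = re.findall(r"[a-z']+", text.lower())
    return sum(w in GREEN_LIST for w in words) / len(words) if words else 0.0

print(green_fraction("Moreover, the results were notably better, albeit slowly."))

A few rounds of synonym substitution would drive that score right back down, which is exactly the weakness described next.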

But creating an indelible textual watermark is a much harder task than telling Hamilton from Madison, since the watermark must be robust to someone modifying the text trying to remove it. Any watermark based on word choice is likely to be defeated by some amount of rewording. That rewording could even be performed by an alternate AI, perhaps one that is less sophisticated than the one that generated the original text, but not subject to a watermarking requirement.

There’s also a problem of whether the tools to detect watermarked text are publicly available or are secret. Making detection tools publicly available gives an advantage to those who want to remove watermarking, because they can repeatedly edit their text or image until the detection tool gives an all clear. But keeping them a secret makes them dramatically less useful, because every detection request must be sent to whatever company produced the watermarking. That would potentially require people to share private communication if they wanted to check for a watermark. And it would hinder attempts by social media companies to automatically label AI-generated content at scale, since they’d have to run every post past the big AI companies.

Since text output from current AIs isn’t watermarked, services like GPTZero and TurnItIn have popped up, claiming to be able to detect AI-generated content anyhow. These detection tools are so inaccurate as to be dangerous, and have already led to false charges of plagiarism.

Lastly, if AI watermarking is to prevent disinformation campaigns sponsored by states, it’s important to keep in mind that those states can readily develop modern generative AI, and probably will in the near future. A state-sponsored disinformation campaign is unlikely to be so polite as to watermark its output.

Watermarking of AI generated content is an easy-sounding fix for the thorny problem of disinformation. And watermarks may be useful in understanding reshared content where there is no deceptive intent. But research into adversarial watermarking for AI is just beginning, and while there’s no strong reason to believe it will succeed, there are some good reasons to believe it will ultimately fail.

No Robots(.txt): How to Ask ChatGPT and Google Bard to Not Use Your Website for Training

Both OpenAI and Google have released guidance for website owners who do not want the two companies using the content of their sites to train the companies' large language models (LLMs). We've long been supporters of the right to scrape websites—the process of using a computer to load and read pages of a website for later analysis—as a tool for research, journalism, and archiving. We believe this practice is still lawful when collecting training data for generative AI, but the question of whether something should be illegal is different from whether it may be considered rude, gauche, or unpleasant. As norms continue to develop around what kinds of scraping and what uses of scraped data are considered acceptable, it is useful to have a tool for website operators to automatically signal their preference to crawlers. Asking OpenAI and Google (and anyone else who chooses to honor the preference) not to include scrapes of your site in their models is an easy process, as long as you can access your site's file structure.

We've talked before about how these models use art for training, and the general idea and process is the same for text. Researchers have long used collections of data scraped from the internet for studies of censorship, malware, sociology, language, and other applications, including generative AI. Today, both academic and for-profit researchers collect training data for AI using bots that go out searching all over the web and “scrape up” or store the content of each site they come across. This might be used to create purely text-based tools, or a system might collect images that may be associated with certain text and try to glean connections between the words and the images during training. The end result, at least currently, is the chatbots we've seen in the form of Google Bard and ChatGPT.


If you do not want your website's content used for this training, you can ask the bots deployed by Google and OpenAI to skip over your site. Keep in mind that this only applies to future scraping. If Google or OpenAI already have data from your site, they will not remove it. It also doesn't stop the countless other companies out there training their own LLMs, and it doesn't affect anything you've posted elsewhere, like on social networks or forums. It also wouldn't stop models that are trained on large data sets of scraped websites that aren't affiliated with a specific company. For example, OpenAI's GPT-3 and Meta's LLaMa were both trained using data mostly collected from Common Crawl, an open source archive of large portions of the internet that is routinely used for important research. You can block Common Crawl, but doing so blocks the web crawler from using your data in all its data sets, many of which have nothing to do with AI.

There's no technical requirement that a bot obey your requests. Currently, only Google and OpenAI have announced that this is the way to opt out, so other AI companies may not care about this at all, or may add their own directions for opting out. This method also doesn't block any other types of scraping that are used for research or other means, so if you're generally in favor of scraping but uneasy with the use of your website's content in a corporation's AI training set, this is one step you can take.

Before we get to the how, we need to explain what exactly you'll be editing to do this.

What's a Robots.txt?

In order to ask these companies not to scrape your site, you need to edit (or create) a file located on your website called "robots.txt." A robots.txt is a set of instructions for bots and web crawlers. Up until this point, it was mostly used to provide useful information for search engines as their bots scraped the web. If website owners want to ask a specific search engine or other bot to not scan their site, they can enter that in their robots.txt file. Bots can always choose to ignore this, but many crawling services respect the request.

This might all sound rather technical, but it's really nothing more than a small text file located in the root folder of your site, like "https://www.example.com/robots.txt." Anyone can see this file on any website. For example, here's The New York Times' robots.txt, which currently blocks both ChatGPT and Bard. 
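If you're curious whether a site already blocks these crawlers, Python's built-in urllib.robotparser can read a robots.txt file and answer for a specific user agent; "example.com" below is just a placeholder:

from urllib.robotparser import RobotFileParser

# "example.com" is a placeholder; point this at any site's robots.txt.
rp = RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

for agent in ("GPTBot", "Google-Extended"):
    allowed = rp.can_fetch(agent, "https://www.example.com/")
    print(agent, "allowed" if allowed else "blocked")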

If you run your own website, you should have some way to access the file structure of that site, either through your hosting provider's web portal or FTP. You may need to comb through your provider's documentation for help figuring out how to access this folder. In most cases, your site will already have a robots.txt created, even if it's blank, but if you do need to create a file, you can do so with any plain text editor. Google has guidance for doing so here.

EFF will not be using these flags because we believe scraping is a powerful tool for research and access to information.

What to Include In Your Robots.txt to Block ChatGPT and Google Bard

With all that out of the way, here's what to include in your site's robots.txt file if you do not want ChatGPT and Google to use the contents of your site to train their generative AI models. If you want to cover the entirety of your site, add these lines to your robots.txt file:

ChatGPT

User-agent: GPTBot

Disallow: /

Google Bard

User-agent: Google-Extended

Disallow: /

You can also narrow this down to block access to only certain folders on your site. Maybe you don't mind if most of the data on your site is used for training, for example, but you have a blog that you use as a journal. You can opt out specific folders: if the blog is located at yoursite.com/blog, you'd use this:

ChatGPT

User-agent: GPTBot

Disallow: /blog

Google Bard

User-agent: Google-Extended

Disallow: /blog
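Both sets of rules can live in the same robots.txt file. A combined file that keeps both crawlers out of the hypothetical /blog folder, while leaving the rest of the site alone, would simply list them one after the other:

User-agent: GPTBot
Disallow: /blog

User-agent: Google-Extended
Disallow: /blog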

As mentioned above, we at EFF will not be using these flags because we believe scraping is a powerful tool for research and access to information; we want the information we're providing to spread far and wide and to be represented in the outputs and answers provided by LLMs. Of course, individual website owners have different views for their blogs, portfolios, or whatever else they use their websites for. We're in favor of means for people to express their preferences, and it would ease many minds if other companies with similar AI products, like Anthropic, Amazon, and countless others, announced that they'd respect similar requests.

To Best Serve Students, Schools Shouldn’t Try to Block Generative AI, or Use Faulty AI Detection Tools

Generative AI gained widespread attention earlier this year, but one group has had to reckon with it more quickly than most: educators. Teachers and school administrators have struggled with two big questions: should the use of generative AI be banned? And should a school implement new tools to detect when students have used generative AI? EFF believes the answer to both of these questions is no.

AI Detection Tools Harm Students

For decades, students have had to defend themselves from an increasing variety of invasive technology in schools—from disciplinary tech like student monitoring software, remote proctoring tools, and comprehensive learning management systems, to surveillance tech like cameras, face recognition, and other biometrics. “AI detection” software is a new generation of inaccurate and dangerous tech that’s being added to the mix.

Tools such as GPTZero and TurnItIn that use AI detection claim that they can determine (with varying levels of accuracy) whether a student's writing was likely to have been created by a generative AI tool. But these detection tools are so inaccurate as to be dangerous, and have already led to false charges of plagiarism. As with remote proctoring, this software looks for signals that may not indicate cheating at all. For example, they are more likely to flag writing as AI-created when the word choice is fairly predictable and the sentences are less complex—and as a result, research has already shown that false positives are more frequent for some groups of students, such as non-native speakers.


There is often no source document to prove one way or another whether a student used AI in writing. As AI writing tools improve and are able to reflect all the variations of human writing, the possibility that an opposing tool will be able to detect whether AI was involved in writing with any kind of worthwhile accuracy will likely diminish. If the past is prologue, then some schools may combat the growing availability of AI for writing with greater surveillance and increasingly inaccurate disciplinary charges. Students, administrators, and teachers should fight back against this. 

If you are a student wrongly accused of using generative AI without authorization for your school work, the Washington Post has a good primer for how to respond. To protect yourself from accusations, you may also want to save your drafts, or use a document management system that does so automatically.

Bans on Generative AI Access in Schools Hurt Students

Before AI detection tools were more widely available, some of the largest districts in the country, including New York City Public Schools and Los Angeles Unified, had banned access to large language model AI tools like ChatGPT outright due to cheating fears. Thankfully, many schools have since done an about-face and are beginning to see the value in teaching about them instead. New York City Public Schools lifted its ban after only four months, and the number of schools with a policy and curriculum that includes these tools is growing. New York City Public Schools' Chancellor wrote that the school system "will encourage and support our educators and students as they learn about and explore this game-changing technology while also creating a repository and community to share their findings across our schools." This is the correct approach, and one that all schools should take.

This is not an endorsement of generative AI tools, as they have plenty of problems, but outright bans only stop students from using them while physically in school—where teachers could actually explain how they work and their pros and cons—and obviously won’t stop their use the majority of the time. Instead, they will only stop students who don’t have access to the internet or a personal device outside of school from using them. 

These bans are not surprising. There is a long history of school administrators and teachers blocking the use of new technology, especially around the internet. For decades after they became accessible to the average student, educators argued about whether students should be allowed to use calculators in the classroom. Schools have banned search engines; they have banned Wikipedia—all of which have a potentially useful place in education, and one that teachers are well positioned to explain the nuances of. If a tool is effective at creating shortcuts for students, then teachers and administrators should consider emphasizing how it works, what it can do, and, importantly, what it cannot do (and, in the case of many online tools, what data it may collect). Hopefully, schools will take a different trajectory with generative AI technology.

Artificial intelligence will likely impact students throughout their lives. The school environment  presents a good opportunity to help them understand some of the benefits and flaws of such tools. Instead of demonizing it, schools should help students by teaching them how this potentially useful technology works and when it’s appropriate to use it. 

EFF to Copyright Office: Copyright Is Indeed a Hammer, But Don’t Be Too Hasty to Nail Generative AI


Generative AI has sparked a great deal of hype, fear, and speculation. Courts are just beginning to analyze how traditional copyright laws apply to the creation and use of these technologies. Into this breach has stepped the United States Copyright Office with a call for comments on the interplay between copyright law and generative AI. 

Because copyright law carries draconian penalties and grants the power to swiftly take speech offline without judicial review, it is particularly important not to hastily expand its reach. And because of the imbalance in bargaining power between creators and the publishing gatekeepers with the means to commercialize their work in mass markets, trying to help creators by giving them new rights is, as EFF advisor Cory Doctorow has written, like trying to help a bullied kid by giving them more lunch money for the bully to take. Or, in the spirit of the season, like giving someone a blood transfusion and sending them home to an insatiable vampire.

In comments to the United States Copyright Office, we explained that copyright is not a helpful framework for addressing concerns about automation reducing the value of labor, about misinformation generated by AI, about the privacy of sensitive personal information ingested into a data set, or about the desire of content industry players to monopolize any expression that is reminiscent of or stylistically similar to the work of an artist whose rights they own. We explained that it would be harmful to expression to grant such a monopoly – whether through changes to copyright or a new federal right.

We believe that existing copyright law is sufficiently flexible to answer questions about generative AI, and that it is premature to legislate without knowing how courts will apply existing law or whether the hype, fears, and speculation surrounding generative AI will come to pass.
