
It's Copyright Week 2025: Join Us in the Fight for Better Copyright Law and Policy

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

One of the unintended consequences of the internet is that more of us than ever are aware of how much of our lives is affected by copyright. People see their favorite YouTuber’s video get removed or re-edited due to copyright. People know they can’t tinker with or fix their devices. And people have realized, and are angry about, the fact that they don’t own much of the media they have paid for.  

All of this is to say that copyright is no longer—if it ever was—a niche concern of certain industries. As corporations have pushed to expand copyright, they have made it everyone’s problem. And that means they don’t get to make the law in secret anymore. 

Twelve years ago, a diverse coalition of Internet users, non-profit groups, and Internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), bills that would have forced Internet companies to blacklist and block websites accused of hosting copyright infringing content. These were bills that would have made censorship very easy, all in the name of copyright protection. 

As people raise more and more concerns about the major technology companies that control our online lives, it’s important not to fall into the trap of thinking that copyright will save us. As SOPA/PIPA reminds us: expanding copyright serves the gatekeepers, not the users.  

We continue to fight for a version of copyright that does what it is supposed to. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and advocate for a set of principles of copyright law. This year's issues are: 

  • Monday: Copyright Policy Should Be Made in the Open With Input From Everyone: Copyright is not a niche concern. It affects everyone's experience online; therefore, laws and policy should be made in the open, with users' concerns represented and taken into account. 
  • Tuesday: Copyright Enforcement as a Tool of Censorship: Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.  
  • Wednesday: Device and Digital Ownership: As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.  
  • Thursday: The Preservation and Sharing of Information and Culture: Copyright often blocks the preservation and sharing of information and culture, traditionally in the public interest. Copyright law and policy should encourage and not discourage the saving and sharing of information. 
  • Friday: Free Expression and Fair Use: Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.  

Every day this week, we’ll be sharing links to blog posts on these topics at https://www.eff.org/copyrightweek. 


Platforms Systematically Removed a User Because He Made "Most Wanted CEO" Playing Cards

By Jason Kelley
January 14, 2025 at 12:33

On December 14, James Harr, the owner of an online store called ComradeWorkwear, announced on social media that he planned to sell a deck of “Most Wanted CEO” playing cards, satirizing the infamous “Most-wanted Iraqi playing cards” introduced by the U.S. Defense Intelligence Agency in 2003. Per the ComradeWorkwear website, the Most Wanted CEO cards would offer “a critique of the capitalist machine that sacrifices people and planet for profit,” and “Unmask the oligarchs, CEOs, and profiteers who rule our world...From real estate moguls to weapons manufacturers.”  

But within a day of posting his plans for the card deck to his combined 100,000 followers on Instagram and TikTok, the New York Post ran a front page story on Harr, calling the cards “disturbing.” Less than 5 hours later, officers from the New York City Police Department came to Harr's door to interview him. They gave no indication he had done anything illegal or would receive any further scrutiny, but the next day the New York police commissioner held the New York Post story up during a press conference after announcing charges against Luigi Mangione, the alleged assassin of UnitedHealth Group CEO Brian Thompson. Shortly thereafter, platforms from TikTok to Shopify disabled both the company’s accounts and Harr’s personal accounts, simply because he used the moment to highlight what he saw as the harms that large corporations and their CEOs cause.

Even benign posts, such as one about Mangione’s astrological sign, were deleted from Threads.

Harr was not alone. After the assassination, thousands of people took to social media to express their negative experiences with the healthcare industry, speculate about who was behind the murder, and show their sympathy for either the victim or the shooter—if social media platforms allowed them to do so. Many users reported having their accounts banned and content removed after sharing comments about Luigi Mangione, Thompson's alleged assassin. TikTok, for example, reportedly removed comments that simply said, "Free Luigi." Even seemingly benign content, such as a post about Mangione's astrological sign or a video montage of him set to music, was deleted from Threads, according to users. 

The Most Wanted CEO playing cards did not reference Mangione, and the cards—which have not been released—would not include personal information about any CEO. In his initial posts about the cards, Harr said he planned to include QR codes with more information about each company and, in his view, what dangers the companies present. Each suit would represent a different industry, and the back of each card would include a generic shooting-range style silhouette. As Harr put it in his now-removed video, the cards would include "the person, what they're a part of, and a QR code that goes to dedicated pages that explain why they're evil. So you could be like, 'Why is the CEO of Walmart evil? Why is the CEO of Northrop Grumman evil?'" 

A design for the Most Wanted CEO playing cards

Many have riffed on the military’s tradition of using playing cards to help troops learn about the enemy. You can currently find “Gaza’s Most Wanted” playing cards on Instagram, purportedly depicting “leaders and commanders of various groups such as the IRGC, Hezbollah, Hamas, Houthis, and numerous leaders within Iran-backed militias.” A Shopify store selling “Covid’s Most Wanted” playing cards, displaying figures like Bill Gates and Anthony Fauci, and including QR codes linking to a website “where all the crimes and evidence are listed,” is available as of this writing. Hero Decks, which sells novelty playing cards generally showing sports figures, even produced a deck of “Wall Street Most Wanted” cards in 2003 (popular enough to have a second edition). 

A Shopify store selling “Covid’s Most Wanted” playing cards is available as of this writing.

As we’ve said many times, content moderation at scale, whether human or automated, is impossible to do perfectly and nearly impossible to do well. Companies often get it wrong and remove content or whole accounts that those affected by the content would agree do not violate the platform’s terms of service or community guidelines. Conversely, they allow speech that could arguably be seen to violate those terms and guidelines. That has been especially true for speech related to divisive topics and during heated national discussions. These mistakes often remove important voices, perspectives, and context, regularly impacting not just everyday users but journalists, human rights defenders, artists, sex worker advocacy groups, LGBTQ+ advocates, pro-Palestinian activists, and political groups. In some instances, this even harms people's livelihoods. 

Instagram disabled the ComradeWorkwear account for “not following community standards,” with no further information provided. Harr’s personal account was also banned. Meta has a policy against the "glorification" of dangerous organizations and people, which it defines as "legitimizing or defending the violent or hateful acts of a designated entity by claiming that those acts have a moral, political, logical or other justification that makes them acceptable or reasonable.” Meta’s Oversight Board has overturned multiple moderation decisions by the company regarding its application of this policy. While Harr had posted to Instagram that “the CEO must die” after Thompson’s assassination, he included an explanation that, "When we say the ceo must die, we mean the structure of capitalism must be broken.” (Compare this to a series of Instagram story posts from musician Ethel Cain, whose account is still available, which used the hashtag #KillMoreCEOs, for one of many examples of how moderation affects some people and not others.) 

TikTok reported that Harr violated the platform's community guidelines, with no additional information. The platform has a policy against "promoting (including any praise, celebration, or sharing of manifestos) or providing material support" to violent extremists or people who cause serial or mass violence. TikTok gave Harr no opportunity for appeal, and continued to remove additional accounts Harr created solely to update his followers on his life. TikTok did not point to any specific piece of content that violated its guidelines. 

These voices shouldn’t be silenced into submission simply for drawing attention to the influence that platforms have.

On December 20, PayPal informed Harr it could no longer continue processing payments for ComradeWorkwear, with no information about why. Shopify informed Harr that his store was selling “offensive content,” and his Shopify and Apple Pay accounts would both be disabled. In a follow-up email, Shopify told Harr the decision to close his account “was made by our banking partners who power the payment gateway.”  

Harr's situation is not unique. Financial and social media platforms have an enormous amount of control over our online expression, and we've long been critical of their over-moderation, uneven enforcement, lack of transparency, and failure to offer reasonable appeals. This is why EFF co-created The Santa Clara Principles on transparency and accountability in content moderation, along with a broad coalition of organizations, advocates, and academic experts. These platforms have the resources to set the standard for content moderation, but clearly don't apply their moderation evenly, and in many instances, aren't even doing the basics—like offering clear notices and opportunities for appeal. 

Harr was one of many who expressed frustration online with the growing power of corporations. These voices shouldn’t be silenced into submission simply for drawing attention to the influence that they have. These are exactly the kinds of actions that Harr intended to highlight. If the Most Wanted CEO deck is ever released, it shouldn’t be a surprise for the CEOs of these platforms to find themselves in the lineup.  

While the Court Fights Over AI and Copyright Continue, Congress and States Focus On Digital Replicas: 2024 in Review

December 27, 2024 at 13:29

The phrase “move fast and break things” carries pretty negative connotations in these days of (Big) techlash. So it’s surprising that state and federal policymakers are doing just that with the latest big issue in tech and the public consciousness: generative AI, or more specifically its uses to generate deepfakes.

Creators of all kinds are expressing a lot of anxiety around the use of generative artificial intelligence, some of it justified. The anxiety, combined with some people’s understandable sense of frustration that their works were used to develop a technology that they fear could displace them, has led to multiple lawsuits.

But while the courts sort it out, legislators are responding to heavy pressure to do something. And it seems their highest priority is to give new or expanded rights to protect celebrity personas–living or dead–and the many people and corporations that profit from them.

The broadest "fix" would be a federal law, and we've seen several proposals this year. The two most prominent are NO AI FRAUD (in the House of Representatives) and NO FAKES (in the Senate). The first, introduced in January 2024, purports to target abuse of generative AI to misappropriate a person's image or voice, but the right it creates applies to an incredibly broad range of digital content: any "likeness" and/or "voice replica" that is created or altered using digital technology, software, an algorithm, etc. There's not much that wouldn't fall into that category—from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. It also characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs because Section 230 immunity does not apply to federal IP claims. NO FAKES, introduced in April, is not significantly different.

There’s a host of problems with these bills, and you can read more about them here and here. 

A core problem is that these bills are modeled on the broadest state laws recognizing a right of publicity. A limited version of this right makes sense—you should be able to prevent a company from running an advertisement that falsely claims that you endorse its products—but the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny,”) or even a cartoonish robot dressed like a celebrity. It’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. 

And states are taking swift action to further expand publicity rights. Take this year's digital replica law in Tennessee, called the ELVIS Act because of course it is. Tennessee already gave celebrities (and their heirs) a property right in their name, photograph, or likeness. The new law extends that right to voices, expands the risk of liability to include anyone who distributes a likeness without permission, and limits some speech-protective exceptions. 

Across the country, California couldn't let Tennessee win the race for most restrictive/protective rules for famous people (and their heirs). So it passed AB 1836, creating liability for anyone who uses a deceased personality's name, voice, signature, photograph, or likeness, in any manner, without consent. There are a number of exceptions, which is better than nothing, but those exceptions are pretty confusing for people who don't have lawyers to help sort them out.

These state laws are a done deal, so we’ll just have to see how they play out. At the federal level, however, we still have a chance to steer policymakers in the right direction. 

We get it–everyone should be able to prevent unfair and deceptive commercial exploitation of their personas. But expanded property rights are not the way to do it. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voice, it should take a precise, careful and practical approach that avoids potential collateral damage to free expression, competition, and innovation. 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

EFF in the Press: 2024 in Review

By Josh Richman
December 23, 2024 at 11:08

EFF’s attorneys, activists, and technologists were media rockstars in 2024, informing the public about important issues that affect privacy, free speech, and innovation for people around the world. 

Perhaps the single most exciting media hit for EFF in 2024 was “Secrets in Your Data,” the NOVA PBS documentary episode exploring “what happens to all the data we’re shedding and explores the latest efforts to maximize benefits – without compromising personal privacy.” EFFers Hayley Tsukayama, Eva Galperin, and Cory Doctorow were among those interviewed.

One big-splash story in January demonstrated just how in-demand EFF can be when news breaks. Amazon's Ring home doorbell unit announced that it would disable its Request For Assistance tool, the program that had let police seek footage from users on a voluntary basis – an issue on which EFF, and Matthew Guariglia in particular, have done extensive work. Matthew was quoted in Bloomberg, the Associated Press, CNN, The Washington Post, The Verge, The Guardian, TechCrunch, WIRED, Ars Technica, The Register, TechSpot, The Focus, American Wire News, and the Los Angeles Business Journal. The Bloomberg, AP, and CNN stories in turn were picked up by scores of media outlets across the country and around the world. Matthew also did interviews with local television stations in New York City, Oklahoma City, Allentown, PA, San Antonio, TX, and Norfolk, VA. Matthew and Jason Kelley were quoted in Reason, and EFF was cited in reports by the New York Times, Engadget, The Messenger, the Washington Examiner, Silicon UK, Inc., the Daily Mail (UK), AfroTech, and KFSN ABC30 in Fresno, CA, as well as in an editorial in the Times Union of Albany, NY.

Other big stories for us this year – with similar numbers of EFF media mentions – included congressional debates over banning TikTok and censoring the internet in the name of protecting children, state age verification laws, Google’s backpedaling on its Privacy Sandbox promises, the Supreme Court’s Netchoice and Murthy rulings, the arrest of Telegram’s CEO, and X’s tangles with Australia and Brazil.

EFF is often cited in tech-oriented media, with 34 mentions this year in Ars Technica, 32 mentions in The Register, 23 mentions in WIRED, 23 mentions in The Verge, 20 mentions in TechCrunch, 10 mentions in The Record from Recorded Future, nine mentions in 404 Media, and six mentions in Gizmodo. We’re also all over the legal media, with 29 mentions in Law360 and 15 mentions in Bloomberg Law. 

But we’re also a big presence in major U.S. mainstream outlets, cited 38 times this year in the Washington Post, 11 times in the New York Times, 11 times in NBC News, 10 times in the Associated Press, 10 times in Reuters, 10 times in USA Today, and nine times in CNN. And we’re being heard by international audiences, with mentions in outlets including Germany’s Heise and Deutsche Welle, Canada’s Globe & Mail and Canadian Broadcasting Corp., Australia’s Sydney Morning Herald and Australian Broadcasting Corp., the United Kingdom’s Telegraph and Silicon UK, and many more. 

We're being heard in local communities too. For example, we talked about the rapid encroachment of police surveillance with media outlets in Sarasota, FL; the San Francisco Bay Area; Baton Rouge, LA; Columbus, OH; Grand Rapids, MI; San Diego, CA; Wichita, KS; Buffalo, NY; Seattle, WA; Chicago, IL; Nashville, TN; and Sacramento, CA, among other localities. 

EFFers also spoke their minds directly in op-eds placed far and wide, including: 

And if you’re seeking some informative listening during the holidays, EFFers joined a slew of podcasts in 2024, including: 

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.

Saving the Internet in Europe: How EFF Works in Europe

December 16, 2024 at 11:32

This post is part one in a series of posts about EFF’s work in Europe.

EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. While our work has taken us to far corners of the globe, in recent years we have worked to expand our efforts in Europe, building up a policy team with key expertise in the region, and bringing our experience in advocacy and technology to the European fight for digital rights.

In this blog post series, we will introduce you to the various players involved in that fight, share how we work in Europe, and how what happens in Europe can affect digital rights across the globe.

Why EFF Works in Europe

European lawmakers have been highly active in proposing laws to regulate online services and emerging technologies. And these laws have the potential to impact the whole world. As such, we have long recognized the importance of engaging with organizations and lawmakers across Europe. In 2007, EFF became a member of the European Digital Rights Initiative (EDRi), a collective of NGOs, experts, advocates and academics that have for two decades worked to advance digital rights throughout Europe. From the early days of the movement, we fought back against legislation threatening user privacy in Germany, free expression in the UK, and the right to innovation across the continent.

Over the years, we have continued collaborations with EDRi as well as other coalitions including IFEX, the international freedom of expression network, Reclaim Your Face, and Protect Not Surveil. In our EU policy work, we have advocated for fundamental principles like transparency, openness, and information self-determination. We emphasized that legislative acts should never come at the expense of protections that have served the internet well: Preserve what works. Fix what is broken. And EFF has made a real difference: We have ensured that recent internet regulation bills don’t turn social networks into censorship tools and safeguarded users’ right to private conversations. We also helped guide new fairness rules in digital markets to focus on what is really important: breaking the chokehold of major platforms over the internet.

Recognizing the internet’s global reach, we have also stressed that lawmakers must consider the global impact of regulation and enforcement, particularly effects on vulnerable groups and underserved communities. As part of this work, we facilitate a global alliance of civil society organizations representing diverse communities across the world to ensure that non-European voices are heard in Brussels’ policy debates.

Our Teams

Today, we have a robust policy team that works to influence policymakers in Europe. Led by International Policy Director Christoph Schmon and supported by Assistant Director of EU Policy Svea Windwehr, both of whom are based in Europe, the team brings unique expertise in European digital policymaking and fundamental rights online. They engage with lawmakers, provide policy expertise, and coordinate EFF's work in Europe.

But legislative work is only one piece of the puzzle, and as a collaborative organization, EFF pulls expertise from various teams to shape policy, build capacity, and campaign for a better digital future. Our teams engage with the press and the public through comprehensive analysis of digital rights issues, educational guides, activist workshops, press briefings, and more. They are active in broad coalitions across the EU and the UK, as well as in East and Southeastern Europe.

Our work does not only span EU digital policy issues. We have been active in the UK advocating for user rights in the context of the Online Safety Act, and also work on issues facing users in the Balkans or accession countries. For instance, we recently collaborated with Digital Security Lab Ukraine on a workshop on content moderation held in Warsaw, and participated in the Bosnia and Herzegovina Internet Governance Forum. We are also an active member of the High-Level Group of Experts for Resilience Building in Eastern Europe, tasked with advising on online regulation in Georgia, Moldova, and Ukraine.

EFF on Stage

In addition to all of the behind-the-scenes work that we do, EFF regularly showcases our work on European stages to share our mission and message. You can find us at conferences like re:publica, CPDP, Chaos Communication Congress, or Freedom not Fear, and at local events like regional Internet Governance Forums. For instance, last year Director for International Freedom of Expression Jillian C. York gave a talk with Svea Windwehr at Berlin’s re:publica about transparency reporting. More recently, Senior Speech and Privacy Activist Paige Collings facilitated a session on queer justice in the digital age at a workshop held in Bosnia and Herzegovina.

There is so much more work to be done. In the next posts in this series, you will learn more about what EFF will be doing in Europe in 2025 and beyond, as well as some of our lessons and successes from past struggles.

Introducing EFF’s New Video Series: Gate Crashing

December 10, 2024 at 14:56

The promise of the internet—at least in the early days—was that it would lower the barriers to entry for any number of careers. Traditionally, the spheres of novel writing, culture criticism, and journalism were populated by well-off straight white men, with anyone not meeting one of those criteria being an outlier. Add in giant corporations acting as gatekeepers to those spheres and it was a very homogenous culture. The internet has changed that. 

There is a lot about the internet that needs fixing, but the one thing we should preserve and nurture is the nontraditional paths to success it creates. In this series of interviews, called “Gate Crashing,” we look to highlight those people and learn from their examples. In an ideal world, lawmakers will be guided by lived experiences like these when thinking about new internet legislation or policy. 

In our first video, we look at creators who honed their media criticism skills in fandom spaces. Please join Gavia Baker-Whitelaw and Elizabeth Minkel, co-creators of the Rec Center newsletter, in a wide-ranging discussion about how they got started, where it has led them, and what they’ve learned about internet culture and policy along the way. 


Looking for the Answer to the Question, "Do I Really Own the Digital Media I Paid For?"

November 26, 2024 at 12:58

Sure, buying your favorite video game, movie, or album online is super convenient. I personally love being able to pre-order a game and play it the night of release, without needing to go to a store. 

But something you may not have thought about before making your purchase are the differences between owning a physical or digital copy of that media. Unfortunately, there are quite a few rights you give up by purchasing a digital copy of your favorite game, movie, or album! On our new site, Digital Rights Bytes, we outline the differences between owning physical and digital media, and why we need to break down that barrier. 

Digital Rights Bytes explains this and answers other common questions about technology that may be getting on your nerves, with short videos featuring adorable animals. You can also read up on what EFF is doing to ensure you actually own the digital media you pay for, and how you can take action, too. 

Got other questions you’d like us to answer in the future? Let us know on your favorite social platform using the hashtag #DigitalRightsBytes. 

The 2024 U.S. Election is Over. EFF is Ready for What's Next.

By Cindy Cohn
November 6, 2024 at 11:56

The dust of the U.S. election is settling, and we want you to know that EFF is ready for whatever’s next. Our mission to ensure that technology serves you—rather than silencing, tracking, or oppressing you—does not change. Some of what’s to come will be in uncharted territory. But we have been preparing for whatever this future brings for a long time. EFF is at its best when the stakes are high. 

No matter what, EFF will take every opportunity to stand with users. We’ll continue to advance our mission of user privacy, free expression, and innovation, regardless of the obstacles. We will hit the ground running. 

During the previous Trump administration, EFF didn't just hold the line. We pushed digital rights forward in significant ways, both nationally and locally. We supported those protesting in the streets, with expanded Surveillance Self-Defense guides and our Security Education Companion. The first offers information for how to protect yourself while you exercise your First Amendment rights, and the second gives tips on how to help your friends and colleagues be more safe.

Along with our allies, we fought government use of face surveillance, passing municipal bans on the dangerous technology. We urged the Supreme Court to expand protections for your cell phone data, and in Carpenter v. United States, it did so—recognizing that location information collected by cell providers creates a "detailed chronicle of a person's physical presence compiled every day, every moment over years." Now, police must get a warrant before obtaining a significant amount of this data. 

EFF is at its best when the stakes are high. 

But we also stood our ground when governments and companies tried to take away the hard-fought protections we'd won in previous years. We stopped government attempts to backdoor private messaging with "ghost" and "client-side scanning" measures that obscured their intentions to undermine end-to-end encryption. We defended Section 230, the common sense law that protects Americans' freedom of expression online by protecting the intermediaries we all rely on. And when the COVID pandemic hit, we carefully analyzed and pushed back on measures that would have gone beyond what was necessary to keep people safe and healthy by invading our privacy and inhibiting our free speech. 

Every time policymakers or private companies tried to undermine your rights online during the last Trump administration from 2017-2021, we were there—just as we continued to be under President Biden. In preparation for the next four years, here's just some of the groundwork we've already laid: 

  • Border Surveillance: For a decade we’ve been revealing how the hundreds of millions of dollars pumped into surveillance technology along the border impacts the privacy of those who live, work, or seek refuge there, and thousands of others transiting through our border communities each day. We’ve defended the rights of people whose devices have been searched or seized upon entering the country. We’ve mapped out the network of automated license plate readers installed at checkpoints and land entry points, and the more than 465 surveillance towers along the U.S.-Mexico border. And we’ve advocated for sanctuary data policies restricting how ICE can access criminal justice and surveillance data.  
  • Surveillance Self-Defense: Protecting your private communications will only become more critical, so we’ve been expanding both the content and the translations of our Surveillance Self-Defense guides. We’ve written clear guidance for staying secure that applies to everyone, but is particularly important for journalists, protesters, activists, LGBTQ+ youths, and other vulnerable populations.
  • Reproductive Rights: Long before Roe v. Wade was overturned, EFF was working to minimize the ways that law enforcement can obtain data from tech companies and data brokers. After the Dobbs decision was handed down, we supported multiple laws in California that shield both reproductive and transgender health data privacy, even for people outside of California. But there’s more to do, and we’re working closely with those involved in the reproductive justice movement to make more progress. 
  • Transition Memo: When the next administration takes over, we’ll be sending a lengthy, detailed policy analysis to the incoming administration on everything from competition to AI to intellectual property to surveillance and privacy. We provided a similarly thoughtful set of recommendations on digital rights issues after the last presidential election, helping to guide critical policy discussions. 

We’ve prepared much more too. The road ahead will not be easy, and some of it is not yet mapped out, but one of the reasons EFF is so effective is that we play the long game. We’ll be here when this administration ends and the next one takes over, and we’ll continue to push. Our nonpartisan approach to tech policy works because we work for the user. 

We’re not merely fighting against individual companies or elected officials or even specific administrations.  We are fighting for you. That won’t stop no matter who’s in office. 

DONATE TODAY

Sorry, Gas Companies - Parody Isn't Infringement (Even If It Creeps You Out)

October 30, 2024 at 17:09

Activism comes in many forms. You might hold a rally, write to Congress, or fly a blimp over the NSA. Or you might use a darkly hilarious parody to make your point, like our client Modest Proposals recently did.

Modest Proposals is an activist collective that uses parody and culture jamming to advance environmental justice and other social causes. As part of a campaign shining a spotlight on the environmental damage and human toll caused by the liquefied natural gas (LNG) industry, Modest Proposals invented a company called Repaer. The fake company’s website offers energy companies the opportunity to purchase “life offsets” that balance the human deaths their activities cause by extending the lives of individuals deemed economically valuable. The website also advertises a “Plasma Pals” program that encourages parents to donate their child’s plasma to wealthy recipients. Scroll down on the homepage a bit, and you’ll see the logos for three (real) LNG companies—Repaer’s “Featured Partners.” 

Believe it or not, the companies didn’t like this. (Shocking!) Two of them—TotalEnergies and Equinor—sent our client stern emails threatening legal action if their names and logos weren’t removed from the website. TotalEnergies also sent a demand to the website’s hosting service, Netlify, that got repaer.earth taken offline. That was our cue to get involved.

We sent letters to both companies, explaining what should be obvious: the website was a noncommercial work of activism, unlikely to confuse any reasonable viewer. Trademark law is about protecting consumers; it’s not a tool for businesses to shut down criticism. We also sent a counternotice to Netlify denying TotalEnergies’ allegations and demanding that repaer.earth be restored. 

We wish this were the first time we’ve had to send letters like that, but EFF regularly helps activists and critics push back on bogus trademark and copyright claims. This incident is also part of a broader, long-standing pattern of the energy industry weaponizing the law to quash dissent by environmental activists; the cases EFF has written about are just a sampling. We’ve been fighting these tactics for a long time, both by representing individual activist groups and by supporting legislative efforts like a federal anti-SLAPP bill. 

Frustratingly, Netlify made us go through the full DMCA counternotice process—including a 10-business-day waiting period to have the site restored—even though this was never a DMCA claim. (The DMCA is copyright law, not trademark, and TotalEnergies didn’t even meet the notice requirements that Netlify claims to follow.) Rather than wait around for Netlify to act, Modest Proposals eventually moved the website to a different hosting service. 

Equinor and TotalEnergies, on the other hand, have remained silent. This is a pretty common result when we help push back against bad trademark and copyright claims: the rights owners slink away once they realize their bullying tactics won’t work, without actually admitting they were wrong. We’re glad these companies seem to have backed off regardless, but victims of bogus claims deserve more certainty than this.

Courts Agree That No One Should Have a Monopoly Over the Law. Congress Shouldn’t Change That

October 16, 2024 at 15:29

Some people just don’t know how to take a hint. For more than a decade, giant standards development organizations (SDOs) have been fighting in courts around the country, trying to use copyright law to control access to other laws. They claim that they own the copyright in the text of some of the most important regulations in the country, the codes that protect product, building, and environmental safety, and that they have the right to control access to those laws. And they keep losing because, it turns out, from New York to Missouri to the District of Columbia, judges understand that this is an absurd and undemocratic proposition. 

They suffered their latest defeat in Pennsylvania, where a district court held that UpCodes, a company that has created a database of building codes (like the National Electrical Code), can include codes incorporated by reference into law. ASTM, a private organization that coordinated the development of some of those codes, insists that it retains copyright in them even after they have been adopted into law. Some courts, including the Fifth Circuit Court of Appeals, have rejected that theory outright, holding that standards lose copyright protection when they are incorporated into law. Others, like the DC Circuit Court of Appeals in a case EFF defended on behalf of Public.Resource.Org, have held that, whether or not the legal status of the standards changes once they are incorporated into law, posting them online is a lawful fair use. 

In this case, ASTM v. UpCodes, the court followed the latter path. Relying in large part on the DC Circuit’s decision, as well as an amicus brief EFF filed in support of UpCodes, the court held that providing access to the law (for free or subject to a subscription for “premium” access) was a lawful fair use. A key theme of the ruling is the public interest in accessing the law: 

incorporation by reference creates serious notice and accountability problems when the law is only accessible to those who can afford to pay for it. … And there is significant evidence of the practical value of providing unfettered access to technical standards that have been incorporated into law. For example, journalists have explained that this access is essential to inform their news coverage; union members have explained that this access helps them advocate and negotiate for safe working conditions; and the NAACP has explained that this access helps citizens assert their legal rights and advocate for legal reforms.

We’ve seen similar rulings around the country, from California to New York to Missouri. Combined with two appellate rulings, these amount to a clear judicial consensus. And it turns out the sky has not fallen; SDOs continue to profit from their work, thanks in part to the volunteer labor of the experts who actually draft the standards and don’t do it for the royalties. You would think the SDOs would learn their lesson and turn their focus back to developing standards, not lawsuits.

Instead, SDOs are asking Congress to rewrite the Constitution and affirm that SDOs retain copyright in their standards no matter what a federal regulator does, as long as they make them available online. We know what that means because the SDOs have already created “reading rooms” for some of their standards, and they show us that the SDOs’ idea of “making available” is “making available as if it was 1999.” The texts are not searchable; cannot be printed, downloaded, highlighted, or bookmarked for later viewing; and cannot be magnified without becoming blurry. Cross-referencing and comparison are virtually impossible. Often, a reader can view only a portion of each page at a time and, upon zooming in, must scroll from right to left to read a single line of text. As if that weren’t bad enough, these reading rooms are inaccessible to print-disabled people altogether.

It’s a bad bargain that would trade our fundamental due process rights in exchange for a pinky promise of highly restricted access to the law. But if Congress takes that step, it’s a comfort to know that we can take the fight back to the courts and trust that judges, if not legislators, understand why laws are facts, not property, and should be free for all to access, read, and share. 

NextNav’s Callous Land-Grab to Privatize 900 MHz

By Rory Mir
September 13, 2024 at 10:52

The 900 MHz band, a frequency range serving as a commons for all, is now at risk due to NextNav’s brazen attempt to privatize this shared resource. 

Left by the FCC for use by amateur radio operators, unlicensed consumer devices, and industrial, scientific, and medical equipment, this spectrum has become a hotbed for new technologies and community-driven projects. Millions of consumer devices also rely on the range, including baby monitors, cordless phones, IoT devices, and garage door openers. But NextNav would rather claim these frequencies, fence them off, and lease them out to mobile service providers. This is just another land-grab by a corporate rent-seeker dressed up as innovation. 

EFF and hundreds of others have called on the FCC to decisively reject this proposal and protect the open spectrum as a commons that serves all.

NextNav’s Proposed 'Band-Grab'

NextNav wants the FCC to reconfigure the 902–928 MHz band to grant them exclusive rights to the majority of the spectrum. The country’s airwaves are separated into different sections for different devices to communicate, like dedicated lanes on a highway. This proposal would give NextNav not only their own lane, but also an expanded operating region, increased broadcasting power, and more leeway for radio interference emanating from their portions of the band. All of this points to more power for NextNav at everyone else’s expense.

This land-grab is purportedly to implement a Positioning, Navigation, and Timing (PNT) network to serve as a US-specific backup to the Global Positioning System (GPS). This plan raises red flags right off the bat. 

Dropping the “global” from GPS makes it far less useful for any alleged national security purposes, especially as it is likely susceptible to the same jamming and spoofing attacks as GPS.

NextNav itself admits there is also little commercial demand for PNT. GPS works, is free, and is widely supported by manufacturers. If NextNav has a grand plan to implement a new and improved standard, it was left out of their FCC proposal. 

What NextNav did include, however, is its intent to resell their exclusive bandwidth access to mobile 5G networks. This isn’t about national security or innovation; it’s about a rent-seeker monopolizing access to a public resource. If NextNav truly believes in their GPS backup vision, they should look to parts of the spectrum already allocated for 5G.

Stifling the Future of Open Communication

The open sections of the 900 MHz spectrum are vital for technologies that foster experimentation and grassroots innovation. Amateur radio operators, developers of new IoT devices, and small-scale operators rely on this band.

One such project is Meshtastic, a decentralized communication tool that allows users to send messages across a network without a central server. This new approach to networking offers resilient communication that can endure emergencies where current networks fail.

This is the type of innovation that actually addresses the crises NextNav raises, and it’s happening in the part of the spectrum allocated for unlicensed devices while empowering communities instead of a powerful intermediary. Yet this proposal threatens to crush such grassroots projects, leaving them without a commons in which they can grow and improve.

This isn’t just about a set of frequencies. We need an ecosystem which fosters grassroots collaboration, experimentation, and knowledge building. Not only do these commons empower communities, they avoid a technology monoculture unable to adapt to new threats and changing needs as technology progresses.

Invention belongs to the public, not just to those with the deepest pockets. The FCC should ensure it remains that way.

FCC Must Protect the Commons

NextNav’s proposal is a direct threat to innovation, public safety, and community empowerment. While FCC comments on the proposal have closed, replies remain open to the public until September 20th. 

The FCC must reject this corporate land-grab and uphold the integrity of the 900 MHz band as a commons. Our future communication infrastructure—and the innovation it supports—depends on it.

You can read our FCC comments here.

NO FAKES – A Dream for Lawyers, a Nightmare for Everyone Else

Performers and ordinary humans are increasingly concerned that they may be replaced or defamed by AI-generated imitations. We’re seeing a host of bills designed to address that concern – but every one just generates new problems. Case in point: the NO FAKES Act. We flagged numerous flaws in a “discussion draft” back in April, to no avail: the final text has been released, and it’s even worse.  

NO FAKES creates a classic “hecklers’ veto”: anyone can use a specious accusation to get speech they don’t like taken down.

Under NO FAKES, any human person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for up to 70 years after the person dies. Because it is a federal intellectual property right, Section 230 protections (a crucial liability shield for platforms and anyone else that hosts or shares user-generated content) will not apply. And that legal risk begins the moment a person gets a notice that the content is unlawful, even if they didn’t create the replica and have no way to verify whether it was authorized. NO FAKES thereby creates a classic “hecklers’ veto”: anyone can use a specious accusation to get speech they don’t like taken down.  

The bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, but their application is uncertain at best. For example, there’s an exemption for use of a replica for a “bona fide” news broadcast, provided that the replica is “materially relevant” to the subject of the broadcast. Will citizen journalism qualify as “bona fide”? And who decides whether the replica is “materially relevant”?  

These are just some of the many open questions, all of which will lead to full employment for lawyers, but likely no one else, particularly not those whose livelihood depends on the freedom to create journalism or art about famous people. 

The bill also includes a safe harbor scheme modeled on the DMCA notice-and-takedown process. To stay within the NO FAKES safe harbors, a platform that receives a notice of illegality must remove “all instances” of the allegedly unlawful content, a broad requirement that will encourage platforms to adopt “replica filters” similar to deeply flawed copyright filters like YouTube’s Content ID. Platforms that ignore such a notice can be on the hook just for linking to unauthorized replicas. And every single copy made, transmitted, or displayed is a separate violation incurring a $5,000 penalty, which will add up fast. The bill does throw platforms a not-very-helpful bone: if they can show they had an objectively reasonable belief that the content was lawful, they only have to cough up $1 million if they guess wrong.  

All of this is a recipe for private censorship. For decades, the DMCA process has been regularly abused to target lawful speech, and there’s every reason to suppose NO FAKES will lead to the same result.  

All of this is a recipe for private censorship. 

What is worse, NO FAKES offers even fewer safeguards for lawful speech than the DMCA. For example, the DMCA includes a relatively simple counter-notice process that a speaker can use to get their work restored. NO FAKES does not. Instead, NO FAKES puts the burden on the speaker to run to court within 14 days to defend their rights. The powerful have lawyers on retainer who can do that, but most creators, activists, and citizen journalists do not.  

NO FAKES does include a provision that, in theory, would allow improperly targeted speakers to hold notice senders accountable. But they must prove that the lie was “knowing,” which can be interpreted to mean that the sender gets off scot-free as long as they subjectively believe the lie to be true, no matter how unreasonable that belief. Given the multiple open questions about how to interpret the various exemptions (not to mention the common confusion about the limits of IP protection that we’ve already seen), that’s pretty cold comfort. 

These significant flaws should doom the bill, and that’s a shame. Deceptive AI-generated replicas can cause real harms, and performers have a right to fair compensation for the use of their likenesses, should they choose to allow that use. Existing laws can address most of this, but Congress should be considering narrowly-targeted and proportionate proposals to fill in the gaps.  

The NO FAKES Act is neither targeted nor proportionate. It’s also a significant Congressional overreach—the Constitution forbids granting a property right in (and therefore a monopoly over) facts, including a person’s name or likeness.  

The best we can say about NO FAKES is that it has provisions protecting individuals with unequal bargaining power in negotiations around use of their likeness. For example, the new right can’t be completely transferred to someone else (like a film studio or advertising agency) while the person is alive, so a person can’t be pressured or tricked into handing over total control of their public identity (their heirs still can, but the dead celebrity presumably won’t care). And minors have some additional protections, such as a limit on how long their rights can be licensed before they are adults.   

TAKE ACTION

Throw Out the NO FAKES Act and Start Over

But the costs of the bill far outweigh the benefits. NO FAKES creates an expansive and confusing new intellectual property right that lasts far longer than is reasonable or prudent, and has far too few safeguards for lawful speech. The Senate should throw it out and start over. 

EFF Honored as DEF CON 32 Uber Contributor

By Rory Mir
August 15, 2024 at 15:23

At DEF CON 32 this year, the Electronic Frontier Foundation became the first organization to be given the Uber Contributor award. This award recognizes EFF’s work in education and litigation, naming us “Defenders of the Hacker Spirit.”

DEF CON Uber Contributor Award

EFF Staff Attorney Hannah Zhao and Staff Technologist Cooper Quintin accepting the Uber Contributor Award from DEF CON founder Jeff Moss

The Uber Contributor Award is an honor created three years ago to recognize people and groups who have made exceptional contributions to the infosec and hacker community at DEF CON. Our connection with DEF CON runs deep, dating back over 20 years. The conference has become a vital part of keeping EFF’s work grounded in the ongoing issues faced by the creative builders and experimenters who keep tech secure (and fun).

EFF Staff Attorney Hannah Zhao (left) and Staff Technologist Cooper Quintin (right) with the Uber Contributor Award (center)

Every year attendees and organizers show immense support and generosity in return, but this year exceeded all expectations. EFF raised more funds than in all previous years at hacker summer camp—the three annual Las Vegas hacker conferences, BSidesLV, Black Hat USA, and DEF CON. We also gained over 1,000 new supporting and renewing members who back us year-round. This community’s generosity fuels our work to protect encrypted messaging, fight back against illegal surveillance, and defend your right to hack and experiment. We’re honored to be welcomed so warmly year after year. 

Just this year, we saw another last-minute cease-and-desist order sent to a security researcher about their DEF CON talk. EFF attorneys from our Coders’ Rights Project attend every year, and they were able to jump into action to protect the speaker. While the team puts out fires at DEF CON for one week in August, its year-round support of coders is possible thanks to the continued support of the wider community. Anyone facing intimidation and spurious legal threats can always reach out for support at info@eff.org.

We are deeply grateful for this honor and the unwavering support from DEF CON. Thank you to everyone who supported EFF at the membership booth, participated in our Poker Tournament and Tech Trivia, or checked out our talks. 

We remain committed to meeting the needs of coders and will continue to live up to this award, ensuring the hacker spirit thrives despite an increasingly hostile landscape. We look forward to seeing you again next year!

EFF Tells Yet Another Court to Ensure Everyone Has Access to the Law and Reject Private Gatekeepers

August 7, 2024 at 13:09

Our laws belong to all of us, and we should be able to find, read, and comment on them free of registration requirements, fees, and other roadblocks. That means private organizations shouldn’t be able to control who can read and share the law, or where and how we can do those things. But that’s exactly what some industry groups are trying to do.

EFF has been fighting for years to stop them. The most recent instance is ASTM v. UpCodes. ASTM, an organization that develops technical standards, claims it retains copyright in those standards even when they’ve become binding law through “incorporation by reference.” When a standard is incorporated “by reference,” that means its text is not actually reprinted in the body of the government’s published regulations. Instead, the regulations include a citation to the standard, which means you have to track down a copy somewhere else if you want to know what the law requires.

 Incorporation by reference is common for a wide variety of laws governing the safety of buildings, pipelines, consumer products and so on. Often, these are laws that affect us directly in our everyday lives—but they can also be the most inaccessible. ASTM makes some of those laws available for free, but not all of them, and only via “reading rooms” that are hard to navigate and full of restrictions. Services like UpCodes have emerged to try to bridge the gap by making mandatory standards more easily available online. Among other things, UpCodes has created a searchable online library of some of the thousands of ASTM standards that have been incorporated by reference around the country. According to ASTM, that’s copyright infringement.

 EFF litigated a pair of cases on this issue for our client Public.Resource.Org (or “Public Resource”). We argued there that incorporated standards are the law, and no one can own copyright in the law. And in any event, it’s a fair use to republish incorporated standards in a centralized repository that makes them easier to access and use. In December 2023, the D.C. Circuit Court of Appeals ruled in Public Resource’s favor on fair use grounds.

 Based on our experience, we filed an amicus brief supporting UpCodes, joined by Public Knowledge and iFixit, Inc. and with essential support from local counsel Sam Silver and Abigail Burton at Welsh & Recker.  Unlike our cases for Public Resource, in UpCodes the standards at issue haven’t been directly incorporated into any laws. Instead, they’re incorporated by reference into other standards, which in turn have been incorporated into law. As we explain in our brief, this extra degree of separation shouldn’t make a difference in the legal analysis. If the government tells you, “Do what Document A says,” and Document A says, “Do what Document B says,” you’re going to need to read Document B to know what the government is telling you to do.

TAKE ACTION

Tell Congress: Access To Laws Should Be Fully Open

At the same time that we’re fighting this battle in the courts, we’re fighting a similar one in Congress. The Pro Codes Act would effectively endorse the claim that organizations like ASTM can “retain” copyright in codes, even after they are made law, as long as they make the codes available through a “publicly accessible” website—which means read-only, and subject to licensing limits. The Pro Codes Act recently fell short of the necessary votes to pass through the House, but it’s still being pushed by some lawmakers.

Whether it’s in courts or in Congress, we’ll keep fighting for your right to read and share the laws that we all must live by. A nation governed by the rule of law should not tolerate private control of that law. We hope the court in UpCodes comes to the same conclusion.

Victory! EFF Supporters Beat USPTO Proposal To Wreck Patent Reviews

By Joe Mullin
July 16, 2024 at 15:44

The U.S. patent system is broken, particularly when it comes to software patents. At EFF, we’ve been fighting hard for changes that make the system more sensible. Last month, we got a big victory when we defeated a set of rules that would have mangled one of the U.S. Patent and Trademark Office (USPTO)’s most effective systems for kicking out bad patents. 

In 2012, recognizing the entrenched problem of a patent office that spewed out tens of thousands of ridiculous patents every year, Congress created a new system to review patents called “inter partes reviews,” or IPRs. While far from perfect, IPRs have resulted in cancellation of thousands of patent claims that never should have been issued in the first place. 

At EFF, we used the IPR process to crowd-fund a challenge to the Personal Audio “podcasting patent” that tried to extract patent royalty payments from U.S. podcasters. We won that proceeding and our victory was confirmed on appeal.

It’s no surprise that big patent owners and patent trolls have been trying to wreck the IPR system for years. They’ve tried, and failed, to get federal courts to dismantle IPRs. They’ve tried, and failed, to push legislation that would break the IPR system. And last year, they found a new way to attack IPRs—by convincing the USPTO to propose a set of rules that would have sharply limited the public’s right to challenge bad patents. 

That’s when EFF and our supporters knew we had to fight back. Nearly one thousand EFF supporters filed comments with the USPTO using our suggested language, and hundreds more of you wrote your own comments. 

Today, we say thank you to everyone who took the time to speak out. Your voice does matter. In fact, the USPTO withdrew all three of the terrible proposals that we focused on. 

Our Victory to Keep Public Access To Patent Challenges 

The original rules would have greatly expanded what are called “discretionary denials,” enabling judges at the USPTO to throw out an IPR petition without adequately considering its merits. While we would like to see even fewer discretionary denials, defeating the proposed limitations on patent challenges is a significant win.

First, the original rules would have stopped “certain for-profit entities” from using the IPR system altogether. While EFF is a non-profit, for-profit companies can and should be allowed to play a role in getting wrongly granted patents out of the system. Membership-based patent defense organizations like RPX or Unified Patents can allow small companies to band together and limit their costs while defending themselves against invalid patents. And non-profits like the Linux Foundation, who joined us in fighting against these wrongheaded proposed rules, can work together with professional patent defense groups to file more IPRs. 

EFF and our supporters wrote in opposition to this rule change—and it’s out. 

Second, the original rules would have exempted “micro and small entities” from patent reviews altogether. This exemption would have applied to many of the types of companies we call “patent trolls”—that is, companies whose business is simply demanding license fees for patents, rather than offering actual products or services. Those companies, specially designed to threaten litigation, would have easily qualified as “small entities” and avoided having their patents challenged. Patent trolls, which bully real small companies and software developers into paying unwarranted settlement fees, aren’t the kind of “small business” that should be getting special exemptions from patent review. 

EFF and our supporters opposed this exemption, and it’s out of the final rulemaking. 

Third, last year’s proposal would have allowed IPR petitions to be kicked out if they had a “parallel proceeding”—in other words, a similar patent dispute—in district court. This was a wholly improper reason not to consider IPRs, especially since district court evidence rules are different from those in place for an IPR. 

EFF and our supporters opposed these new limitations, and they’re out. 

While the new rules aren’t perfect, they’re greatly improved. We would still prefer more IPRs rather than fewer, and don’t want to see IPRs that otherwise meet the rules get kicked out of the review process. But even there, the new revised rules have big improvements. For instance, they allow for separate briefing of discretionary denials, so that people and companies seeking IPR review can keep their focus on the merits of their petition. 

What’s the Difference Between Mastodon, Bluesky, and Threads?

The ongoing Twitter exodus sparked life into a new way of doing social media. Instead of a handful of platforms trying to control your life online, people are reclaiming control by building more open and empowering approaches to social media. Some of these you may have heard of: Mastodon, Bluesky, and Threads. Each is distinct, but their differences can be hard to understand as they’re rooted in their different technical approaches. 

The mainstream social web arguably became “five websites, each consisting of screenshots of text from the other four,” but in just the last few years radical and controversial changes to major platforms were a wake-up call to many, and they are driving people to seek alternatives to the billionaire-driven monocultures.

Two major ecosystems have emerged in the wake, both encouraging the variety and experimentation of the earlier web. The first, built on the ActivityPub protocol, is called the Fediverse. While it includes many different kinds of websites, Mastodon and Threads have taken off as alternatives to Twitter that use this protocol. The other is the AT Protocol, powering the Twitter alternative Bluesky.

These protocols, a shared language between computer systems, allow websites to exchange information. It’s a simple concept you’re benefiting from right now, as protocols enable you to read this post in your choice of app or browser. Opening this freedom to social media has a huge impact, letting everyone send and receive posts their own preferred way. Even better, these systems are open to experiment and can cater to every niche, while still connecting to everyone in the wider network. You can leave the dead malls of platform capitalism, and find the services which cater to you.

To save you some trial and error, we have outlined some differences between these options and what that might mean for them down the road.

ActivityPub and AT Protocols

ActivityPub

The Fediverse goes a bit further back, but ActivityPub’s development by the World Wide Web Consortium (W3C) started in 2014. The W3C is a public-interest non-profit organization which has played a vital role in developing the open international standards that define the internet, like HTML and CSS (for better or worse). Its commitment to ActivityPub gives some assurance the protocol will be developed in a stable and ostensibly consensus-driven process.

This protocol requires a host website (often called an “instance”) to maintain an “inbox” and “outbox” of content for all of its users, and to selectively share this with other host websites on behalf of those users. In this federation model, users are accountable to their instance, and instances are accountable to each other. Misbehaving users are banned from instances, and misbehaving instances are cut off from others through “defederation.” This creates some stakes for maintaining good behavior, for users and moderators alike.
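The inbox/delivery and defederation mechanics described above can be sketched as a toy Python simulation. All class and method names here are illustrative, not part of any real ActivityPub implementation; actual servers exchange JSON-LD activities over HTTP.

```python
# Toy model of ActivityPub-style federation: each instance keeps an
# inbox per user and decides whether to accept deliveries from peers.

class Instance:
    def __init__(self, domain):
        self.domain = domain
        self.inboxes = {}      # username -> list of received activities
        self.blocked = set()   # domains this instance has defederated from

    def deliver(self, activity, from_domain):
        # A defederated instance is simply cut off: deliveries are refused.
        if from_domain in self.blocked:
            return False
        self.inboxes.setdefault(activity["to"], []).append(activity)
        return True

    def defederate(self, domain):
        self.blocked.add(domain)


home = Instance("example.social")
note = {"type": "Create", "to": "alice", "content": "Hello, Fediverse!"}

# A well-behaved peer instance can deliver to alice's inbox...
assert home.deliver(note, from_domain="friendly.example") is True

# ...but once a misbehaving instance is defederated, it cannot.
home.defederate("spam.example")
assert home.deliver(note, from_domain="spam.example") is False
```

The point of the sketch is that moderation happens at the instance boundary: a single block entry silences an entire misbehaving host for everyone on the instance.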

ActivityPub handles a wide variety of uses, but the application most associated with the protocol is Mastodon. However, ActivityPub is also integral to Meta’s own Twitter alternative, Threads, which is taking small steps to connect with the Fediverse. Threads is a totally different application, solely hosted by Meta, and is ten times bigger than the Fediverse and Bluesky networks combined, making it the 500-pound gorilla in the room. Meta’s poor reputation on privacy, moderation, and censorship has driven many Fediverse instances to vow they’ll defederate from Threads. Other instances may still connect with Threads to help users find a broader audience, and perhaps help sway Threads users to try Mastodon instead.

AT Protocol

The Authenticated Transfer (AT) Protocol is newer, sparked by Twitter co-founder Jack Dorsey in 2019. Like ActivityPub, it is an open source protocol. However, it is developed unilaterally by a private for-profit corporation, Bluesky PBLLC, though it may be handed to a web standards body in the future. Bluesky remains mostly centralized. While it has recently opened up to small hosts, there are still some restrictions preventing major alternatives from participating. As its developers further loosen control, we will likely see rapid changes in how people use the network.

The AT Protocol network design doesn’t put the same emphasis on individual hosts as the Fediverse does, and breaks hosting, distribution, and curation into distinct services. It’s easiest to understand in comparison to traditional web hosting. Your information, like posts and profiles, is held in Personal Data Servers (PDSes), analogous to the hosting of a personal website. This content is then fetched by relay servers which, like web crawlers, aggregate a “firehose” of everyone’s content without much alteration. To sort and filter this on behalf of the user, like a “search engine,” AT has Appview services, which give users control over what they see. When accessing the Appview through a client app or website, the user has many options to further filter, sort, and curate their feed, as well as “subscribe” to filters and labels someone else made.
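The PDS-to-relay-to-Appview pipeline above can be sketched as a toy Python data flow. The function names and data shapes are illustrative only, not real atproto library calls; real relays stream a binary firehose over WebSockets.

```python
# Toy sketch of the AT Protocol data flow: PDSes hold user records,
# a relay aggregates everything into a "firehose", and an appview
# filters that firehose on behalf of each user.

def relay_firehose(pdses):
    # The relay crawls every PDS and aggregates posts without much alteration.
    firehose = []
    for pds in pdses:
        firehose.extend(pds["posts"])
    return firehose

def appview(firehose, muted_words):
    # The appview sorts and filters for the user, like a search engine.
    return [p for p in firehose
            if not any(w in p["text"] for w in muted_words)]

pds_a = {"host": "pds-a.example",
         "posts": [{"author": "alice", "text": "hello world"}]}
pds_b = {"host": "pds-b.example",
         "posts": [{"author": "bob", "text": "spoilers ahead"}]}

firehose = relay_firehose([pds_a, pds_b])
feed = appview(firehose, muted_words=["spoilers"])

assert len(firehose) == 2                          # relay carries everything
assert [p["author"] for p in feed] == ["alice"]    # filtering is user-side
```

Note the division of labor the sketch illustrates: the relay stays neutral and complete, while all curation happens downstream, per user, in the appview.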

The result is a decentralized system which can be highly tailored while still offering global reach. However, this atomized design means the community accountability encouraged by the host-centered system may be missing, and users are ultimately responsible for their own experience and moderation. How this plays out will depend on how the network opens up to major hosts other than the Bluesky corporation.

User Experience

Mastodon, Threads, and Bluesky have a number of differences, not essential to their underlying protocols, that affect users looking to get involved today. Mastodon and Bluesky are very customizable, so the differences below just address the prevalent trends.

Timeline Algorithm

Mastodon and most other ActivityPub sites prefer a straightforward chronological timeline of content from accounts you follow. Threads has a Meta-controlled algorithm, like Instagram. Bluesky defaults to a chronological feed, but opens algorithmic curation and filtering up to apps and users.

User Design

All three services present a default appearance that will be familiar to anyone who has used Twitter. Both Mastodon and Bluesky have alternative clients, with the only limit being a developer’s imagination. In fact, thanks to their open nature, projects like SkyBridge let users of one network use apps built for the other (in this case, Bluesky users using Mastodon apps). Threads does not have any alternative clients, and its developer API is still in beta.

Onboarding 

Threads has the greatest advantage in getting people to sign up: it has only one site, which accepts an Instagram account as a login. Bluesky also has only one major option for signing up, but has some inherent flexibility in moving your account later on. That said, diving into a few extra setup steps can improve the experience. Finally, one could easily join Mastodon by joining the flagship instance, mastodon.social. However, given the importance of choosing the right instance, you may miss out on some of the benefits of the Fediverse and want to move your account later on.

Culture

Threads has a reputation for being more brand-focused, with more commercial accounts and celebrities, and Meta has made no secret about their decisions to deemphasize political posts on the platform. Bluesky is often compared to early Twitter, with a casual tone and a focus on engaging with friends. Mastodon draws more people looking for community online, especially around shared interests, and each instance will have distinct norms.

Privacy Considerations

Neither ActivityPub nor the AT Protocol currently supports private, end-to-end encrypted messages, so they should not be used for sensitive information. For all the services here, the majority of content on your profile will be accessible from the public web. That said, Mastodon, Threads, and Bluesky differ in how they handle user data.

Mastodon

Everything you do as a user is entrusted to the instance host, including posts, interactions, DMs, settings, and more. This means the owner of your instance can access this information and is responsible for defending it against attackers and law enforcement. Tech-savvy people may choose to self-host, but users generally need to find an instance run by someone they trust.

The Fediverse limits content sharing through a myriad of permissions set by users and instances. If your instance blocks a poorly moderated instance, for example, the people on that other site will no longer appear in your timelines nor be able to follow your posts. You can also limit how messages are shared to further reduce the intended audience. While this can create a sense of community and closeness, remember it is still public and instance hosts are always part of the equation. Direct messages, for example, will be accessible to your host and the host of the recipient.

If content needs to be changed or deleted after being shared, your instance can request these changes, and this is often honored. That said, once something is shared to the network, it may be difficult to “undo.”

Threads

All user content is entrusted to one host, in this case Meta, with a privacy policy similar to Instagram. Meta determines when information is shared with law enforcement, how it is used for advertising, how well protected it is from a breach, and so on.

Sharing with instances works differently for Threads, as Meta has more restricted interoperability. Currently, content sharing is one-way: Threads users can opt in to sharing their content with the Fediverse, but won’t see likes or replies. By the end of this year, Meta plans to allow Threads users to follow Mastodon accounts.

Federation on Threads may always be restricted, and features like transferring one's account to Mastodon may never be supported. Limits on sharing should not be confused with enhanced privacy or security, however. Public posts are just that—public—and you are still trusting your host (Meta) with private data like DMs (currently handled by Instagram). Instead these restrictions, should they persist, should be seen as the minimum level of control over users that Meta deems necessary.

Bluesky

Bluesky, in contrast, is a very “loud” system. Every public message, interaction, follow, and block is hosted by your PDS and freely shared with everyone in the network. Every public post is available to everyone, discovered according to each user’s own app and filter preferences. There are ways to imitate smaller spaces with filtering and algorithmic feeds, such as with the Blacksky project, but these are open to everyone, and your posts will not be restricted to that curated space.

Direct messages are limited to the flagship Bluesky app, and can be accessed by the Bluesky moderation team. The project plans to eventually incorporate DMs into the protocol with end-to-end encryption, but this is not currently supported. Deletion on Bluesky is handled simply by removing the content from your PDS, but once a message is shared to Relay and Appview services it may remain in circulation a while longer according to their retention settings.

Moderation

Mastodon

Mastodon’s approach to moderation is often compared to subreddits, where the administrators of an instance are responsible for creating a set of rules and empowering a team of moderators to keep the community healthy. The result is a lot more variety in moderation experience, with the only boundary being an instance’s reputation in the broader Fediverse. Instances coordinating and “defederating” from problematic hosts has already proven effective: one infamous instance, Gab, was successfully cut off from the Fediverse for hosting extreme right-wing hate. The threat of defederation sets a baseline of behavior across the Fediverse, and from there users can choose instances based on reputation and on how aligned the hosts are with their own moderation preferences.

At their best, instances prioritize things other than growth. New members are welcomed and onboarded carefully as new community members, and hosts only grow the community if their moderation team can support it. Some instances even set a permanent cap on participation at a few thousand members to ensure a quality and intimate experience. Current members, too, can vote with their feet, and if needed split off into their own new instance without disconnecting entirely.

While Mastodon has a lot going for it by giving users a choice, avoiding automation, and avoiding unsustainable growth, other evergreen moderation issues remain at play. Decisions can be arbitrary, inconsistent, and come with little recourse. These aren't just decisions impacting individual users, but also ones affecting large swaths of them when it comes to defederation.

Threads

Threads, as alluded to when discussing privacy above, aims for a moderation approach more aligned with pre-2022 Twitter and Meta’s other current platforms like Instagram: the impossible task of scaling moderation alongside endless user growth.

As the largest of these services, however, this puts Meta in a position to set norms around moderation as it enters the Fediverse. A challenge for decentralized projects will be to ensure Meta’s size doesn’t make it the ultimate authority on moderation decisions, a pattern of re-centralization we’ve seen happen in email. Spam detection tools have created an environment where email, though an open standard, is in practice dominated by Microsoft and Google, as smaller services are frequently marked as spammers. A similar dynamic could play out on the federated social web, where Meta has the capacity to exclude smaller instances with little recourse. Other instances may copy these decisions, or fear not to, lest they also be excluded.

Bluesky

While in beta, Bluesky received a lot of praise and criticism for its moderation. However, until recently, all moderation was handled by the centralized Bluesky company, not throughout the distributed AT network. The true nature of the network's moderation structure is only now being tested.

The AT Protocol relies on labeling services, aka “labelers,” for moderation. These special accounts, using Bluesky’s Ozone tool, label posts with small pieces of metadata. You can also filter accounts with account block lists published by other users, a lot like the Block Together tool formerly available on Twitter. The Appview aggregating your feed uses these labels and block lists to filter content. Arbitrary and irreconcilable moderation decisions are still a problem, as are some of the risks of automated moderation, but they are less impactful because users are not deplatformed and remain accessible to people with different moderation settings. This also means problematic users don’t go anywhere and can still follow you; they are just less visible.
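The label-based approach can be sketched as a toy Python filter. The data shapes and the `apply_moderation` helper are illustrative, not the real Ozone or atproto API; the point is that labels hide content per viewer rather than removing it from the network.

```python
# Toy sketch of labeler-style moderation: posts carry label metadata,
# and each user's own settings decide what gets hidden. Nobody is
# deplatformed; a post is just invisible to viewers who hide its label.

posts = [
    {"uri": "at://alice/post/1", "text": "cute otters", "labels": []},
    {"uri": "at://bob/post/2", "text": "graphic imagery",
     "labels": ["graphic-media"]},
]

def apply_moderation(posts, hidden_labels, blocklist):
    visible = []
    for post in posts:
        author = post["uri"].split("/")[2]   # "at://alice/..." -> "alice"
        if author in blocklist:              # subscribed block list
            continue
        if any(label in hidden_labels for label in post["labels"]):
            continue                         # label hidden by this viewer
        visible.append(post)
    return visible

# One viewer hides the "graphic-media" label; another sees everything.
assert len(apply_moderation(posts, {"graphic-media"}, set())) == 1
assert len(apply_moderation(posts, set(), set())) == 2
```

Because the same firehose reaches everyone, two viewers with different settings see different feeds built from identical underlying data.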

The AT network is censorship resistant and, conversely, it is difficult to meaningfully ban users. To be propagated in the network, one only needs a PDS to host their account and at least one Relay to spread that information. Currently, Relays sit out of moderation, scanning only to restrict CSAM. In theory, Relays could act more like a Fediverse instance and more actively curate and moderate users. Even then, as long as one Relay carries the user, they will remain part of the network. PDSes, much like web hosts, may also choose to remove controversial users, but even then, a PDS is easy to self-host, even on a low-power computer.

Like the internet generally, removing content relies on the fragility of those targeted: with enough resources and support, a voice will remain online. Without user-driven approaches to limit or deplatform content (like defederation), Bluesky services may be targeted by censorship at the infrastructure level, such as the ISP level.

Hosting and Censorship

With any internet service, there are legal obligations that come with hosting user-generated content. No matter the size, hosts may need to contend with DMCA takedowns, warrants for user data, cyber attacks, blocking by authoritarian regimes, and other pressures from powerful interests. This decentralized approach to social media also relies on a shared legal protection for all hosts, Section 230. By ensuring they are not held liable for user-generated content, this law provides the legal protection necessary for these platforms to operate and innovate.

Given the differences in the size of hosts and their approach to moderation, it isn’t surprising that each of these platforms will address platform liability and censorship differently.

Mastodon

Instance hosts, even for small communities, need to navigate these legal considerations, as we outlined in our Fediverse legal primer. We have already seen some old patterns reemerge, with these smaller, often hobbyist, hosts struggling to defend themselves from legal challenges and security threats. While larger hosts have resources to defend against these threats, an advantage of the decentralized model is that censors need to play whack-a-mole in a large network where messages flow freely across the globe. Together, the Fediverse is set up to be quite good at keeping information safe from censorship, but individual users and accounts are very susceptible to targeted censorship efforts and will struggle with rebuilding their presence.

Threads

Threads is the easiest to address, as Meta is already several platforms deep into addressing liability and speech concerns, and has the resources to do so. Unlike Mastodon or Bluesky, it also needs to do so at a much larger scale, with a larger target on its back as the biggest platform backed by a multi-billion-dollar company. The unique challenge for Threads, however, will be how Meta decides to handle content from the rest of the Fediverse. Threads users will also need to navigate the perks and pitfalls of sticking with a major host with a spotty track record on censorship and disinformation.

Bluesky

Bluesky is not yet tested beyond the flagship Bluesky services, and raises a lot more questions. PDSes, Relays, and even Appviews play some role in hosting, and can be used with some redundancy. For example, your account on one PDS may be targeted, but the system is designed to make it easy for users to change hosts, self-host, or use multiple hosts while retaining one identity on the network.

Relays, in contrast, are more computationally demanding and may remain the most “centralized” service as natural monopolies: users have some incentive to mostly follow the biggest relays. The result is a potential bottleneck susceptible to influence and censorship. However, if we see a wide variety of relays with different incentives, it becomes more likely that messages can be shared throughout the network despite censorship attempts.

You Might Not Have to Choose

With this overview, you can start diving into one of these new Twitter alternatives leading the way in a more free social web. Thanks to the open nature of these new systems, where you set up will become less important with improved interoperability.

Both ActivityPub and AT Protocol developers are receptive to making the two better at communicating with one another, and independent projects like Bridgy Fed, SkyBridge, RSS Parrot, and Mastofeed are already letting users get the best of both worlds. A growing number of projects speak both protocols, along with older ones like RSS. These paths toward a decentralized web may become increasingly trivial to cross as they converge, despite some early growing pains. Or the two may be eclipsed by yet another option. But their shared trajectory is moving us towards a more free, more open, and refreshingly weird social web free of platform gatekeepers.

Ah, Steamboat Willie. It’s been too long. 🐭

By: Aaron Jue
June 18, 2024 at 11:31

Did you know Disney’s Steamboat Willie entered the public domain this year? Since its 1928 debut, the U.S. Congress has made multiple changes to copyright law, extending Disney’s ownership of this cultural icon for almost a century. A century.

Creativity should spark more creativity.

That’s not how intellectual property laws are supposed to work. In the United States, these laws were designed to give creators a financial incentive to contribute to science and culture. Then eventually the law makes this expression free for everyone to enjoy and build upon. Disney itself has reaped the abundant benefits of works in the public domain, including Hans Christian Andersen’s “The Little Mermaid” and “The Snow Queen.” Creativity should spark more creativity.

In that spirit, EFF presents to you this year’s EFF member t-shirt, simply called “Fix Copyright”:

Copyright Creativity is fun for the whole family.

The design references Steamboat Willie, but also tractor owners’ ongoing battle to repair their equipment despite threats from manufacturers like John Deere. These legal maneuvers are based on Section 1201 of the Digital Millennium Copyright Act, or DMCA. In a recent appeals court brief, EFF and co-counsel Wilson Sonsini Goodrich & Rosati argued that Section 1201 chills free expression, impedes scientific research, and to top it off, is unenforceable because it’s too broad and violates the First Amendment. Ownership ain’t what it used to be, so let’s make it better.

We need you! Get behind this mission and support EFF's work as a member. Through EFF's 34th anniversary on July 10:

You can help cut through the BS and make the world a little brighter—whether online or off.

Join EFF

Defend Creativity & Innovation Online

_________________________

EFF is a member-supported U.S. 501(c)(3) organization celebrating TEN YEARS of top ratings from the nonprofit watchdog Charity Navigator! Your donation is tax-deductible as allowed by law.

Hand me the flashlight. I’ll be right back...

By: M. Jackalope
June 13, 2024 at 03:21

It’s time for the second installment of campfire tales from our friends, The Encryptids—the rarely-seen enigmas who’ve become folk legends. They’re helping us celebrate EFF’s summer membership drive for internet freedom!

Through EFF's 34th birthday on July 10, you can receive 2 rare gifts, be a member for just $20, and as a bonus, new recurring monthly or annual donations get a free match! Join us today.

So...do you ever feel like tech companies still own the devices you’ve paid for? Like you don’t have alternatives to corporate choices? Au contraire! Today, Monsieur Jackalope tells us why interoperability plays a key role in giving you freedom in tech...

-Aaron Jue
EFF Membership Team

_______________________________________

Jackalope in a forest saying "Interoperability makes good things great!"

Call me Jacques. Some believe I am cuddly. Others deem me ferocious. Yet I am those things and more. How could anyone tell me what I may be? Beauty lives in creativity, innovation, and yes, even contradiction. When you are confined to what is, you lose sight of what could be. Zut! Here we find ourselves at the mercy of oppressive tech companies who perhaps believe you are better off without choices. But they are wrong.

Control, commerce, and lack of competition. These limit us and rob us of our potential. We are destined for so much more in tech! When I must make repairs on my scooter, do I call Vespa for their approval on my wrenches? Mais non! Then why should we prohibit software tools from interacting with one another? The connected world must not be a darker reflection of this one we already know.

The connected world must not be a darker reflection of this one we already know.

EFF’s team—avec mon ami Cory Doctorow!—advocates powerfully for systems in which we do not need the permission of companies to fix, connect, or play with technology. Oui, c’est difficile: you find copyrighted software in nearly everything, and sparkling proprietary tech lures you toward crystal prisons. But EFF has helped make excellent progress with laws supporting your Right to Repair, speaks out against tech monopolies, lifts up the free and open source software community, and advocates for creators across the web.

Join EFF

Interoperability makes good things great

You can make a difference in the fight to truly own your devices. Support EFF’s efforts as a member this year and reach toward the sublime web that interconnection and creativity can bring.

Cordialement,

Monsieur Jackalope

_______________________________________


Podcast Episode: AI on the Artist's Palette

By: Josh Richman
June 4, 2024 at 03:06

Collaging, remixing, sampling—art always has been more than the sum of its parts, a synthesis of elements and ideas that produces something new and thought-provoking. Technology has enabled and advanced this enormously, letting us access and manipulate information and images in ways that would’ve been unimaginable just a few decades ago.


(You can also find this episode on the Internet Archive and on YouTube.)

For Nettrice Gaskins, this is an essential part of the African American experience: The ability to take whatever is at hand—from food to clothes to music to visual art—and combine it with life experience to adapt it into something new and original. She joins EFF’s Cindy Cohn and Jason Kelley to discuss how she takes this approach in applying artificial intelligence to her own artwork, expanding the boundaries of Black artistic thought.  

In this episode you’ll learn about: 

  • Why making art with AI is about much more than just typing a prompt and hitting a button 
  • How hip-hop music and culture was an early example of technology changing the state of Black art 
  • Why the concept of fair use in intellectual property law is crucial to the artistic process 
  • How biases in machine learning training data can affect art 
  • Why new tools can never replace the mind of a live, experienced artist 

Dr. Nettrice R. Gaskins is a digital artist, academic, cultural critic, and advocate of STEAM (science, technology, engineering, arts, and math) fields whose work explores “techno-vernacular creativity” and Afrofuturism. She teaches, writes, “fabs,” and makes art using algorithms and machine learning. She has taught multimedia, visual art, and computer science to high school students, and is now assistant director of the Lesley STEAM Learning Lab at Lesley University. She was a 2021 Ford Global Fellow, serves as an advisory board member for the School of Literature, Media, and Communication at Georgia Tech, and is the author of “Techno-Vernacular Creativity and Innovation” (2021). She earned a BFA in Computer Graphics with honors from Pratt Institute in 1992; an MFA in Art and Technology from the School of the Art Institute of Chicago in 1994; and a doctorate in Digital Media from Georgia Tech in 2014.

What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

NETTRICE GASKINS
I just think we have a need to remix, to combine, and that's where a lot of our innovation comes from, our ability to take things that we have access to. And rather than see it as a deficit, I see it as an asset because it produces something beautiful a lot of the times. Something that is really done for functional reasons or for practical reasons, or utilitarian reasons is actually something very beautiful, or something that takes it beyond what it was initially intended to be.

CINDY COHN
That's Nettrice Gaskins. She’s a professor, a cultural critic and a digital artist who has been using algorithms and generative AI as a part of her artistic practice for years.

I’m Cindy Cohn - executive director of the Electronic Frontier Foundation.

JASON KELLEY
And I’m Jason Kelley - EFF’s Activism Director. This is our podcast series How to Fix the Internet.

CINDY COHN
On this show, we’re trying to fix the internet – or at least trying to envision what the world could look like if we get things right online. At EFF we spend a lot of time pointing out the way things could go wrong – and jumping in to the fray when they DO go wrong. But this show is about envisioning, and hopefully helping create, a better future.

JASON KELLEY
Our guest today is Nettrice Gaskins. She’s the assistant director of the Lesley STEAM learning lab at Lesley University and the author of Techno-Vernacular Creativity and Innovation. Her artwork has been featured by the Smithsonian, among many other institutions.

CINDY COHN
Nettrice has spoken about how her work creating art using generative AI prompts is directly related to remix culture and hip hop and collage. There’s a rich tradition of remixing to create new artworks that can be more than the sum of their parts, and – at least the way that Nettrice uses it – generative AI is another tool that can facilitate this kind of art. So we wanted to start the conversation there.

NETTRICE GASKINS
Even before hip hop, even the food we ate, um, poor people didn't have access to, you know, ham or certain things. So they used the intestines of a pig and then they created gumbo, because they had a little bit of this and a little bit of that and they found really creative and innovative ways to put it all together that is now seen as a thing to have, or have tried. So I think, you know, when you have around the world, not just in the United States, but even in places that are underserved or disenfranchised you have this, still, need to create, and to even innovate.

And I think a lot of the history of African Americans, for example, in the United States, they weren't permitted to have their own languages. But they found ways to embed it in language anyway. They found ways to embed it in the music.

So I think along the way, this idea of what we now know as remixing or sampling or collage has been there all along and this is just one other way.  I think that once you explain how generative AI works to people who are familiar with remixing and all this thing in the history, it clicks in many ways.
Because it starts to make sense that it is instead of, you know, 20 different magazines I can cut images out and make a collage with, now we're talking about thousands of different, pieces of information and data that can inform how an image is created and that it's a prediction and that we can create all these different predictions. It sounds a lot like what happens when we were looking at a bunch of ingredients in the house and realizing we had to make something from nothing and we made gumbo.

And that gumbo can take many different forms. There's a gumbo in this particular area of the country, then there's gumbo in this particular community, and they all have the same idea, but the output, the taste, the ingredients are different. And I think that when you place generative AI in that space, you're talking about a continuum. And that's kind of how I treat it when I'm working with gen AI.

CINDY COHN
I think that's so smart. And the piece of that that's important that's kind of inherent in the way you're talking about it, is that the person doing the mixing, right? The chef, right, is the one who who does the choices and who's the chef matters, right?

NETTRICE GASKINS
And also, you know, when they did collage, there's no attribution. So if you look at a Picasso work that's done collage, he didn't, you know, all the papers, newspapers that he took from, there's no list of what magazines those images came from, and you could have hundreds to 50 to four different references, and they created fair use kind of around stuff like that to protect, you know, works that are like, you know, collage or stuff from modern art.

And we're in a situation where those sources are now quadrupled, it's not even that, it's like, you know, how many times, as opposed to when we were just using paper, or photographs.

We can't look at it the same because the technology is not the same, however, some of the same ideas can apply. Anybody can do collage, but what makes collage stand out is the power of the image once it's all done. And in some cases people don't want to care about that, they just want to make collage. They don't care, they're a kid and they just want to make paper and put it together, make a greeting card and give it to mom.

Other people make some serious work, sometimes very detailed using collage, and that's just paper, we're not even talking about digital collage, or the ways we use Adobe Photoshop to layer images and create digital collages, and now Photoshop's considered to be an AI generator as well. So I think that if we look in the whole continuum of modern art, and we look at this need to curate abstractions from things from life.

And, you know, Picasso was looking at African art, there's a way in which they abstracted that he pulled it into cubism, him and many other artists of his time. And then other artists looked at Picasso and then they took it to whatever level they took it to. But I think we don't see the continuum. We often just go by the tool or go by the process and not realize that this is really an extension of what we've done before. Which is how I view gen AI. And the way that I use it is oftentimes not just hitting a button or even just cutting and pasting. It is a real thoughtful process about ideas and iteration and a different type of collage.

CINDY COHN
I do think that this bridges over into, you know, an area where EFF does a lot of work, right, which is really making sure we have a robust Fair Use doctrine that doesn't get stuck in one technology, but really can grow because, you know we definitely had a problem with hip hop where the, kind of, over-copyright enforcement really, I think, put a damper on a lot of stuff that was going on early on.

I don't actually think it serves artists either, that we have to look elsewhere as a way to try to make sure that we're getting artists paid rather than trying to control each piece and make sure that there's a monetization scheme that's based upon the individual pieces. I don't know if you agree, but that's how I think about it.

NETTRICE GASKINS
Yeah, and, you know, just like we can't look at collage traditionally and then look at gen AI as exactly the same. There are some principles and concepts around it that I think are very similar, but, you know, there's just more data. This is much more involved than just cutting and pasting on canvas board or whatever we're doing now.

You know, I grew up with hip hop, hip hop is 50 this year, I'm 53, so I was three, so hip hop is my whole life. You know, from the very beginning to now. And I've also had some education, some training, in sampling. I had a friend who was producing demos, and I would sit there all night and watch him splice up, you know, different sounds. And eventually I learned how to do it myself. So I know the nature of that. I even spliced up sampled music further to create new compositions with that.

And so I'm very much aware of that process and how it connects even from the visual arts side, which is mostly what I am as a visual artist, of being able to splice up and, and do all that. And I was doing that in 1992.

CINDY COHN
Nice.

NETTRICE GASKINS
I was trying to do it in 1987, the first time I used the Amiga and Deluxe Paint; I was trying to make collages then, in addition to what I was doing in my visual arts classes outside of that. So I've always been interested in this idea. But if you look at the history of even the music, these were poor kids living in the Bronx. These were poor kids and they couldn't afford all the things the other kids who were well off had, so they would go to the trash bins and take equipment and re-engineer it and come up with stuff that now DJs around the world are using. Stuff that people around the world are doing now, but they didn't have it, so they had to be innovative. They had to think outside the box. They weren't musicians. They didn't have access to instruments, but what they did have access to was records. And they had access to, you know, discarded electronics, and they were able to figure out a way to stretch out a rhythm so that people could dance to it.

They had the ability to layer sounds so that there was no gap between one album and the next, so they could continue that continuous play so that the party kept going. They found ways to do that. They didn't go to a store and buy anything that made that happen. They made it happen by tinkering and doing all kinds of things with the equipment that they had access to, which is from the garbage.

CINDY COHN
Yeah, absolutely. I mean, Grandmaster Flash and the creation of the crossfader and a lot of actual, kind of, old school hardware development, right, came out of that desire and that recognition that you could take these old records and cut them up, right? Pull the, pull the breaks and, and play them over and over again. And I just think that it's pulling on something very universal. Definitely based upon the fact that a lot of these kids didn't have access to formal instruments and formal training, but also just finding a way to make that music, make that party still go despite that, there's just something beautiful about that.

And I guess I'm, I'm hoping, you know, AI is quite a different context at this point, and certainly it takes a lot of money to build these models. But I'm kind of interested in whether you think we're headed towards a future where these foundational models or the generative AI models are ubiquitous and we'll start to see the kids of the future picking them up and building new things out of them.

NETTRICE GASKINS
I think they could do it now. I think that with the right situation where they could set up a training model and figure out what data they wanted to go into the model and then use that model and build it over time. I just think that it's the time and the space, just like the time and the space that people had to create hip hop, right?

The time and the space to get in a circle and perform together or get into a room and have a function or party. I think that it was the time. And I think that, we just need that moment in this space to be able to produce something else that's more culturally relevant than just something that's corporate.
And I think about my experiences as an artist, as someone who grew up around hip-hop all my life; some of the people that I know personally are pioneers in that space of hip-hop. But also, I don't even stay in hip-hop. You know, I was talking about sashiko, that's a Japanese hand-stitching technique that I'm applying and remixing. And for me to do that with Japanese people, you know, their first concern was that I didn't know enough about sashiko to be going there. And then when I showed them what I knew, they were shocked. Like, when I go in, I go deep in. And so they were very like, oh, okay. No, she knows.

Sashiko is a perfect example. If you don't know about sashiko embroidery and hand stitching: these were poor people, and they wanted to stretch out their fabrics and clothing for longer because they were poor. So they figured out ways to create these intricate stitching patterns that reinforced the fabric so that it would last longer. And then they would do patches, like patchwork quilts, and it was both a quilting and an embroidery technique for poor people, once again, using what they had.

When we think about gumbo, here's another situation of people who didn't have access to fancy ingredients, but found a way. And then the work that they did was beautiful. Aesthetically it was beautiful; it was utilitarian in terms of why they did it. But now we have this entire cultural art form that comes out of that, that's beautiful.

And I think that's kind of what has happened along the way. You know, just like there are gatekeepers in the art world, so the Picassos get in, but not necessarily everyone else. I think about Romare Bearden, who did get into some of the museums and things. But most people know of Picasso; they don't know about Romare Bearden, who decided to use collage to represent black life.

But I also feel like, we talk about equity, and we talk about who gets in, who has the keys. The same thing occurs in generative AI, or just AI in general. The New York Times had an article recently that listed all the AI pioneers, and no women were included, it was just men. And then there was a Medium article: here were 13, 15 women you could have had on your list. Once again, we see it, where people are saying who holds the keys. These are the people that hold the keys. And in some cases, it's based on what academic institution you're at.

So again, who holds the keys? Even among the women who were listed, it's the MITs and the Stanfords. And somewhere out there, there's an AI innovator who isn't in any of those institutions, but is doing some cool things within a certain niche, you know. So we don't hear those stories, and there's not even an opening to explore that. The person who wrote that piece and just included those men didn't even think about women, didn't even think about the other possibilities of who might be innovating in that space.

And so we continue to have this year in and year out every time there's a new change in our landscape, we still have the same kinds of historical omissions that have been going on for many years.

JASON KELLEY
Could we lift up some of the work that you've been doing and talk about the specific process or processes that you've used? How do you actually use this? 'Cause I think a lot of people that listen probably just know that you can go to a website and type in a prompt and get an image, and they don't know about, like, training it, how you can do that yourself and how you've done it. So I'm wondering if you could talk a little bit about your specific process.

NETTRICE GASKINS
So, I think, you know, people were saying, especially maybe two years ago, that my color scheme was unusually advanced for just using Gen AI. Well, I took two semesters of mandatory color theory in college.

So I had color theory training long before this stuff popped up. I was a computer graphics major, but I still had to take those classes. And so, yeah, my sense of color theory and color science is going to be strong because I had to do that every day as a freshman. And so that will show up.

I've had to take drawing, I've had to take painting. And a lot of those concepts that I learned as an art student go into my prompts. So that's one part of it. I'm using colors. I know the complement. I know the split complements.

I know the interactions between two colors that came from training, from education, of being in the classroom with a teacher or professor, but also, like one of my favorite books is Cane by an author named Jean Toomer. He only wrote one book, but it's a series of short stories. I love it. It's so visual. The way he writes is so visual. So I started reinterpreting certain aspects of some of my favorite stories from that book.

And then I started interpreting some of those words and things and concepts and ideas in a way that I think the AI can understand, the generator can understand.

So another example would be Maya Angelou's Phenomenal Woman. There's this part of the poem that talks about oil wells, it's one of the lines. So when I generated my interpretation of that part of the poem, the oil wells weren't there, so I just extended my frame, using the same generator, and drew a box: in this area of my image, I want you to generate oil wells.

And then I posted it and people have this reaction, right? And then I actually put up the poem and said, this is Midjourney. Its reinterpretation is not just at the level of reinterpreting the image, like saying, I want to create a Picasso.

I don't, I don't want my work to look like Picasso at all or anybody. I want my work to look like the Cubist movement mixed with the Fauvists mixed with the collages mixed with this, with … I want a new image to pop up. I want to see something brand new and that requires a lot of prompting, a lot of image prompting sometimes, a lot of different techniques.

And it's a trial and error kind of thing until you kind of find your way through. But that's a creative process. That's not hitting a button. That's not cutting and pasting or saying make this look like Picasso. That's something totally different.

JASON KELLEY
Let’s take a moment to say thank you to our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.

And now back to our conversation with Nettrice Gaskins.

The way Nettrice talks about her artistic process using generative AI makes me think of that old cliche about abstract art – you know, how people say 'my kid could paint that.' There's a misconception now with Gen AI that people assume you just pop in a few words and boom, you get a piece of art. Sometimes that’s true, but Nettrice's approach goes far beyond a simple prompt.

NETTRICE GASKINS
Well, I did a talk recently, and it may have been for the Philadelphia Museum of Art. I did a lecture, and in the Q&A they said, could you just demo what you do? You have some time. And I remember after I demoed, they said, oh, that definitely isn't hitting a button. That is much more. Now I feel like I should go in there.

And a lot of times people come away feeling like, now I really want to get in there and see what I can do. 'Cause it isn't. I was showing, you know, in what, 30 seconds to a minute, basically how I generate images, which is very different than, you know, what they might think. And that was just within Midjourney. Another reason, personally: before I got into the prompt side, it was image style transfer, it was deep style. It wasn't prompt based. So it was about applying a style to an image. Now you can apply many styles to one image. But then it was like, apply a style to this photo. And I spent most of my time in generative AI doing that until 2021, with DALL-E and Midjourney.

So before that, there were no prompts, it was just images. But then a lot came from that. The Smithsonian show came from that earlier work. It was like right on the edge of DALL-E and all that stuff coming. But I feel like, you know, my approach even then was somehow I didn't see images that reflected me or reflected, um, the type of images I wanted to see.

So that really propelled me into going into generative AI from the image style side, applying styles. For example, if you're a computer graphics major or you do computer graphics development or CGI, you may know something called subsurface scattering.
And subsurface scattering is an effect people apply to skin. It's kind of like a milky glow. It's very well known; you texture and model your person based on that. However, it dulls dark skin tones. And if you look at photography, and all the years with film and all that stuff, we have all these examples of where things were calibrated a certain way, not quite for darker skin tones. Here we are again. But there's something called specular reflection, or shine, and apparently when applied, it brings up and enhances darker skin tones. So I wondered, using neural image style transfer or deep style, if I could apply that shine, rather than subsurface scattering, to my photographs and create portraits of darker skin tones that enhanced features.

Well that succeeded. It worked. And I was just using 18th century tapestries that had metallics in them. So they have gold or they, you know, they had that shine in it as the style applied.

CINDY COHN
Ah.

NETTRICE GASKINS
So I did a series of portraits called the Gilded Series. And around the time I was working on that and exploring that, um, Greg Tate, the cultural critic and writer, passed away in 2021, and, um, I did a portrait. I applied my tapestry, the style, and it was a selfie he had taken of himself. So it wasn't like it was from a magazine or anything like that. And then I put it on social media and immediately his family and friends reached out.
So now it's a 25 foot mural in Brooklyn.

CINDY COHN
Wow.

JASON KELLEY
It's beautiful. I was looking at it earlier. We'll link to it.

CINDY COHN
Yeah, I’ve seen it too.

NETTRICE GASKINS
And that was not prompt based; that's just applying some ideas around specular reflection, and it says "from the Gilded Series" on the placard. But that is generative AI. And that is remixing. Some of that is in Photoshop, I Photoshopped it, and some of that is three different outputs from the generator that were put together and combined in Photoshop to make that image.

And when it's nighttime, because it has metallics in there, there's a little bit of a shine to the images. When I see people tag me, if they're driving by in the car, you see that glow. I mean, you see that shine, and it does apply. And that came from experimenting with an idea using generative AI.

CINDY COHN
So, when people are thinking about AI right now, you know, we've really worked hard, and EFF has been part of this, but others as well, to put the threat of bias front and center as something we also have to talk about, because it's definitely been, historically, a problem with AI and machine learning systems, including not recognizing black skin.

And I'm wondering as somebody who's playing with this a lot, how do you think about the role bias plays and how to combat it. And I think your stories kind of do some of this too, but I'd love to hear how you think about combating bias. And I have a follow up question too, but I want to start with that.

NETTRICE GASKINS
Yeah, some of the presentations I've done: I did a Power of Difference talk for Bloomberg, where I was talking to the black community about generative AI. There was a paper I read a month or two ago, um, a study of all the main popular AI generators, like Stable Diffusion, Midjourney, DALL-E, maybe another. They did an experiment to show bias, to show why this is important, and one of the prompts was a portrait of a lawyer. And they did it in all of them, and it was all men...

CINDY COHN
I was going to say it didn't look like me either. I bet.

NETTRICE GASKINS
I think it was DALL-E that was more diverse. So all men, but it was like a black guy, and then there was like a racially ambiguous guy. And, um, was it Midjourney? For Deep Dream Generator, it was just a black guy with a striped shirt.

But for a portrait of a felon, um, Midjourney had, still all men, but kind of more diverse, racially ambiguous men. But DALL-E produced three apes and a black man. And so my comment to the audience, or to listeners, is: we know that there's history, in Jim Crow and before that, of linking black men, black people, to apes. Somehow that's in there. The only thing in the prompt was "portrait of a felon," and there are three apes and a black man. How do apes play into "felon?" The connection isn't "felon," the connection is the black man, and then to the apes. That's sitting somewhere and it easily popped up.

And there’s been scary stuff that I've seen in Midjourney, for example. And I'm trying to do a blues musician and it gives me an ape with a guitar. So it's still, you know, and I said, so there's that, and it's still all men, right?

So then, because I have a certain particular knowledge, I do know of a lawyer, Constance Baker Motley. So I did a portrait of Constance Baker Motley. But you would have to know that. If I'm a student, or someone who doesn't know any lawyers, and I do a portrait of a lawyer for an assignment, or a portrait of whatever, who knows what might pop up, and then how do I process that?

We see bias all the time. Because of who I am, and I know history, I know why the black man and the apes or animals popped up for "felon," but it still happened, and we still have this reality. And so one of the things that's needed, in order to offset some of that, is artist or user intervention.
So we intervene by changing the image. Thumbs up, thumbs down. Or we can, in the prediction, say, this is wrong. This is not the right information. And eventually it trains the model not to do that. Or we can create a Constance Baker Motley, you know, of our own to offset that, but we would have to have that knowledge first.

And a lot of people don't have that knowledge first. I can think of a lawyer off the top, you know, that's a black woman, that's different from what I got from the AI generators. But that intervention right now is key. And then we gotta have more people who are looking at the data, who are looking at the data sources, and are also training the model, and more ways for people from diverse groups to train the model, or help train the model, so we get better results.

And that usually doesn't happen. These biases pop up easily. And so that's kind of my answer to that.

CINDY COHN
One of the stories that I've heard you tell is about working with these dancers in Trinidad and training up a model of the Caribbean dancers. And I'm wondering if one of the ways you think about addressing bias, I guess same as with your lawyer story, is sticking other things into the model, or into the training data, to try to give it a broader frame than it might otherwise have.

But I'm, I'm wondering if that's something you do a lot of, and, and I, I might ask you to tell that story about the dancers, because I thought it was cool.

NETTRICE GASKINS
That was a Mozilla Foundation sponsored project for many different artists and technologists to interrogate AI, generative AI specifically, but AI in general. And so we did choose that, 'cause two of my team, it was a team of three women, me and two other women, one's a dancer, one's an architect, and those two women are from the Caribbean.

And so, because during the lockdown there was no festival, there was no carnival, a lot of people across those cultures were doing it on Zoom, having Zoom parties. So we just had Zoom parties with the data we were collecting. We were explaining generative AI, what we were doing and how it worked, to the Caribbean community.

CINDY COHN
Nice.

NETTRICE GASKINS
And then we would put the music on and dance, so we were getting footage from the people who were participating. And then we used PoseNet and machine learning to produce an app that allows you to dance with yourself, a mini dancer, or to dance with shapes, or to create a painting with your movement, with colors from Carnival.

And one of the members, Vernelle Noel, she was using GAN, Generative Adversarial Networks to produce costuming, um, that you might see, but in really futuristic ways, using GAN technology. So different ways we could do that. We explored that with the project.

CINDY COHN
One of the things that, again, I'm kind of feeding you stuff back from yourself because I found it really interesting as you're talking about, like, using these tools in a liberatory way for liberation, as opposed to surveillance and control. And I wondered if you have some thoughts about how best to do that, like what are the kinds of things you look for in a project to try to see whether it's really based in liberation or based in kind of surveillance and monitoring and control, because that's been a long time issue, especially for people from majority countries.

NETTRICE GASKINS
You know, we were very careful with the data from the Carnival project. We said after a particular set period of time, we would get rid of the data. We were only using it for this project for a certain period of time, and we have, you know, signed, everyone signed off on that, including the participants.
Kind of like IRB if you're an academic. And in one case, Vernelle was an academic, so it was done through her university, so there was IRB involved. But, um, I think it was just an art project. But we wanted to be careful with data. Like, we wanted people to know we're going to collect this, and then we're going to get rid of it once we, you know, do what we need to do.

And I think that's part of it. But also, you know, people have been doing stuff with surveillance technology for a good minute. Um, artists have been making statements using surveillance technology. People have been making music; there's a lot of rap music and songs about surveillance, about being watched. And, you know, in Second Life, I did a wall of eyes that follow you everywhere you go...

CINDY COHN
Oof.

NETTRICE GASKINS
...to curate the feeling of always being watched. And for people who don't know what that's like, it created that feeling in them as avatars. They were like, why am I being watched? And I'm like, this is you, if you're black, at a grocery store, or if you go to Neiman Marcus, you know, a fancy department store. This might be what you feel like. Trying to simulate that in virtual 3D was the goal.

I'm not so much trying to simulate; I'm trying to say, here's another experience. There are people who really get behind the idea that you're taking from other people's work, and that that is the danger. And some people are doing that. I don't want to say that that's not the case. There are people out there who don't have a visual vocabulary, but want to get in here. And they'll use another person's artwork or their name to play around with tools. They don't have an arts background. And so they are going to do that.

And then there are people like me who want to push the boundaries, and want to see what happens when you mix different tools and do different things. And to those people who say that you're taking other people's work, I say: opt out. Do that. I still continue, because there's been such a lack of representation from artists like me in these spaces. Even if you opt out, it doesn't change my process at all.

And that says a lot about gatekeepers, equity, you know, representation in galleries and museums and all of that. Those things are in certain circles; for digital artists, it's places like DeviantArt, you know. It just doesn't get at some of the real gray areas around this stuff.

CINDY COHN
I think there's something here about people learning as well, where, you know, young musicians start off and they want to play like Beethoven, right? But at some point you need to find your own voice. And that, to me, is the thing. Obviously there are people who are just cheaters, who are trying to pass themselves off as somebody else, and that matters and that's important.

But there's also just this period of, I think, artistic growth, where you kind of start out trying to emulate somebody who you admire, and then through that process, you kind of figure out your own voice, which isn't going to be just the same.

NETTRICE GASKINS
And, you know, there was some backlash over a cover that I had done for a book. And when the publisher came back, they said, where are your sources? It was a 1949 photograph of my mother and her friends. It has no watermark, so we don't know who took the photo. And obviously, from 1949, it's almost in the public domain; it's, like, right on the edge.

CINDY COHN
So close!

NETTRICE GASKINS
But none of those people are living anymore. My mom passed in 2018. So I used that as a source: my mom, a picture of my mom from a photo album. Or something else; if it's a client, they pay for licensing of particular stock photos. In one case, I used three stock photos because we couldn't find a stock photo that represented the character of the book.

So I had to do like a Frankenstein of three to create that character. That's a collage. And then that was uploaded to the generator, after that, to go further.
So yeah, I think that, you know, when we get into the backlash, a lot of people think this is all you're doing. And then when I open up the window, or open up the door, and say, look at what I'm doing: oh, that's not what she was doing at all!

That's because people don't have the education and they're hearing about it in certain circles, but they're not realizing that this is another creative process that's new and it's entering our world that people can reject or not.

Like, people said digital photography was going to take our jobs. Really, the best photography comes from being in a darkroom, going through the process with the enlarger and the chemicals. That's the true photography, not what you do with these digital cameras and all that stuff, using software; that's not real photography. Same kind of idea, but here we are talking about something else. But a very, very similar reaction.

CINDY COHN
Yeah, I think people tend to want to cling to the thing that they're familiar with as the real thing, and a little slow sometimes to recognize what's going on. And what I really appreciate about your approach is you're really using this like a tool. It's a complicated process to get a really cool new paintbrush that people can create new things with.

And I want to make sure that we're not throwing out the babies with the bathwater as we're thinking about this. And I also think that, you know, my hope and my dream is that in our, in our better technological future, you know, these tools will be far more evenly distributed than say some of the earlier tools, right?
And you know, Second Life and, and things like that, you know, were fairly limited by who could have the financial ability to actually have access. But we have broadened that aperture a lot, not as far as it needs to go now. And so, you know, part of my dream for a better tech future is that these tools are not locked away and only people who have certain access and certain credentials get the ability to use them.

But really, we broaden them out. That, that points towards more open models, open foundational models, as well as, um, kind of a broader range of people being able to play with them because I think that's where the cool stuff's gonna probably come from. That's where the cool stuff has always come from, right?

It hasn't come from the mainstream corporate business model for art. It's come from all the little nooks and crannies where the light comes in.

NETTRICE GASKINS
Yeah. Absolutely.

CINDY COHN
Oh Nettrice, thank you so much for sharing your vision and your enthusiasm with us. This has just been an amazing conversation.

NETTRICE GASKINS
Thanks for having me.

JASON KELLEY
What an incredible conversation to have, in part because, you know, we got to talk to an actual artist about their process and learn that, well, I learned that I know nothing about how to use generative AI and that some people are really, really talented and it comes from that kind of experience, and being able to really build something, and not just write a sentence and see what happens, but have an intention and a, a dedicated process to making art.

And I think it's going to be really helpful for more people to see the kind of art that Nettrice makes and hear some of that description of how she does it.

CINDY COHN
Yeah. I think so too. And I think the thing that just shines clear is that you can have all the tools, but you need the artist. And if you don't have the artist with their knowledge and their eye and their vision, then you're not really creating art with this. You may be creating something, something you could use, but you know, there's just no replacing the artist, even with the fanciest of tools.

JASON KELLEY
I keep coming back to the term that, uh, was applied to me often when I was younger, which was "script kiddie," because I never learned how to program, but I was very good at finding some code and using it. And I think that a lot of people think that's the only thing that generative AI lets you do.

And it's clear that if you have the talent and the, and the resources and the experience, you can do way more. And that's what Nettrice can show people. I hope more people come away from this conversation thinking like, I have to jump onto this now because I'm really excited to do exactly the kinds of things that she's doing.

CINDY COHN
Yeah, you know, she made a piece of generative art every day for a year, right? I mean, first of all, she comes from an art background, but then, you know, you've got to really dive in, and I think that cool things can come out of it.

The other thing I really liked was her recognition that so much of our, our culture and our society and the things that we love about our world comes from, you know, people on the margins making do and making art with what they have.

And I love the image of gumbo as a thing that comes out of cultures that don't have access to the finest cuts of meat and seafood and instead build something else, and she paired that with an image of Sashiko stitching in Japan, which came out of people trying to think about how to make their clothes last longer and make them stronger. And this gorgeous art form came out of it.

And we can think of today's tools, whether they're AI or others, as another medium in which we can begin to make things of beauty, or things that are useful, out of, you know, maybe the dribs and drabs of something that was built for a corporate purpose.

JASON KELLEY
That's exactly right. And I also loved that. And I think we've discussed this before at EFF many times, but the comparison of the sort of generative AI tools to hip hop and to other forms of remix art, which I think probably a lot of people have made that connection, but I think it's, it's worth saying it again and again, because it is, it is such a, a sort of clear through line into those kinds of techniques and those kinds of art forms.

CINDY COHN
Yeah. And I think that, you know, from EFF's policy perspective, you know, one of the reasons that we stand up for fair use and think that it's so important is the recognition that arts like collage and like using generative AI, you know, they're not going to thrive if, if our model of how we control or monetize them is based on charging for every single little piece.

That's going to limit, just as it limited in hip hop, it's going to limit what kind of art we can get. And so that doesn't mean that we just shrug our shoulders and don't, you know, and say, forget it, artists, you're never going to be paid again.

JASON KELLEY
I guess we’re just never going to have hip hop or

CINDY COHN
Or the other side, which is we need to find a way, you know, we, we, there are lots of ways in which we compensate people for creation that aren't tied to individual control of individual artifacts. And, and I think in this age of AI, but in previous ages as well, like the failure for us to look to those things and to embrace them has real impacts for our culture and society.

JASON KELLEY
Thanks for joining us for this episode of How to Fix the Internet.

If you have feedback or suggestions, we'd love to hear from you. Visit EFF.org slash podcast and click on listener feedback. While you're there, you can become a member, donate, maybe pick up some merch, and just see what's happening in digital rights this week and every week.

This podcast is licensed Creative Commons Attribution 4.0 International and includes music licensed Creative Commons Unported by their creators.

In this episode, you heard Xena's Kiss slash Madea's Kiss by MWIC and Lost Track by Airtone featuring MWIC. You can find links to their music in our episode notes or on our website at EFF.org slash podcast.

Our theme music is by Nat Keefe of BeatMower with Reed Mathis.

How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology.

We’ll see you next time.

I’m Jason Kelley…

CINDY COHN
And I’m Cindy Cohn.

Three projects honored at OW2con'24: Mercator, WIdaaS, and Centreon

The OW2con conference honored three projects from the OW2 community:

  • Community Award: Mercator, for information system mapping
  • Market Award: WIdaaS, for identity and access management
  • Technology Award: Centreon, for monitoring

OW2con24 Awards

Mercator

The OW2con'24 Best Project Community Award goes to Mercator. This web application maps the information system, following the ANSSI recommendations set out in its "Guide de la cartographie du système d'information" (information system mapping guide). The application supports a holistic approach to risk management, enabling comprehensive protection and defense as well as resilience of the information system. Mapping the information system is an essential tool for keeping it under control, and it is a legal obligation for Operators of Vital Importance (OIV) and Operators of Essential Services (OSE).

WIdaaS

The OW2con'24 Best Project Market Award goes to W'IdaaS (Worteks IDentity as a Service), a cloud-based identity and access management solution, accessible through web interfaces and driven by REST APIs. It builds on the OW2 FusionIAM project and supports multi-factor authentication (2FA/MFA). Its features and business model match the current expectations of the enterprise and local-government markets.

Centreon

Centreon is an open, extensible, and easy-to-integrate monitoring platform for end-to-end monitoring of enterprise infrastructures. The solution is interoperable with ITSM tools (information technology service management, see ITIL) as well as observability, data analytics, orchestration, and automation tools.

A word from OW2

(Editor's note: two submitted articles were merged into this one.)

“This year, the OW2 association is delighted to honor three open source software projects that can help companies protect their entire digital assets and keep control of their infrastructures on premises, at the edge, and in the cloud. Centreon, W’IDaaS, and Mercator earn their awards for their openness to third-party solutions, their features, their usability, and their business models,” says OW2 CEO Pierre-Yves Gibello.

The OW2con'24 Best Project Awards recognize exemplary achievements. They give OW2 members, project leaders, and their teams community recognition and opportunities for market visibility.

