Tech & AI Archives - Plural Policy
https://pluralpolicy.com/tag/tech-ai/

Civil Liberties and Government Surveillance
https://pluralpolicy.com/blog/government-surveillance-civil-liberties/?utm_source=rss&utm_medium=rss&utm_campaign=government-surveillance-civil-liberties
Tue, 02 Jul 2024 16:43:34 +0000
What is government surveillance, and how does it relate to Americans' civil liberties? Learn more about the history and current issues.

The post Civil Liberties and Government Surveillance appeared first on Plural Policy.

Civil liberties and their foundations date back to the dawn of civilization. Concepts enshrined in the Biblical Ten Commandments, the English Magna Carta, and the U.S. Constitution and Bill of Rights are fundamental examples of the rights and freedoms guaranteed to individuals.

Today, many of these rights and freedoms are enshrined in a country’s constitution or legal framework. We all know them well. For instance, the Bill of Rights protects and guarantees freedom of speech, privacy, and assembly. It also protects against unwarranted governmental intrusion.

But critics argue that modern technology has intruded on individual rights and civil liberties, with social media and smartphones accelerating this intrusion. Government surveillance of personal technology was unimaginable just decades ago. Surveillance programs often aim to enhance national security and prevent crime. Often, however, critics find that they infringe upon personal privacy and freedom.

Following 9/11, many governments expanded their surveillance capabilities in an effort to counter terrorism. They employed technologies like data mining, wiretapping, and online monitoring. Critics argue that these practices often lack transparency and oversight, enabling abuses of power. Overall, critics argue that mass surveillance erodes trust in government, degrades free speech, and disproportionately targets marginalized communities.

Balancing civil liberties with security needs is complex. Legal frameworks like the Fourth Amendment are designed to protect against unreasonable searches and seizures, but rapid technological advancement, often justified in the name of national security, has outpaced existing protections.

What Is Government Surveillance?

Government surveillance involves monitoring and collecting information on individuals, groups, or activities. State and federal law enforcement work with regulatory agencies to oversee this effort. This can include:

  • Intercepting communications like phone calls and emails
  • Observing physical movements through CCTV and drones
  • Gathering data from financial transactions, travel records, and social media

Some argue that surveillance is essential for national security, crime prevention, and public safety. However, many share significant concerns about privacy, civil liberties, and power abuses. These concerns are especially prevalent in our increasingly digitally connected culture. Almost any piece of electronic equipment can be used for listening or watching.

In particular, mass surveillance involves extensive monitoring of large populations. It harnesses the power of advanced technology and data analytics. Mass surveillance techniques only intensify debates surrounding the balance between security and individual rights. Ensuring transparency and accountability is essential to addressing these concerns.

Before the advent of the internet, smartphones, and social media, government surveillance was less reliant on technology. It involved practices like mail interception, informants, spies, and even monitoring of the press.

The History of Government Surveillance in the United States

In the United States, government surveillance has evolved over time. In the 19th century, surveillance techniques relied on practices like mail interception and the use of informants. Modern surveillance relies heavily on digital tools and technologies. Key moments include the establishment of the Secret Service in 1865, the Federal Bureau of Investigation (FBI) in 1908, and the National Security Agency (NSA) in 1952.

Passed in 2001, the Patriot Act expanded digital surveillance in the aftermath of 9/11. Later, whistleblower Edward Snowden’s 2013 revelations highlighted extensive NSA data collection. His claims sparked debates over privacy rights and civil liberties.

Pre-9/11 vs. Post-9/11 Government Surveillance 

Government surveillance has a long history, evolving significantly since the 19th century.

Pre-9/11 Government Surveillance

Below, we detail key tactics utilized by the U.S. government. We assess how these tactics and the agencies that employ them have evolved over time.

Mail Interception (1875-1890s)

Unsurprisingly, mail was one of the most common forms of communication before the advent of the telephone. The U.S. Postal Service began monitoring mail for illegal or subversive content in the mid-1800s. The Comstock Act of 1873 allowed for the inspection of mail for obscene materials.

The Bureau of Investigation (1908)

The predecessor to the FBI was established in 1908. It focused on anarchists and political radicals active in the early 1900s. Anarchist organizations were considered by many to be early terror groups; an anarchist assassinated President William McKinley in 1901, and the 1914 assassination of Archduke Franz Ferdinand, a catalyst for the beginning of World War I, deepened fears of radical violence. Between 1919 and 1920, the Bureau conducted the Palmer Raids, which targeted leftist organizations.

The Espionage Act (1917) and Sedition Act (1918)

These acts criminalized dissent against the war effort. They led to increased surveillance of suspected anti-war activists and socialists. Similar acts were passed during World War II.

COINTELPRO (1956-1971)

The FBI’s Counter Intelligence Program aimed to disrupt the communist, socialist, and civil rights movements. It targeted prominent individuals and organizations within these movements, including the Black Panther Party and Martin Luther King, Jr.

Project SHAMROCK (1945-1975)

Under Project SHAMROCK, the NSA and its predecessor agencies monitored international telegraph communications entering and leaving the United States.

Church Committee (1975)

In the 1970s, Congressional investigations revealed widespread abuses by intelligence agencies, including the CIA’s Operation CHAOS, a domestic surveillance program targeting anti-war activists and political dissidents. Calls for reform led to the passage of the Foreign Intelligence Surveillance Act (FISA) in 1978.

Post-9/11 Government Surveillance

After 9/11, the United States significantly expanded its surveillance capabilities. It created various programs and laws to this end. Key programs and laws included:

The Patriot Act (2001)

Passed just weeks after 9/11, the Patriot Act armed law enforcement agencies with new tools to detect and prevent terrorism. In particular:

  • Section 215 allowed the FBI to obtain “any tangible things” for investigations to protect against terrorism. This led to mass collection of telephone metadata.
  • Section 206 permitted “roving wiretaps” on suspected terrorists, allowing surveillance across multiple communication devices.

NSA Surveillance Programs

In this time period, the NSA also created and employed various surveillance programs. This included:

  • Authorized by President George W. Bush, the Stellar Wind program involved the warrantless surveillance of domestic communications, including email and phone calls. The program was employed with the legal justification of the Authorization for Use of Military Force (AUMF) and subsequent presidential orders.
  • The PRISM Program enabled the NSA to collect internet communications from major tech companies like Google, Facebook, and Apple. This was justified under the Protect America Act of 2007 and the FISA Amendments Act of 2008.
  • The Upstream Collection program involved tapping into the internet backbone to capture communications directly as they traveled across network switches. This was conducted under Section 702 of the FISA Amendments Act.

Terrorist Surveillance Program (TSP)

Implemented by the NSA, the TSP monitored international communications involving suspected terrorists. This activity was conducted without warrants. It was justified by the AUMF and President Bush’s executive authority.

FISA Court

Though established in 1978, the FISA court gained prominence in the aftermath of 9/11. The court authorized broader surveillance requests, including bulk data collection, under Section 215 of the Patriot Act.

National Security Letters (NSLs)

NSLs allowed the FBI to demand data from companies without court orders. Expanded by the Patriot Act, NSLs could compel internet service providers, financial institutions, and others to provide customer information.

Enhanced Border and Immigration Surveillance

Post-9/11 government surveillance programs also involved enhanced border and immigration surveillance. Programs like the Automated Targeting System used data analytics to screen travelers and cargo. Operating under the Homeland Security Act of 2002, they are largely credited with enhancing border security.

The Legal Framework for Government Surveillance

Today, the legal framework for government surveillance includes the Fourth Amendment, the Foreign Intelligence Surveillance Act (FISA), the USA PATRIOT Act, and various executive orders. These laws and orders authorize surveillance with oversight from the FISC and various congressional committees.

The Fourth Amendment

The Fourth Amendment protects citizens from unreasonable searches and seizures, ensuring that any government surveillance or search must be justified by a warrant issued upon probable cause. Today, the Fourth Amendment remains crucial in balancing the needs of law enforcement and national security with individual rights to privacy. Below, we examine its role in the context of modern surveillance.

Warrant Requirement

The Fourth Amendment mandates that, in most cases, law enforcement must obtain a warrant from a neutral judge before conducting a search or surveillance. This warrant must be based on probable cause and must specify the place to be searched and the items to be seized.

Expectation of Privacy

In interpreting the Fourth Amendment, courts assess whether individuals have a reasonable expectation of privacy. This principle extends to various contexts, including homes, vehicles, digital communications, and personal data.

Exclusionary Rule

Under the exclusionary rule, evidence obtained in violation of the Fourth Amendment is generally inadmissible in court. The rule serves as a deterrent against unlawful searches and seizures.

Judicial and Legislative Oversight

The FISC oversees surveillance requests related to national security, ensuring they comply with the Fourth Amendment. Congress also enacts laws to regulate surveillance activities and protect civil liberties.

Overall, the Fourth Amendment remains a fundamental safeguard against intrusive government surveillance.

USA PATRIOT Act

The USA PATRIOT Act enhances government surveillance capabilities. It does so primarily through Section 215, which permits the collection of “any tangible things” relevant to terrorism investigations.

This provision was used to justify the bulk collection of telephone metadata by the NSA until 2015. Additionally, Section 206 allows for roving wiretaps on suspects who frequently change communication devices. Section 213 enables delayed notification of search warrants, known as “sneak and peek” warrants, aiding investigations by preventing suspects from tampering with evidence. Finally, Section 214 expands the use of pen registers and trap-and-trace devices to monitor internet and phone communications. These measures have sparked debates over privacy rights and government overreach. They influenced subsequent legislative efforts to balance national security with civil liberties.

Public Perception of Government Surveillance

Over time, Americans have become more disapproving of government surveillance programs. Edward Snowden’s 2013 revelations about NSA surveillance played a large role in this shift. By January 2014, 53% of Americans disapproved of government collection of phone and internet data. In response to growing surveillance concerns, many Americans changed their technology habits.

There are partisan differences in surveillance concerns. In 2021, 75% of Democrats, compared to 57% of Republicans, reported concerns about domestic extremist threats.

The COVID-19 pandemic also impacted surveillance views. Compared with their Republican counterparts, Democrats were more supportive of measures like temperature checks and cameras to enforce social distancing. Partisan differences were smaller for contact tracing apps.

Overall, Americans share significant concerns about government surveillance and data collection. Regardless of variations along partisan lines and the specifics of different policies, there is a common desire for more control over personal information and limitations on government surveillance.

Recent Topics in Government Surveillance

Recent concerns around government surveillance continue to grow. The NSA uses PRISM and Upstream to conduct mass surveillance programs. Current events highlight the extent to which government surveillance remains present in our society. In particular:

  • Throughout the Israel-Gaza Protests, Pro-Israel groups urged Congress and other lawmakers to reauthorize FISA to help monitor “foreign involvement in domestic anti-Semitic events.”
  • A 2023 report from the Government Accountability Office found that federal agencies use face recognition with little to no accountability, transparency, or training.
  • The Electronic Frontier Foundation (EFF) filed a lawsuit against California’s San Bernardino County, citing the county’s “aggressive” tracking activities through cell towers.

Get Started With Plural

Top public policy teams trust Plural to track legislation pertaining to government surveillance and civil liberties. With Plural, you’ll:

  • Access superior public policy data 
  • Be the first to know about new bills and changes in bill status
  • Streamline your day with seamless organization features
  • Harness the power of time-saving AI tools to gain insights into individual bills and the entire legislative landscape
  • Keep everyone on the same page with internal collaboration and external reporting all in one place

Interested in getting started? Create a free account or book a demo today!



Deepfake Laws: A Comprehensive Overview
https://pluralpolicy.com/blog/deepfake-laws/?utm_source=rss&utm_medium=rss&utm_campaign=deepfake-laws
Thu, 23 May 2024 18:50:59 +0000
Deepfakes pose threats to elections, personal integrity, and more. Learn about the scope of state- and federal-level deepfake laws today.

The post Deepfake Laws: A Comprehensive Overview appeared first on Plural Policy.

Artificial intelligence is driving a massive shift in the way voice, video, and other created content are consumed and shared. Deepfake content facilitates the creation and sharing of misleading, fake, explicit, or incorrect information that appears real. Proponents argue that deepfakes, or synthetic content, could have legitimate uses in movies, entertainment, and education. However, they also pose significant risks, including:

  • Spreading misinformation
  • Creating non-consensual explicit content
  • Perpetrating fraud

In particular, deepfakes pose a significant threat to politics, national security, and government. Deepfakes can take the form of anything from a fabricated candidate statement or video, to fake footage of an emergency, to fabricated audio recordings of government officials or politicians.

The rapid advancement of AI has made deepfakes increasingly sophisticated. Legislators and the Americans they serve share concerns about their potential misuse. The need for effective detection and regulation methods is clear. Read on to learn more about efforts to regulate deepfakes at the state and federal levels.

What Are Deepfakes?

A deepfake is a fake piece of media that is created using AI. The AI creates fabricated images, video clips, or audio snippets. The main issue with deepfakes is their “believability.” Deepfakes are often used to deceive viewers or users of social media on any number of controversial issues. This might include politics, national security, social issues, or notable people. AI-generated content is becoming more convincing and can closely mimic the appearance and voice of real individuals.

Why Regulate Deepfakes?

Deepfakes can cause significant harm. One of the primary concerns related to deepfakes is the spread of misinformation across various platforms, including social media and news organizations. Since 2016, misinformation has been a consistent target of federal legislation and regulatory efforts. Efforts to curb misinformation led to fact-checking of posts and user content on Meta (then Facebook). Currently, X offers a community notes function for users to correct or add context to misleading posts. A similar regulation or standard could be adopted for deepfakes across social media.

Without regulation, deepfakes could manipulate public opinion, interfere with elections, or incite unrest. The ability to produce highly convincing fake content is a serious threat to our democracy.

Privacy and personal security are also at risk with an increase in deepfakes. Citizens can be exploited with inappropriate and non-consensual content that is explicit. Everyday citizens must be protected from being victimized by deepfake content.

Deepfakes could also ramp up financial fraud, especially against seniors. While phishing emails and robocalls are already common attempts to defraud seniors, deepfake audio could be employed to impersonate state or federal officials or agencies seeking access to sensitive information.

How Are Legislators Approaching Deepfake Laws?

Congress has taken several steps to address the regulation of deepfakes. Legislators recognize the threats deepfakes pose to national security, privacy, and public trust. Efforts to understand deepfakes mirror those with big tech. Congress has held hearings to better understand the implications of deepfakes and explore technological solutions for detection and prevention.

In March of 2024, the House Committee on Oversight and Accountability held a hearing on deepfakes, called “Addressing Real Harm Caused by Deepfakes.” The hearing focused not only on the national security and political implications of deepfakes, but also their impact on everyday citizens, including children.

A follow-up report from the hearing found that improved technology will make it more difficult to distinguish deepfakes from real content. Technological advancement will further erode public trust in social media and the news. It also found that women and children were more likely to be targets of a deepfake video. 

Federal Deepfake Laws

The 2019 National Defense Authorization Act mandates that the U.S. Department of Homeland Security (DHS) produce annual reports on the use of deepfakes. This law was among the first targeting deepfakes. Since then, Congress, agencies, and the White House alike have taken significant steps on deepfake regulation.

Congress has recently introduced several bills to regulate and oversee deepfakes. These include:

  • The Preventing Deep Fake Scams Act. H.R. 5808 establishes the Task Force on Artificial Intelligence in the Financial Services Sector. The Task Force reports to Congress on issues related to AI in the financial services sector.
  • The DEEPFAKES Accountability Act. H.R. 5586 protects national security organizations from threats posed by deepfake technology. It also provides a legal recourse to victims of harmful deepfakes.
  • The Protecting Consumers From Deceptive AI Act. H.R. 7766 requires the National Institute of Standards and Technology to establish task forces on AI and deepfakes. The Task Forces aim to facilitate and inform the development of technical standards and guidelines related to the identification of content created by generative AI. These standards will ensure that audio or visual content created or substantially modified by AI includes a disclosure acknowledging the origin of such content.
  • The No AI Fraud Act. H.R. 6943 provides for individual property rights in likeness and voice.

Agency and White House Involvement

Beyond Congress, the White House and federal agencies have also taken steps to address deepfakes. The White House has conducted meetings and consultations with technology companies, researchers, and policymakers to discuss deepfake legislation and regulation.

Along with the DHS’s annual reports on deepfakes, mentioned above, other federal agencies have launched programs aimed at developing technologies to detect and counteract deepfakes. This includes the Defense Advanced Research Projects Agency (DARPA). DARPA’s Media Forensics (MediFor) program created automated tools to identify manipulated content.

The Federal Trade Commission has also engaged in efforts to protect consumers from the deceptive practices enabled by deepfakes. The Agency emphasizes the need for transparency and accountability in digital content creation.

State-Level Deepfake Laws

Several states have already passed or are looking at legislation to regulate deepfakes.

  • California is a pioneer in deepfake regulation with laws enacted as far back as 2019. A.B. 602, passed in 2019, allows victims of non-consensual deepfake pornography to sue creators. Also passed in 2019, A.B. 730 prohibits the distribution of deceptive media aimed at influencing elections within 60 days of an election.
  • With H.B. 1766 and S.B. 2396, Hawaii has focused on preventing misinformation or communications that could be considered deepfakes or fraudulent before or during elections.
  • Laws in Arizona, including S.B. 1078 and S.B. 1336, aim to prevent the false use of digitized audio recordings or the unauthorized dissemination of deepfake videos, audio or communications for financial gain or malicious intent.
  • Washington legislators have passed S.B. 5152, which targets deceptive media and election integrity. The law mandates disclosure of manipulated media that could influence elections. It also requires clear identification of any AI-generated content used in political campaigns.
  • Similar to other states, Florida has implemented S.B. 850, which requires the labeling of political ads or other election-related communications if they were created with generative AI.
  • In New York, Governor Hochul signed S. 1042 in October of 2023. The legislation aims to regulate the use of deepfakes in various domains, including non-consensual pornography and election interference. It will provide clear guidelines and penalties for the misuse of deepfake technology.

Looking Ahead: The Future of Deepfake Laws

Deepfake laws and regulations must be multifaceted. This technology is rapidly advancing, and avenues for potential misuse are many. Federal and state attempts to regulate this technology may aim to rein in bad actors or set common standards. Legislators must develop more comprehensive and specific laws targeting the malicious creation and dissemination of deepfakes, particularly those intended to deceive, defraud, or harm individuals and public institutions.

Key aspects of future regulations may include mandatory labeling of deepfake media. Measures such as these would ensure transparency and help viewers identify altered content. This might include the use of a blockchain system or immutable codes that identify original videos from deepfake content. Legal frameworks could impose severe penalties for creating or distributing harmful deepfakes, such as those used for political manipulation, financial fraud, or non-consensual explicit content.
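The labeling idea above can be approximated with cryptographic hashing. Below is a minimal Python sketch: the in-memory registry and the video bytes are hypothetical stand-ins for illustration only, and a production system would anchor digests in an append-only ledger (such as a blockchain) rather than a local set.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest that uniquely identifies this exact content."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical provenance registry: a publisher records the fingerprint of
# each original video at release time. An append-only ledger would prevent
# later tampering with the registry itself.
registry: set = set()

def register_original(data: bytes) -> str:
    """Record a newly released piece of media and return its fingerprint."""
    digest = fingerprint(data)
    registry.add(digest)
    return digest

def is_registered_original(data: bytes) -> bool:
    """True only if the content is byte-for-byte identical to a registered original."""
    return fingerprint(data) in registry

original = b"...raw bytes of an original campaign video..."
register_original(original)

# Any alteration, however small, changes the digest and fails verification.
tampered = original + b"x"
print(is_registered_original(original))   # True
print(is_registered_original(tampered))   # False
```

Note that this approach only proves a file matches a registered original; it cannot, by itself, detect whether an unregistered file is a deepfake, which is why detection research remains a separate effort.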

Deepfake threats are global. International cooperation between countries and regulatory bodies will be crucial. Global leaders must share technological solutions and best practices for detection and regulation.

Advancements in deepfake detection will play a significant role in the future of regulation. Governments may fund research into and development of AI-driven tools to identify deepfakes accurately and swiftly. Public awareness campaigns will also be vital to educate citizens about the existence and potential dangers of deepfakes. These efforts will foster a more informed and critical media consumption.

Overall, the evolving legal and regulatory landscape will aim to balance protecting society from the risks of deepfakes while allowing for legitimate use in fields like entertainment and education.

Using Plural to Monitor Deepfake Laws

Top public policy professionals trust Plural for their legislative tracking and stakeholder management needs. With Plural, you’ll:

  • Access superior public policy data 
  • Be the first to know about new bills and changes in bill status
  • Streamline your day with seamless organization features
  • Harness the power of time-saving AI tools to gain insights into individual bills and the entire legislative landscape
  • Keep everyone on the same page with internal collaboration and external reporting all in one place

Create a free account or book a demo today!



Big Tech Regulations: Efforts to Regulate Big Tech
https://pluralpolicy.com/blog/big-tech-regulations/?utm_source=rss&utm_medium=rss&utm_campaign=big-tech-regulations
Fri, 10 May 2024 17:13:24 +0000
As technology has evolved, tech giants and big tech regulations governing them have evolved alongside. Learn more today!

The post Big Tech Regulations: Efforts to Regulate Big Tech appeared first on Plural Policy.


The term “Big Tech” won’t be new to most readers. It’s a phrase that describes the rise and influence of technology companies. Today, large tech companies play a large role in the use of consumer technology and the economy in the United States. As technology has evolved, tech giants and the regulations governing them have evolved alongside.

What Are Big Tech Organizations?

In a little over two decades, the big tech moniker has been pushed and pulled between a handful of companies. The first big tech grouping centered around the rise of the internet, with Google, Microsoft, and Apple among the first well-known tech companies. Big tech rapidly changed as tech has proliferated in our daily lives. This includes the rise of Facebook, now Meta, and Apple’s launch and subsequent domination of smartphones and tablets. It extends to the rapid growth of YouTube and Amazon’s push into, well, every consumer service. The perpetual popularity of social media platforms underscores the role of Big Tech as well.

Today, Big Tech most commonly refers to a combination of Apple, Amazon, Meta (Facebook), Alphabet (Google), and Microsoft. The acronym FAANG (Facebook, Amazon, Apple, Netflix, Google) is often used when talking about leading tech companies, and Netflix and Tesla are also sometimes included in big tech groupings. Each of these companies holds considerable market value:

  • Apple controls 55% of U.S. smartphone sales. Its App Store contains 2.18 million apps to download to those iPhones.
  • Microsoft controls 70% of the world’s computer operating systems. More than a billion computers around the world run Windows.
  • Alphabet’s Android mobile operating system controls a 71% share of the global smartphone market. Ninety-two percent of all search queries are performed on Google.
  • Amazon has 39% of the U.S. e-commerce market, delivering over 4.75 billion packages per year in the U.S.
  • Meta’s platforms, including Facebook, Instagram, WhatsApp, and Messenger, each have over one billion users. Seventy-seven percent of global internet users use at least one Meta product.

The History of Big Tech Regulations

Big tech regulations largely began with the Telecommunications Act of 1996. At this time, accessing information online was becoming increasingly widespread. The Act, referred to as a “digital free-for-all,” set the stage for the widespread growth of the internet.

Another key provision in big tech regulation is Section 230 of the Communications Decency Act of 1996. Section 230 has been used by websites, publishers, and social media platforms to escape liability for the publication of third-party content.

Until recently, many regulations aimed to protect social media platforms and their operations. Social media platforms are hugely popular and play a key role in the economy. Today, regulations largely aim to do the opposite.

Types of Regulations

Big tech regulations have shifted with the growth and popularity of social media platforms. Today, many regulations focus on disinformation, misinformation, and standards for requiring fact-checking. This has given way to stricter state and federal laws around protections for minors.

Other regulatory efforts include lawsuits over monopolistic business practices and anti-competitive actions. Big tech companies often buy their competitors. Critics argue that this monopolistic activity forces consumers and small business owners to use their services and lets these companies control the default search engines and software on our devices.

Many lawsuits are led by state attorneys general. Federal agencies like the Federal Trade Commission (FTC) or the Department of Justice (DoJ) also play a role.

Current Events in Big Tech Regulation

Big Tech regulation has advanced since 1996. In that time, telecommunications have also largely been deregulated. Currently, regulation primarily focuses on the TikTok ban, minors’ safety, and consumer protections.

The TikTok Ban

In April, President Biden signed a law requiring TikTok’s owner, ByteDance, to sell the app within one year. If ByteDance fails to do so, it will face a ban in the U.S.

TikTok allegedly spent more than $7 million lobbying Congress to prevent the bill from becoming law. While not among the usual group of Big Tech companies, TikTok is one of the fastest growing social media platforms. The app reports a monthly user count of 1.5 billion.  

Talk of a TikTok ban has floated around the federal government since President Trump was in office. Despite this, many were surprised by the seemingly sudden ban. This action signifies a shift in the approach to regulating social media companies.

Minors’ Safety on Meta Platforms

But TikTok isn’t the only social media app in the crosshairs of government regulation. Minors’ mental health and online exploitation are dominating public policy. In February, Big Tech CEOs sat before Congress to testify about the safety of minors using social media platforms. This included X (formerly Twitter), Meta, Discord, Snap, and TikTok. The testimony followed a 2023 lawsuit brought by 42 states, alleging that Meta engaged in a decade-long pattern of harming young adults while claiming both Instagram and Facebook were “safe.”

The Digital Consumer Protection Commission Act

Currently, the responsibility of regulating digital platforms is shared among many federal agencies. The Digital Consumer Protection Commission Act proposes a change to this structure. In short, the Act would charter a commission with sole discretion over the regulation and governing of digital platforms. The new federal commission would regulate digital platforms, investigating and addressing issues like transparency, competition, privacy, consumer protection, national security, and digital platform licensing.

Other Federal Level Efforts

Protecting Minors’ Safety Online

Several federal efforts aim to protect kids’ and teens’ safety online. The Kids Online Safety Act is a sweeping example of such efforts. The Act would require companies to adopt:

  • A strong standard for kids’ privacy protections
  • Dedicated mechanisms for reporting harmful online behavior
  • Standards for mitigating minors’ exposure to dangerous content, backed by independent audits
  • Research on the impact of social media on kids and teens.

The Children and Teens Online Privacy Act is a similar proposal. It would update a 1998 law that prohibits collecting internet data from children without notice or consent, extending those protections to teens.

Addressing Amazon

Regulatory actions are also being taken beyond social media. The FTC and other agencies are continuing a long-standing lawsuit against Amazon. The lawsuit challenges Amazon’s pricing, advertising, and logistics services.

State-Level Big Tech Regulations

States are not waiting for Congress to act on Big Tech. Several have passed their own regulations. Many of these laws allow legal action against tech companies; others restrict minors’ access to social media accounts.

  • Florida passed a law that requires social media companies to delete accounts held by minors under 14. It also requires parental consent for some teens to create an account.
  • Utah passed amendments to the Social Media Regulation Act that allow parents to sue social media platforms if they believe their children’s mental health has been impacted.
  • In 2023, Arkansas passed the Social Media Safety Act, which required third-party verification of social media account holders’ ages and parental consent. A federal judge later blocked the law.
  • California has introduced the Social Media Addiction Bill. The bill would ban online platforms from sending “addictive social media feeds” to minors without their consent.

International Action

In March 2024, the European Union began enforcing a comprehensive law addressing many key Big Tech concerns. The law:

  • Changed how Google displays search results
  • Modified how Microsoft provides default search engine tools
  • Increased access to payment software and rival apps in Apple’s App Store.

Top public policy teams trust Plural for their legislative tracking needs.

Plural makes it easier than ever to discover big tech regulations. With Plural, you’ll:

  • Access superior public policy data 
  • Be the first to know about new bills and changes in bill status
  • Streamline your day with seamless organization features
  • Harness the power of time-saving AI tools to gain insights into individual bills and the entire legislative landscape
  • Keep everyone on the same page with internal collaboration and external reporting all in one place

Create a free account or book a demo today!

More Resources for Public Policy Teams

The post Big Tech Regulations: Efforts to Regulate Big Tech appeared first on Plural Policy.

Using Plural to Craft Better AI Policy https://pluralpolicy.com/blog/cntr-brown-plural/?utm_source=rss&utm_medium=rss&utm_campaign=cntr-brown-plural Thu, 02 May 2024 16:30:37 +0000 https://pluralpolicy.com/?p=2112 How does CNTR @ Brown use Plural to craft better AI policy? CNTR used Plural to find, track, and categorize AI bills for analysis. Learn more!

The post Using Plural to Craft Better AI Policy appeared first on Plural Policy.

One of the most exciting parts about building top-notch legislative intelligence tools is seeing all the creative ways that people use them. Our customers use Plural to enrich their knowledge of the legislative process, and we’re proud to support efforts to make American democracy easier to understand.

Right now, our team is watching the Brown University Center for Technological Responsibility, Reimagination, and Redesign (CNTR) with great interest. The CNTR @ Brown advocates for deeper understanding of and better policy around AI’s role in government. They aim to promote technology that “actively seeks to promote human well-being and flourishing.” It’s a vital goal, one that pairs well with Plural’s mission to use technology to make democracy more transparent and participatory.

CNTR @ Brown’s Overview of Proposed AI Legislation Using Plural

The team at CNTR recently published an overview of proposed AI legislation across all 50 states. It identified 610 bills on AI in general, and 114 bills that would regulate state governments’ use of AI. CNTR used Plural to find, track, and categorize these bills for analysis. The analysis identified areas where states may have gaps in AI policy, as well as opportunities to better “harmonize” state AI procurement policies with federal guidelines. With so many states enacting new rules for AI at the same time, harmonization efforts that avoid unnecessarily conflicting rules could make those rules clearer and more likely to be followed.

CNTR’s overview is self-described as “quick and dirty,” and the group plans to publish a deeper analysis of trends in this corpus of bills later this year. Even so, it has already surfaced trends that policymakers should be aware of. CNTR has published its detailed methods, along with the code for the analysis, on GitHub.

The CNTR @ Brown is led by Suresh Venkatasubramanian. Venkatasubramanian helped co-author the Blueprint for an AI Bill of Rights, an Executive Branch publication that creates guidelines for implementing AI and automated decision-making systems in a safer and more equitable way.

Get Started With Plural

Are you interested in joining the community of researchers, journalists, and advocates who use Plural to better understand public policy? Create a free account or book a consultation today.

More Tech & AI Resources


Cybersecurity Laws and Policy: A Comprehensive Overview https://pluralpolicy.com/blog/cybersecurity-laws-and-policy/?utm_source=rss&utm_medium=rss&utm_campaign=cybersecurity-laws-and-policy Fri, 19 Apr 2024 18:04:13 +0000 https://pluralpolicy.com/?p=2092 Cybersecurity laws and regulations play a key role in our day-to-day lives, ensuring that our information is protected from cyber threats. Learn more today!

The post Cybersecurity Laws and Policy: A Comprehensive Overview appeared first on Plural Policy.

Cybersecurity laws and regulations play a key role in our day-to-day lives: these policies ensure that our information is protected from cyber threats. Nearly every aspect of daily life has been digitized, from health records to water infrastructure to corporate email. Unfortunately, that means our information and infrastructure can be compromised by a cyber attack. Cyber attacks are among the greatest risks facing governments, companies, and individuals in the United States. Attacks are increasing in frequency and endanger physical infrastructure, privacy, and financial systems. In 2023, IBM found that the average global cost of a data breach exceeded $4 million. Costs run higher in industries like healthcare, where the average cost of a data breach was $11 million.

Effective cybersecurity laws protect users from cyber attacks. This includes protections from phishing schemes, ransomware attacks, identity theft, data breaches, and financial losses. On both the state and national levels, cybersecurity laws aim to strengthen the tracking, prevention, and mitigation of cyber threats. They bolster the cybersecurity efforts undertaken by private companies and the government itself. For consumers, cybersecurity and data protections make up the foundations of online data privacy. Laws like HIPAA, the GDPR in Europe, and CCPA in California govern how personal data is transferred and processed.

U.S. Government Approaches to Cybersecurity

Cybersecurity is especially sensitive for the United States government. Bad actors may use cyber threats to gain access to sensitive information and government employees’ data. Further, a ransomware attack could have grave impacts: national security, the military, and critical infrastructure are all at risk.

Threats to America’s digital infrastructure necessitate government adoption of cybersecurity best practices. Best practices should also extend to public agencies, companies, and private corporations. These efforts include:

  • Presidential strategies
  • Cybersecurity laws and regulations passed by Congress
  • Directives and initiatives by federal agencies

The Role of Government Agencies 

The U.S. Department of Homeland Security plays a leading role in cybersecurity. The agency aims to strengthen cybersecurity resilience across key infrastructure sectors. One key department under Homeland Security is the Cybersecurity and Infrastructure Security Agency (CISA). CISA leads efforts to understand, manage, and reduce risks to our cyber and physical infrastructure. It serves two key roles: operational lead for federal cybersecurity efforts, and national coordinator for critical infrastructure security and resilience.

Many other federal executive roles and agencies play key parts in cybersecurity policymaking, from advising the White House to ensuring that existing regulations, laws, and executive orders are followed. These include:

  • The National Cyber Director, who advises the White House on cybersecurity policy and strategy.
  • The National Cybersecurity Strategy, which President Biden released in March 2023. The Strategy is less a cybersecurity law and more a blueprint documenting challenges and best practices in sectors reliant on cybersecurity.
  • The Cyber Safety Review Board, which operates under CISA. The Board is a public-private-partnership that reviews significant cybersecurity threats in both the public and private sector.
  • The Office of Management and Budget (OMB), which approves and enforces information security requirements under federal law for “federal systems.” OMB also oversees the Chief Information Officers Council. The Council consists of the chief information officers for each federal agency.
  • The U.S. Department of Justice handles most enforcement and prosecution. It works with other agencies like the Secret Service and Department of Defense to handle certain intelligence, law enforcement, or military-related investigations.

Cybersecurity Laws and Regulations for Protecting Sensitive Information

There are many cybersecurity laws and regulations that govern the United States. This legislative framework consists of state, federal, and international measures.

U.S. Federal Laws

The federal government has taken significant action on cybersecurity spanning decades. Read below to learn about three key laws.

The Health Insurance Portability and Accountability Act (HIPAA)

Passed in 1996, HIPAA is one of the first data regulation laws. It focuses solely on healthcare data, creating national standards to protect sensitive patient health information from being disclosed without the patient’s consent or knowledge. HIPAA has also evolved alongside technological advancements; the law now reflects the digitization of healthcare through electronic medical records and digital patient data.

HIPAA also set forth reporting requirements for cybersecurity breaches. It imposes fines depending on the severity of the incident. Federal agencies like the Federal Trade Commission are also involved with investigating and collecting fines related to cyber breaches.

The Gramm-Leach-Bliley Act

Passed in 1999, the Gramm-Leach-Bliley Act regulates cybersecurity practices in the financial industry. It requires financial institutions offering products or services like loans, investment advice, or insurance to explain their information-sharing practices to their customers. Financial institutions must also take steps to safeguard sensitive data. The Gramm-Leach-Bliley Act created three main rules:

  1. A privacy rule that ensures the protection of consumers’ personal financial information
  2. A safeguards rule requiring security measures to prevent data breaches
  3. A provision that prohibits deceptive methods of obtaining personal financial information

The Federal Information Security Management Act (FISMA)

FISMA was passed as part of the 2002 Homeland Security Act. The law requires the Director of the OMB to oversee federal agency information security policies and practices. FISMA also requires each agency to report on its own information security practices. The Act has been amended several times since 2002; a 2014 amendment made the Department of Homeland Security a key partner in federal cybersecurity efforts.

State and International Laws

Beyond federal cybersecurity laws and regulations, several key state and international measures govern cybersecurity best practices in the United States.

The General Data Protection Regulation (GDPR)

Adopted in 2016 and in force since 2018, GDPR is the European Union’s (EU) data protection law. GDPR sets regulations and standards for how companies collect, store, and manage personal data. Any company in the world that targets or collects data related to people in the EU is subject to GDPR. The regulation also imposes fines on those who violate its privacy and security standards; the largest fine imposed by the EU was 1.2 billion euros against Meta in 2023.

The California Consumer Privacy Act (CCPA)

The CCPA became law in response to GDPR and serves as a de facto national data privacy law: it applies to any company, inside or outside the state, that collects data from California residents. The CCPA standardized privacy rights around consumer data, including the right to opt out of sharing data and personal information with websites and apps. These rights include:

  • The Right to Know
  • The Right to Delete
  • The Right to Opt-Out of Sale
  • The Right to Correct
  • The Right to Limit
  • The Right to Non-Discrimination

The Importance of Understanding Cybersecurity Policy

Cybersecurity is a key interest for national security and for companies large and small. Preventing cybersecurity threats is a key focus across CISA’s 16 critical infrastructure sectors. Any cybersecurity threat could jeopardize these critical industries and sectors, endangering Americans and our infrastructure.

The same can be said for protecting individual data. With almost all of our data online, the risk of unwanted parties accessing and using personal, sensitive information is almost a given. The current patchwork of laws and regulations helps ensure that the public and private sectors follow cybersecurity best practices.

Cybersecurity policy in itself is often complicated. Federal cybersecurity laws mix with international and state compliance, presidential strategic initiatives, and specific regulations in crucial sectors. Overall, cybersecurity laws and policy will continue to evolve. Best practices and emerging technologies like artificial intelligence will shape the course of this evolution.

Plural for Insights Into Cybersecurity Laws

Plural is the legislative tracking tool of choice for those seeking to monitor cybersecurity laws and regulations. With Plural, you’ll:

  • Access superior public policy data 
  • Be the first to know about new bills and changes in bill status
  • Streamline your day with seamless organization features
  • Harness the power of time-saving AI tools to gain insights into individual bills and the entire legislative landscape
  • Keep everyone on the same page with internal collaboration and external reporting all in one place

Create a free account or book a demo today!

More Resources for Public Policy Teams


Cryptocurrency Regulation and Laws in 2024 https://pluralpolicy.com/blog/cryptocurrency-regulation/?utm_source=rss&utm_medium=rss&utm_campaign=cryptocurrency-regulation Tue, 02 Apr 2024 14:10:39 +0000 https://pluralpolicy.com/?p=2011 What's the status of blockchain regulation and cryptocurrency laws in 2024? Many arms of the US government are involved in regulatory efforts. Learn more today!

The post Cryptocurrency Regulation and Laws in 2024 appeared first on Plural Policy.

There are many good reasons to centralize data. Storing data in a single location makes it easier for an organization to access, organize, and update said data. The analyses gleaned from highly consolidated databases are more likely to be comprehensive and consistent. Administrators of centralized servers can more readily look for evidence of data quality issues and security breaches. Finally, centralizing data allows applications to operate in a cost-effective and efficient manner.

There are also less desirable outcomes of data centralization: concentrated power, sacrificed privacy, reduced accountability, and curbed competition, to name a few. We know this because the internet has become increasingly centralized over the last 25 years. The process began when a handful of companies created user-friendly applications that made the world wide web highly accessible. Now, the vast majority of internet activity ends up recorded in databases owned by Google, Meta, Amazon, Microsoft, and others.

Blockchain technology is a response to the negative side effects of data centralization. It is an alternative approach to structuring digital information. Blockchain technology allows data to be stored across multiple computers in a network. The nature of blockchain means that individual computers can reliably verify the authenticity of the information received from other “nodes” in the blockchain network. Every time data on a blockchain is shared, the transaction is automatically recorded in a distributed ledger. The distributed ledger cannot be modified.
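The hash-linked, append-only ledger described above can be sketched in a few lines of Python. This is a toy illustration of the core idea — each block commits to the hash of the block before it, so any node can recompute the hashes and detect tampering — not a real blockchain implementation (there is no networking, consensus, or proof of work here):

```python
import hashlib
import json

def block_hash(index, data, prev_hash):
    """Hash a block's contents together with the previous block's hash."""
    payload = json.dumps({"index": index, "data": data, "prev": prev_hash},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain, data):
    """Append a new block that commits to the hash of the block before it."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    index = len(chain)
    chain.append({"index": index, "data": data, "prev": prev_hash,
                  "hash": block_hash(index, data, prev_hash)})

def is_valid(chain):
    """Any node can recompute every hash to verify the ledger is untampered."""
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i > 0 else "0" * 64
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != block_hash(block["index"], block["data"], block["prev"]):
            return False
    return True

ledger = []
append_block(ledger, "Alice pays Bob 5")
append_block(ledger, "Bob pays Carol 2")
assert is_valid(ledger)

# Tampering with an earlier entry breaks every later link in the chain.
ledger[0]["data"] = "Alice pays Bob 500"
assert not is_valid(ledger)
```

Because each block's hash depends on its predecessor's hash, rewriting history requires recomputing every subsequent block — which, in a real network, other nodes would reject.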

Helpful explanations of blockchain have been published by government entities like the U.S. Government Accountability Office, the National Institute of Standards and Technology, and the Department of Homeland Security.

Blockchain’s Most Prominent Application: Cryptocurrency 

Blockchains can store virtually any kind of data, but the initial use cases enabled the creation of cryptocurrencies. Bitcoin, the first cryptocurrency, was launched in 2009. Since then, cryptocurrencies have allowed people to conduct secure financial transactions. Cryptocurrency transactions are completed without the involvement of a national authority, such as a central bank.

Early on, critics argued that only criminal actors would benefit from such a system. Cryptocurrency has potential to fund extremist groups as well as facilitate money laundering and dark web transactions. While threats remain, cryptocurrency has many benefits. These include:

  • Increasing participation in the global economy, particularly for people from developing countries
  • Reducing the risks associated with conducting business in new markets
  • Facilitating more efficient transactions at a lower cost
  • Protecting private citizens from corrupt government seizure of their assets
  • Reducing security risks of identity theft
  • Preserving personal privacy and individual autonomy

For better or worse, policymakers appear to be coming to terms with the fact that cryptocurrencies are “here to stay.” Policymakers have moved beyond attempting to ban the technology or ignoring it altogether. Rather, they’re now focusing on figuring out how to approach responsible crypto-asset regulation.

Crypto and Public Policy: Multiple Dimensions of Financial Regulation

The prospects for Congressional action on cryptocurrency remain murky. This is in part because of the sheer number of pertinent issues that must be addressed. Issues range from ensuring the stability of financial markets to determining the legal implications of smart contracts.

Currently, at least four federal regulatory authorities are involved in managing cryptocurrency risks: the Securities and Exchange Commission (SEC), the Commodity Futures Trading Commission (CFTC), the Department of Justice (DoJ), and the Department of the Treasury. Each organization would take a different approach to a comprehensive regulatory framework. Below, we examine the enforcement priorities of each.

SEC: Protecting Investors and Closing Loopholes

The foundation of all SEC regulation is reporting requirements. These requirements are intended to help investors make sound decisions. Companies that sell shares or “securities” are required by the SEC to file a registered public offering statement prior to distributing to investors. Additionally, organizations that facilitate the buying and selling of securities, including stock exchanges and certain types of investment firms, must register as national securities exchanges. 

From the SEC’s perspective, many cryptocurrency offerings are effectively the same as securities sales. This means that cryptocurrency companies must comply with the same investor protection standards that govern publicly-traded companies. This includes regular disclosures related to corporate governance and susceptibility to market risks.

The SEC also seeks to classify certain cryptocurrency companies as securities exchanges. This is because they allow users to trade in their digital assets for traditional currencies. As an example, in its ongoing complaint against Coinbase, Inc., the SEC charges that the organization has been operating as an unregistered national securities exchange since 2019.

CFTC: Deterring Market Manipulators and Stopping Scams

The CFTC is concerned with curbing fraud and other deceptive behaviors in derivatives markets. Derivatives are financial investment contracts. Their value comes from the market price of an underlying asset, such as a currency or a commodity.  Commodities have historically included resources like wheat, gold, and oil, and, since 2015, Bitcoin.

For nearly a decade, the CFTC has sought to regulate Bitcoin and other digital currencies, primarily by bringing cases against market manipulators. Deterring market manipulation is particularly important in cryptocurrency regulation: once a person falls victim to a scam, the transaction functions like a digital contract that cannot be reversed or disputed. Blockchains also make it easy for scammers to hide their real identities, quickly withdrawing their ill-gotten gains as cash before disappearing.

DoJ: Prosecuting Fraud and Curbing Illicit Finance

A key aspect of the DoJ’s approach to cryptocurrency is targeting criminals who use crypto to conduct nefarious activities, like funding terrorist groups and committing cyber crimes. In 2021, the DoJ’s Criminal Division announced the launch of the National Cryptocurrency Enforcement Team (NCET). A subdivision of the Fraud Section, the NCET aims to combat the use of cryptocurrency as an illicit tool, focusing on instances of extortion, fraud, and money laundering.

The DoJ goes beyond combatting criminal crypto activity; the Department also targets crypto exchanges that turn a blind eye to such crimes. The prosecution of former Binance CEO Changpeng Zhao is one example. Zhao’s prosecution centered on his failure to implement an effective anti-money laundering program.

Department of the Treasury: Interpreting and Enforcing Tax Law

The Internal Revenue Service (IRS) sits within the Department of the Treasury. In its capacity to regulate cryptocurrency, the IRS evaluates crypto assets within the context of the tax code. Typically, the money an individual gains or loses from securities and commodities transactions over the course of a given year is reported to the IRS by their broker. This standard reporting practice helps deter tax evasion.

The decentralized and private nature of cryptocurrency presents challenges for the IRS. Since crypto is inherently decentralized, determining who qualifies as a broker is challenging. The privacy-preserving nature of blockchain also complicates compliance logistics for digital asset monitoring.

Looking Ahead: A Push to Fill in Gaps in Federal Cryptocurrency Regulation

Many arms of the federal government are actively involved in cryptocurrency regulation. Despite this, each has also issued calls for Congressional action. No matter how creative or involved enforcement agencies may get, gaps will remain, and regulators are unable to fully mitigate the unique risks associated with blockchain. Comprehensive blockchain regulation will still require legislation from Congress.

Cryptocurrency regulation is multifaceted. It’s important for observers to keep in mind the wide range of legislative proposals. As an example, bills have been introduced to:

  • Dictate new CFTC reporting requirements for digital asset trading platforms (H.R. 5966)
  • Expand the applicability of existing federal anti-money laundering laws to cover digital assets (S.2669)
  • Require crypto advertisements to disclose when celebrity spokespeople have been paid for their endorsements (S.1358)
  • Direct agencies to study the environmental impacts of cryptocurrency mining (S.661)

Opinions on cryptocurrency regulation don’t fall along clear party lines. In the absence of obvious partisan signals, monitoring the details of competing proposals is especially important.

Plural for Cryptocurrency Regulation

Plural is the policy tracking tool of choice for those engaged in the cryptocurrency regulation space. With Plural, you’ll:

  • Access superior public policy data 
  • Be the first to know about new bills and changes in bill status
  • Streamline your day with seamless organization features
  • Harness the power of time-saving AI tools to gain insights into individual bills and the entire legislative landscape
  • Keep everyone on the same page with internal collaboration and external reporting all in one place

Create a free account or book a demo today!

More Resources

How Vote Mama Lobby Empowers Moms By Using Plural for Tracking Policy

Vote Mama Lobby is dedicated to transforming the political landscape for moms. Its team advocates to break the institutional barriers moms face in running for and serving in office, and gives voice to the solutions that allow everyday families to thrive.  Vote Mama Foundation is a leading non-partisan 501(c)(3) entity that provides research and analysis […]


Here’s Why Leadership Wants Your GR, Legal & Compliance Teams to Use AI

In an era of accelerating regulation, geopolitical uncertainty, and rising stakeholder expectations, the margin for error in corporate governance has never been thinner. For senior leaders — CEOs, GCs, COOs, and Chief Risk Officers — ensuring that their legal, compliance, and government relations (GR) teams are equipped to respond quickly and strategically is essential. Enter […]


What I Learned from Working at a Startup Company as a College Student

By Jay Oliveira My time at Plural has been transformative.  As a third-year policy student at Suffolk University in Boston, I had already spent hours poring over legislative websites for my coursework. I struggled to use legislative sources that would open up dozens of unreadable files, or would make it unclear what chamber the bill […]



Understanding Illinois’s Proposed Social Media Law with Plural https://pluralpolicy.com/blog/illinois-hb-5380/?utm_source=rss&utm_medium=rss&utm_campaign=illinois-hb-5380 Fri, 29 Mar 2024 14:14:01 +0000 https://pluralpolicy.com/?p=2004 Restricting minors' social media use is a rare bipartisan issue in today's political landscape. Learn more about Illinois HB 5380 today!

The post Understanding Illinois’s Proposed Social Media Law with Plural appeared first on Plural Policy.

Our current political landscape is marked by polarization and stagnation in Congress. In this context, it’s become common for highly partisan state governments to push novel legislative proposals. For instance, California’s legislature, with its overwhelming Democratic majority, has established some of the nation’s strictest data privacy and ESG disclosure laws. Meanwhile, states with large conservative majorities like Alabama and Arkansas have recently leaned further right on abortion and gun laws.

It can be surprising to see both Republican and Democratic-controlled states moving in the same direction on legislation. But this is exactly what is happening when it comes to regulating children’s social media use. In recent years, many states have acted to regulate or restrict the use of social media by minors. This includes California, Texas, Ohio, and Arkansas, among others. Regulating children’s social media use is proving to be a uniting, bipartisan issue.

Just this week, Governor DeSantis signed Florida HB 3 into law. The new law, passed by Florida’s overwhelmingly Republican legislature, is expected to face legal challenges. If it survives, HB 3 will ban children under 14 from using social media. It will also require parental permission for 15- and 16-year-olds.

At the same time, Illinois’s Democratic-led House of Representatives is advancing a bill that would regulate children’s activity online. HB 5380, known as the Parental Digital Choice Act or Sammy’s Law, advanced out of the House Consumer Protection Committee in mid-March and is expected to be voted on by the full House soon. Below, we use Plural’s AI capabilities to better understand HB 5380 and how it fits within the bipartisan trend of regulating children’s social media use.

Summarizing HB 5380: The Parental Digital Choice Act

Plural provides access to both source-provided data and insights unlocked by our industry-leading, AI-powered models. As seen below, the AI summary typically adds clarity, context, and readability that can often be missing in source-provided summaries.

Illinois HB 5380: Source-Provided Summary

Creates the Let Parents Choose Protection Act. Provides that the Act may be referred to as Sammy’s Law. Provides that, before August 1, 2025, or within 30 days after a service becomes a large social media platform after August 1, 2025, a large social media platform provider shall create, maintain, and make available to any third-party safety software provider a set of third-party-accessible real time application programming interfaces, including any information necessary to use the interfaces, by which a child, if the child is 13 years of age or older, or a parent or legal guardian of a child, may delegate permission to the third-party safety software provider to: (1) manage the child’s online interactions, content, and account settings on the large social media platform on the same terms as the child; and (2) initiate secure transfers of user data from the large social media platform in a commonly used and machine-readable format to the third-party safety software provider. Sets forth disclosure requirements to the child and the parents or guardians of a child; requirements of third-party safety software providers; and liability of third-party safety software providers. Provides that a violation of any of the provisions of the Act is an unlawful practice under the Consumer Fraud and Deceptive Business Practices Act. Amends the Consumer Fraud and Deceptive Business Practices Act to make a conforming change. Effective June 1, 2025.

Illinois HB 5380: Plural AI Summary

The bill is proposing the following:

  1. Require large social media platforms to provide real-time APIs to authorized third-party safety software providers. This will allow parents/guardians to delegate permission to these providers to manage children’s online interactions and transfer user data.
  2. Define requirements for third-party safety software providers to access the APIs, including:
    • Being solely engaged in Internet safety
    • Only using user data to protect children from harm
    • Only disclosing user data as permitted
    • Disclosing how they operate and collect info to enable informed consent
  3. Provide liability protection for social media platforms that comply with the bill.
  4. Allow the Attorney General to enforce the bill under the Consumer Fraud and Deceptive Business Practices Act.
  5. Amend that Act to specify violations of this bill as unlawful practices.
  6. The bill would take effect on June 1, 2025.

In summary, the key proposals are mandating social media platforms provide access to third-party safety software providers to help parents manage children’s accounts and protect them from harm, with regulations and enforcement mechanisms.
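The delegation-and-transfer flow the AI summary describes can be sketched as a minimal API surface. This is purely illustrative: the bill prescribes no endpoint names or schemas, and every identifier below is a hypothetical stand-in for whatever interface a platform would actually expose.

```python
from dataclasses import dataclass, field

@dataclass
class Delegation:
    """Permission a child (13+) or a parent/guardian grants to a safety provider."""
    child_id: str
    provider_id: str
    can_manage: bool = True    # manage interactions, content, account settings
    can_transfer: bool = True  # initiate secure, machine-readable data transfers

@dataclass
class PlatformAPI:
    """Hypothetical real-time API a large social media platform would expose."""
    delegations: dict = field(default_factory=dict)

    def delegate(self, d: Delegation) -> None:
        # Record the delegation so the third-party provider can act on it.
        self.delegations[(d.child_id, d.provider_id)] = d

    def export_user_data(self, child_id: str, provider_id: str) -> dict:
        d = self.delegations.get((child_id, provider_id))
        if d is None or not d.can_transfer:
            raise PermissionError("no transfer delegation on file")
        # Return data in a commonly used, machine-readable format (JSON-like dict).
        return {"user": child_id, "messages": [], "settings": {}}

api = PlatformAPI()
api.delegate(Delegation(child_id="c1", provider_id="safety-co"))
print(api.export_user_data("c1", "safety-co")["user"])  # c1
```

The key structural point the bill makes is that permission flows from the child or guardian, not from the platform: without a delegation on file, the export call fails.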

How Does Illinois HB 5380 Compare to Other State Efforts?

HB 5380 does not go as far as legislation from other states seeking to implement a firm ban on the use of social media by minors of a certain age. Regardless, there are key similarities. HB 5380 would require social media companies to make significant changes to their platforms. For this reason, social media companies and their coalitions have opposed these efforts. 

It’s likely that the Illinois House will vote on HB 5380 in April. If successful, the bill will then move on to the Senate. You can follow upcoming activity on HB 5380 and discover related social media regulation proposals in Plural.

Get Started With Plural

Plural is the policy tracking tool of choice for policy pros looking to monitor social media laws, including Illinois HB 5380. Create a free account or book a demo today!

More Resources for Public Policy Teams

Key Benefits of AI for Lobbying & Advocacy

Want to be able to explain the benefits of artificial intelligence for lobbying and advocacy? Everyone is talking about AI. And we get it, it’s not simple to understand. But as an AI-powered organization, Plural is here to help you get the most out of advancements in AI to make your job as a policy […]

READ MORE →

2025 Legislative Committee Deadlines Calendar

Staying on top of key deadlines is manageable in one state, but if you’re tracking bills across multiple states, or nationwide, it quickly becomes overwhelming. That’s why we created the 2025 Legislative Committee Deadlines Calendar. Stay ahead of important dates and download our calendar today. Get started with Plural. Plural helps top public policy teams get […]

READ MORE →

End of Session Report: Florida 2024 Legislative Session

The 2024 Florida legislative session saw significant activity in the realm of insurance and financial services, reflecting key themes of consumer protection, market stability, and regulatory modernization.

READ MORE →

The post Understanding Illinois’s Proposed Social Media Law with Plural appeared first on Plural Policy.

]]>
Privacy Regulation in the Digital Age  https://pluralpolicy.com/blog/tech-privacy-regulation/?utm_source=rss&utm_medium=rss&utm_campaign=tech-privacy-regulation Thu, 28 Mar 2024 16:38:21 +0000 https://pluralpolicy.com/?p=1995 Regulating tech privacy is a key legislative issue that affects Americans' daily lives. What's the current status of privacy regulation? Read now.

The post Privacy Regulation in the Digital Age  appeared first on Plural Policy.

]]>
In 2016, researchers on communications and internet policy presented a paper on “the biggest lie on the internet.” The falsehood in question? “I agree to these terms and conditions.”

The paper made a splash because it confirmed what most of us already knew about how effectively users guard their personal privacy online, albeit in the starkest terms possible. The study demonstrated that when presented with terms of service that would sign away one’s first-born child to an online service provider, 98% of users clicked “yes.” Needless to say, the results inspired little confidence in the state of tech privacy. 

What is Data Privacy? 

Though intuitively understood by most people, there is no universal definition for the term “privacy.” Policymakers became particularly aware of this fact in the 1960s. At that time, early computers raised questions about the ethics of centralizing personal information. A legal scholar named Alan Westin set forth the following definition, which remains relatively well accepted:

Privacy is “the claim of individuals, groups, or institutions to determine for themselves when, how, and to what extent information about them is communicated to others.”

Tech Privacy as a Public Policy Issue

In today’s terms, policy questions central to tech privacy include:

  • What information can/should internet companies collect about users?
  • Who owns the personal information about an individual? (Read: Who is able to monetize it?)
  • Do users need to consent to the collection and/or sharing of their personal data? If so, what are the criteria for meaningful consent?
  • Can personal data collected in one context be used for an unrelated purpose? 
  • How transparent must tech companies be with users about their business models?
  • How are misuses of data identified? How are laws enforced?
  • What counts as personal data? Do social media posts, location data and/or search engine history?

Regulating Data Privacy

Scholars like Westin were only half successful in pushing Congress to protect the “claim” of people to their personal information. The U.S. Privacy Act of 1974, passed in the aftermath of Watergate, imposed limits on how the government could gather, store, and disseminate citizens’ data. For better or worse, no such constraints were applied to the private sector. Decades later, there is still no comprehensive privacy law that governs how personal data is collected or used in the United States. This puts the US in stark contrast with the European Union (EU), which notably enacted its General Data Protection Regulation, or GDPR, in 2018. 

Federal-Level: Stagnant Sector-Specific Regulation

There are limited situations where privacy regulations do exist at the federal level. In these instances, regulations take the form of sector-specific laws which impose requirements on organizations that meet certain criteria. Some examples include:

  • HIPAA for healthcare providers, health insurance companies, and healthcare clearinghouses
  • GLBA for financial institutions, including businesses that provide financial products or services to consumers
  • FERPA for all K-12 schools, colleges, and universities that receive federal funds

In today’s data economy, the piecemeal approach to privacy leaves many gaps in consumer protection unaddressed. Entities that fall outside the definition of “covered entity” specified by regulators can largely do what they want with consumer data. Health and wellness apps demonstrate this discrepancy. Although products like fertility monitors, weight loss trackers, and cognitive behavioral therapy journals all collect medical information from individual users, they are not subject to HIPAA.

As the shortcomings of sector-specific laws have grown clearer, the pressure for policymakers to act has intensified. Over the last five years, the U.S. Government Accountability Office and the Federal Trade Commission (FTC) have both made repeated calls for Congress to enact comprehensive privacy legislation. The FTC in particular has repeatedly cited the failure of their fines in curbing the behavior of the largest tech companies, given how readily they can withstand billion-dollar penalties.  

State-Level: Taking the Lead on Comprehensive Privacy Laws

In the last three years, local jurisdictions across the US have signaled that they will no longer wait patiently for federal leadership on privacy regulation. Many states have begun experimenting with different approaches to tech privacy, creating a patchwork of emergent laws for companies to navigate.

As of March 2024, fourteen states have enacted comprehensive privacy laws. There are five currently in effect in California, Colorado, Connecticut, Utah, and Virginia. By early 2025, seven more will become enforceable in Delaware, Iowa, Montana, New Hampshire, New Jersey, Oregon, and Texas.

The sudden onslaught of new privacy laws may seem daunting for businesses seeking to ensure their compliance. However, upon closer inspection, most jurisdictions are following a similar script.

Broadly, there are two mechanisms of action for any privacy regulation: guaranteeing certain rights for individual users and requiring particular actions from tech companies. So far, most states have opted to combine aspects of these approaches in rather similar ways. 

Generally, consumers are expected to be empowered to access, correct, delete, and opt-out of the sale of their data. Tech companies, on the other hand, are required to disclose the information they collect about users. They are also expected to provide pathways for consumers to exercise their data protection rights. 
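The four rights most state laws converge on (access, correction, deletion, and opting out of sale) can be modeled in a few lines. This is a minimal sketch, not any statute's required interface; all names are hypothetical.

```python
class ConsumerDataStore:
    """Toy model of the consumer rights most state privacy laws guarantee."""

    def __init__(self):
        self.records = {}      # consumer_id -> personal data held about them
        self.opt_outs = set()  # consumers who opted out of data sales

    def access(self, cid):
        # Right to access: see what the company holds.
        return self.records.get(cid, {})

    def correct(self, cid, key, value):
        # Right to correct: fix inaccurate personal data.
        self.records.setdefault(cid, {})[key] = value

    def delete(self, cid):
        # Right to delete: remove the record entirely.
        self.records.pop(cid, None)

    def opt_out_of_sale(self, cid):
        self.opt_outs.add(cid)

    def may_sell(self, cid):
        # The company must honor opt-outs before selling data.
        return cid not in self.opt_outs

store = ConsumerDataStore()
store.correct("u1", "email", "a@example.com")
store.opt_out_of_sale("u1")
print(store.access("u1"), store.may_sell("u1"))  # {'email': 'a@example.com'} False
```

The "pathways" state laws require are, in effect, obligations for companies to expose operations like these to consumers, typically via web forms or request portals.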

Where state privacy laws differ, the disparities relate to scope and enforcement.

Virginia’s Consumer Data Protection Act (VCDPA)

Virginia’s Consumer Data Protection Act (VCDPA) is considered by many privacy advocates to be a relatively lax and limited approach to regulation. The legislation does provide consumers with the rights to access, correct, delete, and obtain a copy of their personal data. However, the definition of “personal data” under VCDPA is rather dramatically narrowed by excluding “any information a business has reasonable grounds to believe falls within the public domain.” Given this definition, the VCDPA considers a person’s social media posts, for example, as fair game.

Additionally, the VCDPA does not grant individual users any private right of action. This leaves the responsibility of enforcement solely to the attorney general. Virginia’s approach to tech privacy also includes exemptions for financial institutions, entities covered by HIPAA, nonprofits, and colleges and universities.

California’s Consumer Protection Act (CCPA)

By contrast, California’s Consumer Privacy Act (CCPA) is more robust. For example, the legislation offers greater specificity regarding the structure and contents of privacy policies, as well as the procedures through which users may receive notice of, access, delete, and opt out of the sale of their personal data. Further, CCPA grants a private right of action to consumers who believe tech companies have broken the law in certain circumstances, making California the only state to offer such a provision.

CCPA has also set a precedent by seeking to fill some of the gaps created by the federal government’s approach to regulating privacy based on “covered entities.” Rather than granting exemptions for entire categories of organizations, California’s statute limits exemptions to particular data types. So, while a hospital system might not have to worry about CCPA requirements for its HIPAA-protected patient files, such a loophole would not apply to the other types of information it collects. 
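The difference between entity-level and data-type-level exemptions is easy to show in miniature. The sketch below is illustrative only; the type labels are hypothetical, not categories defined in either law.

```python
# Entity-level exemption (federal "covered entity" style):
# one flag exempts everything the organization holds.
def exempt_federal(org_is_covered_entity: bool) -> bool:
    return org_is_covered_entity

# Data-type-level exemption (CCPA style):
# checked per record type, not per organization.
HIPAA_EXEMPT_TYPES = {"patient_file"}  # hypothetical classification

def exempt_ccpa(record_type: str) -> bool:
    return record_type in HIPAA_EXEMPT_TYPES

# A hospital's HIPAA-protected patient files fall outside CCPA,
# but its marketing analytics do not.
print(exempt_ccpa("patient_file"), exempt_ccpa("website_analytics"))  # True False
```

Under the entity-level model, the hospital's analytics data would escape regulation along with its patient files; under the data-type model, only the files already covered by HIPAA are exempt.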

What Is on the Horizon for Tech Privacy Policy?

For now, California has the toughest approach to privacy policy in the US. Other states are in the process of proposing stricter privacy laws. The Maine legislature is currently considering two competing bills: the Maine Consumer Privacy Act (MCPA) and the Data Privacy and Protection Act (DPPA). While the former is closely aligned with states like Virginia, the latter would introduce compliance dimensions that are unprecedented in any US jurisdiction. Notably, the DPPA includes “data minimization” requirements, which present a challenge to the fundamental business models of many of the largest tech companies.

Policies like Maine’s DPPA would have likely been dismissed as completely untenable a few years ago. However, recent developments suggest that it might be time for change in the internet economy. Even in the absence of regulatory pressure, businesses are feeling the negative effects of amassing troves of personal data. In 2022, over 80% of organizations suffered a data breach. It’s projected that by 2025, cybercrime will cost the world $10.5 trillion annually. Users are also becoming less tolerant of companies that fail to take their privacy concerns seriously — 65% of consumers have indicated that “misuse of personal data” is the top reason they would lose trust in a brand. 

Of course, the same factors encouraging states like Maine and California to push the envelope may also move the needle in Congress. In 2022, the American Data Privacy and Protection Act (ADPPA) passed the House Committee on Energy and Commerce with near unanimity. The ADPPA represented the most promising effort to regulate federal consumer data privacy to date. Opinions vary on prospects for the legislation in 2024, with a main point of contention being whether the statute will effectively undo stricter state privacy laws. 

Monitor Tech Privacy Policy With Plural

Plural is the legislative tracking tool of choice for policy teams seeking to monitor tech privacy policy. With Plural, you’ll:

  • Access superior public policy data 
  • Be the first to know about new bills and changes in bill status
  • Streamline your day with seamless organization features
  • Harness the power of time-saving AI tools to gain insights into individual bills and the entire legislative landscape
  • Keep everyone on the same page with internal collaboration and external reporting all in one place

Create a free account or book a demo today!


The post Privacy Regulation in the Digital Age  appeared first on Plural Policy.

]]>
Is Congress About to Ban TikTok? https://pluralpolicy.com/blog/tiktok-ban/?utm_source=rss&utm_medium=rss&utm_campaign=tiktok-ban Fri, 15 Mar 2024 13:55:13 +0000 https://pluralpolicy.com/?p=1939 Is a TikTok ban imminent? While TikTok's virality may still feel novel, official U.S. concern over the app surfaced almost five years ago. On March 13, the House passed H.R. 7521. The bill now heads to the Senate with many more eyes tracking its progress.

The post Is Congress About to Ban TikTok? appeared first on Plural Policy.

]]>
Is a TikTok ban imminent? While TikTok’s virality may still feel novel, official U.S. concern over the app surfaced almost five years ago. On March 13, the House passed H.R. 7521. The bill now heads to the Senate with many more eyes tracking its progress.

The federal legislative process has always been opaque, and therefore difficult for ordinary citizens to follow and connect with. This is, after all, a driving force behind the creation of Plural as a source of open public policy data. The journey from an idea to an enacted law is far more complex than Schoolhouse Rock made it seem. This legislative process can often take years. The ongoing congressional battle over TikTok encapsulates this halting, confusing path. The recent passage of H.R. 7521 out of the House took many by surprise and has users of the app wondering how we got here.

While TikTok’s virality may still feel novel, official U.S. concern over the app surfaced almost five years ago. At that time, the FBI and military leaders cited national security risks related to the app. ByteDance, the company that develops and owns TikTok, has a close relationship with the Chinese government. The Trump administration then pressured TikTok to agree to host all of its U.S. user data under Oracle’s infrastructure. 

Despite this move to protect user data, lawmakers were still eager to act on TikTok. Beginning in 2022, we saw a wave of state legislation aimed at banning TikTok. Most of these bills, like Texas’ SB 1893, sought to ban the use of TikTok by government officials and on government devices. The Biden administration followed a similar path by banning the use of TikTok on federal devices. Montana’s SB 419, enacted in May 2023, went a step further. The new law banned the use of TikTok by anyone in Montana. A federal judge later blocked this ban before it went into effect. 

After years of debate, Congress wanted to go further. Introduced on March 5, H.R. 7521, the Protecting Americans from Foreign Adversary Controlled Applications Act, passed the House of Representatives overwhelmingly on March 13. The bill now heads to the Senate with many more eyes tracking its progress.

What would H.R. 7521 do?

H.R. 7521 would prohibit companies from providing distribution or hosting services to “foreign adversary-controlled applications.” This would force companies like Apple and Google to remove TikTok from their app stores. It would also prevent internet service providers from supporting access to the application. The bill narrowly defines “foreign adversary-controlled applications” to apply to TikTok. However, it does provide an avenue for other applications to be banned in this way. 

The bill doesn’t include penalties for individual users of TikTok, and it wouldn’t remove TikTok from anyone’s phone. But, without web hosting services or the support of app distributors like Apple and Google, the application would quickly become buggy and unusable.

The bill also provides an exemption for certain actions taken by ByteDance. If ByteDance divests from TikTok within 165 days of enactment, the ban on application support would not take effect. In short, the bill gives ByteDance 165 days to sell TikTok or be banned from the U.S.

So – Is a TikTok Ban Imminent?

H.R. 7521 passed out of the House by a wide margin and President Biden has signaled he will sign it if it reaches his desk. However, there are still many barriers between where we stand on March 15th and a TikTok ban. 

First, the House vote caught many by surprise, in part because the bill moved so quickly from introduction to passage. The reaction to the House vote has ensured that any debate on this bill in the Senate will be met with significant attention from all sides. This additional attention may not change any vote, but it will certainly slow down the process. 

Second, even if the bill does pass, its divestment exemption provisions pave the way for TikTok to stay usable in the U.S., as long as ByteDance is willing to sell the application. A sale could be complicated by a lack of willingness to sell from ByteDance or anti-trust concerns here in the U.S. 

Finally, even if the bill does successfully pass, its enactment would be swiftly followed by litigation from ByteDance and others. ByteDance has hinted that they would continue their fight in the courts if H.R. 7521 passes. The American Civil Liberties Union is also organizing opposition to H.R. 7521. It would likely support legal challenges to the law, among many other groups.

Taken together, these obstacles will slow the momentum of this past week. While a TikTok ban might feel imminent, it’s unlikely that any enforcement of the bill, if passed, would begin before the end of 2024. There remain many hurdles to pass before H.R. 7521 becomes law.

Get Started With Plural

Plural is the legislative tracking tool of choice for policy teams looking to gain greater insights into the policies that matter. With Plural, you’ll:

  • Access superior public policy data 
  • Be the first to know about new bills and changes in bill status
  • Streamline your day with seamless organization features
  • Harness the power of time-saving AI tools to gain insights into individual bills and the entire legislative landscape
  • Keep everyone on the same page with internal collaboration and external reporting all in one place

Create a free account or book a demo today!

More Resources for Congress

The post Is Congress About to Ban TikTok? appeared first on Plural Policy.

]]>
AI Policy in 2024: National Legislative Trends https://pluralpolicy.com/blog/ai-policy-2024/?utm_source=rss&utm_medium=rss&utm_campaign=ai-policy-2024 Wed, 21 Feb 2024 20:49:27 +0000 https://pluralpolicy.com/?p=1887 What's the landscape of AI policy in 2024? In this blog, we analyze trends on the federal and state levels with an eye towards the 2024 elections.

The post AI Policy in 2024: National Legislative Trends appeared first on Plural Policy.

]]>
Artificial intelligence (AI) captivated the attention of the public in 2023. Conversations about AI’s capabilities were sparked by the rollout of ChatGPT in late 2022. These discussions were quickly followed by debates among lawmakers over how to regulate AI. Given the rapid advancement in AI technology and the slow progression of policymaking, especially at the federal level, it’s unsurprising that these discussions are still ongoing. We find ourselves in 2024 with many of the same questions about the future of AI policy that we had in 2023.

As in recent ESG and data privacy debates, the European Union (EU) has raced ahead of the U.S. and other countries in developing AI policy. The EU’s proposed AI Act would apply reporting and transparency requirements broadly. It would also ban high-risk uses of AI. The Act will likely be approved this year, and will influence AI policymaking throughout the rest of the world. 

In the United States, no such measure has passed. While there is no national framework legislation regulating AI, actions and proposals at both the state and federal levels provide insight into the direction of AI policymaking in the United States. Following state and regulatory action on AI is key, given the low probability of robust federal action. Below we summarize the trends we have seen so far in AI policy proposals, and detail what may come next. 

Federal Approaches to AI Policy

In recent years, federal policymaking decisions have shifted away from Congress towards regulatory agencies and the courts. Since 2011, Congressional majorities have been slim and partisan divides have been significant. This has led to challenges in passing complex, robust legislation through Congress. As a result, recent administrations have aimed to effect change through rulemaking. Without the likelihood of shepherding a bill through Congress, federal lawmakers impact policy through statements, hearings, and bill introductions. The first year of active AI policymaking followed these trends. 

Trend 1: A Non-Legislative Approach to AI Policymaking

Especially in an election year, the Biden administration does not want to be perceived as inactive on a hot-button issue such as AI. In the summer of 2023, the administration secured voluntary commitments from leading AI companies to manage risk. The White House built on these commitments in October of that year with the release of an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The Executive Order directs the federal government to initiate rulemaking or policy changes. New rules or policies will increase transparency and reduce risk around the use of AI. They will also promote responsible use of the technology.

Following the release of the executive order, many of the advised actions have taken place. The National Institute of Standards and Technology created a leadership group for its new AI Safety Institute. Another significant development was a Department of Commerce proposal that would require cloud providers to alert the government of foreign use of powerful AI models.

While these developments are significant, it’s worth noting that there are limitations to a strictly regulatory approach to policymaking. Executive orders and many administrative actions are reversible by any subsequent administration. Additionally, these rulemaking processes can be slower than the legislative process and subject to their own uncertainties, including court cases. 

Trend 2: High-profile Hearings Drive Media Coverage

Just like Presidents, congressional leaders can also find themselves stymied by the challenge of passing legislation through a gridlocked Congress. Recently, many legislators have turned to high-profile committee hearings with industry leaders to communicate their agenda. Some of the most closely covered committee hearings of the past decade have given legislators a highly visible opportunity to question Mark Zuckerberg, Sam Bankman-Fried, and others.

This trend has continued with hearings on AI in 2023 and early 2024. Recent committee hearings have included a wide range of guests, including leaders from Microsoft and Nvidia as well as representatives of the music industry. Senate Majority Leader Chuck Schumer has been especially active in this regard. Senator Schumer has initiated a series of forums bringing together tech leaders, consumer rights groups, and civil rights advocates. Even if these conversations don’t directly lead to new policy, they help shape the debate on the use of AI in the U.S.

Trend 3: A focus on Discrimination, Misinformation, and Transparency

Executive actions, committee hearings, and legislative proposals have made clear the areas of greatest concern for U.S. lawmakers in relation to AI. If significant action on AI does take place in 2024, it will likely relate to preventing discrimination and misinformation, or increasing transparency.

AI’s risk of contributing to existing societal inequities is well-established and concerning. Some lawmakers have centered their concerns about AI around issues of bias and discrimination. The recently introduced S 3478 aims to account for this risk. The bill would require federal agencies that use algorithmic systems to have an office of civil rights focused on bias and discrimination. The White House and Senator Schumer have also centered race in their discussions of AI. They have aimed to incorporate diverse voices in the conversations shaping AI policy. 

Increased focus on AI is paired with significant consternation about the safety of our democratic process. With 2024 being an election year, we can expect a focus on combating AI-related misinformation in the run-up to November. In the fall of 2023, lawmakers proposed a bipartisan bill that would prohibit the distribution of deceptive AI-generated election-related content. Whether such a bill can become law, as well as whether it can be enforced, remains to be seen.

Finally, there does appear to be some consensus regarding the need for transparency in AI. President Biden’s executive order calls for the establishment of best practices regarding the detection and labeling of AI-generated content. Bills calling for watermarking AI-generated content and encouraging training for federal employees in the use and detection of AI have also been introduced.
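One simple form such labeling could take is attaching a disclosure flag plus a content digest to generated material, so that a label can later be checked against the content it describes. This is a purely illustrative sketch: no federal standard prescribes this format, and the field names are invented.

```python
import hashlib

def label_ai_content(text: str, model: str) -> dict:
    """Attach a disclosure label and a digest for later verification."""
    return {
        "content": text,
        "ai_generated": True,           # the disclosure itself
        "model": model,                 # hypothetical provenance field
        "sha256": hashlib.sha256(text.encode()).hexdigest(),
    }

def verify_label(item: dict) -> bool:
    """Check that the label still matches the content it was attached to."""
    return item["sha256"] == hashlib.sha256(item["content"].encode()).hexdigest()

item = label_ai_content("Sample generated text.", "example-model")
print(item["ai_generated"], verify_label(item))  # True True
```

Real watermarking proposals are considerably harder than this, since labels embedded in metadata can simply be stripped; the sketch only shows why verifiable binding between label and content is part of the conversation.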

State Approaches to AI Policy

At the state level, lawmakers are often learning about AI as they begin to craft regulations. State activity in 2023 was widespread and it is expected that the pace of this work may increase in 2024. As “laboratories of democracy,” states play a crucial role in developing new policy to meet new needs. In an increasingly nationalized political environment, we also see policy trends moving from state-to-state more quickly. This has been seen in recent years with marijuana and gambling legalization efforts. Tracking AI policy trends across state governments is essential to ensuring compliance and in assessing what’s to come.

Trend 1: California Leads the Way

California is the largest sub-national economy in the world. It’s also home to one of the largest technology innovation hubs. Governor Newsom and California Democrats have shown an interest in being the first to act on hot-button issues like abortion, gun rights, and ESG regulations. It isn’t surprising that significant legislative action on AI is expected to occur in Sacramento this year. 

California has adopted measures requiring an inventory of current “automated decision system” use in state government. The legislature has also expressed support for President Biden’s approach to AI regulation. Efforts to come in 2024 are headlined by Senator Wiener’s proposed Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act. This bill would regulate the development and use of advanced AI systems. It would require AI developers to report to the state on testing protocols and safety measures.

Trend 2: A Focus on Labor

One of the most common concerns associated with any new technology is its potential to cause job displacement. Because it simulates human cognition, AI poses a risk of disrupting certain industries and displacing those working in them. While AI threatens oft-threatened industries like manufacturing, it also places at risk industries not commonly thought of in this context. Organizations representing reporters, screenwriters, and lawyers have all sounded the alarm about the labor risks of AI.

There is still much we don’t know about how AI will affect our workplaces. State responses to AI’s impact on labor show a desire to learn more while preventing overreach. New Jersey’s A 5150 and New York’s A 7838 both propose requiring their state’s Department of Labor to collect data on job losses due to automation. Massachusetts’s An Act preventing a dystopian work environment, perhaps the most interestingly named of the bills in this category, seeks to ban the use of AI in certain hiring and workplace productivity practices.

Trend 3: Task Forces, Commissions, and Studies

When it comes to complex policymaking discussions, it’s worth remembering that the vast majority of state legislators don’t come from the field they are regulating. This isn’t a dismissal of these legislators or their ability to regulate AI; however, it underscores the need for state legislators to study these issues before they act. As such, much of the AI legislation that has passed so far has established groups dedicated to studying AI’s impact and making recommendations. It will be important to follow the work of these groups to anticipate their influence on policymaking.

Looking Ahead: AI Policy

As we anticipate what action on AI awaits us through the rest of 2024, upcoming elections stand out as a monumental factor. Along with the presidency, all House seats, 34 Senate seats, and a majority of state legislative seats are up for election in November. AI policymaking will be heavily impacted by these elections, both in the lead-up to and aftermath of election day.

As mentioned, AI poses a real risk of exacerbating the growing trend of election misinformation in the U.S. Conversations about preventing this challenge have already begun, many focusing on preventing deepfakes or erroneous content. It seems likely that at least some misinformation will reach voters this fall. How the public and our elected officials react to it will shape any legislative action following the election.

There don’t yet appear to be consensus partisan positions on AI that the average voter will weigh in their decisions. However, the impact of AI should not be underestimated as a campaign issue. After all, AI will have profound effects on healthcare, education, the economy, and civil rights: the issues that are perennially on the mind of the American electorate.


The post AI Policy in 2024: National Legislative Trends appeared first on Plural Policy.

]]>