We Need Regulation To Save Humanity From AI… And To Save AI Stocks

As artificial intelligence (AI) developments thrust the technology center stage, investors naturally smell opportunity in the air. They also smell the freshly printed forms and red tape of the regulator just waiting to take their cut and hamper the roaring machine of AI innovation. But to those worried that Uncle Sam could crush the industry through new regulations and restrictions, I’d argue that the exact opposite is true: Regulation can save the industry from itself. And by extension, more regulation for the industry protects, not harms, investors.

In most new industries, the word “regulation” is taboo. Now, the AI industry is not exactly new. The modern concept goes back to the 1950s, and both private and public investments in the field have waxed and waned over the past 70 years or so. The 1980s and early 1990s saw a boom-and-bust cycle in artificial intelligence investments. Investments by the Japanese government in the 1980s kickstarted the first big commercial AI boom. By 1993, however, “over 300 companies closed their doors” as the bubble popped. Modern advances in computing power and large language models (LLMs) have since given the industry new life, and its potential is attracting not just investors but regulators too.

AI Regulations: A Mess of Interests and Risks

The question of what “AI regulation” should or even can be is one for politicians, policymakers and ethicists. What investors want to know, naturally, is what it would mean for their portfolios. What are the biggest risks? And this is where laws and regulations can provide some protection and help manage those risks.

The biggest risks to investors boil down to three core overlapping concerns: fraud, intellectual property protection and privacy. Of course, there are laws already on the books that address all three of these issues individually. The problem, though, is that AI represents a uniquely complicated blending of all three risks, one that, without clear frameworks, laws and regulations, threatens the entire industry’s progress.

The most pressing concern on that list for investors is fraud. Almost everyone can agree that fraud prevention is a vital role of regulation.

Fraudulent Wire Riding Apes: Two Case Studies

Two case studies show the potential future of AI regulations, the risk of fraud and the regulatory time frames investors should expect. Both also epitomize how fraud will shape the regulatory actions to come.

The first is the world of cryptocurrencies and non-fungible tokens (NFTs). A significantly newer industry than AI, crypto has seen its fair share of booms and busts and, most importantly, fraud. The Securities and Exchange Commission (SEC) and Federal Trade Commission (FTC) have spent a good decade trying to figure out how to fit crypto into their regulatory schemes. Congress has yet to pass any explicit crypto-related legislation despite some attempts.

In that time, numerous exchanges have risen and collapsed. NFTs went from being all the rage in 2021 and 2022 to losing 95% of their value, taking billions of investors’ dollars down with them. Infamously, the collapse of FTX and the recent trial of Sam Bankman-Fried involved billions of dollars in fraudulently used customer funds.

The second case study here is that of cybersecurity. Unlike crypto, the industry has quite a few established core laws on the books. The first two “true” cybersecurity laws were the Comprehensive Crime Control Act of 1984 and the Computer Fraud and Abuse Act of 1986. Both relied on creative and relatively new understandings of “wires” (as in telegraph wires) and wire fraud.

In the decades since, Congress has passed piecemeal laws on cyber topics with mixed results, leaving states to pick up the slack. The cybersecurity world also provides an example of an industry with deep intersecting interests, many of which mirror the risks and regulatory blind spots facing the artificial intelligence industry. One of the most notable is privacy. Concerns over individual privacy, commonly associated with social media and the Internet of Things (IoT), also arise with AI training models.

Both examples provide lessons for the rapidly growing AI industry. The crypto world’s high-risk, high-reward, low-regulation environment is rife with fraud and instability. Cybersecurity is a much older and more established industry, but its regulatory environment is still patchy, especially regarding privacy.

The Current State of AI Regulations

So, to get an idea of which of these regulatory paths investors should expect, let’s look at the current regulatory environment for artificial intelligence.

Starting with the domestic scene, well… there’s not much, at least legislatively. President Joe Biden, on the other hand, has been busy forging a regulatory path via a voluntary pledge and, most recently and importantly, a landmark and sweeping Executive Order.

Earlier this year, the White House announced a non-binding voluntary pledge to “manage the risks posed by AI.” Among the signatories of this pledge are some big names like Amazon (NASDAQ:AMZN), Meta Platforms (NASDAQ:META), Alphabet (NASDAQ:GOOG, NASDAQ:GOOGL) and OpenAI. The Office of Science and Technology Policy (OSTP), a department within the White House, has also published a “Blueprint for an AI Bill of Rights,” another notable voluntary framework for safe and ethical AI usage.

According to the White House, “safe and ethical AI usage” requires rigorous “pre-deployment testing” and is created with “consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system.” AI systems should also have “[i]ndependent evaluation and reporting” to make sure that they stay safe in the long run.

Biden’s AI Executive Order

In the early morning hours of Oct. 30, the White House announced its most comprehensive regulatory push regarding AI to date. Driving this effort was a sweeping Executive Order (and a sleek new website) covering everything from safety and security to privacy, civil rights and more. This Executive Order builds upon the aforementioned voluntary pledge and the AI Bill of Rights, and it predominantly focuses on what most Executive Orders do: mobilizing the Executive Branch’s many departments and agencies into action.

There are many details to be ironed out regarding how this Executive Order will impact the industry, but the most significant takeaways for investors are:

1. It will take quite a while for regulators to develop these new guidelines and policies.

2. Whatever specific regulations come out of this EO will be built on shaky legal ground until Congress passes AI-related laws. Compliance remains largely voluntary, with one major exception: the Defense Production Act (DPA).

Biden’s invocation of the DPA is as notable as it is confounding. The DPA is the only explicit law the EO references, and it carries some potentially powerful implications. The DPA was most recently used in the context of the Covid-19 pandemic but is typically associated with wartime production. Biden is using it here in a purely national security context:

“…the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.”

It’s unclear who is covered under this DPA-backed “review process,” as other agencies have more specific regulatory responsibilities. For example, the National Institute of Standards and Technology (NIST) is to develop AI safety standards, and the Department of Homeland Security (DHS) is to implement them for critical infrastructure. Perhaps more importantly, clarification is needed regarding which agency will even implement this policy.

There is one notable candidate that the DPA would almost definitely cover due to its existing defense contracts: Palantir (NYSE:PLTR). The Big Data and increasingly AI-focused defense contractor isn’t a signatory of the White House’s voluntary pledge. This might have more to do with Palantir Chairman Peter Thiel’s conservative-libertarian political leanings and support for former President Donald Trump than an outright rejection of further regulation. However, this omission is notable as Palantir has big plans for “taking the whole AI market.”

Taken together, the regulatory framework set forth by Biden’s Executive Order is groundbreaking and tees up Congress to build the rest of the regulatory house, so to speak.

Unfortunately, we might be waiting for quite a while for lawmakers to start “pouring the concrete.”

What About Congress?

The White House’s AI Executive Order makes only two references to Congress, both of them calls to pass bipartisan laws on AI (one explicitly calls for a privacy law).

According to the Brennan Center For Justice, Congress has roughly 60 AI-related bills sitting in various committees.

However, as of this writing, the House of Representatives has only just settled on a new Speaker of the House and has “bigger fish to fry,” with yet another impending government shutdown deadline and accompanying budget battle looming, not to mention contentious Israel and Ukraine aid bills and a host of other more pressing concerns.

That leaves two other sources for AI regulations: individual U.S. states and international actors. The former group, a handful of the country’s 50 states, has passed a patchwork of relevant laws, with AI and consumer privacy being the primary focus. Internationally, China is leading the way in building a complex and advanced set of AI regulations. The European Union’s comprehensive regulatory framework, simply titled the “AI Act,” is expected to be finalized and passed by year’s end.

AI Regulations and What the Future Holds

So where does this leave this rapidly growing, potentially highly disruptive industry? Will it take the crypto path to regulation, which has been rife with fraud and instability? Or will it take the slower, more stable, yet still patchy cybersecurity path? Well, for now, at least in the United States, it will likely be a mix of the two.

AI has the disruptive and money-making potential that the crypto industry can only dream of. Yet it also has the mainstream potential and utility that the cybersecurity industry provides. For investors (and, not to sound too sensationalist, for humanity), that’s a risky combo.

There are myriad potential real-world applications of AI, from agriculture to defense to finance and healthcare. A crypto rug pull could defraud investors of their money, or a hacker could steal money from a bank, but the risks from AI accidents or malicious behavior could be catastrophic.

The hypotheticals for what could go wrong are endless as AI is further incorporated into everyday life. But we’re already seeing troubling malicious use cases for AI. The start of the Israel-Hamas war has brought a flood of misinformation on social media platforms like X, formerly Twitter. Some of the fake images being shared online are AI-generated, often created with easily accessible tools like Bing’s Image Generator. As the technology improves, it will become harder to identify fake images and videos.

We are also butting up against risks once found only in science fiction, like “rogue AIs.” While an AI meal planner accidentally suggesting a recipe for chlorine gas is worth some chuckles today, it would be far less humorous if an AI in charge of, say, a large-scale automated farm accidentally (or worse, intentionally) contaminated a crop of vegetables.

As the saying goes: “Safety regulations are written in blood.” And we really shouldn’t have to wait until blood has been shed before taking action.

Legally, there’s already a “sledgehammer” of a case against Google that, according to the company, would destroy the concept of generative AI. What the industry needs to avoid this fate are clear, enforceable regulations that can protect both the public and AI firms from each other’s legal wrath.

For the sake of investors, and for everyone, there needs to be more regulatory oversight of the artificial intelligence industry before something goes horribly wrong. The White House’s new Executive Order provides a comprehensive framework on numerous AI-related issues and is a good start. However, without laws passed by Congress providing a solid foundation for regulators to build upon, we’ll end up with a crypto-style mess of confused regulators. That, in turn, will only lead to confused market participants and confused investors. And with the potential of AI being so great and so dangerous, that’s not an outcome anyone should want.

So no, AI regulations are not “the enemy,” as one venture capitalist’s manifesto put it, but they can act as safety rails that can help protect the industry and investors from enormous risk.

What Investors Should Do Now

Without clear guardrails, investing in the artificial intelligence world is risky business. Investors who aren’t terribly concerned about the impact of these cobbled-together regulations can make riskier bets on the raft of startups trying to strike it rich, or on established but regulation-dismissive plays like Palantir.

Otherwise, investors would be better off seeing which companies are “playing ball” with the White House’s voluntary pledge. Or those that are adapting to the international regulatory changes coming out of the EU and China. These companies likely either see these new regulations as something they can live with or something they can use to their advantage.

Either way, the hammer of regulation will fall at some point. It would be best for everyone, not just investors, for it to fall before the latter half of the “move fast and break things” expression breaks the AI industry.

On the date of publication, Andrew Bush held a LONG position in GOOGL and AMZN stock. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Andrew Bush is a financial news editor for InvestorPlace and holds two degrees in International Affairs. He has worked in education, the tech sector and as a research analyst for a DC-based national security-focused consulting firm.
