This is why we – a group of science communicators who participated in the Projected Futures 3 Workshop – see hope for the future of our craft.

Future of AI: Between Innovation and Caution

A report commissioned by the United States Department of State and prepared over 13 months by an American company is among the most alarming documents yet published on artificial intelligence and its potential dangers. At the same time, the serious criticisms leveled at this report cannot be overlooked.

The final report, “Action Plan for Enhancing the Safety and Security of Advanced Artificial Intelligence,” recommends extensive and unprecedented political actions. It even suggests capping the computational power that may be used to train AI models, with Congress making training above that threshold illegal.

How Was the Report Prepared?

In November 2022, the U.S. Department of State, through a $250,000 contract, commissioned a company called “Gladstone AI” to prepare a report on the dangers of artificial intelligence and the measures available to address them.

This four-person company, co-founded by two brothers, primarily runs training courses for government employees on how AI works.

After 13 months, the company released a 247-page report. The full version has not been made public and is available only by direct request to the company; a summary, however, has been released publicly.

The report claims to be based on interviews – both direct and indirect – with more than 200 security, political, and military experts, along with several artificial intelligence specialists. It is one of the most alarming documents published to date about artificial intelligence and its potential dangers, and it outlines a roadmap for mitigating the risks it discusses.

Warnings in the Report

The report warns of what it describes as AI dangers on the scale of weapons of mass destruction, events that could lead to human extinction, and AI uprisings seizing control of global affairs, emphasizing the technology’s risks to national and global security.

Pointing to the rapid development of AI and the possibility of its misuse, the report warns that developing general artificial intelligence could lead to human extinction.

Overall, the picture that this report paints of the future risks of artificial intelligence might be among the darkest outlooks on the future of this technology.

General artificial intelligence, an idea and technology that is – for now – fictional, assumes that a machine can achieve a level of intelligence and self-awareness similar to that of humans. For more on this, you can listen to the episode “Which Artificial Intelligence? Which Danger?” from the Chista podcast (in Farsi).

Recommendations and Solutions

To mitigate these risks, the report’s authors recommend that the U.S. government implement and enforce restrictions on the development of this technology, use this opportunity to write technological controls into national law, and, through its influence, push these restrictive laws into international legislation.

The roadmap also suggests establishing a global organization to monitor the worldwide progress of the technology under laws designed for AI regulation and oversight, while security and intelligence agencies prepare programs to respond to potential disaster scenarios.

Furthermore, the report seriously recommends investing in the education of U.S. government employees about AI trends and advancements, a suggestion that could potentially constitute a conflict of interest given the company’s primary activity of conducting educational courses for government entities.

Criticisms of the Report

The issue of artificial intelligence and its future, along with concerns about its misuse, is a real and significant matter in our world today. From Canada and the United States to the European Union and China, plans and laws have been proposed and even enacted to monitor AI development, specifically to protect citizens’ rights against potential abuses.

In the United States, the President’s executive order mandates certain standards in AI development, and in the European Union, a law was recently passed in Parliament to establish AI development standards.

To be fair, some of the roadmap’s recommendations reflect widely accepted collective wisdom, such as being prepared for potential failure scenarios in AI development.

The report’s most significant problem, however, is its fear-mongering exaggeration of a doomed technological future.

The report does not clearly distinguish between current AI and general artificial intelligence, even though many experts consider the two fundamentally different, with the development of one not necessarily leading to the other.

Moreover, the report heavily relies on the narrative and statements of security and military figures, attempting to portray the worst-case scenario as the most likely. The absence of AI scientists and experts among the primary sources for the report, along with a lack of technological data and scientific foundations, has led to scenarios based on incomplete understandings of the technology.

Another point of challenge for the report is the set of solutions it proposes to address potential dangers.

In summary, these solutions involve limiting the technology, granting the government oversight and control powers, having American or internationally established (U.S.-created) agencies monitor its growth in other countries, and building a judicial framework that criminalizes technology development outside government oversight.

This approach, which might be described as securitizing technology, has been tried and failed in the development of other technologies that were significantly harder to develop.

Perhaps there is another path: instead of securitizing, limiting, censoring, and driving the development of the technology underground, emphasis could be placed on opening up its resources.

Rather than handing control over to governments, attention could be paid to the role of civil institutions composed of experts in this field and related fields, allowing collective intelligence to guide development.

Simultaneously, by removing this arena from corporate monopolies and opening it up, civil forces in this field could be allowed to develop tools to control and neutralize potential abuses of artificial intelligence using the same technology, without hindering the path of progress.

“Balancing the Scales: The Science and Debate Behind Intermittent Fasting”

With the advent of Ramadan, discussion of the benefits and harms of fasting has once again picked up in many media outlets and social gatherings. Recently, however, a new form of fasting has entered the scene. It has nothing to do with Islam, has gained popularity among various health-conscious groups, and has become one of the growing trends in many people’s lives, with a substantial financial market built around it.


Leaks, Whistleblowing, and Citizens’ Rights

In recent days, the release of documents by a hacker group known as “Justice of Imam Ali,” which they obtained from the judiciary and made publicly searchable with over three million entries, has been a significant topic of discussion in the news and media space.


Back to the Moon

After half a century, America’s return to the Moon – with the landing of Odysseus – marked a significant event.
