AI Policy US proposal would impose strict, compute-based rules on AI development
A new AI policy proposal from AI Policy US is drawing widespread attention. Observers describe it as one of the strictest technology regulations floated in recent years, with the potential to significantly reshape how AI is built and governed.
The proposal establishes a four-tier system for classifying AI systems by level of concern, with different rules at each tier. A system's tier is determined by the amount of computing power used to train it. For example, models trained with fewer than 10^24 floating-point operations (FLOP, a measure of total training compute, not FLOPS, which measures operations per second) would face minimal oversight, while those trained with more compute would face progressively stricter scrutiny.
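To make the tiering rule concrete, here is a minimal Python sketch of how such a compute-based classifier might look. Only the 10^24 FLOP floor for the lowest tier comes from the proposal as described above; the higher cutoffs and the tier names are hypothetical placeholders.

    def concern_tier(training_flop: float) -> str:
        """Classify an AI system by total training compute (in FLOP).

        Only the 10**24 FLOP floor for the lowest tier is taken from the
        proposal as summarized here; the higher cutoffs and tier names
        are hypothetical placeholders for illustration.
        """
        if training_flop < 1e24:
            return "low concern"        # minimal oversight under the proposal
        if training_flop < 1e26:        # hypothetical cutoff
            return "medium concern"
        if training_flop < 1e28:        # hypothetical cutoff
            return "high concern"
        return "extremely high concern"

    print(concern_tier(3e23))   # low concern: below the 10**24 FLOP floor
    print(concern_tier(5e25))   # medium concern (under the assumed cutoffs)

Note that the threshold counts the total floating-point operations consumed over an entire training run, a one-time quantity, which is why it can be fixed in advance for a given model.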
The stated goal is to manage the risks AI could pose to safety and security, including preventing AI systems from acting autonomously, outside of human control.
Some experts, however, object to this approach. They argue the rules are overly broad, that compute thresholds are a poor proxy for how capable or risky a system actually is, and that the requirements could slow beneficial AI research.
Another notable provision targets early-stage training: if an AI system performs better than its developers predicted, training must stop, and the developers must demonstrate the system is safe before resuming.
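As a rough illustration of how such a pause rule might be operationalized, here is a hedged sketch in Python. The benchmark score, the pre-registered forecast, and the tolerance are all assumptions for illustration; the proposal as summarized here does not specify them.

    def must_pause_training(observed_score: float,
                            forecast_score: float,
                            tolerance: float = 0.05) -> bool:
        """Return True if a run beat its pre-registered capability forecast.

        The evaluation score, the forecast, and the 5% tolerance are
        illustrative assumptions, not terms of the actual proposal.
        """
        return observed_score > forecast_score * (1 + tolerance)

    # Example: a run forecast to score 0.70 on an evaluation actually
    # scores 0.81, so training would pause pending a safety demonstration.
    if must_pause_training(observed_score=0.81, forecast_score=0.70):
        print("Forecast exceeded: pause training and demonstrate safety first")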
The proposal is not yet law; for now, it is a draft being debated. But it signals that as AI systems grow more powerful, the rules governing them are likely to tighten, and observers are watching closely: if something like this is enacted, AI development could soon look very different.