He says it's to reduce the risk of hacking, like shutting down the electric grid or stealing money from banks. How does this bill realistically reduce HACKING RISK?
His claim that "startups aren't a part of this" signals that he and other lawmakers don't understand the technology.
Perhaps above a certain level this should rise beyond state-by-state scope to avoid needless delays and complexity.
They can’t shut down the electricity grid
The legislators should speak less with academia and more with real practitioners. Why are Canadians Hinton and Bengio mentioned in support of a California bill? Because both places start with "CA"?
What if AI is already being run by impostors or outliers using the models to undermine the company?
The bill is overly broad, and the $500,000 limit is arbitrary. It establishes an unelected, all-powerful board of AI overlords. It will kill open source and widen the AI divide. All the useful AI models will end up behind a paywall.
Risk/Reward: All top open-source models are developed by major companies, like Meta and Google, that have the expertise and resources. With SB 1047, the increased LEGAL LIABILITY and RISK will make developing these models too risky, maybe even impossible. For instance, Meta's Llama 3.1 is currently the leading open-source LLM.
$500,000 Limit: Does this figure include the electricity costs for data centers? What about the overall cost of AI hardware and the licensing deals for training data, etc.? Can you use weights from larger models to initialize training of new models, or would those count toward the $500,000 limit?
Unelected AI Overlords: ?
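The cost-accounting question above is the crux: whether a model crosses a dollar threshold depends entirely on which line items you count. Here is a toy sketch of that ambiguity. None of these numbers or categories come from the bill itself; they are made-up figures purely to show how narrow versus broad accounting flips the answer.

```python
# Toy illustration only: the threshold figure is the commenter's, and every
# cost line item below is a hypothetical number, not data from SB 1047.
THRESHOLD = 500_000

cost_items = {
    "compute_rental": 300_000,        # direct cost of training compute
    "electricity": 150_000,           # data-center power
    "hardware_amortization": 200_000, # share of GPU purchase cost
    "data_licensing": 120_000,        # training-data deals
}

# Narrow accounting: only direct compute counts toward the limit.
narrow_cost = cost_items["compute_rental"]

# Broad accounting: everything counts.
broad_cost = sum(cost_items.values())

print(narrow_cost > THRESHOLD)  # under the limit with narrow accounting
print(broad_cost > THRESHOLD)   # over the limit with broad accounting
```

The same training run is "under" or "over" the limit depending on a definition the commenter points out is unspecified, which is exactly why the question matters.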
Senator Scott Wiener's proposed SB 1047, which regulates AI, seems very reasonable and needed IMO!
Look at some of this man’s pictures online…