Two recent Supreme Court decisions, the adoption of the Major Questions Doctrine and, to a lesser degree, the overturning of Chevron deference, will make it virtually impossible to regulate AI. In addition, the legal community has historically been reluctant to take on the Supreme Court in venues outside the courtroom. Consequently, the prospects for regulating AI are dismal unless immediate action is taken to mobilize informed and interested personnel to implement the out-of-court strategy available at the aforementioned link.
A number of our readers have requested that we compile a list of references that CRE has reviewed in addressing the possible regulation of AI. In response to this request we have decided the following:
(1) We will post references only to those documents that we believe deserve the most serious attention.
(2) We will solicit the views of our readers as to the documents that fulfill the requirement in (1) above. When we post a document submitted to CRE (contact@thecre.com), we will not identify its sponsor unless requested to do so.
Artificial Intelligence in Weapon Systems: The Overview
CRE: A must-read. What constraints (regulations), if any, does the military impose on AI?
—
What Are the Lessons Learned From Attempts to Regulate Social Media?
—
CRE Editorials
CRE editorials on a wide range of topics are in preparation; we welcome your recommendations at contact@thecre.com. Please note that existing resource constraints limit the number of editorials we can produce.
—
Unfair Artificial Intelligence: How FTC Intervention Can Overcome the Limitations of Discrimination Law
—
AI By Presidential Administration
Nixon Foundation
Ford School
Reagan Institute
Bush 1 School of Government
Clinton
Bush 2 Center
Obama
Trump
Biden
A Look Back
—
Eye Openers
The Vanishing Policy Analysts?
—
Google Scholar
Google on AI
—
Homepage
Questions and Answers
The material on this page is the result of questions raised by the following post:
Can and Should AI Be Regulated?
Q: How do I submit a question?
A: Use this link: contact@thecre.com. Alternatively, you may use the comment section below.
Q: Should a new organization be formed to address this issue?
A: No. An interagency tribunal with full-time staff provided by federal agencies and led by a Presidential appointee would take ownership of the issue: Can and Should AI Be Regulated?
For years we thought Google was asleep at the switch because it did little to expand the reach of its search engine, as now witnessed by the emphasis on AI, which might well be characterized as a search engine on steroids. Apparently the corporate strategy was the conventional one of maximizing shareholder welfare rather than consumer welfare.
New challenges need to be addressed by drawing on a range of disciplines. To this end, is there a role for a counterpart to an antiballistic missile system strategy in developing a system to regulate AI? More precisely, given that AI is being used in antiballistic missile systems, how are the resulting ethical, legal, and operational issues being addressed in this most critical of systems, and are the resulting findings relevant to other systems?
The Center for Regulatory Effectiveness (CRE)
We are answering the aforementioned question in large part as a result of an AI-generated publication on ChatGPT regarding the historical development of the White House Office of Management and Budget's review of federal regulations.
It appears that some of our followers believe that individuals who can take on a sizeable portion of the federal government ought, at a minimum, to be able to offer some valuable insights into the possible regulation of AI. We at the Center for Regulatory Effectiveness appreciate the recognition, but we believe we have more than met our match.