Beyond Novelty: Lawmakers (Cautiously) Add ChatGPT to Their Toolbox

In Massachusetts, the latest piece of technology to take the Internet by storm — ChatGPT — helped craft a bill aimed at regulating AI. But the lawmaker behind the bill says the tech isn't ready to write laws without help.

Earlier this month, a Massachusetts lawmaker filed a bill to regulate generative artificial intelligence models. However, unlike prior legislation, this particular bill was partially drafted by ChatGPT, a generative AI chatbot created by OpenAI.

ChatGPT uses deep learning techniques, trained on terabytes of text data, to generate answers to prompts or questions according to context, context.news reports. Essentially, a user can type a question into ChatGPT’s interface and receive a detailed response drawn from patterns in copious amounts of online data.

For example, when you type in the question “What is ChatGPT?” it says: “ChatGPT is a conversational AI language model developed by OpenAI. It is trained on a large corpus of text data to generate human-like responses to text-based prompts. It can perform a variety of tasks, including answering questions, generating text, translating languages, and summarizing text, among others.”

In the novel case of the Massachusetts legislation, SD 1827, the AI chatbot helped draft more than half of the bill, according to Sen. Barry Finegold. However, it had some limitations.

“ChatGPT got us 70 percent of what we needed when we wanted to draft this bill, but it didn’t get us all the way,” Finegold said, comparing its limitations to those facing autonomous vehicles in the snow.

“Recently, up here in Boston, we got a lot of snow, and it’s like autonomous vehicles,” Finegold said. “It’s great when the streets are dry, but autonomous vehicles don’t really work when it snows.”

The analogy translates to ChatGPT in the sense that while it came up with a large portion of the bill, it still ran into limitations, such as refusing the request to draft the bill or simply stopping halfway through generating a response.

Other minor issues that popped up along the way ranged from replicating user inputs in ways that weren’t particularly helpful to not knowing how to format the bill in the style of the Massachusetts General Laws.

Aside from that, though, ChatGPT contributed various original ideas to the bill.

“It defined a key term, expanded on what the core operating standards of generative AI models should be, and clarified the process for registering with the attorney general’s office,” a spokesperson from Sen. Finegold’s office said via email.

The bill itself would implement the following guardrails if it were enacted:

  • Require the largest generative AI companies to register with the attorney general’s office and disclose information about their algorithms.
  • Require regular risk assessments by companies to identify reasonably foreseeable risks and cognizable harms from their AI models.
  • Require companies to implement reasonable security measures and prevent their models from being used to discriminate against protected classes.
  • Require consent from users before processing their information.
  • Require companies to de-identify information as appropriate.
  • Require large-scale AI models to be programmed to generate all text with a distinctive watermark or offer an authentication process to detect plagiarism.

With these guardrails in place, Finegold said, ChatGPT could be used as a powerful tool to create change.

One example he shared was citizens using ChatGPT to draft a bill to present to lawmakers during a commission meeting to address a specific issue; another was students using the AI to compare notes after completing an assignment.

However, he reiterated, this could only happen with the proper guardrails in place.

“I think where we failed in government is we weren’t working with Facebook and other social media companies to try and put up proper guardrails,” Finegold said. “I think this could be a very positive, powerful tool, but we can’t be late to the game. There need to be proper guardrails in place, so we’re hoping that with the legislation we have filed, we can put up those proper guardrails.”
Katya Diaz is a staff writer for Government Technology. She has a bachelor’s degree in journalism and a master’s degree in global strategic communications from Florida International University.