
City of St. Albert staff await AI guidelines

Administration says it is assessing effective, ethical use of new technology
This AI-generated (not real) City of St. Albert employee, like his real-life colleagues, has been directed not to use generative AI technology at work. FOTOR/Image

If you are one of the 200 or so people who watched the May 7 St. Albert city council meeting stream on YouTube, you may have noticed that a presentation to council about property taxes featured an image of a senior with seven fingers on her right hand.

Now, it's not that this resident has polydactyly, the genetic condition in which people are born with more than five fingers or toes; the resident featured in the presentation isn't real. The image was created using generative artificial intelligence (AI).

Like the name suggests, generative AI refers to artificial intelligence technology that is capable of producing images, text, videos, and other types of content. This type of AI has dominated headlines in recent years because of the technology's advancement, especially by the U.S.-based company OpenAI and its flagship product ChatGPT.

While the boom of the language-based ChatGPT and its competitors, as well as dozens of AI image generators, has been met with much excitement (and government investment), the technology hasn't been without controversy.

For example, a Stanford University study last year found that thousands of images of children being sexually abused were contained in a database used to train AI, the process that improves AI systems and expands their knowledge base. ChatGPT has also been known to produce misinformation when answering questions or responding to prompts submitted by real people, which has created concerns about how AI will be used to spread election disinformation and fake news.

Now, those issues are much more serious than an AI-generated image of a senior with seven fingers used to illustrate the impacts of property taxes, but they, along with data privacy concerns, led city administration last summer to direct staff not to use generative AI in their work until internal guidelines are developed, city spokesperson Kathy deJong said in an email.

“Currently, as part of guideline and policy development, administration is assessing the effective and ethical use of generative AI tools,” deJong said. “In this instance of the image used in a presentation to council, it was not intentionally chosen as an AI-generated graphic.”

“As City of St. Albert employees often source their own graphics for day-to-day use in presentations and projects, despite everyone’s best efforts, there can be times when clip-art or imagery sourced online may be unknowingly AI generated.”

deJong also confirmed the city's communications department does not use generative-AI tools to produce content published on the city's website or social media accounts.

“Currently, the city has a working group assessing effective and ethical use of generative AI and operational considerations in a municipal context, as well as guideline development in other municipalities as part of [our] guideline and policy development.”

From what the Gazette could find, few municipalities in Canada are using generative AI in day-to-day operations, but many are considering their options.

Earlier this year the city council of Kitchener, Ont., received a presentation from a director of the University of Waterloo's Artificial Intelligence Institute, as well as a former leader of the Digital Kitchener Innovation Lab, on potential uses of generative AI. One idea presented to Kitchener's council was an AI chatbot (like ChatGPT) that residents could ask about zoning or other city bylaws, saving them the time needed to read lengthy and complicated city documents.

Just one month before that presentation, Ontario's privacy and information commissioner, Patricia Kosseim, called on the province to implement strict rules and regulations around the use of AI by public servants.

The Ontario government has been developing a “Trustworthy AI Framework” for the past few years to set out a list of principles that guide the government's use of AI, and some preliminary principles include not using the technology secretly and setting up a process to challenge governmental decisions “made with the use of AI.”

In an email, Jonathan Gauthier, the press secretary for Alberta Technology and Innovation Minister Nate Glubish, said the province “is speaking with stakeholders and experts about how and when we should regulate the use of AI.”

“We know AI tools are more accessible than ever and we know that governments across the world, including some municipalities in the province, are exploring how to best use AI,” Gauthier said. “In fact, Alberta’s government established GovLab.ai in partnership with AltaML, an Alberta-based leader in the machine learning space, to help deliver AI-driven solutions that solve everyday problems and result in better, faster, smarter services to Albertans.”

deJong said the city will continue to assess how and for what purposes staff could use generative AI, but until internal guidelines are developed, staff will be directed not to use the technology.

She did not say when the guidelines will be finalized.
