As a publication that covers technology, we examine both the opportunities that new technology offers and the risks that it presents. Sometimes we come across technologies that pose unique challenges to publishers like ours: technologies that challenge journalistic norms, shift media business models, or augment newsroom operations, for example. Behind the closed doors of our newsrooms we discuss these developments regularly, thoughtfully, and passionately; but rarely do we feel compelled to issue public statements. In the case of generative AI, we do.
We will not publish content that is primarily authored, edited, or otherwise created by generative AI (ChatGPT, GPT-4, etc.). Further, we will approach any secondary or supplementary use of the technology in our newsroom with extreme caution, because:
- We cannot trust the accuracy of the tools’ statements/results, because we can’t rely on the accuracy of the information the tools were trained upon.
- We cannot verify or fact-check the accuracy of the tools’ statements/results, because we cannot consult the original source of the information the tools were trained upon.
- We cannot provide appropriate context around the tools’ information, because we don’t know how the tools prioritize or analyze the information they are trained upon. We don’t know if the information is out of date, is disputed, is provided by a knowledgeable source (or just the most famous one), etc.
- We cannot cite the original source of information or data. Therefore, we cannot properly credit the original source and ensure our compliance with international intellectual property laws, nor give the reader the opportunity to consult original sources for themselves.
- Use of the tools puts us at a high risk of using plagiarized material, directly or indirectly.
- We respect digital rights licenses, copyrights, and other rights of photographers, artists, writers, musicians, and other creatives to receive appropriate credit and compensation for their work. Some AI tools are trained upon work created by others, without those creators’ permission. Use of those tools would thus violate our policies, principles, and applicable laws.
- We are devoted to our sustainability principles and are always looking to make editorial operations greener. AI demands tremendous amounts of energy and has a significant carbon footprint. Gratuitous or frivolous use of the technology would therefore violate our policies and principles.
Today we are not using generative AI at InformationWeek, and we are actively avoiding it, with two minor exceptions. (One was when Jessica Davis asked ChatGPT to write a story about itself in December 2022, to test its effectiveness. The other was the column, “The Blinking of ChatGPT,” by English teacher Joe Kuglemass, who told a story of actively trying to confuse ChatGPT into giving up on writing an essay. In both cases, the use of ChatGPT was essential to the story itself.)
We are, nevertheless, a technology publication that covers business use of AI. As these tools mature and we investigate them more thoroughly, we may find trustworthy uses for them in our newsroom — if so, we will use these tools to support, but not replace, our work.
In that case, our commitments to our readers are:
- We will disclose any and all use of generative AI in published content to our readers. Therefore:
  - If any body text was written by a machine, it will be clearly credited and acknowledged; we will indicate which text and distinguish it visually.
  - If any image was generated by a machine, it will be listed in the image credits.
  - If generative AI was used in any reporting, data crunching, image creation, etc., we will clarify where and how.
- Our content management system, ContentStack, has recently added ChatGPT to its feature set. If we use those features to produce or edit content in any way, we will disclose that.
- On-assignment reporters and columnists must receive approval from an editor before using generative AI in any way, shape, or form, even in content that is on the topic of generative AI. Reporters with questions should contact their assigning editor.
- Contributors are prohibited from using generative AI in their commentary pieces. We want commentary drawn from contributors’ own insights and experiences. If you think you have grounds for an exception, contact us at [email protected]
- We ask that all press/media relations representatives disclose any and all use of generative AI in their releases, statements, press kits, and other contributions. Just as we would expect your professional ethics to prohibit you from attributing one person’s words to another, you must not attribute AI-generated words to a human being. Our reporters will continue to push for direct conversations with sources, not emailed statements, to further avoid this.
- Sponsors must not use generative AI as a primary author in any of their submissions to us, and they must disclose any and all use of generative AI in those submissions, including custom products such as sponsored articles, whitepapers, custom research, and webinar presentations. We reserve the right to fact-check, edit, or omit machine-generated copy if it is found to be factually incorrect or otherwise violates our editorial policies and principles.
We will continue to review this issue and will post any revisions or updates to this policy here as necessary.