Can AI . . . create research articles?

Yes! Artificial Intelligence (AI) offers new tools for creating content. STORM, for example, can draft long-form, Wikipedia-style articles through the interaction of AI agents.

STORM (Official Website)

What is STORM?  “[A] writing system for the Synthesis of Topic Outlines through Retrieval and Multi-perspective Question Asking. STORM models the pre-writing stage by (1) discovering diverse perspectives in researching the given topic, (2) simulating conversations where writers carrying different perspectives pose questions to a topic expert grounded on trusted Internet sources, (3) curating the collected information to create an outline.”  To learn more, read the developers’ research paper, “Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models.”
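The three pre-writing stages described above can be illustrated with a toy Python sketch. This is a conceptual stand-in, not the actual STORM implementation or the `knowledge-storm` package's API: every function name is hypothetical, and the bodies return canned placeholder strings where the real system would make LLM and retrieval calls.

```python
# Conceptual sketch of STORM's pre-writing pipeline. All functions are
# hypothetical stand-ins; a real system would call an LLM and a retriever.

def discover_perspectives(topic):
    # Stage 1: discover diverse perspectives on the topic.
    # (A real system derives these from related articles; we use fixed examples.)
    return ["historian", "economist", "journalist"]

def simulate_conversation(topic, perspective, max_turns=2):
    # Stage 2: a writer holding this perspective "interviews" a topic expert
    # grounded in trusted sources; we record placeholder Q&A pairs.
    return [(f"{perspective} question {i} about {topic}", f"expert answer {i}")
            for i in range(1, max_turns + 1)]

def curate_outline(topic, conversations):
    # Stage 3: distill the collected Q&A into a hierarchical outline.
    outline = [f"# {topic}"]
    for perspective, qa_pairs in conversations.items():
        outline.append(f"## Perspective: {perspective}")
        outline.extend(f"- {question}" for question, _ in qa_pairs)
    return outline

def prewrite(topic):
    perspectives = discover_perspectives(topic)
    conversations = {p: simulate_conversation(topic, p) for p in perspectives}
    return curate_outline(topic, conversations)

outline = prewrite("solar power")
print("\n".join(outline))
```

The point of the sketch is the shape of the process: perspectives fan out into separate simulated interviews, and the outline is curated only after all of that collected context exists.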

CREATE AN ARTICLE

In STORM (article generation) mode, users enter a prompt, and STORM responds with an estimated time to completion, the processing step it is currently performing, and a request for the writing’s purpose.

STORM generates:

·       A complete article with a summary and hyperlinked in-text citations

·       The BrainSTORMING Process: the question-and-answer interactions of multiple agents that created the context for drafting the article

o   The expertise of each agent is explained along with their responses to a moderator’s prompt-related questions

·       An interactive Table of Contents

o   clicking the dark page icon with a blue dot on the left-hand side of the page opens the interactive menu

o   clicking the three horizontal lines next to the STORM dropdown menu collapses the Table of Contents

·       A feedback form (found at the article’s conclusion)

·       A PDF version of the article (available via the PDF icon in the lower right-hand corner)

o   the PDF can be downloaded or printed

Sources are referenced by number or via a citation. Clicking a citation number opens a pop-up box showing the reference’s title, URL, and a highlight summary.

CO-STORM

In addition to STORM, users can select “Co-STORM” from the dropdown menu. Co-STORM lets users engage in a topic-centered discussion (aka “Roundtable Conversation”) with STORM AI agents.

·       As with STORM, Co-STORM users are asked to share their writing’s purpose

·       The double discussion bubbles icon represents the conversational mode; to switch back to the standard article mode, click on the stack of papers icon

·       “See Topic Background Discussions” shares the exchanges between a “Background discussion moderator” and a “Background discussion expert” as they iteratively develop the informational context for responding to the prompt

·       Users can join a roundtable conversation by typing questions, comments, or feedback in the chatbox

o   They can also extend the conversation by clicking on the “Generate” button associated with an included agent

·       Responses include in-text citations; clicking on the number triggers a pop-up window including the reference’s title, URL, and a content summary
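The roundtable interaction described above can be sketched as a small Python class. This is a hypothetical illustration of the turn-taking protocol only — the class, method names, and canned replies are stand-ins, not Co-STORM’s actual implementation: a shared transcript that both the human user (the chatbox) and the AI agents (the “Generate” button) append to in turns.

```python
# Hypothetical sketch of a Co-STORM-style roundtable conversation.
# Agent replies are canned strings standing in for real LLM calls.

class Roundtable:
    def __init__(self, topic, agents):
        self.topic = topic
        self.agents = agents
        self.transcript = []  # list of (speaker, message) pairs

    def user_says(self, message):
        # Mirrors the user typing a question, comment, or feedback
        # into the chatbox.
        self.transcript.append(("user", message))

    def generate(self, agent):
        # Mirrors clicking an agent's "Generate" button: the agent extends
        # the conversation based on the transcript so far.
        reply = f"{agent} on {self.topic} (turn {len(self.transcript) + 1})"
        self.transcript.append((agent, reply))
        return reply

table = Roundtable("solar power", ["moderator", "expert"])
table.user_says("What are the main storage challenges?")
table.generate("expert")
```

The design point is that user turns and agent turns are interchangeable entries in one conversation history, which is why the user can steer the discussion at any moment or hand a turn to any agent.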

FEEDBACK

The following questions make up the feedback form users are asked to complete (doing so is voluntary).

Thank you for trying our research preview, STORM! We would love to get your feedback to continue improving.

1. How would you rate the generated article?

2. What are the strengths (e.g. comprehensive outline, accurate information, etc.) and limitations (e.g. improper handling of time-sensitive information, associating unrelated sources, etc.) you see in this article produced by STORM?

3. STORM organizes the information using a hierarchical outline (check out the "Table of contents" panel on the left!). Is there any additional content you expected to be included? You can briefly describe it or share any follow-up questions you have about the topic.

4. Anything else you would like to share?

STORM CONSENT FORM

The following is the text of STORM’s consent form.

DESCRIPTION

You are invited to try out a research preview of our NAACL 2024 paper and EMNLP 2024 paper. To use it, you need to first agree with our “Terms of Service” displayed on the web demo and verify you are a real human user by logging in your Google account. On our web demo, you can input the topic you want to learn in depth and your purpose of researching this topic. You can also input questions to our system, and our system will retrieve additional information, update the hierarchical outline and references, and provide a synthesized response. Based on your input, our system will generate a report with hierarchical outline and references. You can read the report on our web demo. If you would like to, you can provide feedback of the generated report using the feedback box on our web demo. Your input (input topic, purpose of writing the article, and follow-up questions) and feedback (if provided) will be securely stored associated with the report generated by our system. Your Google account information will only be used to maintain your login status and will not be combined with data we collected.

USER'S RIGHTS

If you have decided to try out our research preview, please understand you have the right to stop using it at any time. The results of this research study may be presented at scientific or professional meetings or published in scientific journals. Your individual privacy will be maintained in all published and written data resulting from the study. For individuals who prefer not to have their data collected and shared, you may instead use our open-source software available at https://github.com/stanford-oval/storm. For organizations with concerns, please feel free to reach out to us at genie@cs.stanford.edu.

POTENTIAL RISKS

The risks associated with this study are minimal. Study data will be stored securely, in compliance with Stanford University standards, minimizing the risk of confidentiality breach.

CONTACT INFORMATION

If you have any questions, concerns or complaints about this research, its procedures, risks and benefits, contact the Protocol Director, Yijia Shao - (650) 407-9690 - shaoyj@stanford.edu.

Independent Contact

If you are not satisfied with how this study is being conducted, or if you have any concerns, complaints, or general questions about the research or your rights as a participant, please contact the Stanford Institutional Review Board (IRB) to speak to someone independent of the research team at (650)-723-2480 or toll free at 1-866-680-2906, or email at irbnonmed@stanford.edu. You can also write to the Stanford IRB, Stanford University, 1705 El Camino Real, Palo Alto, CA 94306.

Please print or save a copy of this page for your records.

 


 
RELATED ARTICLES

• Can AI . . . create rubrics?

• Are there AI-powered search engines beyond ChatGPT (that promote privacy)?

• Everyone loves Claude AI . . . should you?

• Is it possible to limit how much information I share with an Artificial Intelligence (AI) model?

• A T.R.E.A.T. For Your Syllabus: An AI Syllabus Policy Framework