The first of two articles will look at how data is handled by generative AI applications such as ChatGPT or Bard.
I am looking for analysts or other industry experts who can address the following points:
- When you use a chatbot or generative AI tool, where does all the generated content go, in terms of physical storage?
- What are the compliance implications of using a chat tool?
- Does a user’s data go into the training pool for AI learning?
- How is chat-like content backed up? How do I back up chat content in the enterprise?
- If my company wants to train and offer chat-like functionality, what are the storage (and compute) requirements?
Note that the deadline for input for this first article is Wednesday 7th June, 1700 London time.
The second part will look at how generative AI could be, or already is, being used to manage data storage and compliance.
- Can it be used for storage configuration, setting up backups, checking compliance, etc.?
- Can chat tools be used to streamline reporting back to users?
- Can they be used to categorise data?
- Can they be used to protect data against ransomware or other threats?
The deadline for input for this article is Wednesday 14th June, 1700 London time.
Please email me with your suggestions and leads.