@kristiedegaris the issue with so-called AI (we aren't anywhere near real AI today), which is really just LLMs mimicking intelligence, is that we can't be sure the result is correct; we have to verify it ourselves.
Just a short example: if you ask an LLM whether the Swedish city of Gothenburg was bombed during the Second World War, you will most likely get a false answer. Many cities in Europe were bombed during that time, and the data the LLM has been fed describes all the cities that were bombed, not the cities that weren't targets.
As many LLMs are fed with whatever is found online (easier to process than what you find in a library), a lot of false information gets mixed into their data; that's why they have had a tendency to lean pro-white-power, since such groups are broadly visible online.
Then you have the issue of wording: two prompts that, to us humans, ask for the same thing can generate different answers.
Sure, it can be a useful and helpful tool, if you look past the environmental effects and the greedy people who own the systems.