I've been using some AI tools for a while, and I think people both underestimate and overestimate them a lot. LLM tools like Claude 2 are really good at looking through docs and getting answers... most of the time, as long as they're not hallucinating. For simple scripts (small computer programs, sort of) or needing to know the one command that differs between this vendor's tool and that vendor's, it works pretty well IMO. What can trip it up is when there's a paid and a free version of the same tool - it doesn't seem to know to ask which one you're using, and can conflate things like file locations between them.
I find they do pretty well at summarizing. But to some extent computers have been summarizing text for almost 30 years, with quality varying and tending toward better over time. So processing reviews for a consensus is something they can do well. What would be better is also checking the review metadata or competitors' listings for whether the reviews seem fake (or somehow doing that inline) - taking into account how many reviews are 1-star vs 5-star, and looking at the review quality. A lot of that can be done with traditional systems, since fairly simple heuristics are in use today anyway, but with the kind of processing LLMs can do, it should be possible to make it better.
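To make the "simple heuristics" point concrete, here's a minimal sketch of the kind of signal a traditional (non-LLM) system could compute from star ratings alone. The function name `review_signals` and the "polarization" idea are my own illustration of one classic heuristic - a pile of reviews that's almost all 1s and 5s with nothing in between is a common red flag - not any vendor's actual algorithm:

```python
from statistics import mean

def review_signals(ratings):
    """Simple heuristics over a list of 1-5 star ratings.

    Returns the average rating plus a 'polarization' score: the share
    of reviews that are either 1 or 5 stars. A heavily polarized
    distribution is one rough red flag for fake or brigaded reviews.
    """
    extremes = sum(1 for r in ratings if r in (1, 5))
    return {
        "average": mean(ratings),
        "polarization": extremes / len(ratings),
    }

# Organic-looking distribution: mostly 4s and 5s with a few gripes.
organic = [5, 4, 4, 5, 3, 4, 5, 4, 2, 4]
# Suspicious distribution: almost nothing but extremes.
suspicious = [5, 5, 5, 1, 5, 1, 5, 5, 1, 5]

print(review_signals(organic))     # low polarization
print(review_signals(suspicious))  # polarization of 1.0
```

An LLM could layer on top of something like this by actually reading the review text - spotting copy-pasted phrasing or reviews that don't match the product - which the ratings math alone can't see.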
So as an assistant, I think a lot of people underestimate the tools. They're also really helpful for things like D&D games (not a business case, obviously) - coming up with better flavor text, instant NPC generation, quick-ish image creation...
Where AI still falls down is higher-level cognitive decision making. I haven't had a lot of luck feeding it a question like "xxx error message on yyy computer - what should I do?" It tends to come up with the same basic suggestions low-effort forums would - probably because that's what it's pulling from. It often produces the most generic Microsoft-KB-style page of text that rarely if ever actually helps, but does have you reboot, reinstall, and repair things a few times with no fix. Given that, I probably wouldn't currently rely on it to fully solve a problem the way I could an experienced colleague.
But now that I have contractual access to a private AI, I'll be testing some things - I haven't really fed it any info to start with yet.