OpenAI’s “Deep Research” AI tool is a game-changer for the future of research. It can process and analyze data incredibly fast, potentially transforming industries like science, finance, and engineering. But as we move toward a future with AI handling more of our research, we need to think about the role of human oversight. Will we trust AI’s findings completely, or will we still need human experts to double-check everything? How will this shift affect the way we innovate and the ethical standards we uphold in research?
OneNaive56:
That means everyone can be a PhD holder. The research dissertation just got a whole new meaning.
teachersecret:
I spent the morning messing with it.
It’s bad. Worse than a regular request to o1 pro. I asked it a variety of different questions.
In a twenty-minute deep dive on DeepSeek, it hallucinated and provided an amateurish response with plenty of errors. The thinking process was full of concern about OpenAI policy.
When I asked it for some writing guidance, it decided to spend fifteen minutes reading a lady’s blog, then spat out an uninspired outline that didn’t answer my original prompt in any way.
When I gave it some categorization work, I realized it can’t ingest large lists and loses track of what it’s doing.
As it is right now, it’s a deeply disappointing product.
As far as I can tell, it literally just surfs the web a bit randomly, crams in a bunch of low-quality context, then gives a regular o1-style response twenty minutes later using what it found – which is awful web content. No magic.
I’d love to see an example of this thing doing something even remotely useful, because every test I’ve given it was twenty wasted minutes to produce nonsense.