Great video. I’m glad you took the time to figure out how ChatGPT might have come up with that false reference. I’d be curious to know if ChatGPT would self-correct if you asked it to produce a list of papers by Wald in that journal from 1947 within that same chat thread.
Thanks for the positive feedback! And a great follow-up idea too. Things have moved on fast since I published this just a few months ago, and it would be really interesting to see how the newer AI models deal with this issue.
I had a similar experience with ChatGPT while looking for references and books. Sometimes it gave me real papers or books and other times the sources were completely fabricated.
It's going to be an unpleasant surprise for any students who submit AI-generated work for grading without fact-checking it first! Hopefully the professors will be vigilant!
Wow! That is fascinating. ChatGPT is doing the "Turing Test"! Or, as Alan Turing called it, the imitation game. Excellent video.
Thanks for the positive comment!