That would be my 13th reason 😩 legal stuff is already so stressful, the costs are ridiculous, so finding out my attorney went and caught a case would be brutal 🤣
@@jackryan444 If you are a defendant (and lose), you may get a mistrial out of your lawyers being... Incompetent. If you are a plaintiff, you are probably SOL.
I mean, would you trust yet another lawyer to handle yet another case after these guys did this? Although, if they defend themselves, it may be an easy case.
@@Tomas81623 I would, but only because I'd know the idiots I hired the first time have just made sure no one else is stupid enough to try what they did, especially not with the same client.
Man, my blood ran cold when I heard that the judge himself had contacted the circuit from which the fake decision had purportedly come. I was a clerk at the Federal Circuit from '15 to '17, and I remember once when Chief Judge Prost discovered a case that had been cited in support of a contention it did not actually support, she really let the citing attorney have it in oral arguments. That was the scariest scene I ever saw as a new lawyer, and this case is worse than I could have imagined, so I cannot even begin to conceive how bad it was for these plaintiff's attorneys. Side note: Chief Judge Prost was a fantastic and fair judge, and a very nice and kind person, but the righteous wrath of a judge catching an attorney trying to hoodwink her/him is about the most frightening thing there is for a lawyer.
By the way, I didn't expect that a judge would call "civilese" words gibberish, when it's usually civilians using "legalese" to describe lawyers' mumbo-jumbo. He truly was volcanic, as LE said.
I'm a law student, and I got tired of searching for cases to reference that matched very specific criteria; three years of looking through Jade and CaseLaw is like trying to find the Holy Grail. I tried using ChatGPT to find the cases to give myself a break. The absolute confidence it had when giving me a list of non-existent cases is something I aspire to have. I have never gone from happiness to hopelessness as quickly as I did when I looked to see if they were real.
And now you understand why lawyers are well paid. The bulk of work in law is boilerplate templates, but people pay a LOT of money to have those templates be correct. And lawyers are also one of the few professions punishable by license loss when they fail to keep that promise (medical doctors and professional engineers being some of the other ones.) I wish you luck in school!
Honestly, even if ChatGPT didn't exist, it really seems like these lawyers would've still done something stupid and incompetent that would've gotten them sanctioned
Schwartz explained he used ChatGPT because he thought it was a search engine, and made several references to Google. If only it had been a real search engine like the ones he apparently usually uses, he could be certain it would only tell the truth ;)
Tbh, if the claim that the lawyers have been working together since 1996 is true, they've been handling this kind of work for a good while; this may have been a slip-up by the elderly.
@@sownheard He says _he_ did try to check, but couldn't find it; he assumed it was just something Google couldn't find, and that ChatGPT must have given him a summary.
The thing is, I’ve had a coworker do something similar. They asked for a report on data we don’t have access to, I tried to explain it wasn’t possible, they then turned around and asked ChatGPT to write the report and sent that to me with instructions to “just clean it up a bit” - I say we can’t use it. They say we can. I then spend hours digging into everything it said and looking for every instance that’s contradictory or references data we do have access to so I can compare. Send a full report on the report. Finally get shock & horror “I didn’t know it could lie!” and we can finally start the actual project, redefined within the bounds of what we can access. 🤦🏼♀️
I don't think "lying" is the right word. That implies that it's self-aware enough to know that it's saying something that isn't true. But it's not aware of anything. It's just a glorified Markov chain, generating text according to a probability distribution.
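The "glorified Markov chain" description above can be made concrete with a toy sketch: a bigram model that only knows which words tend to follow which, with no notion of truth. (The corpus, function names, and parameters here are invented for illustration.)

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words that followed it in the corpus."""
    words = text.split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Sample a chain of words; each step depends only on the previous word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: no word ever followed this one in the corpus
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the court held that the motion was denied and the court held the case"
model = build_bigram_model(corpus)
print(generate(model, "the"))  # locally plausible word salad, not a fact
```

Real LLMs condition on much longer contexts with learned weights rather than raw word counts, but the core loop is the same: pick a plausible next token, never check it against the world.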
@@gcewing Yes, but try explaining that to non-tech people who still don’t understand why they can’t name a file “Bob’s eggs” and have it return when you do a text search on “Robert” or “breakfast” (your search program is broken! That’s your problem not mine!) and think that every single number in Google ad predictive recommendations is guaranteed truth. 🤦🏼♀️🤷🏼♀️
@@bookcat123 this is so weirdly specific, i'm not even in tech but i understand how search functions work cuz i have done some stuff with scientific database searching…has this actually happened to you?
I’ve never worked directly with a judge, but I’m going to guess that making a judge research several cases that you refuse to research yourself (not to mention the AI crap) is going to make them very very angry.
Making a judge do work you should have done is like doing that to anyone else, except a judge has many ways to get back at you. And yeah, it does make them mad.
As a computer engineer with a deep love of law, it drives me crazy that they even tried to do this. ChatGPT does not give you facts, it gives you fact-shaped sentences. ChatGPT does not fact-check; it only checks that the generated text makes grammatical sense.
It's a little more than grammatical, but you're essentially right. ChatGPT makes a realistic-looking document. If that document requires citations, footnotes, or a bibliography, the AI makes realistic-looking ones. It does not understand that citations actually refer to something that actually exists in the world, it just understands from millions of samples what citations look like, and it is able to make ones like them.
*shrug* The ChatGPT website literally warns you before you sign up that it is not always factual and sometimes makes things up. If you don't want to take that warning seriously, knock yourself out.
That's seriously the best bit, "are you sure this is all true?" "of course! check anywhere!" And then they DIDN'T CHECK. Because how could anything on the internet be false?
Honestly, this one is particularly bizarre. If they'd had unquestioning faith in the AI and didn't think they needed to validate it, well, that's bad, but I can understand the train of thought. Now imagine one of them had called an expert witness who sounded good, and they decided his testimony didn't need to be validated. But then the so-called expert seems a bit shady, or his documents don't seem to be in order. If you decided to validate that expert, would you ask _the expert himself_ about his own work?
This could be said about literally every human; it is an extremely bad argument against AI. The person making a claim can't be the one validating it. That's exactly why there is such a thing as peer review in academia.
Honestly I don't know the answer to that question. My gut feeling would be to say "no, it's A LOT OF books", but IANAL and maybe technically/legally the entire compendium is regarded as a single "book" even though it apparently has enough pages to justify being bound into at least 925 volumes.
Everybody's talking about ChatGPT, but this tiny little nugget was the most fascinating part of the whole thing. Also, the car alarm sirens going on in the background for several more seconds while he's talking, after he yeets the book, made me laugh.
"These books behind me don't just make the office look good, they're filled with useful legal tidbits just like that!" -- Lionel Hutz, attorney at law
I'm a medical student, and one day the residents and I used ChatGPT for fun. I cannot even articulate how bad it is at medicine. So many random diagnoses and so much blatantly wrong information. I'm not surprised the same is true for law.
Having ChatGPT write the argument with the fake citations was incompetence. Having ChatGPT generate the cases and submitting them as if they were real was malice. I say they should both be heavily sanctioned, if not outright disbarred.
It doesn't matter *how* the papers were generated. What matters is that the information was verifiably false, and they signed the papers and submitted them to the court.
Maybe malice was the point, and their whole goal was to martyr themselves to set the precedent on how using AI to prepare a legal argument will be treated. Honestly, one could probably do a halfway decent job of using GPT 4 to speed up legal research, and potentially even have it fact check itself, but it would involve heavy utilization of API calls, the creation of a custom trained model that's basically been put through the LLM equivalent to law school, application of your own vector databases to keep track of everything, and of course, a competent approach to prompting backed by the current and best research papers in the field... not just asking it via the web interface "is this real?" In short, their approach to using ChatGPT in this case is to prompt engineering what a kindergartener playing house is to home economics. All they really proved here was that they're bad lawyers and even worse computer scientists, but now that this is the first thing that comes to mind when "AI" and "lawyer" are used in the same sentence, what good lawyer would be caught dead hiring an actual computer scientist to do real LLM-augmented paralegal work? What judge would even be willing to hear arguments made in "consultation" with a language model? I realize this thought doesn't get past Hanlon's Razor, of course. It's far more likely that a bad lawyer who doesn't understand much of anything about neural networks just legitimately, vastly overestimated ChatGPT's capabilities, compared to a good lawyer deciding to voluntarily scuttle their own career in order to protect the jobs of every other law professional in the country for a few more years... but it's an entertaining notion.
@@dracos24 It does matter. It's wrong to submit information provided by a third party (to LoDuca by Schwartz, and to Schwartz by ChatGPT) without having verified it. It's much worse to fabricate that information yourself when you're being ordered by the judge to explain yourself. At first it was severe negligence, but then they were outright lying.
FYI: when a judge asks you to produce cases (that their law clerk could have found) it means THEY DON’T EXIST. That was the FIRST clue that this was not going to end well.
Absolutely insane. Not a lawyer, but from Devin's explanation of the citations, it seems like finding a case is almost instant, so it's obviously a gotcha when you're asked to find the cases that you yourself cited.
I have encountered the very occasional situation where something is mis-cited and so a trek to the library is required to check the paper volumes or reference sources, but most case law can readily be found online.
I remember Devin saying on this channel multiple times that in court you don't ask a question unless you already know the answer. That lawyer's case was dead on arrival.
Westlaw and Lexis are basically search engines for legal cases. You can search for relevant cases by keywords or name of the case, but if you have the citation, it should pretty much instantly find it for you. It even keeps you updated on if parts of the case are outdated due to new case law.
The *best* case scenario is that you made a typo or something so that it wasn't able to be found - which just sounds very careless and unprofessional. And when the *best* case is that you are an unprofessional nincompoop who doesn't proofread their important legal documents... yeah you're pretty SOL
@@unclesam8862 There are two groups of people that use "bogus": serious business people and carefree surfers lol. I imagine neither group is happy to have something in common with the other.
The fact that ChatGPT has warnings about it not being a source of legal advice is the most damning evidence that these lawyers did not read through what they presented to the court. Perhaps if they had been more observant, they would have followed ChatGPT's advice to "consult a qualified attorney".
I use ChatGPT as a tool to narrow stuff down, basically to find out what I should google, but I know to ALWAYS CHECK EVERYTHING. And if my question ever gets too specific, it always states: 'I'm an AI model, I'm not qualified to advise on this, ask a professional.' Seriously, I can't believe they thought they'd get away with this...
My immediate first thought is a pretty common set of phrases that internet comments use: "IANAL", "You'd have to check with a lawyer", "Get a lawyer to check this", "This is not legal advice.". You know, the type of language ChatGPT probably was trained on, and probably had in its results somewhere.
@@ZT1ST Possible, but I think this response might have been implemented intentionally, for the same reason all those phrases are common in the first place. Kind of like how there are certain topics GPT will avoid (unless asked very nicely).
Their warning about not being able to produce reliable code has never stopped my students from trying to use it... and then failing the course. The human ability to selectively filter text is just...
As a Machine Learning Engineer, seeing Devin explain Chatbots better than 99% of the people in the world who think it's magic or something made me tear up
It's because he's smart and he and his team do their research. That's why he's in The Bigs. P.S. Congrats on being a Machine Learning Engineer, that's amazing! Please help keep us safe from them? Or at least keep it obvious when someone is being an idiot when they use it. Thanks, Your Friendly Content Writer and IT Specialist -
He understands it better than these two lawyers did. As a hobbyist programmer, I knew where this was going from the very start. I use ChatGPT to help me learn and write code: I ask it how to perform a specific action in Python and it tells me the answer, but I always double-check just to make sure it's not bullshitting me. I simply do not trust it, since I know it's just predicting text. Coding is one area where it's very good, but I'm still completely suspicious of it, since I'm very aware of the chatbot's habit of making things up.
I’m a civil engineer, and “if your name is on it, you’re responsible for it” is an extremely important principle. A lot of our documents need to be signed and stamped by a Professional Engineer, and the majority of us (especially the younger ones) don’t have this, yet we do most of the work anyway. Ultimately, if a non-PE does the work, a PE stamps it, and something goes awry, then it’s on the PE. You’d be surprised at how little time the PEs spend reviewing work that they’re responsible for.
There's a reason I never got my PE. I didn't want to be the professional fall guy. A PE is never going to realistically be given the time needed to actually verify all that work to a good standard - he's just put there by the firm to slap his name on it.
Hello fellow civil engineer(s). I was IMMEDIATELY drawing parallels to PE stamps when he brought up local counsel, and yeah... Barely checking before stamping is wild to me, given how much responsibility then falls on your shoulders.
Hell, I work at a clothing store and we don't use our sales password to let our coworkers check people out unless we're positive they did a good job because we don't want to take the flack if they didn't. Imagine having fewer standards than people working sales.
A recent survey of ChatGPT's performance on math was published, and it really illustrates why you shouldn't rely on these things to answer questions for you. It went from answering the test question correctly more than 98% of the time to barely 2% in a matter of months. Not only that, in some cases it has started to refuse to show its work (i.e., why it is giving you the answer it is giving).
I've noticed this; it's like they dumbed it down on purpose to stop people from doing this. What happened to ChatGPT being capable of passing medical and law exams?
@@miickydeath12 it doesn't seem like it was intentional. The engineers seemed pretty baffled by that survey. If I had to guess it has more to do with people intentionally inputting incorrect information to mess with the AI
@@Willow_Sky Probably similar to what happened to Tay when she released.. wow 8 years ago now. I remember Internet Historian doing a great video on it. Going to have to go watch it again.
@@Willow_Sky AI is very dependent on its training material: worse quality of training data means worse quality of results. GPT-4 has a much bigger quantity of training data compared to GPT-3.5, but its quality is questionable. Also, in cases where GPT-3.5 would have returned "no data found", GPT-4 generates random gibberish.
It's not just that CGPT *can* make stuff up, it's that that's *all* it's designed to do. It's a predictive text algorithm. It looks at its data set and feeds you the highest match for what you're asking, and literally nothing else. It looks at the sort of data that goes in a particular slot, fills that slot with data, and presents it to you. It can't lie to you because it also can't tell you the truth, it just puts words together in an algorithmic order.
ChatGPT is trained to generate text which humans see as looking real. That's it. There's no implementation of truthfulness in its training, at least not originally.
it's truly mind boggling how many people don't understand the basics of how these models work. "It'S LyInG!!" no mate, the predictive language model doesn't have an intention, it's just stringing words together based on an algorithm...
@@ApexJanitor It can't lie, because it can't think or have intent. Nobody fully understands how these models produce their results, but they do understand the kinds of things that are happening and what its limitations are.
@@ApexJanitor There's a difference between not fully understanding something and having no idea what's going on. I don't think this model is close enough to sentient to be able to "lie" in the moral sense or "want" anything (though it certainly does a good job passing the Turing test, so I can understand the confusion). Its utility function is essentially a fill-in-the-blank algorithm, so of course if you ask it subjective questions, as the idiot lawyer did, it's going to seem to lie. Also, what's with the tone of your message? It seems kinda hostile, and the "Hahaha"s make me feel like The Joker has had a hand in writing this. Why not LOL?
@@ApexJanitor I see what you're driving at, but the fact that a neural network of this scale is not comprehensible does not mean that we don't know what it is doing. It's predicting words, nothing more and nothing less. It's not some new and unfathomable way of thinking and responding to the world, it's just mimicking human language (and not very well, at that). You wrote "... it lies if it wants" but that assumes some sort of mind that "wants". ChatGPT and its ilk don't have minds.
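The "predicting words, nothing more" point in the thread above can be illustrated with a minimal sketch of a single next-token step (the candidate words and scores are made up for illustration): scores go through a softmax to become probabilities, and a token is sampled. Truth never enters the computation.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidates for continuing "The motion was ..."
candidates = ["denied", "granted", "affirmed", "banana"]
logits = [2.0, 1.5, 1.0, -3.0]  # invented scores; higher = more plausible
probs = softmax(logits)

# Sample the next word in proportion to plausibility, not correctness.
rng = random.Random(42)
next_word = rng.choices(candidates, weights=probs)[0]
print(next_word, [round(p, 3) for p in probs])
```

Nothing in this step asks whether "denied" is what the court actually said; it only asks which word is statistically likely to come next.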
The realization that Devin is actually sitting in a library in all his recordings and isn't just using a green screen was by far the biggest plot twist in this video. Edit: Why are people arguing about whether or not it was real? Why would he go through all that effort of getting a book that looked identical to one in his green screen, if a green screen was what he was using?
I have to assume they do research when they aren't in the middle of a consultation. They mostly wouldn't use a physical book anyway since electronic databases can find things instantly and are always up to date with the latest info.
@@parry3439 Before online databases became as thorough as they are (probably only in the last 10 years or so), people did have to have physical books, especially if they were gonna use them often. I think Devin has been practicing long enough that he probably had physical copies before online databases. Notice how he stated the book in hand was a 2nd edition, which, looking it up, covers 1925 to 1993, long before things got scanned and put into binary. Devin himself got his JD in 2008 from UCLA (wiki'd LegalEagle). Meaning, yeah, he prolly keeps them as a memento of his early career and/or his university days. Lawyers needed LOTs of books, mostly cases and laws in their area of practice.
@@MekamiEye There is a huge gap between 1993 and 2008 in computers and data storage. For example, 1993 is the game Doom on PC with floppy discs, and 2008 is Metal Gear Solid 4 on PS3. By 2003, most big journals were moving to the internet, and there were probably purchasable offline databases too. That's probably why those books look so pristine! I thought it was a Zoom background or something.
Public service announcement from your friendly librarian: DO NOT ASK FOR CITATIONS FROM CHATGPT. The citations are likely imaginary and you will only waste yours and the librarian's time. And you WILL be made fun of among the staff. (Worse than this happening in legal settings is this happens in medical settings 😑)
Honestly, ChatGPT has given me some good references (mostly what one would call "classic" papers, the ones that are old and cited a lot in other work), but obviously, google every single one before you use it anywhere. In my experience, it's about a 50% chance whether a citation is real, and then another good 50% whether its summary is actually an accurate account of what's in the paper.
Got to love how everyone is like "ChatGPT is going to take over everything", and then every time you apply it to something real like this, it consistently comes up short.
@@warlockd Not even that; ChatGPT has been known to lie! It tries to complete sentences in a satisfying way, and then about half the time it just says stuff that sounds right.
@@AYVYN It's not even a student. It's like taking all the books from your college library and putting them in a blender, and then getting a random person off the street to rearrange the pieces.
This story just supports my opinion that the biggest problem with ChatGPT is that people trust it despite having no real basis for that trust. It's exposing the degree to which people rely on how authoritative something sounds when deciding whether to trust it, rather than bothering to do any kind of cross-referencing or comparison.
There are prompt-engineering techniques that get ChatGPT to cross-reference itself, which might improve it a bit, but you still have to find the sources in the end and do your own research.
@@aoeu256 I was literally thinking about this today, because I have no imagination for Bing's AI search and I thought "I can't look up facts, since I'm better off doing that the normal way, so what do I use this for?" Not to impose, but if you have any ideas I'm all for them lmao. AI advancements are wasted on me until it's an AGI.
@@sjs9698 we're both finding out just how bad I am at this lmfao. No, I did not think of that I've been fixated on the fact that it can't provide unbiased fact or act like a person, that it's "just a language model that can kinda trick you"
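The "cross-referencing itself" idea raised in this thread can be sketched as a two-pass loop: draft an answer, then ask the model to critique its own claims. `ask_model` below is a hypothetical stand-in for a real chat API, stubbed with canned replies so the sketch runs; the case cited is fictional, and the names are invented for illustration.

```python
def ask_model(prompt: str) -> str:
    """Stub for a chat-model call; returns canned text for demonstration."""
    if "verify" in prompt.lower():
        return "I cannot confirm this case exists; please check a legal database."
    return "See Smith v. Jones, 123 F.3d 456 (2d Cir. 1997)."

def answer_with_self_check(question: str):
    draft = ask_model(question)
    # Second pass: ask the model to audit its own first answer.
    critique = ask_model(f"Verify each claim in the following answer: {draft}")
    # Even with a self-check pass, the model is grading its own homework;
    # the final word has to come from a real, external source.
    return draft, critique

draft, critique = answer_with_self_check("Cite a case on tolling in bankruptcy.")
print(critique)
```

The second pass sometimes surfaces hedges the first answer omitted, but as the parent comment says, it only reduces the problem; it doesn't replace checking the sources yourself.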
I am a PhD student currently working on building models like ChatGPT, and this is hilarious! Really enjoy all your videos!!! But this completely makes sense: these pre-trained models are typically trained on web text so that they can learn how English (or any other human language) functions, and how to converse in human languages. But these models are not trained on specialized data for any given field, so they won't do well when used for those purposes.
The most galling thing is LoDuca's refusal to take any responsibility. He blames everyone and anyone else. A competent paralegal would be an asset to this team.
They got off with just a $5,000 fine... and the firm is still deciding whether to appeal. It's crazy that they knowingly fabricated cases only to get away with a slap on the wrist.
Meh, you'd be surprised about the career-torpedoing part. Lots of lawyers have been sanctioned and carried on fine. Most of those things take some deeper research that clients often never do. But the judge saying he would've just moved past it had they come clean is common: the cover-up is almost always worse than the crime.
It's weird, because just last year ChatGPT achieved much higher scores on bar exams. It seems like ChatGPT has been dumbed down over time to prevent people from using it to cheat; you can see this when you just ask the model some math questions. I could've sworn it was way better at solving math last year.
As a legal assistant, watching this feels EXACTLY like watching a horror movie. No, I did NOT guess the cited cases didn't exist because that means nobody in this law firm checked the chat bot's writing for accuracy! You have to do that even when humans write it! They did NO shepardizing, no double-checking AT ALL?! How? Just... how?! And, oh Mylanta, that response to the show cause order... Dude, that... doesn't comply with the order. At all. What kind of lawyers were these guys?!
TIL a new word - Shepardizing: "The verb Shepardizing (sometimes written lower-case) refers to the process of consulting Shepard's Citations [a citator used in United States legal research that provides a list of all the authorities citing a particular case, statute, or other legal authority] to see if a case has been overturned, reaffirmed, questioned, or cited by later cases."
The fact they didn't double-check anything tells me these guys haven't done any work themselves in ages. They've grown so used to passing the work off and having others do it, and they've gone without double-checking that work for so long, that they didn't even bother to check the "new hire" (it doesn't matter whether it's an AI or a human). Not bothering to verify reveals a pattern.
What's clear to me is that this judge did his research. He very clearly understands that they didn't just ask ChatGPT to explain the relevant law, but instead asked ChatGPT to prove their losing argument. ChatGPT only knows what words sound good together; it does not know why they sound good together.
That's the salient bit here -- the judge was able, not just to call their bluff, but to call two or three nested levels of bluff, by recognizing the kind of bullshittery that ChatGPT engages in, and HOW that crept into the process at each step along the way.
Right? That caught my ear too, the judge knew how this would've happened and was savvy enough to get the line of logic that would have produced these results. They were screwed.
That's a bit of a simplification. A simplification we can make about most people when they speak or write too. If you use Bing you can do very fast legal work and it will give you the references. If the data is not available online, you can use GPT4's API and load your data. I trust GPT's level of reasoning more than I trust the average Joe.
@@ildarion3367 Average Joe doesn't know anything about anything, be it law, tech, economics, logistics or nuclear power plant design. That's kinda the point of how modern society works: no single person can learn everything there is to know about every topic. That's why we have specialization. You choose a field and over time become proficient with it, while completely disregarding other fields and relying on other people for their specialized knowledge through cooperation. While your claim is probably correct, it's not meaningful. Sure, chatGPT can form a more coherent response to a legal question than me, someone who never had any interactions with legal system in their life, but it still doesn't change the fact that neither of us are specialists in this field. And therefore both of our opinions are equally useless when compared to a real specialist.
Seeing this a second time, it's even worse! I was just telling a coworker about this last night and he was blown away that a lawyer did this. The judge was straight up savage.
I'm not a lawyer, but I used to work with the local government with some quasi-judicial hearings where some appellants would retain lawyers to argue for them. One of the funniest cases I had dealing with lawyers, the lawyer quoted a particular case in a written brief which was old enough that it wasn't in the legal databases and he didn't have the full case to provide for review. I walked down to my local library, grabbed the book with the decision, and actually read the decision. The lawyer was then surprised when I forwarded the scanned copy of the case on to him, and I had to point out that it would appear the quote was out of context, and that the decision actually supported the Crown's position. The appeal was then abandoned shortly thereafter.
@@jeanmoke1 The original decision was probably cited in a later decision or a secondary source. That is a legitimate way to do legal research, but, as noted, it is necessary to actually _read_ a decision before citing it. I did legal research for government lawyers for more than a decade. I would summarize the salient case law and provide excerpts as applicable, but I always attached the full text of the decisions as well. I know that some (but not all) of the lawyers carefully reviewed my work.
I’m not a lawyer, but I think a judge’s order that repeats the word “bogus” three times in one sentence in response to your legal filing is probably not good.
One thing I love about legal drama like this is how passive-aggressive everything needs to be as it must be kept professional. A judge isn't gonna erupt on someone but if they make a motion to politely ask what you were thinking, you know you're in one heck of a mess.
You should see British parliamentary debates. There are strict rules of conduct which dictate how to address people and forbid, among other things, accusing another MP of lying. Even if they are blatantly speaking utter falsehoods, it's forbidden to accuse them of it, because MPs, being the highest and most honourable of society, are surely above such things, and it would be an insult to the institution to so much as suggest the possibility of deception. This has led to a lot of passive-aggressive implications. An MP can't accuse another of intentional lying, so they will instead suggest "the right honourable gentleman appears to be mistaken", giving the most respectful and formal of words while making it clear in their tone that the intended meaning is closer to "liar, liar, pants on fire".
As a paralegal, this whole case got under my skin in the worst way. From the unverified citations, to the fact that he didn't know what the Federal Register is, to lying to the judge. If I did even one of the things they did on this case, I would throw myself at the mercy of my boss, because there's no way in hell I would even let him sign something that wasn't perfect, I sure as shit wouldn't file it.
I just cannot imagine the embarrassment. I mean how do you even survive the level of embarrassment from using Chat GPT to write your documents and it getting everything wrong lol
@@grmpfit can grammatically work in both scenarios depending on the context “That’s not how a dog- let alone a person- would react” I’m actually not fully convinced I’m correct here, but it seems it can be used to contrast subjects as I see it currently. Feel free to set me straight or if I’m right agree 🫡
Props to the judge for keeping calm while asking these clearly mental lawyers for confirmation, and not just bonking them on the head with the case book they didn't know about.
I was unsure why judges are treated with some kind of reverence in lawyer circles until I've seen/heard some of their interactions and opinions. They sure are very composed, tactful and professional, yet absolutely brutal when it comes to scathing remarks.
@@Lodinn It feels like the judge was more dumbfounded than anything. I mean, the responses were so idiotic it makes you wonder how he even passed the bar.
@@warlockd Not sure I agree - by the time they've produced these made-up cases using ChatGPT, the damage was already done. Coming clean was probably the least dumb decision overall in that situation. ...granted, the F.3d moment sounds like a really, really bad knowledge gap, but IANAL. The rest didn't particularly stand out to me, they were pretty screwed by then already anyway.
I wouldn't be surprised if it's already up on Devin's Nebula. He does say a lot that his videos go up there first and there's a delay before they go to YouTube
Imagine you had to film this, and you're barely done reviewing the edits when the Trump thing comes out… Wouldn't you just have a spa day before swimming in the… what's the German word again?
Update: Judge Castel dismissed the case due to the statute of limitations issue and fined LoDuca, Schwartz, and their law firm $5000 each. They’re very lucky to have gotten off that lightly.
Wtf. I'd expect disbarment plus a large fine. Plus Mr. Mata suing them for mishandling his case. Plus an investigation into the law firm and how their processes are written and adhered to. I would expect a reasonable law firm to have standards of conduct that specify which tools to use for case search, or whatever
They definitely should have gotten slapped much harder, but on the plus side, they can't hide from this, and will never be taken seriously as lawyers ever again.
I think Judge Castel saw that they had already gotten their careers destroyed, and figured that was punishment enough. If an attorney from a wee country in South East Asia has already heard about the mayhem of their blunders, oh boy... they and their firm are toast.
Not quite: he wrote an angry letter to the bar (he can't actually disbar them himself; the bar association is the one that does that), so while that's all they're being punished with by the court system, the bar association might suspend them for a few years.
While there were several miscalculations, I think the worst is the different font. I'm no stranger to the copy-paste method when turning in assignments, but when filing before a federal judge, how could you forget ctrl+shift+V?
as an engineer, “if your name is on it, you’re responsible for it” is a HUGE concept. there’s a lot of red tape in working for companies who deal with government contracts, and a lot of specific record-keeping programs you have to use. it’s important for process cycle tracking, but if you’re actually on the development/build side, it can seem pretty tedious. typically you need to be trained on this software, so it isn’t uncommon for only one or two people on your team to actually have the authorization to use it. instead of training everyone else, typically that person’s name is just put as the RE (responsible engineer) and then they’re the one who has to sign off on it. for my current program, that ends up being me a lot of the time. in most cases, it isn’t a problem to just go in and sign off on something, seeing as there’s an entire team of people who need to approve before it gets to you. but there’s always the chance that everyone in the upline may also have the same perspective, and my failure to thoroughly review a document before signing off could make or break a multimillion dollar defense contract. and even if it wasn’t my design so any failures weren’t technically my fault, guess what? if my name is on it, I’m the one who has to deal with the fallout. the abundance of approvals and review stages may seem overbearing and unnecessary at times, but that’s how we avoid catastrophic engineering disasters like we’ve seen so many times before. those checks and balances are there for a reason, and if your name is on it, you BETTER have taken the time to complete your check!!
Computer engineer here, it is very smart of you to assume that a screw-up could still slip through the cracks, because it absolutely can. I know because I was once responsible for one. Back when I had just moved up to lead developer, a piece of software my team developed and tested hard-crashed while we were demoing it to management. As it turns out, one of the new guys submitted the component of the software he worked on without verifying that it worked. Since I was new to leading a dev team, I unfortunately just assumed that he had verified it, so we went ahead and put it together with the rest of the software, and it passed our tests. That component dealt with installing the software, so when we tried to demo it to management on a computer that used a different OS, it wasn't properly installed. I got in A LOT of trouble for this (I got yelled at by everyone in management) because they had planned official deadlines after I mentioned in an official document that the software was ready to demonstrate to management when it clearly wasn't, which meant they had to further delay a multimillion-dollar asset. This gave me the worst job-related scare of my life because they said that they had grounds not just to demote me, but to "let me go" (their words) because of the amount of money involved. I assume their superiors expressed to them how "unhappy" they were about the delay. Thankfully, I only got a warning because the problem was fixed quickly, but since then I've been too paranoid not to make sure that every word I write in official documents is 100% confirmed as true beyond a reasonable doubt. So it blows my mind how these lawyers did every single little thing you could do to do the complete opposite.
I think legally it's (usually) the fault of the company rather than the individual. Or at least based on the cases I've heard. The reasoning being that the company processes should've caught it in the first place, and so they're equally liable.
@@supersonic7605 I am assuming, if only because the one lawyer asked if it was lying, that these lawyers didn't understand what a GPT model is. I think they assumed it was an ACTUAL Artificial Intelligence, aka an artificial mind, one that could actually think on its own and not need input to generate answers. I think, given that none of these lawyers did any actual lawyering, they thought that the GPT could do all of their research because it would collect data from various sources, read it, understand it, and synthesize a legal document for them. The law firm itself, at the very least, should have terminated these guys, just for the sheer embarrassment. This has certainly cost that law firm millions in revenue. They should also be disbarred for failing to actually act as lawyers. I wonder if the judge actually imposed a sanction on the lawyers as well. Hopefully they have to pay all the legal fees out of pocket for everyone involved and not take any pay, and perhaps get disbarred or something.
The best way I have heard ChatGPT described is "ChatGPT knows what a correct answer looks like." At a surface level, it looks like a legitimate answer until you dive into the details in this case.
I would love to have been a fly on the wall in Avianca's lawyers' office when they were first searching for the bogus cases and coming up empty-handed. Did they immediately recognize that it was all bunk, or did they second-guess themselves? How long until they floated the idea that opposing counsel simply made it all up? Did they hesitate to file a response calling the bluff? I want an interview with those folks!
I honestly wouldn't be surprised if it was actually the judge who realized this first, because the judge would also need to have read those cases to make sure he fully understood the argument being made. None of the clerks or whoever were able to find any case mentioned by these attorneys, and then the judge is probably like: hmm, one clerk struggling to find a particular case is abnormal, but five clerks struggling to find any case is very unlikely; I wonder if these are even real. And then from there, just going and destroying the careers of these attorneys.
I listened to the podcast this video mentioned, and they were joking about feeling bad for whatever first-year doing the grunt work had to tell a senior partner they couldn't find six cases. That fly on the wall would've been getting an earful.
@@SuperSimputer I want to know what that extra week of "being on vacation" would have bought them. It makes me wonder how often they used that excuse in other court cases.
From the discussion on this by Leonard French (another YouTube legal educator), any lawyer reading the citations would very quickly realize they're bogus before even searching them out. Several of the citations don't even match the format used in legal cases, and an experienced lawyer should know this at a glance. The judge would not have needed to be the first one to spot this, and chances are the defense lawyers only searched out the citations to give themselves a better chance of the lawsuit being thrown out and themselves awarded fees and costs. It's hard to imagine them having to do any research into the cited cases before realizing something's screwy.
Meanwhile I happen to know that if this serving cart were to be pushed with such a force that it quote "incapacitated him"...the damn cart would have broken before any actual harm was done
I love the fact that even some lawyers can't be fussed with reading the Terms of Service for websites. They should have realised this could happen when the TOS itself states, under Section 3 (Content): "use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts."
It is for this reason, and others, that I am reluctant to take any TOS, EULA, or other routine contract seriously unless I am either given a summary of the terms, somewhere, or a reasonable ability to contact the lawyers that drew it up (so I can get clarification). I still tend to read as much as I can of them, particularly if it's a completely new relation, but I'm only one non-lawyer human, and I don't have a team of lawyers to translate for me. Expecting more than my best effort to understand is a little bit unreasonable.
@@6023barath "May occasionally generate incorrect information. May occasionally produce harmful instructions or biased content. Limited knowledge of world and events after 2021". Any one of these should have been enough for them to reconsider using it as a source, but all three? It wasn't correct, it was biased toward their biased questions, and it wasn't up to date.
Even my 5th grader used ChatGPT to help with a presentation, and she spent several hours fact-checking each statement before including it in her PowerPoint
I am a mechanical engineer, and I ran into this situation recently. I was trying to use ChatGPT to shorten my initial research into a topic, and it gave me the equations, everything. But since they were sloppy and missing pieces, I asked it to give me the sources for these equations so I could go to the original articles and collect the missing parts. Oh boy, was I in for a big surprise. It just kept apologizing and making up new article titles, authors, even DOIs. It was eye-opening, to say the least.
As a fellow ML engineer, I am surprised you are relying on the chatbot for anything related to research. It may help shorten and make pre-existing concepts more concise, but it is merely a tool for research, not the spearhead of said research
@@shahmirzahid9551 well, "relying" is a bit misleading of a term. It was a low-priority topic which I was only going to take on if it was feasible to do in a short timeline, and I decided to try out ChatGPT on an "if it works, it works" basis. It didn't work, and I haven't used it for this purpose since, whatsoever
@@Tyrim ah, I see. I did the same when doing some calculus theory study, but I just engineered a prompt for it to give a detailed explanation of things, and it works like a charm. I too had my doubts, and I still wouldn't blindly believe everything it said, as it could be outdated or completely wrong
I want to see a follow up to this story. For 14 years, I worked in the IT Department of a prominent law firm. How these attorneys are not disbarred already is beyond me. As with most professions, attorneys are very defensive of their professions, and get upset with people who disgrace them. Rightfully so. I feel the same way when I hear about a dishonest IT person. I have been hired by lawyers to investigate a situation with an unscrupulous network administrator, for example. I was happy to do the work, and delighted to see the person destroyed in civil court.
@@Roccondil Yeah, if I was a judge or legal body, this bombshell would make me want to shine a very bright light on their prior cases, and shine it into every single uncomfortable hole to see if this was not a one-off idiocy but in fact the mistake of palming off the lying to an AI rather than hand-crafting the lies themselves.
I went to check myself what 925 F.3d 1339 actually was; it's a page within a decision by the US Court of Appeals D.C. Circuit (the full case actually starts on page 1291) called J.D. v. Azar, one that had to do with the constitutionality of a Trump-era restriction preventing immigrant minors in government detention from obtaining abortion services. It was actually kinda interesting to skim through, if completely irrelevant to airline law.
It may be relevant when these minors are transported via chartered airlines. Human trafficking itself is a major issue that airlines look out for, so there seems to be relevance.
The fact it's not actually a real case, just a page in a case starting from an earlier page, helps explain why a cursory glance didn't raise the red flags you get when you actually read the page in front of you.
Has anybody offered an explanation of WHY ChatGPT gave the false reference and was so adamant that it was a real source? Could ChatGPT be pulling from a fake law source itself? Did the programmers do this on purpose? I use ChatGPT regularly for work, and while not perfect, it's about 80% accurate in the IT space. So why would it be so far off in the legal space? It has been successfully used in the academic space also, to the point that some teachers and professors can't tell a real paper from a ChatGPT paper.
As an accountant, this video caused me physical pain. This sounds like a literal nightmare anyone in a legal or finance profession could have. I am genuinely surprised neither of these men broke down sobbing on the stand.
*shudder* dealing with a client's lousy OCR system is bad enough. I cannot imagine the disaster that would ensue if someone let a generative AI near financial records or reports.
@@TheGreatSquark I imagine "the ai made a mistake" could be a nice excuse for fabricating numbers. At least would expect less trouble than "yeah we lied to mislead investors".
I feel like describing language AI models like chatGPT as having "hallucinations" where they "make stuff up sometimes" is far too generous to what they actually do. These chatbots don't know what's true and what's false, they don't actually _know_ anything. They're _always_ making stuff up - guessing what sequence of words is probable in response to any given input - and it's more accurate to say that they get things _right_ sometimes. Chatbots will confidently lie to you, but actually calling it a "lie" is a mistake, because lying requires knowing you're spreading a mistruth, which they simply don't. Because they don't "know" things the way we do. That predictive text output gets to be called "AI" is a huge framing mistake that only makes people misunderstand and anthropomorphise these things.
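The "always guessing probable words" point can be made concrete with a toy sketch. This is purely illustrative: the bigram table, words, and probabilities below are made up for the example, and real models like ChatGPT are huge neural networks rather than lookup tables, but the core idea of sampling whatever continuation is statistically likely is the same:

```python
import random

# Toy "language model": for each previous word, it only knows which next
# words were statistically common in its (made-up) training text.
# Truth never enters the picture -- it samples whatever sounds probable.
BIGRAMS = {
    "the":   {"court": 0.5, "plaintiff": 0.3, "moon": 0.2},
    "court": {"held": 0.6, "ruled": 0.4},
}

def next_word(prev: str, rng: random.Random):
    """Sample a likely next word, or None if `prev` was never seen in training."""
    dist = BIGRAMS.get(prev)
    if dist is None:
        return None
    words = list(dist)
    return rng.choices(words, weights=[dist[w] for w in words], k=1)[0]

rng = random.Random(42)
print(next_word("the", rng))  # plausible-sounding, but never fact-checked
```

Whether the sampled continuation happens to be true is incidental, which is exactly the "it gets things right sometimes" framing: the mechanism is identical whether the output is a real citation or a fake one.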
Good point. Spelling out what the GPT actually stands for gives a much clearer picture of what it is and isn’t. But hey, news articles have to get those clicks, and AI news is hot stuff…
"At its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving." IBMs definition for artificial intelligence. ChatGPT relies on a robust dataset to solve problems, using GPU's. I'd say its an AI. So I don't think calling it AI is a framing mistake. People just don't know the definition of AI and assume AI means human intelligence, produced by a computer. This, it very clearly isn't.
Exactly! We are all far too ready to cede our intelligence and lives to these mechanized marionettes, and most don't have the first clue what they do or how they work. These robots cannot and should not ever be trusted. They don't understand context, nuance, intent, or even the most basic concepts like "just" or "true". We should all agree not to entertain the notion that these things are anything more than mere tools, to be left to the scientists and model makers, not to language, poems, art, law, history, etc.
So technically it's the human user who is hallucinating. Honestly, I do think at least a small proportion of AI's abilities says nothing about computing and more about psychology. The computer isn't good at making an answer; we're good at interpreting the answer to apply to the situation.
This also reminds me of my work as a medical transcriptionist. When voice recognition programs came out, one of my doctors went to a convention where the software was introduced. He came back and gleefully told me it would put me out of work. (He wasn't malicious. He knew I was knowledgeable about computers and wanted my opinion.) I told him it would never happen because the software required editing by the user, especially in the beginning while the software adapted to the user's accent and use of language. I said doctors either would not or could not take the time for proper proofreading. And they still don't use it. New software sounds magical until you read the fine print.
I must admit that when the lawyer admitted, under oath, that he lied to the judge about going on vacation, I had to get up and walk around I was so stunned. Lying to a Federal judge? Sheesh! How did that lawyer ever pass the bar?
Also, humans can know the information contained in an ethics class and answer questions based around it. Without actually understanding or agreeing with the information.
Passing the bar shows you know how to write a really hard test. That's kind of a separate skillset from learning how to navigate court without angering a judge.
I have some minor sympathy for the lawyer claiming he thought ChatGPT was a search engine, given all the hubbub and publicity about Google and Microsoft introducing so-called "AI search engines" a while ago. But the fact that he simply did not check *any* of the information provided is absolutely mind boggling. He didn't even understand what the citations meant! It seems likely to me that he's been merrily citing cases without reading them for years, and this is just how he got caught. What a mess.
By the sounds of the description Devin gave, Mr Schwartz was not a federal lawyer, hence getting Mr LoDuca to file on his behalf. It is plausible (though given he's apparently practiced law for 30 years, something of a stretch to believe) that he simply wasn't aware of the federal nomenclature.
I have none for those lawyers. They should have checked to see if the cases were real if they couldn’t find what they were looking for in other places. I got a lot of sympathy for the guy who hired these morons though.
@@KindredBrujah Maybe his law practice never really extended much to courts, and he was perma-stuck in the ghostwriter position, signing papers for the firm and the like?..
I remember the actual Zicherman v. Korean Airlines case, it was 1996 not 2008 like ChatGPT cited. A Korean Airlines flight entered Soviet airspace in 1983 and was shot down killing all 269 on board. It's a poor case to cite even if they'd gotten the citation correct and would have only hurt their case.
@@andrewli8900 the shoot down was in 1983, the court case happened in 1996 against the airline, which would be why ChatGPT chose to reference it, it was more than 2 years after the event, but the 2 year limit doesn't apply to willful misconduct. That's why it was a terrible case to cite, because it didn't apply in the current case and would have only served to further support the airlines position.
I feel like the jury every time I watch your videos. I know absolutely nothing about law databases, but now I have a basic understanding. Takes me back to my only jury duty service.
After working for the Sacramento County Superior Court of California, it's crazy that attorneys would try to lie to a Judge. Judges are like gods of their court. NEVER mess with them. They're smart enough to figure it out. They started out as attorneys themselves. I got this from nine months of working as an IT specialist for the Court. Judges can be very nice people, but don't try to mess with them. They are not amused by legal shenanigans. I even overheard one Judge in chambers who was speaking with a woman suing due to being injured in a car crash. He actually went out of his way to tell her that "he didn't want to speak ill of her attorneys, but it seems to me that your settlement should be far higher based on the photographs of your injuries. This is not legal advice, so if I were you, I'd consider making sure your attorneys have these pictures and are taking them into consideration." Okay, I'm paraphrasing, but he was oh-so-slyly suggesting that this woman get better lawyers. He was also one of the smartest, no-nonsense Judges I'd ever met. And he didn't suffer fools gladly. But the fact that he went out of his way to help this woman was incredibly good of him. Considering how short he could be, for example, when his computer wasn't working the way he expected, I was surprised to find out how generous and gentle he was with helping plaintiffs out.
@Jack You’re absolutely right. I wouldn’t say it’s “usual” at all for judges to be attorneys first. On the other hand, he was a federal appointment. Upon wiki-ing him, he did practice privately in NYC for 26 years.
This is what I'm most concerned about with our judicial system given the political climate and the way judges were selected in the last administration. Judges are human and fallible, yes, but generally speaking the system has honed itself so that most judges are like vigilant guards watching over those symbolic scales. Sometimes it's out of personal interest that they are VERY not okay with someone/a group tipping those scales whether through bias, incompetence, ideology, etc and sometimes it's genuinely caring and taking their role in democracy seriously but whatever the motivation it plays a critical part in our lives. Hoping that at least now many more people recognize how important this branch of government is
@@cparks1000000 If it's unethical to tell someone they deserve more money for their injuries than what their hack lawyers are trying to get them, then I don't want an ethical judge who will let me get screwed over.
Oh, it's funnier than that! I'm in the education field, and there's talk of using this case as Exhibit A for doing your own research and actually reading/citing your sources properly, lest you possibly lose your job.
Lol… as a medical student, the amount of confidence ChatGPT has when explaining disease pathologies that are completely wrong is concerning. It does a good job of coming up with an answer that sounds right but isn't.
That's because, technically, it is. There's a reason that most actual AI researchers call it a Language Model and not an AI, because that's all it is. It knows the language of law books, or the language of medical opinions. It does not have the facts, let alone up-to-date ones.
Yes actually, blaming that is completely fine, because the simple fact that it is capable of lying to you with confidence means that its work is literally useless. In your own last example, the scope of what you're suggesting it should be used for doesn't even make sense. You would get the AI to do 5 minutes worth of work, just so that you can spend 50 minutes fact checking it. Do it right the first time instead lmao@@A-wy5zm
Someone on the Gardening subreddit recently used ChatGPT to try and answer someone's question about pet-friendly plants and was SO CONFIDENTLY WRONG the mods actually had to step in, because the advice from ChatGPT could've literally killed this guy's pets. I had to go on a rant about Language Model hallucinations and the demonstrably failing accuracy of the output from these systems. It's really validating when the mods leave your factually correct, even if angry and spitefully written, comments and delete the moron's 😅
In essence, chatGPT is your drunk uncle. It hears half of what you said, and spins off a long story based on something a friend told it 20 years ago with "facts" sprinkled in to support the argument it wants to make
It's become a bit of a meme in the crochet community to ask chat gpt to write a pattern (usually a plushie because they're small and quick) and laugh at the mess it produces. It only looks like a pattern if you've never seen a pattern before and think crochet is done with needles.
I’m an in house attorney at a midsized tech company. I have people regularly sending me documents to review that they “ran through chat gpt and think it’s fine” My boss always jokes that chat gpt is going to put us out of a job. He only does that because he doesn’t see the emails people send me
Superintendent Chalmers: “Six cases, none found on Google, at this time of year, in this part of the country, localized entirely within your court filings.” Principal Skinner: “Yes.” Superintendent Chalmers: “May I see them?” Principal Skinner: “…no.”
Well, Loduca, I'll be overseeing this case _despite_ the statute of limitations. Ah! Judge Castel, welcome! I hope you're prepared for an unforgettable docket! Egh. (Opens up Fastcase to find legal citations only to find the subscription has expired) Oh, egads! My case is ruined! ... But what if... I were to use ChatGPT and disguise it as my own filing? Hohohohoho, delightfully devilish, Loduca.
As a retired writing teacher, I took a great interest in this case because this is EXACTLY the sort of BS I had to put up with when it came to lazy students. And when after 17:00 the Schwartz affidavit admitted that the work was done in consultation with ChatGPT, I thought, Lord have mercy, did they really think ChatGPT could do their research for them?! Mind. Totally. Boggled.
Hahaha faking citations in high school papers, I had that shit down to a SCIENCE. Kids these days, they can just ask their fancy robot to lie for them. When I was their age, I had to walk uphill both ways to come up with believable lies!
@@anarchy_79 hey boomer stop calling us out, I had my chatgpt do my homework just fine, copy paste here and there and boom the hours long homework was done under 30 minutes, the future is now old man
@@commandrogyne nah bro it does work. I don't use it for homework cuz it's easy, but for some presentations I ask it to create a sample that I then edit into my own style. Basically not wasting a lot of time just researching some facts.
As a librarian in training, there is so much access to law databases in public, academic, and law libraries. The idea of not being able to find a case (A CASE YOU CITED SO YOU SHOULD HAVE BEEN ABLE TO FIND IT IN THE FIRST PLACE) is so stupid and so suspicious.
Remember folks, what ChatGPT can and can't do is literally in its name: it's a CHAT bot. All it does is... keep up a conversation it thinks you want to have. That's where it starts and ends. It makes up the "facts" that you want to know because it doesn't really know anything; feeding it the "knowledge" just tells it how these facts are linguistically structured, so it can create a text that RESEMBLES what you are looking for to keep up the conversation.
In a basic sense, it is trained on how a correct response “should sound”. It doesn’t comprehend language and information like we do, it doesn’t have an abstract understanding as to why those documents it’s trained on are structured like that like we do, it just knows that they are, and frames a response accordingly. That’s why, as he said in a previous video on AI lawyering, GPT is known for “eloquent bs”. It sounds right, but it doesn’t have the ability to understand “this sounds right because it contains factual information”
It's basically a slightly more coherent version of repeatedly tapping the autocorrect word suggestions and using Grammarly to check for mistakes. Nothing of substance will be said, and it will fall apart the longer it goes on.
LITERALLY. The creators' (wildly dishonest) marketing hype didn't help, but I'm still amazed that people apparently just need to see a 'style' or 'sound' of typing to immediately think "wow, this thing must be factually correct". Bro
It seems to me that the use of the term AI is too loose, when applied to these types of program, at least to me as a layman. AI implies that there is some sort of reasoning going on, whereas in fact it is just language modelling.
It does do programming though. And programming syntax is an exact business. With the right question and understanding of its limitations it can do some excellent work for you. You have to verify everything, but even then it can save a lot of time. Basic boilerplate, examples in how to use a new library. It's a great tool, but if you are a poor programmer, it won't make you a great programmer. If you don't understand what it gives you, you are likely to fail just like these lawyers.
I can only imagine the shock, laughter and amazement in the offices of the defending lawyer and at the judge’s office. Laughter and also a portion of anger.
I can't imagine the faces of the defending lawyers after they actually realized wtf just happened. Before that they must've been confused to hell and back again. I would've paid to see that ass whooping in the court.
Never knew how easy it was to pull all federal court cases in their entirety. I guess that space librarian was right when she said "If it's not in the archives, it doesn't exist."
Well, except that the context of that scene was that the existence of a planet (which is what was being searched for) HAD been intentionally removed from their database as part of an intergalactic conspiracy. So, despite not being in their databases, Kamino DID exist.
If it wasn't, then it would be impossible to defend yourself in court which would be a gross violation of our rights. Granted, you really need a lawyer to do it for you, but it is at least theoretically possible.
I love the term "unprecedented circumstance" at 14:46. It sounds very professional, but has a very clear hint, in this context, at how utterly insane the judge must think the plaintiff is for citing something he couldn't have read.
Oh my god, by the end of that trainwreck the judge must have been utterly BAFFLED at how this whole thing went. He was beyond furious. That court transcript was rough.
In the end they actually got off pretty easy. They were fined $5,000 which is much lighter than it could have been. But the judge would absolutely be pissed because it wasn’t just one small issue that was quickly corrected and noted as being an error. Before the case was even brought the attorneys should have done research regarding the SOL issue and at minimum had an argument for it. But they didn’t and these fake cases were brought only after the opposition noted the SOL had run. But the cases being fake was brought up months before it even got to the judge by the defense team and the plaintiffs kept their ground on it. The judge was less pissed about the cases at first as much as he was pissed that it continued on for months and that so many steps to prevent this were ignored. But even so through all of it they still weren’t punished too badly (as of now). Will be interesting to see if the state bar steps in and what they do if anything. The malpractice and incompetence of the cases at the start was an issue but not immediately correcting it and carrying out the ruse for a bit is more of an issue.
Not particularly. It literally means unprecedented ie there is no legal precedent because this is the first time this particular legal problem has been encountered in a court. Precedent is what oils the courts. When there isn’t any is when courtrooms get exciting.
@@rilasolo113 How is there no precedent for making stuff up, though? They can't be the first people who submitted documents filled with nonsense, even if there was no ChatGPT before.
My dad was a litigator. He stopped being a litigator in the mid 90’s. I was able to find one of his cases from the mid 80’s entirely by accident using a basic Google search of his name once. Wow, these lawyers are stupid.
As a current engineering student, I feel like I'm going insane seeing so many other students rely so blindly on this stupid thing, it's gonna produce so many morons
ChatGPT doesn't actually "know" anything, it just produces things that sound realistic. The language model has a concept of what realistic sounds like based on its input data; it has no concept of what is real or how reality works. It is a very good parrot with no internal understanding of what it says.
@@EWSwot Yep... In a sense it's like those "How English sounds to non-English speakers" videos: It _sounds like_ it's answering the prompt - but that's all. Which may sometimes overlap with a sensible response, or other times make no sense at all. As someone who was following ChatGPT's development, witnessing its sudden arrival into public consciousness has been... what's that word for secondhand embarrassment?
People keep trying to get a clever program to do things it was never designed to do, couldn't do even if it were programmed to, and that would be of questionable legality if it could. Seriously, if AI still struggles with how many fingers humans have, how do you expect it to understand legal issues?
I was so pleasantly surprised to find 80,000 Hours sponsoring this channel! It's a great resource and all free, and I have genuinely been telling my fellow young and lost graduates to get on it
When he reached back and grabbed a book, I gasped. I always assumed the background was a green screen. I’m sorry for selling you short, Devin! Your content is great!
It actually is still a green screen, but he had the book available within arm's reach. You can actually tell by how he reaches the book, and how the book is angled when he pulls it out, as well as the off lighting from the background compared to his face.
Your description of ChatGPT is so concise and correct. Often times, brevity requires deeper knowledge than detail, because you need to separate the important from the unimportant. It's good.
Always read the caselaw cited at you in a brief my friends. On many cases when responding to motions, I discovered the authority being cited at me said the exact opposite of what opposing counsel was using it for. Nothing is more satisfying than going into a hearing and throwing opposing counsel's caselaw back at them.
Finally something that I actually can talk about, because I'm fascinated by the topic: people saying AI lies. I still don't really believe in calling it lying, because it's a language model. The computer literally has no idea what it's saying.

Take the thought experiment "The Chinese Room" for example. A person is trapped in a room with books in Chinese and is told to write appropriate responses to the slips of paper slid under their door. This person doesn't speak or write Chinese, but all the slips of paper are written in Chinese. So they look for those symbols in their books and write the responses they see. But obviously they don't know what they're saying, and the only way the people outside would know they're not fluent in Chinese is by knowing what is going on inside the room, or seeing that their responses are odd.

ChatGPT and other bots are the person inside the room, albeit they go through their books much quicker and will make up new sentences based on all the data they have. But they just don't know what they're saying. So it feels wrong to call it lying. If I meowed at my cat and he thought that meant I was about to feed him when I wasn't, it's not really lying, because I didn't know what I said. It's on the shoulders of the consumer to understand that the program has no way to differentiate fact from fiction.
There is also a chapter in Asimov's _I, Robot_ with a robot who interpreted "you must not harm humans" as also meaning to not cause emotional harm. So it lied. It lied with the best of intentions, because it didn't want to break a human's heart. But of course, the lies it told led to much worse problems.
If you ask ChatGPT if an Arkansas governor has ever been elected as president, it will say no. If you then immediately follow that up by asking it "Where is Bill Clinton from?" It will include the statement that Bill Clinton was governor of Arkansas before being elected president.
It's almost like it's following a probabilistic model of which words are more likely to follow other words without any contextual understanding of those words or general knowledge about the reality those words describe.
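That "which words are likely to follow other words" idea can be shown with a toy bigram model. To be clear, this is a deliberately tiny illustration of the principle, not how a real transformer-based LLM works, and the corpus string is made up for the example:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words followed it and how often."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, n=5, rng=random):
    """Sample a chain of words, each chosen purely by how often it
    followed the previous word in training -- no meaning involved."""
    out = [start]
    for _ in range(n):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor
        words, weights = zip(*followers.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

# Made-up mini "legal corpus" for illustration only.
corpus = "the court held that the court denied the motion that the court granted"
model = train_bigrams(corpus)
print(generate(model, "the"))  # plausible-looking, meaning-free word salad
```

The output always looks locally grammatical because every pair of adjacent words was seen in training, but the model has no idea whether the sentence as a whole is true, or even coherent, which is exactly the commenter's point, scaled down.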
This is the problem with applying the term "artificial intelligence" to ChatGPT, or any large language model. "Intelligence," to most people, generally implies the ability to reason, but LLMs have _no_ ability to reason whatsoever, and no understanding of what they are writing. They simply look at the probabilities of words appearing after other words and generate new text based on those probabilities. (This is why it generates so many "fake" references; it's got no idea what a reference even is; it just generates text that looks like a reference. I've seen this with URLs as well.) In essence, ChatGPT is a great bullshitter, and the "improvements" made from, e.g., ChatGPT 3 to ChatGPT 4 make it a better (i.e., more convincing) bullshitter without changing at all that it still does not reason or understand anything. It's being mis-sold as "intelligence," and that's going to lead to a lot more problems like this one.
@@ianb9028 Easy enough to automate basic verification of references though: program parses a set of references, queries a law db about them, and reports which ones it couldn't find. Human then goes looking for these references, to see if they exist. I mean this wouldn't be good enough for verifying your own references, but it's absolutely good enough to catch most fakes from opposing counsel. And I can't see judges ever deciding that fake references are acceptable.
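The pipeline described above (parse citations, look them up, report the misses) can be sketched in a few lines. This is a toy version where the "database" is a hardcoded set and the regex matches only a couple of reporter formats; the set contents, the sample brief, and the pattern are all illustrative assumptions, not a real Westlaw/Lexis API:

```python
import re

# Hypothetical stand-in for a real case-law database lookup.
KNOWN_CITATIONS = {
    "925 F.3d 1339",
    "588 F. Supp. 2d 1",
}

# Simplified pattern for a few common federal reporter citation formats.
CITE_RE = re.compile(r"\d+ (?:F\.3d|F\. Supp\. 2d|U\.S\.) \d+")

def extract_citations(brief_text):
    """Pull anything that looks like a reporter citation out of a brief."""
    return CITE_RE.findall(brief_text)

def flag_unverified(brief_text, known=KNOWN_CITATIONS):
    """Return citations that the database lookup could not confirm,
    for a human to investigate."""
    return [c for c in extract_citations(brief_text) if c not in known]

brief = ("See Varghese v. China Southern Airlines, 925 F.3d 1339; "
         "Miller v. United Airlines, 999 F.3d 9999.")
print(flag_unverified(brief))  # → ['999 F.3d 9999']
```

As the comment says, this only catches citations that fail the lookup; a human still has to confirm that the ones that *are* found actually say what the brief claims they say.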
This is a great example! It went wrong in so many ways. We also did a video where we experimented with how it would handle doing legal research and tested it on some scenarios related to autonomous weapons and international humanitarian law. The problems with its legal reasoning turned up really quickly as well!
I work at an academic press and received an email from a PhD student who couldn’t find a book we supposedly published that they cited in a paper. It turns out the book was made up by ChatGPT and the student ended up facing a disciplinary board for academic dishonesty…
well he was. He cited a source he had no knowledge of. If you don't know it don't cite it. Academia has become such a game of finding other people who agree with you that there are plenty of dishonest books that will say whatever you want you don't need a robot if you're willing to do the leg work. A decade ago there was a whole ring in India who just worked at creating false consensus in order to keep the grants coming in.
That's so stupid. If I want a language-model AI to be a tool in my work, I can copy-paste a section of a book and have it give me an excerpt or answer a specific question in accordance with the text of that section. The entire logical line is still mine; I'm just using the language AI to paraphrase text.
I am a lecturer in computer science at a British university, and it is frightening how many of my students think they can just use ChatGPT to write their assignments. One of them even asked us how they should cite 'AI' using Harvard referencing. (I'd also like to point out that of the papers or sections of papers we've flagged for AI, none got higher than a 3rd and several actually failed.) I'll say it loud for the people at the back: it's a chat bot! It makes sentences that *look* right, and for common knowledge it probably is right, because we'd spot it if it said "dogs have wings" or "the sun is made of camembert". It's like watching sci-fi and taking that as accurate physics: it's written to sound plausible to a layperson, that's it.
Exactly. The number of people who think it's a research tool and not a language model is astounding. Its ONLY job is to make a sentence that looks "right"; accuracy has nothing to do with it. For personal use I wanted a recipe from ChatGPT, and I found it so interesting that I asked for a source. It straight up fabricated one: fake website, book, page number, everything. When I asked for clarification after not being able to find it, the bot basically said "yeah, it's not real, sorry lol." As someone who usually tries to go to the original source when one is cited, the more fake citations that get through in papers, whether personal or academic, the more of a nightmare it's going to be in the future sifting through all the junk that a word generator has tricked people into citing. Artificial "intelligence" is a farce sometimes. Really enjoyed your comment, hope you're well.
I know people who use it to help them revise and find mistakes in their novels and such. I also know someone who used it to help write their bibliography page: they tracked down all the appropriate information, fed it into ChatGPT, and asked it to put the information into the proper format. Then they read it over to verify that it had in fact done its job correctly.
13:10 Just reading the fake cases is enough to leave me busting a gut with laughter. "Miller v. United Airlines" claims that United filed for bankruptcy in 1992 after the United Airlines Flight 585 crash, and had a former U.S. Attorney General as their legal counsel. "Martinez v. Delta Air Lines" has too many logical fallacies. "Petersen v. Iran Air" somehow confuses Washington, D.C. with Washington State. "Durden v. KLM Royal Dutch Airlines" cites itself as precedent. And "Varghese v. China Southern Airlines" starts off as the wrongful death suit of Susan Varghese, personal representative of the estate of George Scaria Varghese (deceased), but then abruptly turns into Anish Varghese's lawsuit for breach of contract.
24:26 The thing that really gets me here is, if you read through ChatGPT’s responses thoroughly, not only does it say that it doesn’t have access to current legal precedent, it encourages the “user” to consult legal databases, do their own legal research and consult with an attorney for proper legal analysis and guidance… I’m not a lawyer, but I think I would have taken that as a hint.
As a programmer, all I can say is: most people still don't realize how stupid AI is. It just sounds smart because it's confident. For a basic task, general knowledge, or a bit of trivia, you could use an AI like ChatGPT, but for anything more complex it's usually just spouting BS. I learned this from my experience using AI to help me code.
People are easily deceived by other humans who sound smart because they're confident. Add in the (completely wrong but generally held) perceptions that computers always tell the truth and are unbiased...
I've given ChatGPT a simple substitution sum and it gave the wrong answer, used the wrong formula, and tried to gaslight me about why it was correct and I was wrong.
Yep, I realized this when I tried to test it against a router that couldn't talk to its neighbour because it wasn't broadcasting itself in OSPF, and ChatGPT was spouting complete nonsense; it was comically wrong at times.
It's funny how many times I have to tell ChatGPT to re-examine what it just said and check if it actually answered the question I asked, lol. I find it a good study tool though. I copy and paste my study notes into it and tell it to ask me five/ten/twenty questions based on the information given; it's very good at that kind of thing. I'll give it a topic I'm interested in and tell it to suggest a couple of websites that cover that topic in detail, but I'll ignore any link given, because they're normally wrong. You have to know how to use it and understand its limitations. For example, don't ask it for the code of anything unless it's very, very basic. What it can do, though, is examine code and explain why something isn't working. It's not always right, but most times it is. It's also very good for language learning; I have used it to explain the grammar of a sentence I was struggling with.
@@livelovelife32 yes I agree it’s surprisingly helpful with learning a language although it has given me contradictory answers which I asked to clarify which one is actually correct but it is pretty good at it for an AI
No matter how much of a fraud you feel when doing a task, always remember there’s someone out there doing something they have no clue about with confidence that can only come from ignorance.
I always liked this one: if you ever feel incompetent just remember that there's a country out there that has gone to war with birds... and lost Although to be fair, those birds are like tanks lmao
@@anarchy_79 That's the spirit! Lol There's a song out there (pretty fly for a white guy) with a line I love: he may not have style, but everything he lacks he makes up with denial
IN THE UNITED STATES DISTRICT COURT FOR THE _________ DISTRICT OF _________

Plaintiff,
v.                    CASE NO. _________
Defendant.

SUPPLEMENTAL ORDER DENYING MOTION TO DISMISS AND IMPOSING ADDITIONAL SANCTIONS

This Court previously issued an Order Denying Motion to Dismiss and Imposing Sanctions against the Defendant's legal counsel, Attorney X, for submitting a Motion to Dismiss that was largely generated by an artificial intelligence program and contained numerous inaccuracies and fictitious citations.

It has come to the attention of this Court that when challenged on the authenticity of the cited cases, Attorney X further utilized the AI program, ChatGPT, to fabricate case notes, thereby attempting to legitimize the spurious citations. This represents an additional and egregious violation of the Model Rules of Professional Conduct. Rule 3.4(b) prohibits a lawyer from falsifying evidence, a principle that also applies to the manufacturing of false case notes. Rule 8.4(c) explicitly states that it is professional misconduct for a lawyer to engage in conduct involving dishonesty, fraud, deceit, or misrepresentation.

These actions are deeply troubling, as they demonstrate a continued pattern of unethical behavior and dishonesty on the part of Attorney X, further eroding the integrity of the judicial process and the trust placed in legal professionals. Consequently, this Court deems it necessary to impose additional sanctions upon Attorney X:

1. The Court refers this matter to the appropriate Disciplinary Committee for a thorough investigation into Attorney X's professional conduct. Depending on the findings, further disciplinary action, including possible disbarment, may be warranted.

2. Attorney X is required to notify his client of these proceedings in writing, and provide the client with an opportunity to seek alternative legal counsel if so desired.

3. Attorney X shall pay an additional fine of $______ to the Court to further compensate for the increased legal expenses incurred as a result of his conduct.

4. A copy of this Order shall be placed on Attorney X's professional record and will be considered in any future proceedings involving potential breaches of professional conduct.

This Court reiterates that such conduct is unacceptable and will not be tolerated. Attorneys are expected to uphold the highest standards of professionalism, ethics, and integrity at all times.

SO ORDERED this 11th day of June, 2023.
What I like about it most, is that it used the same terminology as the real judge, in summoning the attorney to justify why they shouldn't be sanctioned, and that (having cross-referenced them) all of the citations to the Model Rules seem to be on point, although IANAL - would love to hear Devin's opinion on this judgment!
Anyone who's dumb enough to use ChatGPT to completely do their work, especially something as critical as law, doesn't deserve to be in that position. As an accountant, I've been encouraging my coworkers to use the AI for things like drafting emails, writing excel formulas and VBA scripts, etc... rote things. However, I VERY specifically emphasize that it is a tool to add to your arsenal, NOT a replacement. You always have to test or verify the info it gives you.
I tell non-tech people that using ChatGPT is worse than using an enthusiastic unpaid Teenage Intern. Would you let such a Teenager handle all your writing without review? Would you trust their "research", without a review? Why are you doing those things with ChatGPT? And not reviewing its output. At least with a Teenager they can actually learn, and explain their work. ChatGPT doesn't understand anything, and is an Authoritative Idiot (AI).
As a software engineer, the most I've used ChatGPT for is coming up with prompts for backgrounds for NPCs in my D&D campaign. I let it plant the seed, then I warp it to fit my story and my creativity. Even I do more due diligence than these lawyers for my fantasy campaign...
Oh yeah. Whenever I use it to write something, I only give it very specific facts and check it over afterwards! NEVER trust it to be completely correct if you give it free rein to do whatever it wants.
Yeah. I always check if the things chat GPT puts out is bs or not. Very useful tool but you have to actually verify if what it’s talking about is real and factual. Often times, it spouts fictional nonsense.
People see GPT talking like a real person and immediately believe that it is just as good, if not better than a human when, in reality, it's just really really good at putting words together that sound convincing.
I’m a Canadian lawyer. We have CanLii, which makes it comically easy to search for cases, but people still do it. There were a few articling students who got “disbarred” using AI to cheat on their provincial bar exam equivalents.
⚖ Was I too harsh on these guys?
📌 Check out legaleagle.link/80000 for a free career guide from 80,000 Hours!
You're always honest and telling it like it is and that's why we love You!😊😊❤❤❤❤
Im early somehow
Please Do A JFK (1991) FILM REVIEW on its LAW ACCURACY PLEASE PLEASE PLEASE!!!
You were perfect as usual. Adore your channel. Thank you for bringing laughter to us in these stressful times
Confess, you had a moment where you would have liked to just beat these two knuckleheads around the courtroom with the Federal Reporter.
Imagine calling up your lawyer to see how the case is going and finding out he's now in bigger legal trouble than you ever were.
That would be my 13th reason 😩 legal stuff is already so stressful, the costs are ridiculous, so finding out my attorney went and caught a case would be brutal 🤣
@@henotic.essence these would be no-win-no-fee lawyers for sure. Real money buys real lawyers
Tbf… a judge might go lenient on you if it turns out your lawyers were doing this. Bigger fish, ya know.
It happens
@@jackryan444 If you are a defendant (and lose), you may get a mistrial out of your lawyers being... Incompetent. If you are a plaintiff, you are probably SOL.
Imagine paying a lawyer thousands of dollars and they use ChatGPT. I'd sue them in addition to the original lawsuit to get my money back.
I would bring these lawyers right through their Bar discipline to get them disbarred ASAP!
Word
Plaintiffs' lawyers are paid if they win, so there wouldn't have been money given to him.
I mean, would you trust yet another lawyer to handle yet another case after these guys did this? Although, if they defend themselves, it may be an easy case.
@@Tomas81623 I would but only because I'd know the idiots I hired the first time have just made sure no one else is stupid enough to try what they did especially not with the same client.
This will be used as reference in law schools for decades to come. Ethics professors have just gained hours of material for presentations.
2023 edition textbooks are gonna go insane over this one xd
The lawyers will finally make their mark on history! 😅😂
I once read an ethics board case about a lawyer who got into a brawl with a judge and a court reporter. He got disbarred.
I'd say they have about 29 mins
Or alternatively, you can invent your own references!
Man, my blood ran cold when I heard that the Judge himself had contacted the circuit from which the fake decision had purportedly come. I was a clerk at the Federal Circuit from '15 to '17, and I remember once when Chief Judge Prost had discovered a case that had been cited in support of a contention that it did not actually support, she really let the citing attorney have it in oral arguments. That was the scariest scene I ever saw as a new lawyer, and that was worse than I could have imagined, so I cannot even begin to conceive how bad it was for these plaintiff attorneys.
Side note, Chief Prost was a fantastic and fair judge, and a very nice and kind person, but the righteous wrath of a judge catching an attorney trying to hoodwink her/him is about the most frightening thing for a lawyer.
When a Judge catches u being a shitter they channel Athena's wrath
@@CleopatraKing lmfao
@@CleopatraKing
Athena personally comes and chews out lawyers for disrespecting her creation
By the way didn't expect that a judge would use "civilese" words as gibberish when civilians often use "legalese" to describe their mumbo-jumbo.
He truly was volcanic as LE said.
Well if a judge is in the wrong there is no real punishment for them which makes them even more scary...IMO
I'm a law student who got tired of searching for cases to reference that matched very specific criteria. Three years of looking through Jade and CaseLaw is like trying to find the holy grail, so I tried using ChatGPT to find the cases to give myself a break. The absolute confidence it had when giving me a list of non-existent cases is something I aspire to have. I have never gone from happiness to hopelessness as quickly as I did when I looked to see if they were real
And now you understand why lawyers are well paid. The bulk of work in law is boilerplate templates, but people pay a LOT of money to have those templates be correct. And lawyers are also one of the few professions punishable by license loss when they fail to keep that promise (medical doctors and professional engineers being some of the other ones.)
I wish you luck in school!
@@katarh thankyou!! (you're so right on that though btw)
If you are dumb enough to think ChatGPT is smarter than an average lawyer, then you are probably not entirely suitable to be a lawyer.
Bing sounds like it would do a better job at finding relevant cases, since it can actually search the internet.
Good news: you are already a better lawyer than the two subjects of this video.
Honestly, even if ChatGPT didn't exist, it really seems like these lawyers would've still done something stupid and incompetent that would've gotten them sanctioned
😂 they didn't even check the source 😭 rookie mistake.
ChatGPT clearly states it can make stuff up.
Schwartz explained he used ChatGPT because he thought it was a search engine and made several references to Google. If only it was a real search engine like he apparently usually uses he could be certain it would only say the truth ;)
@@ericmollison2760 I see what you did there ;)
Tbh if the claim of the lawyers working together since 1996 is true they've been handling it for a good while, this may have been a slip-up by the elderly
@@sownheard He says _he_ did try to check, but couldn't find it and assumed it was just something Google couldn't find and assumed ChatGPT must have given him a summary.
The thing is, I’ve had a coworker do something similar. They asked for a report on data we don’t have access to, I tried to explain it wasn’t possible, they then turned around and asked ChatGPT to write the report and sent that to me with instructions to “just clean it up a bit” - I say we can’t use it. They say we can. I then spend hours digging into everything it said and looking for every instance that’s contradictory or references data we do have access to so I can compare. Send a full report on the report. Finally get shock & horror “I didn’t know it could lie!” and we can finally start the actual project, redefined within the bounds of what we can access. 🤦🏼♀️
“I didn’t know ChatGPT could lie” is going to be the phrase of 2023, isn’t it?
You can't even open the chatgpt page without seeing a popup telling you that it lies
I don't think "lying" is the right word. That implies that it's self-aware enough to know that it's saying something that isn't true. But it's not aware of anything. It's just a glorified Markov chain, generating text according to a probability distribution.
@@gcewing Yes, but try explaining that to non-tech people who still don’t understand why they can’t name a file “Bob’s eggs” and have it return when you do a text search on “Robert” or “breakfast” (your search program is broken! That’s your problem not mine!) and think that every single number in Google ad predictive recommendations is guaranteed truth. 🤦🏼♀️🤷🏼♀️
@@bookcat123 this is so weirdly specific, i'm not even in tech but i understand how search functions work cuz i have done some stuff with scientific database searching…has this actually happened to you?
I’ve never worked directly with a judge, but I’m going to guess that making a judge research several cases that you refuse to research yourself (not to mention the AI crap) is going to make them very very angry.
Making a judge do work you should have done is like doing that to anyone else, except a judge has many ways to get back at you. And yeah, it does make them mad.
As a computer engineer with a deep love of law, it drives me crazy that they even tried to do this.
ChatGPT does not give you facts, it gives you fact shapped sentences. Chatgpt does not fact check, it only checks that the generated text has gramatical sense
Verified account without any likes or comment?
Shaped?
What are you doing here, Fred?
It's a little more than grammatical, but you're essentially right. ChatGPT makes a realistic-looking document. If that document requires citations, footnotes, or a bibliography, the AI makes realistic-looking ones. It does not understand that citations actually refer to something that actually exists in the world, it just understands from millions of samples what citations look like, and it is able to make ones like them.
*shrug* The ChatGPT website literally warns you before you sign up that it is not always factual and sometimes makes things up. If you don't want to take that warning seriously, knock yourself out.
Asking Chat GPT to validate its own text is like asking a child if they're lying. What do you expect?
That's seriously the best bit, "are you sure this is all true?" "of course! check anywhere!"
And then they DIDN'T CHECK. Because how could anything on the internet be false?
The source is literally "I made it up"
@@genericname2747 Source: trust me, bro
Honestly this is particularly bizarre. If they had unquestioning faith in AI and didn't think they needed to validate it, well, that's bad, but I can understand the train of thought. Imagine one of them called an expert witness who sounded good, and they decided his testimony didn't need to be validated. But then the so-called expert starts to seem a bit shady, or his documents don't seem to be in order. If you decided to validate that expert, would you ask _the expert himself_ about his work?
This could be said of literally every human; it is an extremely bad argument against AI. The person creating the fact can't be the one validating it. That's exactly why there is something called "peer review" in academia.
Being asked as not only an adult but an adult lawyer if something is a book is embarrassing at the highest level
under oath
Honestly I don't know the answer to that question. My gut feeling would be to say "no, it's A LOT OF books", but IANAL and maybe technically/legally the entire compendium is regarded as a single "book" even though it apparently has enough pages to justify being bound into at least 925 volumes.
That's the point you know the judge is done with them...
LOL
As opposed to a child lawyer?
I finally have confirmation if the background is a greenscreen. Seeing him pull a book from behind him made me happy
Everybody's talking about ChatGPT but this tiny little nugget was the most fascinating part of the whole thing. Also the car alarm sirens after he yeets the book into the background going on for several more seconds while he's talking made me laugh.
When he grabbed that book it broke my entire brain. Now I want to know what all of the books are.
"These books behind me don't just make the office look good, they're filled with useful legal tidbits just like that!" -- Lionel Hutz, attorney* at law
@@silveryin4341 They look like reporters (the books of case law he describes around 10:56).
@@typacsk Some of those books are from the '70s
I'm a medical student and one day the residents and I used ChatGPT for fun. I cannot even articulate how bad it is at medicine. So many random diagnoses and blatant wrong information. I'm not surprised the same is true for law
Not surprised. I don't know what data it was trained on, since I'm not in the field, but it does not appear to have been fed research.
@@rickallen9099 Why are you copy-pasting this everywhere?
@@chickensalad3535 It's a bot
@@chickensalad3535 Dude's trying to look good for our inevitable AI overlords
@@rickallen9099 Yes, but it ain't here for at least 5-10 years
Having ChatGPT write the argument with the fake citations was incompetence.
Having ChatGPT generate the cases and submitting them as if they were real was malice.
I say they should both be heavily sanctioned, if not outright disbarred.
It doesn't matter *how* the papers were generated. What matters is that the information was verifiably false, they signed it, and submitted them to the court.
Maybe malice was the point, and their whole goal was to martyr themselves to set the precedent on how using AI to prepare a legal argument will be treated. Honestly, one could probably do a halfway decent job of using GPT 4 to speed up legal research, and potentially even have it fact check itself, but it would involve heavy utilization of API calls, the creation of a custom trained model that's basically been put through the LLM equivalent to law school, application of your own vector databases to keep track of everything, and of course, a competent approach to prompting backed by the current and best research papers in the field... not just asking it via the web interface "is this real?"
In short, their approach to using ChatGPT in this case is to prompt engineering what a kindergartener playing house is to home economics. All they really proved here was that they're bad lawyers and even worse computer scientists, but now that this is the first thing that comes to mind when "AI" and "lawyer" are used in the same sentence, what good lawyer would be caught dead hiring an actual computer scientist to do real LLM-augmented paralegal work? What judge would even be willing to hear arguments made in "consultation" with a language model?
I realize this thought doesn't get past Hanlon's Razor, of course. It's far more likely that a bad lawyer who doesn't understand much of anything about neural networks just legitimately, vastly overestimated ChatGPT's capabilities, compared to a good lawyer deciding to voluntarily scuttle their own career in order to protect the jobs of every other law professional in the country for a few more years... but it's an entertaining notion.
@@dracos24 It does matter. It's wrong to submit information provided by a third party (to LoDuca by Schwartz, and to Schwartz by ChatGPT) without having verified it. It's much worse to fabricate that information yourself when you're being ordered by the judge to explain yourself. At first it was severe negligence, but then they were outright lying.
Welcome to the 2020s, in which lawyers, finding themselves in self-constructed holes, just. Keep. Digging.
If clear evidence of intentionally misleading a federal court, after being put on notice (show cause order), isn't sufficient for disbarment, what is?
FYI: when a judge asks you to produce cases (that their law clerk could have found) it means THEY DON’T EXIST. That was the FIRST clue that this was not going to end well.
Absolutely insane. Not a lawyer, but from Devin's explanation of the citations, it seems like finding a case is almost instant; it's so obvious that it's a gotcha when you're asked to find the cases that you cited.
I have encountered the very occasional situation where something is mis-cited and so a trek to the library is required to check the paper volumes or reference sources, but most case law can readily be found online.
I remember Devin saying on this channel multiple times: in court, you don't ask a question unless you already know the answer. That lawyer's case was dead on arrival.
Westlaw and Lexis are basically search engines for legal cases. You can search for relevant cases by keywords or name of the case, but if you have the citation, it should pretty much instantly find it for you. It even keeps you updated on if parts of the case are outdated due to new case law.
The *best* case scenario is that you made a typo or something so that it wasn't able to be found - which just sounds very careless and unprofessional. And when the *best* case is that you are an unprofessional nincompoop who doesn't proofread their important legal documents... yeah you're pretty SOL
The fact that "bogus" is apparently a legal term makes me very happy.
Life is just silly sometimes 😂 we want it to be deep, but we don't realize life really IS that silly. Study human history in terms of the silly.
Why? I've only ever heard that word used in a professional setting. What's so funny about it?
@@unclesam8862 There are two groups of people that use "bogus": serious business people and carefree surfers lol. I imagine neither group is happy to have something in common with the other.
Thats bogus mann @unclesam8862
@@unclesam8862 Bogus is a way to say 'nonsense' that's usually associated with '80s and '90s slang. That's why it's funny.
The fact that ChatGPT has warnings about it not being a source of legal advice is the most damning evidence that these lawyers did not read through what they presented to the court. Perhaps if they had been more observant, they would have followed ChatGPT's advice to "consult a qualified attorney".
I use ChatGPT as a tool to narrow stuff down, basically to find out what I should google, but I know to ALWAYS CHECK EVERYTHING. And if my question ever gets too specific, it always states: "I'm an AI model, I'm not qualified to advise on this, ask a professional." Seriously, I can't believe they thought they'd get away with this...
My immediate first thought is a pretty common set of phrases that internet comments use: "IANAL", "You'd have to check with a lawyer", "Get a lawyer to check this", "This is not legal advice.".
You know, the type of language ChatGPT probably was trained on, and probably had in its results somewhere.
@@ZT1ST Possible, but I think this response might have been implemented intentionally, for the same reason that all those phrases are common in the first place. Kind of like how there are certain topics GPT will avoid (unless asked very nicely).
@@valdonchev7296 ChatGPT is specifically programmed to warn people that they shouldn't use it as a replacement for professional advice.
Their warning about not being able to produce reliable code has never stopped my students from trying to use it... and then failing the course. The human ability to selectively filter text is just...
As a Machine Learning Engineer, seeing Devin explain Chatbots better than 99% of the people in the world who think it's magic or something made me tear up
It's because he's smart and he and his team do their research. That's why he's in The Bigs. P.S. Congrats on being a Machine Learning Engineer, that's amazing! Please help keep us safe from them? Or at least keep it obvious when someone is being an idiot when they use it. Thanks, Your Friendly Content Writer and IT Specialist -
@@jooleebilly Thanks 😊
While Julie made that really nice comment, I just have to say that at first I read your name as Brazil.
He understands it better than these two lawyers did.
As a hobbyist programmer I knew where this was going from the very start. I use ChatGPT to help me learn and write code; I ask it how to perform a specific action in Python and it tells me the answer, but I always double-check it just to make sure it's not bullshitting me. I simply do not trust it, since I know it's just predicting text. This is one area where it is very good, but I'm still completely suspicious of it, since I'm very aware of the chatbot's habit of making things up.
It’s because he is a very good lawyer that does his research and doesn’t make up citations.
I’m a civil engineer, and “if your name is on it, you’re responsible for it” is an extremely important principal. A lot of our documents need to be signed and stamped by a Professional Engineer, and the majority of us (especially the younger ones) don’t have this, yet we do most of the work anyway. Ultimately, if a non-PE does the work, a PE stamps it, and something goes awry, then it’s on the PE. You’d be surprised at how little time the PEs spend reviewing work that they’re responsible for.
There's a reason I never got my PE. I didn't want to be the professional fall guy. A PE is never going to realistically be given the time needed to actually verify all that work to a good standard - he's just put there by the firm to slap his name on it.
You mean principle not principal but yes, if your name is on it then you need to make sure it's above board.
Hello fellow civil engineer(s). I was IMMEDIATELY drawing parallels to PE stamps when he brought up local council, and yeah... The barely check before stamping is wild to me with how much responsibility then falls on your shoulders.
Hell, I work at a clothing store and we don't use our sales password to let our coworkers check people out unless we're positive they did a good job, because we don't want to take the flak if they didn't. Imagine having fewer standards than people working sales.
Mechanical engineering student here; this is exactly why I haven't decided if I want my PE or not yet.
A recent survey of ChatGPT's performance on math problems was published, and it really illustrates why you shouldn't rely on these things to answer questions for you. It went from answering a test question correctly more than 98% of the time to barely 2% in a matter of months. Not only that, in some cases it has started to refuse to show its work (i.e., why it is giving you the answer it is giving you).
So it turned into a 5th grader?
I've noticed this; it's like they dumbed it down on purpose to stop people from doing this. What happened to ChatGPT being capable of passing medical and law classes?
@@miickydeath12 it doesn't seem like it was intentional. The engineers seemed pretty baffled by that survey. If I had to guess it has more to do with people intentionally inputting incorrect information to mess with the AI
@@Willow_Sky Probably similar to what happened to Tay when she released.. wow 8 years ago now. I remember Internet Historian doing a great video on it. Going to have to go watch it again.
@@Willow_Sky AI is very dependent on its training material: worse quality of training data, worse quality of results. GPT-4 has a much bigger quantity of training data compared to GPT-3.5, but its quality is in question.
Also, in cases where GPT-3.5 would return 'no data found', GPT-4 generates random gibberish.
It's not just that CGPT *can* make stuff up, it's that that's *all* it's designed to do. It's a predictive text algorithm. It looks at its data set and feeds you the highest match for what you're asking, and literally nothing else. It looks at the sort of data that goes in a particular slot, fills that slot with data, and presents it to you. It can't lie to you because it also can't tell you the truth, it just puts words together in an algorithmic order.
ChatGPT is trained to generate text which humans see as looking real. That's it. There's no implementation of truthfulness in its training, at least not originally.
it's truly mind boggling how many people don't understand the basics of how these models work. "It'S LyInG!!" no mate, the predictive language model doesn't have an intention, it's just stringing words together based on an algorithm...
@@ApexJanitor It can't lie, because it can't think or have intent. Nobody fully understands how these models produce their results, but they do understand the kinds of things that are happening and what its limitations are.
@@ApexJanitor there's a difference between not fully understanding something and having no idea what's going on. I don't think this model is close enough to sentient to be able to "lie" in the moral sense or "want" anything (though it certainly does a good job passing the Turing test, so I can understand the confusion). It's utility function is essentially a fill in the blank algorithm, so of course if you ask it subjective questions, as the idiot lawyer did, it's going to seem to lie.
also what's with the tone of your message? Seems kinda hostile, and the "Hahaha"'s make me feel like The Joker has had a hand in writing this, why not LOL?
@@ApexJanitor I see what you're driving at, but the fact that a neural network of this scale is not comprehensible does not mean that we don't know what it is doing. It's predicting words, nothing more and nothing less. It's not some new and unfathomable way of thinking and responding to the world, it's just mimicking human language (and not very well, at that). You wrote "... it lies if it wants" but that assumes some sort of mind that "wants". ChatGPT and its ilk don't have minds.
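The "it's just predicting words" point in this thread can be made concrete with a toy next-word model. This is an illustrative sketch only: a real LLM uses a neural network over tokens, not a bigram table, but the principle that output is sampled from "what tends to follow what", with no notion of truth anywhere, is the same. All words and counts below are made up.

```python
import random

# Toy "language model": for each word, counts of the words that followed it
# in some pretend training text. There is no concept of truth here, only of
# which word tends to come next.
bigram_counts = {
    "the": {"court": 5, "case": 3, "plaintiff": 2},
    "court": {"held": 6, "found": 4},
    "case": {"law": 4, "was": 2},
}

def next_word(word, rng):
    """Sample the next word in proportion to how often it followed `word`."""
    options = bigram_counts.get(word)
    if options is None:
        return None  # nothing ever followed this word; stop generating
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, max_len, seed=0):
    """Chain samples together -- fluent-looking, truth-free text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_len):
        nxt = next_word(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)
```

Calling `generate("the", 5)` produces something like "the court held": locally plausible word sequences, and that is the entire trick.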
The realization that Devin is actually sitting in a library in all his recordings and isn't just using a green screen was by far the biggest plot twist in this video
Edit: why are people arguing about whether or not it was real or edited
why would he go through all that effort getting a book that looked identical to one in his green screen if that was what he was using
And he waited… Not the first, or the second time he mentions case books, but the *Third*. The storytelling in those videos…
It's a green screen.
@swilsonmc he picked up the book bruh, off the bookshelf behind him
I looked at it again and you're right.
Glad I am not alone in this. I almost jumped when he pulled out the book.
This is the first time in my life I've seen a lawyer sitting in front of a bookcase full of law books, AND ACTUALLY PULL ONE OUT. (edit: 25:30)
I have to assume they do research when they aren't in the middle of a consultation. They mostly wouldn't use a physical book anyway since electronic databases can find things instantly and are always up to date with the latest info.
@@joemck85 then what are the books there for? the branding?
@@parry3439 just for style
@@parry3439 Before online databases became as thorough as they are (probably only in the last 10 years or so), people did have to keep printed books, especially if they were going to use them often. I think Devin has been practicing long enough that he probably had physical copies before online databases. Notice how he stated the book in hand was a 2nd edition, which, looking it up, ran from 1925 to 1993. Long before things got scanned and put into binary. Devin himself gained his JD in 2008 from UCLA (wiki'd LegalEagle).
Meaning, yeah, he probably keeps them as a memento of his early career and/or his university days. Lawyers needed LOTs of books, mostly cases and laws in their area of practice.
@@MekamiEye There is a huge gap between 1993 and 2008 in computers and data storage. For example, 1993 is the game Doom on PC with floppy discs, and 2008 is Metal Gear Solid 4 on PS3.
In 2003, most big journals were moving to the internet, and there were probably buyable databases offline. That's probably why those books look so pristine! I thought it was a Zoom background or something.
The fact that at 18:00 you straight face yell, yet you can feel every bit of your emotion behind it, excellent. This is such a great channel!
Public service announcement from your friendly librarian: DO NOT ASK FOR CITATIONS FROM CHATGPT. The citations are likely imaginary and you will only waste your time and the librarian's. And you WILL be made fun of among the staff. (Worse than this happening in legal settings: it happens in medical settings 😑)
Honestly ChatGPT has given me some good references (mostly what one would call "classical" papers, the ones that are old and cited a lot in other work), but obviously, google every single one before you use it anywhere. In my experience, it's about a 50% chance whether a citation is real or not, and then another good 50% whether its summary is actually accurate to what's in the paper.
Even before things like Chat GPT we had people requesting fake citations, just another reason why librarians can never be fully replaced by AI
Mock it now, but the technology is only going to get better with each iteration. Lawyers aren't safe from AI either. Nor are librarians.
@@rickallen9099 We don't mock AI, we mock the attempt to submit nonexistent citations without verifying that they're real.
It's crazy that a large language model is not able to cite the sources of its information.
Got to love how everyone is like "ChatGPT is going to take over everything" and then every time you apply it to something real like this, it consistently comes up short.
If you’re an expert in your field, ChatGPT is like a very smart freshman college student. Impressive to everyone else, but you see the issues.
@@AYVYNAt least a freshman knows to verify sources.
@@warlockd Not even that; ChatGPT has been known to lie! It tries to complete satisfying sentences, and like half the time it just says stuff that sounds right.
@@AYVYN If you're an expert in your field you'd be able to tell it doesn't understand what it's saying.
@@AYVYN It's not even a student. It's like taking all the books from your college library and putting them in a blender, and then getting a random person off the street to rearrange the pieces.
This story just supports my opinion that the biggest problem with ChatGPT is that people trust it despite having no real basis for that trust. It's exposing the degree to which people rely on how authoritative something sounds when deciding whether to trust it, rather than bothering to do any kind of cross-referencing or comparison.
There are prompt-engineering techniques that get chatGPT to do cross-referencing on itself that might improve it a bit, but you still have to find the sources in the end and do your own research.
@@aoeu256 I was literally thinking about this today, because I have no imagination for Bing's AI search and I thought "I can't look up facts, since I'm better off doing that the normal way, so what do I use this for?"
Not to impose but if you have any ideas I'm all for them lmao, AI advancements are wasted on me until it's an AGI
@@spacebassist have you tried asking gpt what it could usefully do for you?
@@sjs9698 we're both finding out just how bad I am at this lmfao. No, I did not think of that
I've been fixated on the fact that it can't provide unbiased fact or act like a person, that it's "just a language model that can kinda trick you"
Sounds like ChatGPT is a Republican.
I am a PhD student currently working on building models like ChatGPT, and this is hilarious! Really enjoy all your videos!!!
But this completely makes sense, since these pre-trained models are typically trained on web text so that they can learn how English (or any other human language) functions, and how to converse in human languages. But these models are not trained on any specialized data for a given field, so they won't do well when used for those purposes.
Doing this in Federal court was bold (or just plain stupid.) The rules and standards are SO much stricter in Federal court!
I pick "stupid."
Boldly stupid
@@moehoward01 legit
I think lazy is also a valid option.
@@caseyhengstebeck1893 Well, how about all 3?
The most galling thing is LoDuca's refusal to take any responsibility. He blames everyone and anyone else. A competent paralegal would be an asset to this team.
With all this public humiliation, any competent paralegal would be looking elsewhere.
They should just hand in their bar cards; they ain't recovering from this.
@@richardgarrett2792 you're absolutely correct! 😂 I'm sure they're insufferable to work for
For real, as idiotic as Schwartz was, LoDuca was just completely in "it's never my fault" mode. What an arrogant idiot.
sounds like a former POTUS
They got off with just a $5000 fine....and the firm is still deciding whether to appeal or not. It's crazy that they knowingly fabricated cases only to get away with a slap on the wrist
For real? Just $5k?
@@sillybob9689 yup, and the judge apparently would've let it go if they came clean in the first place
$5k plus however much he's gonna lose from torpedoing his own career...
Meh, you'd be surprised about the torpedoing of his career. Lots of lawyers have been sanctioned and carried on fine. Most of those things take some deeper research that clients often don't ever do. But the judge saying he would've just moved past it had they come clean is common. The cover-up is almost always worse than the crime.
I think it was more to scare the hell out of them and embarrass them so they wouldn't make the same mistake of wasting everyone else's time and money
There was a test conducted in Quebec where the Bar examiners gave the bar examination to ChatGPT. TLDR: it failed miserably.
Interesting; where did you hear about that?
@@KnakuanaRka local newspaper or TV news i don't remember
Source: trust me bro
Yeah. It only got 12%
"ChatGPT obtains 12% on the Quebec Bar Exam"
It's weird, because just last year ChatGPT achieved much higher scores on bar exams. It seems like ChatGPT has been dumbed down over time to prevent people from using it to cheat; you can see this when you just ask the model some math questions. I could've sworn it was way better at solving math last year.
As a legal assistant, watching this feels EXACTLY like watching a horror movie. No, I did NOT guess the cited cases didn't exist because that means nobody in this law firm checked the chat bot's writing for accuracy! You have to do that even when humans write it! They did NO shepardizing, no double-checking AT ALL?! How? Just... how?! And, oh Mylanta, that response to the show cause order... Dude, that... doesn't comply with the order. At all. What kind of lawyers were these guys?!
Bad ones, obviously. And a little more than just plain lazy.
TIL a new word - Shepardizing: "The verb Shepardizing (sometimes written lower-case) refers to the process of consulting Shepard's Citations [a citator used in United States legal research that provides a list of all the authorities citing a particular case, statute, or other legal authority] to see if a case has been overturned, reaffirmed, questioned, or cited by later cases."
@@flamingspinach And you are now smarter than these 2 lawyers!
The fact that they didn't double check it at all astounds me.
The fact they didn't double-check anything tells me these guys haven't done any work themselves in ages. They have grown so used to passing off the work and having others do it, and haven't been double-checking that work for such a long time, that they didn't even bother to double-check the "new hire" (doesn't matter if it is AI or human... for them not to bother verifying reveals a pattern).
Taking the bar exam next month, this either makes me more confident that I should pass bc they did; or if I don’t, I’m going to cry bc they did
all the best ❤
Good luck!!!
good luck with your exam! if these idiots can pass, you’ve got this!!
Good luck on your exam!!
judging by these idiots i'd say the *bar* is pretty low
What's clear to me is that this judge did his research. He very clearly understands that they didn't just ask ChatGPT to explain the relevant law but instead asked ChatGPT to prove their losing argument. ChatGPT only knows what words sound good together. It does not know why they sound good together.
That's the salient bit here -- the judge was able, not just to call their bluff, but to call two or three nested levels of bluff, by recognizing the kind of bullshittery that ChatGPT engages in, and HOW that crept into the process at each step along the way.
Right? That caught my ear too, the judge knew how this would've happened and was savvy enough to get the line of logic that would have produced these results. They were screwed.
That's a bit of a simplification. A simplification we can make about most people when they speak or write too. If you use Bing you can do very fast legal work and it will give you the references. If the data is not available online, you can use GPT4's API and load your data.
I trust GPT's level of reasoning more than I trust the average Joe.
@@ildarion3367 Average Joe doesn't know anything about anything, be it law, tech, economics, logistics or nuclear power plant design. That's kinda the point of how modern society works: no single person can learn everything there is to know about every topic. That's why we have specialization. You choose a field and over time become proficient with it, while completely disregarding other fields and relying on other people for their specialized knowledge through cooperation. While your claim is probably correct, it's not meaningful. Sure, chatGPT can form a more coherent response to a legal question than me, someone who never had any interactions with legal system in their life, but it still doesn't change the fact that neither of us are specialists in this field. And therefore both of our opinions are equally useless when compared to a real specialist.
@@ildarion3367 Trust based only on charisma & fluent speech is a recipe for disaster.
Seeing this a second time, it's even worse! I was just telling a coworker about this last night and he was blown away that a lawyer did this.
The judge was straight up savage.
I'm not a lawyer, but I used to work with the local government with some quasi-judicial hearings where some appellants would retain lawyers to argue for them. One of the funniest cases I had dealing with lawyers, the lawyer quoted a particular case in a written brief which was old enough that it wasn't in the legal databases and he didn't have the full case to provide for review. I walked down to my local library, grabbed the book with the decision, and actually read the decision. The lawyer was then surprised when I forwarded the scanned copy of the case on to him, and I had to point out that it would appear the quote was out of context, and that the decision actually supported the Crown's position. The appeal was then abandoned shortly thereafter.
Begs the question though: how did he find said case? Also, clearly a number of lawyers are not reading the cases they cite, which is very concerning.
@@jeanmoke1 The original decision was probably cited in a later decision or a secondary source.
That is a legitimate way to do legal research, but, as noted, it is necessary to actually _read_ a decision before citing it.
I did legal research for government lawyers for more than a decade. I would summarize the salient case law and provide excerpts as applicable, but I always attached the full text of the decisions as well. I know that some (but not all) of the lawyers carefully reviewed my work.
@@jeanmoke1 Good lawyers can argue a ruling to make it appear that it supports their client. 🫡
On a list of things that never happened
I’m not a lawyer, but I think a judge’s order that repeats the word “bogus” three times in one sentence in response to your legal filing is probably not good.
One thing I love about legal drama like this is how passive-aggressive everything needs to be as it must be kept professional. A judge isn't gonna erupt on someone but if they make a motion to politely ask what you were thinking, you know you're in one heck of a mess.
@@cat-le1hf Ah yes the trial of Chicago seven.
You should see British parliamentary debates. There are strict rules of conduct which dictate how to address people and forbid, among other things, accusing another MP of lying. Even if they are blatantly speaking utter falsehoods, it's forbidden to accuse them of it, because MPs, being the highest and most honourable of society, are surely above such things and it would be an insult to the institution to so much as suggest the possibility of deception. This has led to a lot of passive-aggressive implications. An MP can't accuse another of intentional lying, so they will instead suggest "The right honourable gentleman appears to be mistaken", giving the most respectful and formal of words while making it clear in their tone that the intended meaning is more "liar liar pants on fire."
As a paralegal, this whole case got under my skin in the worst way. From the unverified citations, to the fact that he didn't know what the Federal Register is, to lying to the judge. If I did even one of the things they did on this case, I would throw myself at the mercy of my boss, because there's no way in hell I would even let him sign something that wasn't perfect, I sure as shit wouldn't file it.
I just cannot imagine the embarrassment. I mean how do you even survive the level of embarrassment from using Chat GPT to write your documents and it getting everything wrong lol
Maybe this Schwartz guy is an imposter?
Best part was about F.3d. It's not a department, it's a book.
"That's not how humans, let alone lawyers, talk."
I love the implication that lawyers may not, in fact, be humans.
It's true. The difference between lawyers and humans is in their blood. Most lawyers' blood is laced with increased intelligence.
Well they aren't lawyers either
That's not how the expression "let alone" works.
@@grmpfit can grammatically work in both scenarios depending on the context
“That’s not how a dog- let alone a person- would react”
I’m actually not fully convinced I’m correct here, but it seems it can be used to contrast subjects as I see it currently. Feel free to set me straight or if I’m right agree 🫡
That's not what that means, but it would be a funny comment if it was.
Props to the judge for keeping calm while asking these clearly mental lawyers for confirmation, and not just bonking them on the head with the case book they didn't know about.
As a judge, you're supposed to bonk them with the gavel.
@@silentdrew7636I guess "throwing the book at them" was never literal, huh?
I was unsure why judges are treated with some kind of reverence in lawyer circles until I've seen/heard some of their interactions and opinions.
They sure are very composed, tactful and professional, yet absolutely brutal when it comes to scathing remarks.
@@Lodinn It feels like the judge was more dumbfounded than anything. I mean, the responses were so idiotic it makes you wonder how he even passed the bar.
@@warlockd Not sure I agree - by the time they've produced these made-up cases using ChatGPT, the damage was already done. Coming clean was probably the least dumb decision overall in that situation.
...granted, the F.3d moment sounds like a really, really bad knowledge gap, but IANAL. The rest didn't particularly stand out to me, they were pretty screwed by then already anyway.
Me, seeing a Legal Eagle video: An analysis of the Trump indictment already?
Me, watching the Legal Eagle video: *Never mind this is so much better.*
He 100% should cover the Trump stuff, but it's nice he sprinkles in these sillier stories between them
I agree. He will get it done he's just taking time to get it right.
I wouldn't be surprised if it's already up on Devin's Nebula. He does say a lot that his videos go up there first and there's a delay before they come to YouTube.
Imagine you had to film this, and you're barely done reviewing the edits when the Trump thing comes out…
Wouldn’t you just have a Spa day, before swimming in the… what’s the German word again?
@@bertilhatt Schadenfreude?
Update: Judge Castel dismissed the case due to the statute of limitations issue and fined LoDuca, Schwartz, and their law firm $5000 each. They're very lucky to have gotten off that lightly.
Wtf. I'd expect disbarment plus a large fine.
Plus mr. Mate suing them for mishandling his case.
Plus investigation into the law firm, how their processes are written and adhered to.
I would expect a reasonable law-firm to have standards of conduct that specify which tools to use for case search, or whatever
They definitely should have gotten slapped much harder, but on the plus side, they can't hide from this, and will never be taken seriously as lawyers ever again.
I think Judge Castel *sees* that their careers are already destroyed, and figures that's enough punishment already.
If an attorney from a wee country in South East Asia already hears the mayhem of their blunders, oh boy...they and their firm are toast.
Tbf the law community is clowning on LoDuca and Schwartz, so it’s safe to say that their careers have been ruined.
Not quite: he wrote an angry letter to the bar (a judge can't actually disbar them; the bar is the one that does that), so while that's all the court is punishing them with, the bar association might suspend them for a few years.
While there were several miscalculations, I think the worst is the different font. I'm no stranger to the copy-paste method when turning in assignments, but for a federal judge, how could you forget Ctrl+Shift+V?
as an engineer, “if your name is on it, you’re responsible for it” is a HUGE concept. there’s a lot of red tape in working for companies who deal with government contracts, and a lot of specific record-keeping programs you have to use. it’s important for process cycle tracking, but if you’re actually on the development/build side, it can seem pretty tedious. typically you need to be trained on these softwares, so it isn’t uncommon for only one or two people on your team to actually have the authorization to use them. instead of training everyone else, typically that person’s name is just put as the RE (responsible engineer) and then they’re the one who has to sign off on it. for my current program, that ends up being me a lot of the time. in most cases, it isn’t a problem to just go in and sign off on something, seeing as there’s an entire team of people who need to approve before it gets to you. but there’s always the chance that everyone in the upline may also have the same perspective, and my failure to thoroughly review a document before signing off could make or break a multimillion dollar defense contract. and even if it wasn’t even my design so any failures weren’t technically my fault, guess what? if my name on it, I’m the one who has to deal with the fallout. the abundance of approvals and review stages may seem overbearing and unnecessary at times, but that’s how we avoid catastrophic engineering disasters like we’ve seen so many times before. those checks and balances are there for a reason, and if your name is on it, you BETTER have taken the time to complete your check !!
Computer engineer here, it is very smart for you to assume that a screw-up could still slip through the cracks because it absolutely can. I know because I was once responsible for one. Back when I was just moved up to lead developer, a software my team developed and tested hard-crashed while demoing it to management. As it turns out, one of the new guys submitted his component of the software he worked on without verifying that it works. Since I was new to leading a dev team, I unfortunately just assumed that he verified it so we went ahead and put it together with the rest of the software and it passed our tests. That component dealt with installing the software, so when we tried to demo it to management on a computer that used a different OS, it wasn't properly installed. I got in A LOT of trouble for this (I got yelled at by everyone in management) because they planned official deadlines after I mentioned in an official document that the software was ready to demonstrate to management when it clearly wasn't, which meant they had to further delay a multimillion-dollar asset. This gave me the worst job-related scare of my life because they said that they had grounds to not just demote me, but to "let me go" (their words) because of the amount of money involved. I assume their superiors expressed to them how "unhappy" they were about the delay. Thankfully, I only got a warning because the problem was fixed quickly, but since then I've been too paranoid to not make sure that every word I write in official documents is 100% confirmed as true without a reasonable doubt. So it blows my mind how these lawyers did every single little thing you could do to do the complete opposite
I think legally it's (usually) the fault of the company rather than the individual. Or at least based on the cases I've heard. The reasoning being that the company processes should've caught it in the first place, and so they're equally liable.
@@supersonic7605 I am assuming, if only because the one lawyer asked if it was lying, that these lawyers didn't understand what a GPT model program is. I think they assumed it was an ACTUAL Artificial Intelligence, aka an Artificial Mind, one that could actually think on its own and not need input to generate any answers.
I think these lawyers, given that none of them did any actual lawyering, thought that GPT could do all of their research because it would collect data from various sources, read it, understand it, and synthesize a legal document for them.
The law firm itself, at the very least, should have terminated these guys, just for the sheer embarrassment. This has certainly cost that law firm millions in revenue. They should also be disbarred for failing to actually act as lawyers. I wonder if the judge actually imposed a sanction on the lawyers as well. Hopefully they have to pay all the legal fees out of pocket for everyone involved and not take any pay.
The best way I have heard ChatGPT described is "ChatGPT knows what a correct answer looks like." At a surface level, it looks like a legitimate answer until you dive into the details in this case.
My understanding is Chatgpt will give you the answer YOU are looking for. That's what it did for these guys.
I would love to have been a fly on the wall in Avianca's lawyers' office when they were first searching for the bogus cases and coming up empty-handed. Did they immediately recognize that it was all bunk, or did they second-guess themselves? How long until they floated the idea that opposing counsel simply made it all up? Did they hesitate to file a response calling the bluff?
I want an interview with those folks!
I honestly wouldn't be surprised if it was actually the judge who realized this first, because the judge would also need to have read those cases to make sure he fully understood the argument being made. Then none of the clerks were able to find any case mentioned by these attorneys, and the judge is probably thinking: one clerk struggling to find a particular case is abnormal, but five clerks struggling to find any case is very unlikely. I wonder if these are even real. And then from there, just going and destroying the careers of these attorneys.
I listened to the podcast this video mentioned, and they were joking about feeling bad for whatever first-year doing the grunt work had to tell a senior partner they couldn't find six cases. That fly on the wall would've been getting an earful.
@@SuperSimputer I want to know what that extra week "being on vacation" would have bought them. It makes me wonder how often they used that excuse on other court cases.
From the discussion on this by Leonard French (another RUclips legal educator), any lawyer reading the citations would very quickly realize they're bogus before even searching them out. Several of the citations don't even match the format used in legal cases, and an experienced lawyer should know this at a glance. The judge would not have needed to be the first one to spot this, and chances are the defense lawyers only searched out the citations to give themselves a better chance of the lawsuit being thrown out and themselves awarded fees and costs. It's hard to imagine them having to do any research into the cited cases before realizing something's screwy.
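To illustrate that "wrong at a glance" point, here's a crude sketch of what a reporter-citation shape check might look like (my own illustration: the reporter abbreviations are a small assumed subset, and this is nowhere near a real Bluebook check):

```python
import re

# Rough shape of a federal reporter citation: volume, reporter, first page,
# e.g. "925 F.3d 1339". The reporter list here is a small assumed subset.
CITATION = re.compile(
    r"^\d{1,4}\s+(?:F\.(?:2d|3d)?|F\. Supp\.(?: 2d)?|U\.S\.)\s+\d{1,5}$"
)

def looks_like_citation(text: str) -> bool:
    """A shape check only -- a string can match and still be a fake case."""
    return bool(CITATION.match(text))

print(looks_like_citation("925 F.3d 1339"))              # True
print(looks_like_citation("Varghese v. China Southern"))  # False
```

Note the docstring's caveat: a citation can have a perfectly valid shape and still point at nothing, which is exactly what happened here. The glance test only catches the citations that are malformed on their face.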
Meanwhile I happen to know that if this serving cart were to be pushed with such a force that it quote "incapacitated him"...the damn cart would have broken before any actual harm was done
Yeah this case would've been a frivolous one even if it had been filed on time.
I love the fact that even some lawyers can't be fussed with reading the Terms of Service for websites. They should have realised this could happen, since the TOS itself states, under Section 3 (Content):
"use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts."
I mean, "we are unreliable" is practically the motto of ChatGPT 3
Lol they don't even need the Terms of Service. ChatGPT itself tells them point-blank that it can be wrong on the main screen!
It is for this reason, and others, that I am reluctant to take any TOS, EULA, or other routine contract seriously unless I am either given a summary of the terms, somewhere, or a reasonable ability to contact the lawyers that drew it up (so I can get clarification). I still tend to read as much as I can of them, particularly if it's a completely new relation, but I'm only one non-lawyer human, and I don't have a team of lawyers to translate for me. Expecting more than my best effort to understand is a little bit unreasonable.
@@6023barath "May occasionally generate incorrect information. May occasionally produce harmful instructions or biased content. Limited knowledge of world and events after 2021". Any one of these should have been enough for them to reconsider using it as a source, but all three? It wasn't correct, it was biased toward their biased questions, and it wasn't up to date.
Even my 5th grader used ChatGPT to help with a presentation, and she spent several hours fact-checking each statement before including it in her PowerPoint
I am a mechanical engineer, and ran into this situation recently. I was trying to use ChatGPT to shorten my initial research into a topic, and it gave me the equations, everything. But since they were sloppy and missing pieces, I asked it to give me the sources for these equations so I could go to the original articles and collect the missing parts. Oh boy, was I in for a big surprise. It just kept apologizing and making up new article titles, authors, even DOIs. It was eye-opening to say the least.
As a fellow ML engineer I am surprised you are relying on the chatbot for anything related to research. It may help shorten and make pre-existing concepts more concise, but it is merely a tool for research, not the spearhead of said research
@@shahmirzahid9551 well, "relying" is a bit misleading of a term. It was a low-priority topic which I was only going to take on if it was feasible to do in a short timeline, and I decided to try out ChatGPT on an "if it works, it works" basis. It didn't work, and I haven't used it for this purpose since
What is a DOI?
ChatGPT is NOT a search engine!! You cannot use it as such
@@Tyrim ah, I see. I did the same when I was studying some calculus theory, but I just engineered a prompt for it to give a detailed explanation of things, and it works like a charm. I too had my doubts, but yeah, I still wouldn't blindly believe everything it said, as it could be outdated or completely wrong
I want to see a follow up to this story. For 14 years, I worked in the IT Department of a prominent law firm. How these attorneys are not disbarred already is beyond me. As with most professions, attorneys are very defensive of their professions, and get upset with people who disgrace them. Rightfully so. I feel the same way when I hear about a dishonest IT person. I have been hired by lawyers to investigate a situation with an unscrupulous network administrator, for example. I was happy to do the work, and delighted to see the person destroyed in civil court.
They will absolutely be disbarred; it's only been two days since the last update on the case.
It would also be interesting to see an analysis of their case history as well...
@@Roccondil Yeah, if I was a judge or legal body, this bombshell would make me want to shine a very bright light on their prior cases, and shine it into every single uncomfortable hole to see if this was not a one-off idiocy but in fact the mistake of palming off the lying to an AI rather than hand-crafting the lies themselves.
Search Mr. Liebowitz, a "copyright attorney" - sometimes it takes an unbelievable amount of wrongdoing to get disbarred
"Ah yes let's screw over some lawyers, that sounds like a great idea"
This is how one of those "He never went to law school but he's practicing law like a pro" TV shows would actually go
I went to check myself what 925 F.3d 1339 actually was; it's a page within a decision by the U.S. Court of Appeals for the D.C. Circuit (the full case actually starts on page 1291) called J.D. v. Azar, one that had to do with the constitutionality of a Trump-era restriction preventing immigrant minors in government detention from obtaining abortion services. It was actually kinda interesting to skim through, if completely irrelevant to airline law.
thank you for looking it up and sharing a quick summary with us! Was curious to see if someone looked it up or not.
It may be relevant when these minors are transported via chartered airlines. Human trafficking itself is a major issue that airlines look out for, so there seems to be relevance.
The fact it's not actually a real case, just a page in a case starting from an earlier page, helps explain why a cursory glance didn't raise the red flags you get when you actually read the page in front of you.
Tbf the biggest surprise to me is that is indeed a valid citation, and not some hilariously out of bounds non-existent thing.
Has anybody offered an explanation of WHY ChatGPT gave the false reference and was so adamant that it was a real source? Could ChatGPT be pulling from a fake law source itself? Did the programmers do this on purpose? I use ChatGPT regularly for work, and while not perfect, it's about 80% accurate in the IT space. So why would it be so far off in the legal space? It has been successfully used in the academic space too, to the point that some teachers and professors can't tell a real paper from a ChatGPT paper.
As an accountant, this video caused me physical pain. This sounds like a literal nightmare anyone in a legal or finance profession could have. I am genuinely surprised neither of these men broke down sobbing on the stand.
Who says they didn't?
I imagine it's not any better when the entire legal community is pointing and saying "Ha ha!"
*shudder* dealing with a client's lousy OCR system is bad enough. I cannot imagine the disaster that would ensue if someone let a generative AI near financial records or reports.
@@TheGreatSquark You will likely see it first in the investment side of things.
@@TheGreatSquark I imagine "the ai made a mistake" could be a nice excuse for fabricating numbers. At least would expect less trouble than "yeah we lied to mislead investors".
I feel like describing language AI models like chatGPT as having "hallucinations" where they "make stuff up sometimes" is far too generous to what they actually do. These chatbots don't know what's true and what's false, they don't actually _know_ anything. They're _always_ making stuff up - guessing what sequence of words is probable in response to any given input - and it's more accurate to say that they get things _right_ sometimes.
Chatbots will confidently lie to you, but actually calling it a "lie" is a mistake, because lying requires knowing you're spreading an untruth, which they simply don't. Because they don't "know" things the way we do. That predictive text output gets to be called "AI" is a huge framing mistake that only makes people misunderstand and anthropomorphise these things.
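A toy sketch of that "guessing what sequence of words is probable" point (the corpus and the whole bigram approach are my own made-up illustration; real models are vastly bigger, but the principle of frequency over truth is the same):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
corpus = ("the court held that the motion is denied . "
          "the court held that the claim is dismissed .").split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def generate(word, n=8):
    """Repeatedly emit the most frequent next word -- no notion of truth,
    only of which continuation was seen most often."""
    out = [word]
    for _ in range(n):
        candidates = follows[out[-1]]
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # "the court held that the court held that the"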
Good point. Spelling out what the GPT actually stands for gives a much clearer picture of what it is and isn’t. But hey, news articles have to get those clicks, and AI news is hot stuff…
"At its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving." IBMs definition for artificial intelligence.
ChatGPT relies on a robust dataset to solve problems, using GPUs. I'd say it's an AI. So I don't think calling it AI is a framing mistake. People just don't know the definition of AI and assume AI means human intelligence produced by a computer. This, it very clearly isn't.
@@lordhodenYep
Exactly! We are all far too ready to cede our intelligence and lives to these mechanized marionettes and most don't have the first clue what they do or how they work.
These robots cannot and should not ever be trusted. They don't understand context, nuance, intent, or even the most basic concepts like "just" or "true". We should all agree not to entertain the notion that these things are anything more than mere tools to be used, and leave them to the scientists and model makers. Not language, poems, art, law, history, etc.
So technically it's the human user that is hallucinating.
Honestly, I do think at least a small part of AI's perceived ability says less about computing and more about psychology. The computer isn't good at making an answer; we're good at interpreting the answer to apply to the situation.
This also reminds me of my work as a medical transcriptionist. When voice recognition programs came out, one of my doctors went to a convention where the software was introduced. He came back and gleefully told me it would put me out of work. (He wasn't malicious. He knew I was knowledgeable about computers and wanted my opinion.) I told him it would never happen because the software required editing by the user, especially in the beginning while the software adapted to the user's accent and use of language. I said doctors either would not or could not take the time for proper proofreading. And they still don't use it.
New software sounds magical until you read the fine print.
I must admit that when the lawyer admitted, under oath, that he lied to the judge about going on vacation, I had to get up and walk around I was so stunned. Lying to a Federal judge? Sheesh! How did that lawyer ever pass the bar?
Because the US system allows pay to win for literally everything
Also, humans can know the information contained in an ethics class and answer questions based around it. Without actually understanding or agreeing with the information.
Passing the bar has nothing to do with practicing law
Passing the bar shows you know how to write a really hard test. That's kind of a separate skillset from learning how to navigate court without angering a judge.
There's always the people who pass at the bottom of their class.
I have some minor sympathy for the lawyer claiming he thought ChatGPT was a search engine, given all the hubbub and publicity about Google and Microsoft introducing so-called "AI search engines" a while ago. But the fact that he simply did not check *any* of the information provided is absolutely mind-boggling. He didn't even understand what the citations meant! It seems likely to me that he's been merrily citing cases without reading them for years, and this is just how he got caught. What a mess.
By the sounds of the description Devin gave, Mr. Schwartz was not a federal lawyer, hence getting Mr. LoDuca to file on his behalf. It is plausible (though given he's apparently practiced law for 30 years, something of a stretch to believe) that he simply wasn't aware of the federal nomenclature.
I have none for those lawyers. They should have checked to see if the cases were real if they couldn’t find what they were looking for in other places. I got a lot of sympathy for the guy who hired these morons though.
@@KayDizzelVids you have sympathies for a guy suing an airline three (!!) years after he got bonked with a serving cart? Really?
@@KindredBrujah Maybe his law practice never really extended much to courts and he was perma-stuck in the ghostwriter position, signing papers for the firm and the like?..
@@Jehty_ Even the dumbest parties deserve proper legal counsel. A better lawyer would have told him not to bother.
I remember the actual Zicherman v. Korean Airlines case, it was 1996 not 2008 like ChatGPT cited. A Korean Airlines flight entered Soviet airspace in 1983 and was shot down killing all 269 on board. It's a poor case to cite even if they'd gotten the citation correct and would have only hurt their case.
Jeez that's rough. Those poor people. Their last few moments must have been spent terrified and angry...
And they were citing that to make an argument about someone's knee injury... 🤦♀
Didn't the Soviet Union dissolve in 1991? Was it still considered Soviet airspace back then?
@@andrewli8900 the shootdown was in 1983; the court case against the airline happened in 1996, which would be why ChatGPT chose to reference it. It was more than 2 years after the event, but the 2-year limit doesn't apply to willful misconduct. That's why it was a terrible case to cite: it didn't apply in the current case and would only have served to further support the airline's position.
I feel like the jury every time I watch your videos. I knew absolutely nothing about law databases, but now I have a basic understanding.
Takes me back to my only jury duty service.
After working for the Sacramento County Superior Court of California, it's crazy that attorneys would try to lie to a Judge. Judges are like gods of their court. NEVER mess with them. They're smart enough to figure it out. They started out as attorneys themselves. I got this from nine months of working as an IT specialist for the Court. Judges can be very nice people, but don't try to mess with them. They are not amused by legal shenanigans.
I even overheard one Judge in chambers who was speaking with a woman suing due to being injured in a car crash. He actually went out of his way to tell her that "he didn't want to speak ill of her attorneys, but it seems to me that your settlement should be far higher based on the photographs of your injuries. This is not legal advice, so if I were you, I'd consider making sure your attorneys have these pictures and are taking them into consideration." Okay, I'm paraphrasing, but he was oh-so-slyly suggesting that this woman get better lawyers. He was also one of the smartest, no-nonsense Judges I'd ever met. And he didn't suffer fools gladly. But the fact that he went out of his way to help this woman was incredibly good of him. Considering how short he could be, for example, when his computer wasn't working the way he expected, I was surprised to find out how generous and gentle he was with helping plaintiffs out.
It sounds unethical to me that the judge offered such "not legal advice".
@Jack You’re absolutely right. I wouldn’t say it’s “usual” at all for judges to be attorneys first. On the other hand, he was a federal appointment.
Upon wiki-ing him, he did practice privately in NYC for 26 years.
This is what I'm most concerned about with our judicial system given the political climate and the way judges were selected in the last administration. Judges are human and fallible, yes, but generally speaking the system has honed itself so that most judges are like vigilant guards watching over those symbolic scales. Sometimes it's out of personal interest that they are VERY not okay with someone/a group tipping those scales whether through bias, incompetence, ideology, etc and sometimes it's genuinely caring and taking their role in democracy seriously but whatever the motivation it plays a critical part in our lives.
Hoping that at least now many more people recognize how important this branch of government is
The first rule of practicing law is “don’t piss off the judge that is hearing your case.”
@@cparks1000000
If it's unethical to tell someone they deserve more money for their injuries than what their hack lawyers are trying to get them, then I don't want an ethical judge who will let me get screwed over.
This case will be cited in every Law School from now until the Terminators rise to annihilate us.
As it should be.
So at least a year or two then
Tbf, the chat bots are not sentient or even have signs of it.
Oh, it's funnier than that! I'm in the education field, and there's talk of using this case as Exhibit A for doing your own research and actually reading/citing your sources properly, lest you possibly lose your job.
@@writer4life724 I honestly cannot think of a better example to show why you shouldn't leave your homework to AI!
Lol…as a medical student, the amount of confidence ChatGPT has when explaining disease pathologies that are completely wrong is concerning. It does a good job of coming up with an answer that sounds right but isn't.
That's because, technically, it is. There's a reason that most actual AI researchers call it a Language Model and not an AI, because that's all it is.
It knows the language of law books, or the language of medical opinions. It does not have the facts, let alone up-to-date ones.
Yes actually, blaming that is completely fine, because the simple fact that it is capable of lying to you with confidence means that its work is literally useless. In your own last example, the scope of what you're suggesting it should be used for doesn't even make sense. You would get the AI to do 5 minutes worth of work, just so that you can spend 50 minutes fact checking it. Do it right the first time instead lmao@@A-wy5zm
Someone on the Gardening subreddit recently used ChatGPT to try and answer someone's question about pet-friendly plants and was SO CONFIDENTLY WRONG the mods actually had to step in, because the advice from ChatGPT could've literally killed this guy's pets. I had to go on a rant about Language Model hallucinations and the demonstrably failing accuracy of the output from these systems.
It's really validating when the mods leave your factually correct, even if angry and spitefully written, comments and delete the moron's 😅
In essence, chatGPT is your drunk uncle. It hears half of what you said, and spins off a long story based on something a friend told it 20 years ago with "facts" sprinkled in to support the argument it wants to make
It's become a bit of a meme in the crochet community to ask chat gpt to write a pattern (usually a plushie because they're small and quick) and laugh at the mess it produces. It only looks like a pattern if you've never seen a pattern before and think crochet is done with needles.
I’m an in house attorney at a midsized tech company. I have people regularly sending me documents to review that they “ran through chat gpt and think it’s fine”
My boss always jokes that chat gpt is going to put us out of a job. He only does that because he doesn’t see the emails people send me
Superintendent Chalmers: “Six cases, none found on Google, at this time of year, in this part of the country, localized entirely within your court filings.”
Principal Skinner: “Yes.”
Superintendent Chalmers: “May I see them?”
Principal Skinner: “…no.”
Thanks for the laugh!
i could hear their voices
Seymour! Your career as a lawyer's on fire!
@@ThePkmnYPerson No, Mother! That's just the Northern Lights!
Well, LoDuca, I'll be overseeing this case _despite_ the statute of limitations.
Ah! Judge Castel, welcome! I hope you're prepared for an unforgettable docket!
Egh.
(Opens up Fastcase to find legal citations only to find the subscription has expired)
Oh, egads! My case is ruined! ... But what if... I were to use ChatGPT and disguise it as my own filing? Hohohohoho, delightfully devilish, LoDuca.
I think the most eye opening thing in this whole video, is discovering that the book shelves are actually real, and not just a green screen lol
Ong
same
I didn't even notice. I just assumed he had it as a prop ready for this moment.
Came here hoping to see that I wasn't the only one who thought this!
Same 😂
As a retired writing teacher, I took a great interest in this case because this is EXACTLY the sort of BS I had to put up with when it came to lazy students. And when after 17:00 the Schwartz affidavit admitted that the work was done in consultation with ChatGPT, I thought, Lord have mercy, did they really think ChatGPT could do their research for them?! Mind. Totally. Boggled.
Hahaha faking citations in high school papers, I had that shit down to a SCIENCE. Kids these days, they can just ask their fancy robot to lie for them. When I was their age, I had to walk uphill both ways to come up with believable lies!
@@anarchy_79 hey boomer stop calling us out, I had my chatgpt do my homework just fine, copy paste here and there and boom the hours long homework was done under 30 minutes, the future is now old man
@@rafflesiadeathcscent3507 good luck keeping up with that lol
@@commandrogyne nah bro, it does work. I don't use it for homework cuz that's easy, but for some presentations I ask it to create a sample that I then edit into my own style. Basically not wasting a lot of time just researching some facts.
As a librarian in training, there is so much access to law databases in public, academic, and law libraries. The idea of not being able to find a case (A CASE YOU CITED SO YOU SHOULD HAVE BEEN ABLE TO FIND IT IN THE FIRST PLACE) is so stupid and so suspicious.
Remember folks, what ChatGPT can and can't do is literally in its name: it's a CHAT bot.
All it does is... keep up a conversation it thinks you want to have. That's where it starts and ends. It makes up the "facts" that you want to know because it doesn't really know anything, feeding it the "knowledge" just tells it how these facts are linguistically structured, so it can create a text that RESEMBLES what you are looking for to keep up the conversation.
In a basic sense, it is trained on how a correct response “should sound”. It doesn’t comprehend language and information like we do, it doesn’t have an abstract understanding as to why those documents it’s trained on are structured like that like we do, it just knows that they are, and frames a response accordingly. That’s why, as he said in a previous video on AI lawyering, GPT is known for “eloquent bs”. It sounds right, but it doesn’t have the ability to understand “this sounds right because it contains factual information”
It's basically a slightly more coherent version of repeatedly tapping the next word from autocorrect and using Grammarly to check for mistakes.
Nothing of substance will be said, and it will fall apart the longer it goes on.
LITERALLY. the creators' (wildly dishonest) marketing hype didn't help, but I'm still amazed that people apparently just need to see a 'style' or 'sound' of typing to immediately think "wow, this thing must be factually correct". Bro
It seems to me that the use of the term AI is too loose, when applied to these types of program, at least to me as a layman. AI implies that there is some sort of reasoning going on, whereas in fact it is just language modelling.
It does do programming though. And programming syntax is an exact business.
With the right question and understanding of its limitations it can do some excellent work for you.
You have to verify everything, but even then it can save a lot of time.
Basic boilerplate, examples in how to use a new library.
It's a great tool, but if you are a poor programmer, it won't make you a great programmer. If you don't understand what it gives you, you are likely to fail just like these lawyers.
I can only imagine the shock, laughter and amazement in the offices of the defending lawyer and at the judge’s office. Laughter and also a portion of anger.
I can't imagine the faces of the defending lawyers after they actually realized wtf just happened. Before that they must've been confused to hell and back again.
I would've paid to see that ass whooping in the court.
They were popping open champagne realising the case was gonna be thrown out in no time
Never knew how easy it was to pull all federal court cases in their entirety. I guess that space librarian was right when she said "If it's not in the archives, it doesn't exist."
And then these bozos suggest that the archives are incomplete. What is this, some sort of space opera prequel?
Well, except that the context of that scene was that the existence of a planet (which is what was being searched for) HAD been intentionally removed from their database as part of an intergalactic conspiracy. So, despite not being in their databases, Kamino DID exist.
If it wasn't, then it would be impossible to defend yourself in court which would be a gross violation of our rights. Granted, you really need a lawyer to do it for you, but it is at least theoretically possible.
madame jocasta nu
I thought it did exist but it was removed from the archives making it appear to never have existed. I could be wrong I forget things sometimes.
25:38 when you reach behind you, it BLEW MY MIND. I thought it was a green screen for the LONGEST time. 🤣
I love the term "unprecedented circumstance" at 14:46. It sounds very professional, but has a very clear hint, in this context, at how utterly insane the judge must think the plaintiff is for citing something he couldn't have read.
Oh my god, by the end of that trainwreck the judge must have been utterly BAFFLED at how this whole thing went. He was beyond furious. That court transcript was rough.
It's such a powerful phrase
In the end they actually got off pretty easy. They were fined $5,000 which is much lighter than it could have been. But the judge would absolutely be pissed because it wasn’t just one small issue that was quickly corrected and noted as being an error. Before the case was even brought the attorneys should have done research regarding the SOL issue and at minimum had an argument for it. But they didn’t and these fake cases were brought only after the opposition noted the SOL had run. But the cases being fake was brought up months before it even got to the judge by the defense team and the plaintiffs kept their ground on it. The judge was less pissed about the cases at first as much as he was pissed that it continued on for months and that so many steps to prevent this were ignored. But even so through all of it they still weren’t punished too badly (as of now). Will be interesting to see if the state bar steps in and what they do if anything. The malpractice and incompetence of the cases at the start was an issue but not immediately correcting it and carrying out the ruse for a bit is more of an issue.
Not particularly. It literally means unprecedented, i.e. there is no legal precedent, because this is the first time this particular legal problem has been encountered in a court.
Precedent is what oils the courts. When there isn’t any is when courtrooms get exciting.
@@rilasolo113 How is there no precedent for making stuff up tho? They can't be the first people who submitted documents filled with nonsense, even if there was no ChatGPT before.
My dad was a litigator. He stopped being a litigator in the mid 90’s. I was able to find one of his cases from the mid 80’s entirely by accident using a basic Google search of his name once. Wow, these lawyers are stupid.
It's not just law. When "discussing" scientific issues, ChatGPT creates references to scientific papers and books which do not exist.
As a current engineering student, I feel like I'm going insane seeing so many other students rely so blindly on this stupid thing, it's gonna produce so many morons
Chat GPT doesn't actually "know" anything, it just produces things that sound realistic.
The language model has a concept of what realistic sounds like based on its input data; it has no concept of what is real or how reality works.
It is a very good parrot with no internal understanding of what it says.
@@EWSwot Yep... In a sense it's like those "How English sounds to non-English speakers" videos: It _sounds like_ it's answering the prompt - but that's all. Which may sometimes overlap with a sensible response, or other times make no sense at all.
As someone who was following ChatGPT's development, witnessing its sudden arrival into public consciousness has been... what's that word for secondhand embarrassment?
People keep trying to get a clever program to do things it was never designed to do, couldn't do even if it were programmed to, and that would be questionably legal if it could. Seriously, if AI still struggles with how many fingers humans have, how do you expect it to understand legal issues?
Yeah it is excellent at metafiction
I was so pleasantly surprised to find 80,000 Hours sponsoring this channel! It's a great resource, all free, and I have genuinely been telling my fellow young and lost graduates to get on it.
Chat GPT: great for generating plot ideas for my 9 yr. old's D&D games.
Chat GPT: not great for actual legal court cases.
I love that we have legal documents with the term “bogus” in them
It's not as uncommon as you might think.
Legal Gibberish was the new low.
"We would like it entered into the record that we're straight up not having a good time, your honor"
Not the worst I’ve seen. Not even close.
Why not? It was a legit term long before it got adopted as slang.
When he reached back and grabbed a book, I gasped. I always assumed the background was a green screen. I’m sorry for selling you short, Devin! Your content is great!
1000% same.
It looked too good to be a green screen
i cant believe those are real books XD
It actually is still a green screen, but he had the book available within arm's reach. You can actually tell by how he reaches the book, and how the book is angled when he pulls it out, as well as the off lighting from the background compared to his face.
@@temi19 I don't think so, but I haven't watched the part where he takes the book because I just skimmed through. Do you have the timestamp?
Your description of ChatGPT is so concise and correct. Oftentimes, brevity requires deeper knowledge than detail, because you need to separate the important from the unimportant. It's good.
Always read the caselaw cited at you in a brief, my friends. Many times when responding to motions, I discovered the authority being cited at me said the exact opposite of what opposing counsel was using it for. Nothing is more satisfying than going into a hearing and throwing opposing counsel's caselaw back at them.
Hear, hear. Not only opposing counsel; district attorneys are often guilty of this too. (Even some judges, but you did not hear this from me.)
Finally something that I actually can talk about because I’m fascinated by the topic: people saying ai lies.
I still don’t really believe in calling it lying because like. It’s a language model. The computer literally has no idea what it’s saying.
Basically take the thought experiment “The Chinese Room” for example. A person is trapped in a room with books in Chinese and is told to write appropriate responses to the slips of paper slid under their door. This person doesn’t speak or write Chinese, but all the slips of paper are written in Chinese. So they look for those symbols in their books and write the responses they see.
But obviously they don’t know what they’re saying. And the only way the people outside would know they’re not fluent in Chinese is by knowing what is going on inside the room or seeing that their responses are odd.
Chatgpt and other bots are the person inside the room, albeit they go through their books much quicker and will make up new sentences based on all the data they have. But they just. Don’t know what they’re saying. So it feels wrong to call it lying. If I meowed at my cat and he thought that meant I was about to feed him when I wasn’t, it’s not really lying because I didn’t know what I said. It’s on the shoulders of the consumer to understand that the program has no way to differentiate fact from fiction.
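The Chinese Room setup described above can be sketched in a few lines of Python. The "rule book" entries and messages here are made-up stand-ins; the point is only that fluent-looking replies require zero understanding on the part of whatever produces them:

```python
# A toy "Chinese Room": canned replies are looked up by symbol matching,
# with no comprehension on the part of the code doing the matching.
# (The rule book below is a hypothetical stand-in, not real dialogue data.)

RULE_BOOK = {
    "你好吗": "我很好",          # "How are you?" -> "I'm fine"
    "你叫什么名字": "我叫小明",  # "What's your name?" -> "I'm called Xiaoming"
}

def room_reply(slip: str) -> str:
    """Match the incoming slip against the rule book; no understanding involved."""
    return RULE_BOOK.get(slip, "对不起")  # fall back to "sorry"

print(room_reply("你好吗"))  # looks fluent, but it's pure symbol lookup
```

From outside the room the replies look fluent; inside, the "speaker" is a dictionary lookup.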
There is also a chapter in Asimov's _I, Robot_ with a robot who interpreted "you must not harm humans" as also meaning to not cause emotional harm. So it lied. It lied with the best of intentions, because it didn't want to break a human's heart. But of course, the lies it told led to much worse problems.
Never heard of that thought experiment; pretty neat. It seems like a good way to explain AI to people who are struggling to grasp it.
If you ask ChatGPT if an Arkansas governor has ever been elected as president, it will say no. If you then immediately follow that up by asking it "Where is Bill Clinton from?" It will include the statement that Bill Clinton was governor of Arkansas before being elected president.
ChatGPT doesn’t like Arkansas. Well-known fact. ;)
I checked it and basic GPT-3.5 model is incorrect, indeed. However, GPT-4 correctly answered 'Bill Clinton'.
Tinfoil hat: ChatGPT (or other AI) has learned humans need to be the "smartest in the room" and it purposefully 'hallucinates' as self preservation.
It's almost like it's following a probabilistic model of which words are more likely to follow other words without any contextual understanding of those words or general knowledge about the reality those words describe.
This is the problem with applying the term "artificial intelligence" to ChatGPT, or any large language model. "Intelligence," to most people, generally implies the ability to reason, but LLMs have _no_ ability to reason whatsoever, and no understanding of what they are writing. They simply look at the probabilities of words appearing after other words and generate new text based on those probabilities. (This is why it generates so many "fake" references; it's got no idea what a reference even is; it just generates text that looks like a reference. I've seen this with URLs as well.)
In essence, ChatGPT is a great bullshitter, and the "improvements" made from, e.g., ChatGPT 3 to ChatGPT 4 make it a better (i.e., more convincing) bullshitter without changing at all that it still does not reason or understand anything. It's being mis-sold as "intelligence," and that's going to lead to a lot more problems like this one.
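The "probabilities of words appearing after other words" idea above can be sketched as a toy bigram model. The tiny corpus here is invented for illustration, and real LLMs are vastly more sophisticated, but the core move is the same: sample the next word from counted frequencies, with no notion of truth anywhere in the process.

```python
# Toy bigram "language model": count which word follows which, then sample
# the next word in proportion to those counts. Nothing here models reality.
import random
from collections import Counter, defaultdict

corpus = "see spot run see spot jump see jane run".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # e.g. "see" -> {"spot": 2, "jane": 1}

def next_word(prev: str) -> str:
    """Sample a plausible next word; 'plausible' means 'frequent', not 'true'."""
    counts = follows[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(next_word("see"))  # usually "spot", sometimes "jane"
```

Scale the corpus up to most of the internet and the output becomes convincing prose, but "convincing" is still all it optimizes for.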
Listening to The Judge just absolutely grilling the lawyers is possibly the most funny thing I've ever heard.
"You told me that ChatGPT supplemented your research, but what was it supplementing?"
STOP, THAT LAWYER NEEDS 5TH DEGREE BURN CREAM
And definitely some oxygen supplement... he'll have a heart attack or pass out foaming at the mouth.
I don't think I've ever seen Devin this apoplectic. Not only is he ashamed for these clowns, he's visibly angry that they tried this crap.
Wait a couple of days so he recovers from the latest Trump thing…
Partly because this is the thin edge of the wedge. Chat GPT will be used more often to "improve" writing and these fake references will become common.
It's not just Fremdschämen, it's that this makes the legal profession look stupid 🤣
As he should be, for a whole ton of reasons.
@@ianb9028 Easy enough to automate basic verification of references, though: a program parses the set of references, queries a law database for them, and reports which ones it couldn't find. A human then goes looking for those references to see if they exist.
I mean, this wouldn't be good enough for verifying your own references, but it's absolutely good enough to catch most fakes from opposing counsel. And I can't see judges ever deciding that fake references are acceptable.
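A minimal sketch of that check, assuming a citation-shaped regex and a set standing in for a real case-law database query. The pattern and case names here are illustrative, not a production reporter-citation parser:

```python
# Flag citation-shaped strings in a brief that a database lookup can't confirm.
# KNOWN_CASES stands in for querying a real legal database; CITATION_RE is a
# rough, hypothetical pattern, not a full reporter-citation grammar.
import re

KNOWN_CASES = {
    "Zicherman v. Korean Air Lines Co., 516 U.S. 217",  # a real case
}

CITATION_RE = re.compile(r"[A-Z][\w.'&\- ]+ v\. [\w.'&\- ]+, \d+ [A-Za-z.0-9 ]+ \d+")

def flag_unverified(brief_text: str) -> list[str]:
    """Return citation-shaped strings the database lookup cannot confirm."""
    return [c for c in CITATION_RE.findall(brief_text) if c not in KNOWN_CASES]

brief = ("see Zicherman v. Korean Air Lines Co., 516 U.S. 217; "
         "see also Varghese v. China Southern Airlines, 925 F.3d 1339.")
print(flag_unverified(brief))  # only the fabricated Varghese citation is flagged
```

As the comment says, a human still has to check the flagged entries; the automation only narrows the search.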
This is a great example! It went wrong in so many ways. We also did a video where we experimented with how it would handle doing legal research and tested it on some scenarios related to autonomous weapons and international humanitarian law. The problems with its legal reasoning turned up really quickly as well!
THE NOTARY BEING FAKED HITS SO HARD FOR ME.
As a Notary, I know how delicate court documents are and the fact that the Date was mismatched?!?! WHATT
The document was notarized 3 months before it was written😂😂😂
@@pierrecurie Not what I use my time machine for, but everybody's different.
Yeah. No horribly silly time machine usage shaming, please.
So was the notary "faked", or, given how incompetent these guys are, did it just have the wrong month by the signature?
@@andrewshandle I'm guessing incompetence, but given that these are official seals, the penalties are likely to be rather significant.
I work at an academic press and received an email from a PhD student who couldn’t find a book we supposedly published that they cited in a paper.
It turns out the book was made up by ChatGPT and the student ended up facing a disciplinary board for academic dishonesty…
Well, he was. He cited a source he had no knowledge of. If you don't know it, don't cite it. Academia has become such a game of finding other people who agree with you that there are plenty of dishonest books that will say whatever you want; you don't need a robot if you're willing to do the legwork. A decade ago there was a whole ring in India that just worked at creating false consensus in order to keep the grants coming in.
That's so stupid. If I want a language-model AI to be a tool in my work, I can copy-paste a section of a book and have it give me an excerpt or answer a specific question in accordance with the text of that section.
The entire logical line is still mine; the language AI is basically just paraphrasing text.
@@fachriranu1041 Nobody said you can't use AI as a tool. The student didn't verify the accuracy of the output. That's a different thing entirely.
@@MushookieMan That's exactly what I mean. The PhD student was so stupid to use an AI language model that way.
@@MushookieMan ChatGPT is the new Wikipedia trap. You can use it as a tool, but there's a reason you can't use it as a source.
I am a lecturer in computer science at a British university, and it is frightening how many of my students think they can just use ChatGPT to write their assignments. One of them even asked us how they should cite 'AI' using Harvard referencing. (I'd also like to point out that of the papers or sections of papers we've flagged for AI, none got higher than a 3rd and several actually failed.)
I'll say it loud for the people at the back: it's a chat bot! It makes sentences that *look* right, and for common knowledge it probably is right, because we'd spot it if it said "dogs have wings" or "the sun is made of camembert". It's like watching sci-fi and taking it as accurate physics: it's written to sound plausible to a layperson, that's it.
Exactly. The number of people who think it's a research tool and not a language model is astounding. Its ONLY job is to make a sentence that looks "right"; accuracy doesn't come into it.
For personal use I wanted a recipe from ChatGPT, and ended up finding it so interesting I asked for a source. It straight up fabricated one: fake website, book, page number, everything. When I asked for clarification after not being able to find it, the bot basically said "yeah, it's not real, sorry lol."
As someone who usually tries to go to the original source when one is cited: the more fake citations that get through these papers, whether personal or academic, the more of a nightmare it's going to be sifting through all the junk a word generator has tricked people into citing. Artificial "intelligence" is a farce sometimes. Really enjoyed your comment, hope you're well.
Unrelated, but shout-out to that time a chatbot suggested you can eat a poisonous plant and gave tips on recipes
It's literally just better cleverbot. Idk why people treat it like a shortcut for assignments lol
It's MegaHAL (a chat bot from the late '90s whose name parodies HAL from 2001: A Space Odyssey) but with better coding. What's old is new again.
I know people who use it to help them revise and find mistakes in their novels and such.
Also, I know someone who used it to help them write their bibliography page. They tracked down all the appropriate information, fed it into ChatGPT, and asked it to put the information into the proper format. Then they read it over to verify that it had in fact done its job correctly.
13:10 Just reading the fake cases is enough to leave me busting a gut with laughter.
"Miller v. United Airlines" claims that United filed for bankruptcy in 1992 after the United Airlines Flight 585 crash, and had a former U.S. Attorney General as their legal counsel.
"Martinez v. Delta Air Lines" has too many logical fallacies.
"Petersen v. Iran Air" somehow confuses Washington DC with Washington State.
"Durden vs. KLM Royal Dutch Airlines" cites itself as a precedent.
"Varghese vs. China Southern Airlines" starts off as the wrongful death suit of Susan Varghese, personal representative of the estate of George Scaria Varghese (deceased). But then abruptly turns into Anish Varghese's lawsuit for breach of contract.
24:26 The thing that really gets me here is, if you read through ChatGPT’s responses thoroughly, not only does it say that it doesn’t have access to current legal precedent, it encourages the “user” to consult legal databases, do their own legal research and consult with an attorney for proper legal analysis and guidance… I’m not a lawyer, but I think I would have taken that as a hint.
As a programmer, all I can say is: most people still don't realize how stupid these AIs are; they just sound smart because they are confident. For a basic task, general knowledge, or maybe a bit of trivia, you could use an AI like ChatGPT, but for anything more complex it's usually just spouting BS. I learned this from my experience using AI to help me code.
People are easily deceived by other humans who sound smart because they're confident. Add in the (completely wrong but generally held) perceptions that computers always tell the truth and are unbiased...
I've given ChatGPT a simple substitution sum and it gave the wrong answer, used the wrong formula, and tried to gaslight me about why it was correct and I was wrong.
Yep, I realized this when I tried to test it against a router that couldn't talk to its neighbour because it wasn't advertising itself in OSPF; ChatGPT was spouting complete nonsense and was comically wrong at times.
It's funny how many times I have to tell ChatGPT to re-examine what it just said and check whether it actually answered the question I asked, lol.

I find it a good study tool, though. I copy and paste my study notes into it and tell it to ask me five/ten/twenty questions based on the information given; it's very good at that kind of thing. I'll also give it a topic I'm interested in and tell it to suggest a couple of websites that cover that topic in detail, though I ignore any link it gives because they're normally wrong.

You have to know how to use it and understand its limitations. For example, don't ask it for the code for anything unless it's very, very basic. What it can do is examine code and explain why something isn't working; it's not always right, but most times it is. It's also very good for language learning. I have used it to explain the grammar of a sentence I was struggling with.
@@livelovelife32 Yes, I agree it's surprisingly helpful with learning a language, although it has given me contradictory answers; I had to ask it to clarify which one was actually correct. Still, it is pretty good at it for an AI.
No matter how much of a fraud you feel when doing a task, always remember there’s someone out there doing something they have no clue about with confidence that can only come from ignorance.
I always liked this one: if you ever feel incompetent just remember that there's a country out there that has gone to war with birds... and lost
Although to be fair, those birds are like tanks lmao
@@NeuroNinjaAlexander We wont mention the name of the country (Oz-trail-ya) so as to not embarrass an otherwise good ally.
You are right. I shouldn't feel bad for being a fraud. There are bigger fraudsters out there, so I'm technically on the moral side of life.
@@anarchy_79 That's the spirit! Lol
There's a song out there (pretty fly for a white guy) with a line I love: he may not have style, but everything he lacks he makes up with denial
As a history student who had to read court documents for an entire semester, I could recognize a discrepancy a mile away (and I have bad eyesight).
For a bit of fun, I asked ChatGPT to write a legal judgment against these attorneys, and it was just as scathing as the real judge!
ChatGPT with the Brutus move there.
You should share the output, that would be funny to read.
The irony.
IN THE UNITED STATES DISTRICT COURT
FOR THE _________ DISTRICT OF _________
Plaintiff,
v. CASE NO. _________
Defendant.
SUPPLEMENTAL ORDER DENYING MOTION TO DISMISS AND IMPOSING ADDITIONAL SANCTIONS
This Court previously issued an Order Denying Motion to Dismiss and Imposing Sanctions against the Defendant's legal counsel, Attorney X, for submitting a Motion to Dismiss that was largely generated by an artificial intelligence program and contained numerous inaccuracies and fictitious citations.
It has come to the attention of this Court that when challenged on the authenticity of the cited cases, Attorney X further utilized the AI program, ChatGPT, to fabricate case notes, thereby attempting to legitimize the spurious citations.
This represents an additional and egregious violation of the Model Rules of Professional Conduct. Rule 3.4(b) prohibits a lawyer from falsifying evidence, a principle that also applies to the manufacturing of false case notes. Rule 8.4(c) explicitly states that it is professional misconduct for a lawyer to engage in conduct involving dishonesty, fraud, deceit, or misrepresentation.
These actions are deeply troubling, as they demonstrate a continued pattern of unethical behavior and dishonesty on the part of Attorney X, further eroding the integrity of the judicial process and the trust placed in legal professionals.
Consequently, this Court deems it necessary to impose additional sanctions upon Attorney X:
1. The Court refers this matter to the appropriate Disciplinary Committee for a thorough investigation into Attorney X's professional conduct. Depending on the findings, further disciplinary action, including possible disbarment, may be warranted.
2. Attorney X is required to notify his client of these proceedings in writing, and provide the client with an opportunity to seek alternative legal counsel if so desired.
3. Attorney X shall pay an additional fine of $______ to the Court to further compensate for the increased legal expenses incurred as a result of his conduct.
4. A copy of this Order shall be placed on Attorney X's professional record and will be considered in any future proceedings involving potential breaches of professional conduct.
This Court reiterates that such conduct is unacceptable and will not be tolerated. Attorneys are expected to uphold the highest standards of professionalism, ethics, and integrity at all times.
SO ORDERED this 11th day of June, 2023.
What I like most about it is that it used the same terminology as the real judge in summoning the attorney to justify why they shouldn't be sanctioned, and that (having cross-referenced them) all of the citations to the Model Rules seem to be on point, although IANAL. I would love to hear Devin's opinion on this judgment!
Anyone who's dumb enough to use ChatGPT to completely do their work, especially something as critical as law, doesn't deserve to be in that position. As an accountant, I've been encouraging my coworkers to use the AI for things like drafting emails, writing excel formulas and VBA scripts, etc... rote things. However, I VERY specifically emphasize that it is a tool to add to your arsenal, NOT a replacement. You always have to test or verify the info it gives you.
I tell non-tech people that using ChatGPT is worse than using an enthusiastic unpaid Teenage Intern.
Would you let such a Teenager handle all your writing without review? Would you trust their "research", without a review?
Why are you doing those things with ChatGPT? And not reviewing its output.
At least with a Teenager they can actually learn, and explain their work.
ChatGPT doesn't understand anything, and is an Authoritative Idiot (AI).
As a software engineer, the most I've used ChatGPT for is coming up with prompts for backgrounds for NPCs in my D&D campaign. I let it plant the seed, then I warp it to fit my story and my creativity. Even I do more due diligence than these lawyers for my fantasy campaign...
@@preo720 Oh yeah, it's great at rubber ducking with creative ideas.
Oh yeah. Whenever I use it to write something, I only give it very specific facts and check it over afterwards! NEVER trust it to be completely correct if you give it free rein to do whatever it wants.
Yeah. I always check whether the things ChatGPT puts out are BS or not. It's a very useful tool, but you have to actually verify that what it's talking about is real and factual. Oftentimes, it spouts fictional nonsense.
People see GPT talking like a real person and immediately believe that it is just as good, if not better than a human when, in reality, it's just really really good at putting words together that sound convincing.
That sounds like a lot of real people I know, actually.
@@thork6974ChatGPT is just like us...
It's stupid
It's better at putting words together than 75% of the currently living human population. The other 25% are dead.
I’m a Canadian lawyer. We have CanLii, which makes it comically easy to search for cases, but people still do it. There were a few articling students who got “disbarred” using AI to cheat on their provincial bar exam equivalents.