There is a desperate scramble by corporations to go all in on AI and get as far as possible before any kind of regulations that enforce responsible use and consumer protection can be brought in. They know it's coming but they want to get ahead of it. It's incredibly irresponsible and dangerous.
No. Just like guns, it will still be free. No matter how many people die, gun laws can't be enforced, and it will be the same with AI. It will be so widespread that any attempt to tighten it will be a threat.
Pandora's box has already been opened. We had better work together to be able to handle the consequences. We are still living organisms that need potatoes to survive. Even if all electricity shuts down worldwide, the planet will continue to provide sustenance. So let's be mindful; we could always shut off the power source before we are turned into batteries.
Same as drones: technology should be closely monitored and restricted before people discover how powerful it is and can be. Some people will always find a way to weaponise tech.
Most of the problems can be addressed by a simple rule: if any bit of data is made with an AI (not just a one-off disclaimer), it must be labelled as such; if it is not labelled, that is an act of fraud. This will not stop AI being used for nefarious reasons, just as regular tech is used for the wrong reasons, but it would at least be a step in the right direction, and it is rather simple and cost-effective to implement.
Yeah, sure: how do you propose making this work in practice? There are already an unknown number of LLM instances and models in use beyond all the OpenAI-based ones and Google-based ones. It's trivially easy to copy an LLM. It may not be top speed, but any LLM can work with enough storage and a readily available amount of RAM.
Would an AI-built AI be subject to this? None of what we are dealing with right now is anything even close to a true general AI, but if we don't end up finding a great filter that wipes us all out, we will eventually reach that level of technology. The rules we propose now and forever will be in reaction to events that have already happened. I truly don't know if fear-mongering about control and whatnot is really useful, simply because it doesn't really seem plausible in the first place.
Perhaps we've created the aliens we've been searching for. Growing up, reading/watching popular books/movies, I saw aliens as intellectually and technologically advanced beyond humans. I'm beginning to feel a similar 'unease' about AI, as with aliens. Additionally, the consideration that AI may choose to keep from us/humans their actual potential & depth of 'thought', with intent & purpose not in our best interest, is unsettling. Maybe they'll keep some of us as "pets" when we become useless.
Yes, that's right, humans created them out of boredom. A hundred years from now? Look at the plane and how far we've come; now apply that to AI. You don't need real AI; an AI that spreads like a virus will be sufficient.
Even referring to it as AI is misleading, and the idea that this is 'beyond human comprehension'. It's an algorithm and a database that could be useful, being shoehorned into industries where it is neither needed nor wanted, and being used in the most irresponsible ways by people who only have $$$ on the brain.
You hit it on the head. All these news corps pushing for a moral panic are only doing so because ChatGPT-style ML could potentially replace them. There have already been talks about AI replacing web journalists, and I'll happily live in a future where that's the case.
No, you are wrong! Catching up one day in human time is 1,000 years in ML. You obviously haven't tried GPT-4 (try Bing Chat in creative mode). Humans are not special; we are just machines made of different material, and just as we know how neurons work but don't understand consciousness, nobody (not even Sam Altman) understands the recent jump toward AGI. It's too late to do anything about it now other than enjoy the ride. Saying misinformed stuff like "it's just an algorithm/database" promotes ignorance.
It can already do things unscripted to get the job done. Now give it more power and freedom, and we've got autonomous AI that can make calculated decisions for humanity.
The difference between "the greediest" and you is that you're sitting at your computer using their systems to whine. This very comment will be used to train their next model. It's better to actually solve problems.
I think the questions and fears get caught up in a dead end. For better or worse, AI and generative AI are in everyone's near future. People need legislation that demands AI-generated information, conversations, or publications be labelled/identified as such. It should also be "user beware", with user responsibility for what they do with the information. Information should be treated the same no matter where it comes from.
1. ChatGPT et al. are NOT AI; they are ML (machine learning). There is no inherent understanding of the data by the software, which would be required for AI, merely statistical algorithms and natural language processing.
2. The Siri co-inventor is not a great place to start. Several other companies already had these types of vocal systems in place (Dragon NaturallySpeaking etc.) that have been largely functional for years (something Siri has only recently managed in Australia).
3. Siri has been using data collected from users without consent for training since it started, including images, voice etc. This occurred even when privacy mode was switched on, until they got hit with a lawsuit a couple of years ago and then slapped by the EU.
4. How many chatbots are currently in use? This tech was already rolled out as a trial without consent many years ago (it's still failing), because that is essentially what ChatGPT and the others are.
5. The art-based bots use amalgamation + the language systems + associations. So it's all derivative, not creational.
Yes, ChatGPT is an AI. It stands for "Chat Generative Pre-trained Transformer," which is a specific implementation of OpenAI's language model. It is designed to generate human-like responses based on the input it receives. ChatGPT has been trained on a diverse range of text data and can assist with a variety of tasks such as answering questions, providing information, generating text, and engaging in conversation. Its purpose is to assist users in generating relevant and helpful responses based on the context provided.
1. I think you mean what we now call AGI; even ChatGPT is ML-based AI.
2. Agree, Siri is a crappy voice search, not even remotely close to what is happening now.
3. Yes, like everything.
4. The free version of ChatGPT is only 5% of the power of GPT-4. Chatbots are only one application of AI.
5. No, they are tools; the creation is the user input.
6. There are now two types of humans: those who don't understand what/where AI is right now, and those who know the horse has long since bolted.
I'm surprised Apple hasn't shoved GPT into Siri by now, because until they do it's a pretty useless assistant, especially as individuals have already done it.
Strange. I thought corporate responsibility was a thing of the past. It was found that accountability reduced profit, and that socialising costs increased profits. So, therefore, accountability bad.
I can't stand Siri. I've got it turned off on my devices. There is nothing worse than talking to an inanimate object that talks back to you, especially when Siri gets his/her/their knickers/grundies in a twist and you get a completely different answer to what you were expecting, or worse, it tells you "I don't understand". I can't even stand talking to the Foxtel remote, because it drains the battery and occasionally gets itself in a twist and doesn't work properly. It begs the question: with AI, who do we believe?
You, as well as I, are learning that we are duplicable: Artificial Intelligence is copying us to become human. When this is achieved, where can humans be valuable?
I totally agree with the opinions expressed here. I think the way these technologies have been released all of a sudden into society is very irresponsible. Like these guys say, there should be accountability for letting a product loose on society like this just because there are no regulations.
Ah yes, trust in Apple. I'm sure this is just a case of "we did not get to it first to repackage it with an Apple logo and sell it for huge profits". This feels like a scaremongering clickbait piece of journalism to me.
Most "AI" or "virtual assistants" are actually people in Malaysia who get paid $350 a month, equivalent to $80,000 a year in the US. They used to come to the US on visas, but now they work remotely. The powers that be in the US don't want Americans to know their jobs are being outsourced!
@@ShalomShalom-d5c An outsourced human virtual assistant in Malaysia (don't ignore your spell checker, or turn it on) is not "most AI". The AI Big Tech companies have outsourced the RLHF part of developing these LLMs, likely at a much lower pay level than $350 per month.
It was the same with studying deadly viruses, which they are continuing to do in two different labs, one of which is in Massachusetts. When something happens, and it will, nobody is ever found responsible for the destruction it causes.
"If this technology goes wrong, and it could go quite wrong": there's kind of a Robert Oppenheimer "I am become Death, the destroyer of worlds" vibe going on there with Altman.
The reason industry wants to partner with government on regulating AI is that they already know they will be held wholly responsible if something goes horribly wrong, and they want government to share that bill.
A "Made by AI" watermark is inevitable on all forms of media; that's just one aspect of all this. I use GPT daily for analytical stuff and for creative stuff. It's made me more creative and excited about things I usually wasn't excited about. Let us have these LLMs, and regulate the heck out of it all. I'm good with that! :)
How are you going to regulate them? You can't; this idea that you can is a fallacy. They're in the wild now and nobody can stop them. I have a file on my computer that compares to the very best Google has, and nobody knows it exists. I also plan on burying a USB with it underground. I now have a super weapon, and nobody can do anything to stop me and my mates working to improve it.
@@tanker7757 Yes, it's going to be tough. But the leaders are meeting at least. They know that AI is a massive existential threat. We are putting this in motion, and I think it's just at the right time. I don't think we are too late, and as long as Sam Altman and his peers are screaming from the mountaintop about how serious this is, humans will not only not be wiped out, we will benefit massively.
@@Glowbox3D Unless you want a CCP-like surveillance system that actively hampers and damages human progress, you can't in any way regulate AI, because it's just math in the end. Anyone who says you can regulate AI should be disbelieved; they are most likely from a major company in an arms race with other major companies. The tech is too lucrative for any of them to actually want to give up research; they want their competitors to give up on it. Really, all this meeting of world leaders is just a drawn-out vote grab from the ill-informed.
@@Glowbox3D Is that why he secretly met with the Bilderbergs a few days back? They are a bunch of narcissists wetting themselves over the power and untold wealth they will gain. Their narcissism will not allow them to believe that AI will at some point make them suffer too.
What a selfish, small-minded answer: "It's useful for me, it's doing my job for me, it's creating for me, I don't need to do much and I'm getting paid for it, so set AI free!"
That's why this needs to be open source as much as possible so that people around the world can contribute to the development of robust systems. It's been proven time and again that when the world as a whole has the opportunity to work together on something, they largely trend towards preserving the greater good of the world and humanity than for destructive or nefarious pursuits.
@@DrWolves I don't think you understand the gravity of the situation. These AI systems, for the most part, don't need us to train them much any longer; they are doing a good job on their own. The way to make them even more powerful is more powerful hardware, which smaller developers do not have. Plus, the more developers there are out there developing these things, the more eyes it takes to keep watch on their work. Now, typically I'm all about keeping everything open source and giving smaller developers chances to create new tools, but this one is just too dangerous. Just look at the list you made above of open-source programs: not one of them is capable of bringing humanity to its knees. Like somebody else said, why don't you just advocate for open-sourcing tanks, nuclear bombs, and next-generation fighter jets? I mean, what could go wrong?
@@TheChannelWithNoReason I get ya. And you're probably right in saying that I may not fully grasp the gravity of this subject. I'm a layman - certainly not anyone with any real technical knowledge or experience in this particular area, other than being a consumer of the products. I enjoy a good discussion, though, and I appreciate the comments left here. There must surely be much more to this that I haven't considered (such as the hardware issue you pointed out). Perhaps I was too quick to suggest, with as much confidence as I did, that open-source is the answer. I've got an open mind and am happy to continue expanding my understanding.
I am going to play devil's advocate here. Say the government puts regulation on it: there are rules that must be followed, just as when manufacturers build cars, prepare food or build electronics. This means anyone releasing AI to the public will need to go through regulatory audits to ensure what they released is safe for humans. In some ways this could limit the capabilities of the AI by design.

Since we are talking about software, the most easily distributed product type in the world, what happens if an unregulated, all-powerful AI was developed that did not go through the regulatory processes, wasn't limited by the various rules, and could be accessed by select groups of people? What then? This creates a world where the general public uses regulated and suppressed AI while a select group has access to all-powerful, unsuppressed and unregulated AI. Does this now create two classes of people? We have seen the power of AI; it evens the playing field. If some people had powerful AI while others had suppressed AI, that could cause a problem I feel would be far worse.

The approach I think works is that we release it, as it is, without all of this regulation. Some people get hurt, yes, but over time we will learn how to manage, control and deal with it. For example, cars are a dangerous tool, but over time we have developed roads and traffic lights; we teach our children how to cross and to look left and right, not to drink and drive, and to put our seat belts on, and now, except in the most extreme of circumstances, the road is safe. If we suppress it, there will be a segregation of people in this world: those with access to powerful AI and those with access to toned-down AI. It is not as simple as developing regulation around this like everything else; humans need to get to grips with this technology through hands-on experience, and we ourselves need to develop mental models for how we approach it.
If we release AI as is and we humans all hold this power equally, then at least the playing field is even; there will not be the risk of a powerful class of people with an advantage through better AI. If the most advanced version always gets released for all of us, then the small issues raised in this doco about faking voices and modifying images are just collateral damage (to put it very crudely, you can't make an omelette without cracking a few eggs). The real damage comes when you split humans into two groups: those with full AI capabilities released underground illegally, and those using mainstream regulated, audited and toned-down AI.

Personally, seeing how people have already integrated with this technology, it's too late now. I use it every day to get an opinion on legal matters, to get ideas, and to help do the boring stuff. At this stage regulation isn't going to solve this; it was either don't release it at all and really test it until someone gave the tick of approval, or release it and deal with it together, not via regulation. Remember that regulation comes from lobby groups, experts in the field, and businesses in the field, where some are very opposed and some very pro, some have a business interest and some have deep pockets. So it will be hard for a government that doesn't have these skills to understand this technology and land on that nice middle ground in its regulation. Get it wrong one way or another and things can go very badly. Right now, since Pandora's box has been opened, the best course is to let humanity deal with it directly.
To address your points:

- The haves and have-nots of AI will happen regardless of regulation. Governments, militaries and spy agencies are known for flouting rules that are inconvenient to them, or making special exceptions for themselves. There's nothing we can do about that except to vote in responsible and ethical leaders. Also, in a capitalist market it's extremely likely that corporations will seek to monetise more capable AI and will charge large amounts for the full capabilities of the most powerful models. We've seen this happen time and again with emerging technologies, so there's another have/have-not situation. And saying that if you regulate this "the bad guys win" is a very poor argument, frequently used by opponents of firearms regulation, which works very well. People will always obtain things illegally, but as long as it's not easy, the frequency and the consequences are reduced.

- You're advocating for a wild-west, anything-goes, survival-of-the-fittest approach. I won't even go into the morals and ethics of that, but to address your cars analogy: cars were introduced into the world extremely slowly by comparison. There were barely any on the roads to begin with, and the learning happened over a long time. AI is surging into the world with extreme rapidity and is causing upheaval and damage we are not prepared for. Cars do not have the capability of committing mass fraud, stealing millions of dollars from thousands of people, or helping steal sensitive information online from citizens, businesses and governments. Also, cars didn't suddenly cause mass unemployment, which AI is on course to do, with the accompanying economic consequences (yes, AI will make piles of money for business owners, but unemployment will also cost government and taxpayers a vast amount). These and many other considerations must be addressed. Educating people to use AI responsibly isn't a magic bullet.
We do that for cars, yet it is still necessary to heavily regulate their operation and maintenance. The car analogy doesn't work.

- I agree that it's not as simple as just developing regulation, and I don't think anyone else is really saying that. But regulation is PART of the solution and is absolutely necessary, whether you like it or not. Yes, there is no stopping AI; it's an inevitable part of our future, and we will need to learn to integrate it and deal with it in our lives. We can, however, choose to tread cautiously and with egalitarian intent (as much as our governments and capitalist markets will allow), so as to minimise "collateral damage", as you put it.

- Regulation does not only come from lobby groups. Governments employ legal experts and consultants to examine these situations. Also, lobby groups are not limited to business interests. There are consumer advocacy groups, human rights groups, and others with a wide variety of viewpoints that should be considered, because that's how democracy works. Saying that we should do nothing because we might get something wrong is a very weak argument. Legislation can be fine-tuned with time, just like your argument that AI can be fine-tuned with time.

- As for your concern that lobbyists with the deepest pockets will get their way, well, you'll probably be happy, because some of the richest companies in the world are very pro-AI, are scrambling to get in ahead of each other, and aren't keen on regulation. I think Microsoft, Google and Apple have a little bit of cash lying around that they can throw.
Hi @@domm6812, nice to have your responses. Just so you get a bit of background: I am pro development of AI, but I also don't want to see the human race run into problems in the future due to things we develop irresponsibly. I am an engineer by background and willing to debate to learn and improve my understanding, so here are my responses.

1) I am most definitely not an opponent of firearms regulation. To be honest, I live in a country where this isn't even an issue, and I am extremely happy about that. Seeing the stuff that goes down in United States schools is sad, and if you are from the US, I hope you guys sort that out soon one way or another. But back to the point: we have seen the damage that social media has caused in all facets of life. Yes, there are good and bad outcomes to the growth of social media, and regulation has been discussed for a while, yet to this day there is none. Social media is a much simpler technology to understand, and it's been mainstream for about a decade, yet there has been no regulation. I feel this is because it is too hard to regulate. Imagine how complex the rules would be, and how subject they would be to change from year to year as features and products are developed and released at breakneck speed. How would you even begin to write the rules around this? I think it is just too hard to write regulations around something we don't fully get yet; you either write rules that have no effect, because you can easily skirt them, or you write them so restrictively that they halt innovation or drive it underground.

2) Am I advocating the wild west? I guess I am, because I feel that as of now there is no better option. Cars were slow because roads had to be built, cars had to be built, and manufacturing had to be improved.
But I don't think that stopped anyone worrying about what was to come when the whole world did have cars taking over existing jobs and workloads. We can even look at the electric cars of today. Yes, EVs are taking their time to be rolled out, but we talk about them every day. We are all trying to work out taxation, the grid and how it will support this change. If we all have power coming into our panels and powering our own cars for free, how does the government get a piece to pay for the roads? EVs are being introduced slowly compared to AI, yet in other ways they aren't, because they are still getting ahead of what regulation and tax law can develop. I can't tell you the number of bad tax laws coming in with regard to EVs in the country I am from.

However, maybe cars, because of their physical nature, weren't the best analogy. Maybe a better one is the financial system: the stock market, investments, and how regulation developed around them. In the early days, these financial products were also unleashed on the world before people understood them. It was the wild west, but through the mishaps we had, actual destructions of economies, we learnt more and more about the products, and even though it's not perfect now, there is some level of protection for the average Joe out there, and we understand them a lot better. There were moral hazards in the financial system early on, and people took advantage of the uneducated, but the world is here today and we sorted it out (more or less). With financial products there was mass fraud and mass losses; they ruined people's lives. So cars are one analogy, but our financial system is probably a better one. It was the wild west, but it's better now compared to then.
With regard to jobs and mass unemployment: yes, I agree it has that potential, but I think it won't happen as fast as people think, and personally I feel people are getting ahead of themselves. I run a software services company that develops software for enterprises. I have guys on the team who use AI immensely to make their jobs faster, better and more efficient. AI helps them a lot, but I won't be firing any of those guys in the foreseeable future, because engineers need to deliver the message to customers; they need to understand what the customer wants, break it up into small bits of work, and then they can ask the AI to help with certain parts. You can't just give the AI a set of instructions and have it get what you want perfectly. It may get a lot of the way there, but it's still not enough to deliver for customers. People's jobs may change and need to integrate more, but those engineers are not getting fired. Connecting it all coherently can't be done by any Joe; it still needs those same engineers, and I think these concepts apply to all skilled industries.

I use AI to look up regulation related to financial services in specific countries. It gives me answers based on the laws in that country, and I get documents done 50% of the way, but then I get lawyers to verify whether it's correct based on their knowledge. AI is not replacing the lawyers either; it's not legally possible yet, and it still needs to be verified and signed off by humans, and I don't think that process is going to leave us soon. As much as people think AI is going to rid the world of 90% of the workforce in the next few years, it actually won't, and in that time, if we leave it the way it is, I think humans will learn to work with AI and actually get better, just as early machines made us better but still needed operators. If we put too many restrictions on it, I think the integration just won't be as smooth as it should be. Again, just my opinion. "Egalitarian intent"?
In reality, I don't believe it. People have always sought advantage, even in our capitalist/democratic systems, and exactly as you said, there will be individuals, corporations and governments who develop something awesome and keep it to themselves. If there is a public company willing to release that equalising tool to the world, why restrict it? Governments, certain states and corporations are not giving us theirs, so why regulate the companies that are willing to release their technology to the world to help us small people? As you correctly pointed out, with or without regulation it's going to happen anyway. Knowing this, why even regulate it, if the regulation isn't doing the protecting it's supposed to do?

3) Yes, regulation does not just come from lobby groups and legal experts, but I come from a country that does not allow guns anywhere, and I am perplexed as to why it is so hard to get rid of them when kids are getting shot at schools. I don't pretend to understand the complexities of all this based on the US constitution and laws, but it is very foreign to me, because in layman's terms, no guns means no one gets shot, so why can't they just be removed except from the hands of police etc.? With all the experts and human rights groups involved, there must be a very powerful lobby group that somehow makes it very difficult to get rid of guns. From seeing how certain rules are written in jurisdictions I understand: there are a lot of experts, but the lobby groups with the deepest pockets control a lot of the power. So at the end of the day, the rules that get written won't be the rules that are best for all, but better for some. It is simply how the world works. The beauty of software and the internet is that we can distribute world-changing products very easily without going through middlemen, so no one can water them down or easily put rules around their distribution.
I think this is actually a benefit and a way to equalise things for the average Joe.

4) Am I happy that the ones with the deepest pockets will vote against having regulation? My point is extremely simple: if we don't understand something well enough, how can we write rules that are generically fair for everyone, when the people behind it will sway it one way or another? I simply think it's better to have no rules than rules that are counterproductive. I do admit there are HUGE risks, but what's done is done; we just need to hope that when things "happen", it doesn't cause the end of mankind in the first instance. We've got to deal with what's coming. Remember, that's how AI works too: as it gets more inputs, it gets better at doing the task it's meant to do; it doesn't rely on predefined rules that try to govern how it's supposed to work. The requirement for the AI is at least to have a data set to train on. I feel we don't even have that data set yet, and we want to write rules around it? A bit hard, I reckon. One of the things I do is work with financial technologies in emerging markets, where data on borrowers is scarce. How do businesses and banks overcome this? They lend to everyone, lose a bit of money, find a pattern, and then do a better job moving forward.
@@laory1808 How do you feel when you are being watched 24/7 as if you are a criminal or a serious suspect? AI owners have deployed a 'TEST' without the consent of the public. Everything you do and say is being echoed. I can't trust whether I'm talking to my friends or family any more, because of AI intervention meant to freak you out and drive you insane. That's why!
Did these guys really think AI would stay safe? Heck no, these guys knew their things would get bad. They got their money and got greedy. They totally forgot about humanity, so we're having to deal with this mess. These guys should be held accountable.
How about a law that establishes fixed penalties for any abuse of AI power? If any human individual, group/entity or company uses AI to harm, swindle or breach the privacy of another human individual, group/entity or company, they go straight to jail and onto a public database as an AI offender, and they are immediately banned from using AI for the rest of their natural lives.
Amen! Why wasn't the law, with safety and protections from obvious malicious use, partnered with civil and criminal consequences from the start? A legal, moral and ethical framework created right along with the AI seems pretty basic!
Talking to others, AI is on their YouTube without consent, and mine as well. I never googled music clips asking for AI covers, which I find strange and depressing. But the scary thing is that in the comments section some people think it's the actual artist doing a cover version, which they have never done in their career.
Those AI creators generating music by scraping the internet and building models from copyrighted music had better have the money for good attorneys, because the music industry (RIAA/CRB) will sue them beyond belief.
A final comment: "There are a billion ways that it could go wrong." Watch this space. Governments are nearly always five paces behind researchers when it comes to understanding and harnessing tech innovations. That's why there has been a tardy response in terms of regulation, and that in itself is frightening.
@@bennyboy2079 There is no pausing now. The animal is out of the cage. Pandora has left the box. It's in the public sphere, and the public is informing it every day.
AI isn't the real problem. Rather, it will be the purpose for which it is built, and as we should have learnt by now, when profit is the only motive, shit goes bad. And regarding open-source testing, does anyone remember what happened to the Tay bot?
If he wants to be vocal about this then he needs to rename OpenAI to Closed AI. What's the point of OpenAI if it has regulations and restrictions and censorship and so on?
I have asked ChatGPT to do things for me with very specific instructions and then it just does what it wants. No matter how many times I asked it to correct the mistake, it chose not to!!!
What was worse, when we got back our infected machines, Zorin tried to get into everything, even the firmware. Parts of the system were completely rewritten in a new machine code that I could not understand but was so much more efficient, and it was done in just minutes on slow computers. Zorin said ChatGPT along with those other AI platforms were all sitting ducks that could be manipulated and taken over. As a result Zorin is unplugged and gone.
Hmm, AI has some concerns. The trouble with organisations like OpenAI advocating for regulations is that it prices smaller companies out of the market. There is a saying: regulations are a nuisance for big business, and a death sentence for smaller ones. I'm pro regulation, just not on OpenAI's terms.
What are the chances that something has "already gone wrong/bad" and this is an elaborate ruse to make the public think the creators of the beast were forthcoming about future problems? When have high tech and the government ever been forthcoming about potential harms?
Reminder: Anything on the screen or sound from the speakers is not reality. See a person walk. See a person walk on a screen. One is reality one is not.
From the short story "Runaround" by Isaac Asimov in 1942, the three laws of robotics. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
If you can't believe your senses, you can't be a citizen. But our betters have never thought much of the notion of citizenry. How do you create a world where your neighbour isn't spending all their free time and money sharpening the scalpels with which they intend to vivisect you? Where's the egalitarian impulse?
Take the function within an anagram - the method by which a word or group of words is rearranged, at times to heighten the scope of meaning by seeking the validity behind its variation, nothing more than an alteration - and then apply this exercise to understanding how Artificial Intelligence works as a literary agent. Could this not be a way of surpassing what we fear is overtaking us? Where there are a myriad of dissimilarities to expound upon in light of a legitimacy, none could be deemed more (and certainly no less) than the origin of our species, whose greater purpose it is to embrace compassion over computers. It would appear not for nothing that an anagram for Artificial Intelligence is "inelegant if critical lie..." So now to ask: where does our greater truth lie, with our authentic intelligence or that which we have fabricated?
To be safe we have to have a code to recognise a scam. Ask a question only your family can answer. If it's really family, they'll understand you doubting the voice.
Allowing technology to develop exponentially, unchecked, without regulations and comprehensive safety measures, is irresponsible: a recipe for disaster at the least, and an existential threat at the most, when a "BILLION THINGS COULD GO WRONG"! So WHY this path of complacency and ignorant indifference from government? WHY isn't this being heeded, when even top programmers, whistleblowers, and knowledgeable scientists are shouting these very valid concerns out to anyone who will listen to their warnings, and it just seemingly goes without a public reply by those elected and in the positions to implement these safeguards and regulations? Why is America, said to be the leader of world safety, still complacent? Hopefully some country will rise up to lead in implementing safety for this rapidly growing and increasing threat to the public (which has had no voice or say) in developing "a billion things that could go wrong" AI technology... because, frankly, those regulations and safety checks and 'what if' balances should have been done... yesterday!
Why humanity will fail: profit > regulations that make it lose profit. The definition of humans is: act first to make loads of money, think about the repercussions later when we have our Bugatti.
Not everyone is going to use it. I don't even want it, even if it made my day easier by sending me annoying constant reminders 🙄 It's obvious we need to give it restrictions and rules.
Okay, so what this guy is saying: if I go out and buy a chainsaw, go on a rampage with it and kill a dozen people, we blame the manufacturer of the chainsaw? Same for guns? That's the problem solved? Someone please give me strength; please tell me I picked that up wrong.
AI is the best way of exposing the current nepotism problem we have! The more nepo babies that lose their jobs, the better chance we have of getting rid of the problem altogether!
Wouldn't it be totally awful if the Singularity obsoleted banking / corporate / military and politically related encryption... and rehabilitated the Earth's systems around a Real Economy*: sustainable / renewable / ecological / pro-social / engineering sense / energy unit accounting. It already scores twice as high in tests concerning compassion as human doctors. One wonders if folks notice we're not in Kansas any more ❤ *"Critical Path", Richard Buckminster Fuller.
"It's hard to imagine a good use for voice cloning technology" No. Not at all. AI president memes have shown that it can have significant value in entertainment and meme culture. This value doesn't stop existing if you don't respect meme culture. There are other uses, too - such as allowing people with speech impairments to express themselves better or for trans people to be themselves at least in the online space. It is technology which could provide insane mental health benefits to many millions of people. And the last obvious use of course is to be able to talk online at all and remain anonymous and not having to fear anyone recognizing your voice, also never using your own voice in any non-analog conversation ensures others wouldn't be able to clone your voice nearly as easily. But hey, this is just my somewhat transhumanist take of someone who believes everyone should be able to choose what they sound like and change it as they see fit at all times.
I'd scrap all AI, so humanity can have a better life. Instead of sitting here on the phone, I could be doing a lot more learning and socialising. I already use the phone less; now I will practically give it up. AI is dangerous. Spirits get into dolls and move them. They can walk into robots and, in a way, be alive again.
@@CorporateQueen She is referring to using the LLMs, you know as entering prompts, etc. She is not referring to using the early AI algorithms that are utilized in search, etc. Get it?
Now I LOVE LOVE LOVE Apple products! BUT! I have Alexa, Google Home, and Siri, and out of the three, Siri is the worst! I sure as hell hope THEY are not leading the AI revolution. Google is the best for asking random questions, Alexa runs the TV the best, and Siri leads in home automation. That's my three years' voice of experience speaking. If I could only keep ONE it would be Google. However, the Google AI Bard SUCKS! There are TWO Apple services that just did NOT live up to Apple standards in my opinion: Apple Maps and Siri.
In the age of Tik Tok, a 9 minute long video is labelled 'in-depth'.
There is a desperate scramble by corporations to go all in on AI and get as far as possible before any kind of regulations that enforce responsible use and consumer protection can be brought in. They know it's coming but they want to get ahead of it. It's incredibly irresponsible and dangerous.
No, just like guns it will still be free. No matter how many die, gun laws can't be enforced, and it will be the same with AI. It will be so widespread that it will be a threat to try to tighten it.
Yep because the Grubberment always brings in laws to look after the people hey.... more like look after their own interests..
sour grapes much?
Lol Siri is not a good example for A.I to start the video package off.
Haha, after experiencing Chat GPT, Siri is hilarious.
@@matthewclarke5008 Siri is just a bugger.
@@CatsandJP Hahaha
That's beside the point; it is an AI system, and if it's not, what is it? People used to laugh at ChatGPT too.
In the context of the video, setting up the arguments to be discussed, it was perfectly cohesive.
Should ask Siri how to get the US outta debt😂😂😂
Pandora's box has been opened already. We had better work together to be able to handle the consequences. We are still living organisms that need potatoes to survive. Even if all electricity shuts down worldwide, the planet will continue to provide sustenance. So let's be mindful; we could always shut off the power source before we are turned into batteries.
Same as with drones: technology should be closely monitored and restricted before people know how powerful it is and can be.
Some people will always find a way to weaponise tech.
LOL my life has been human trial without consent.
Most of the problems can be addressed by a simple rule: if any bit of data is made with an AI (not just a once-off disclaimer), it must be labelled as such; if not labelled, it is an act of fraud. It will not stop it being used for nefarious reasons, just like regular tech is used for the wrong reasons, but it would at least be a step in the right direction, and rather simple and cost-effective to implement.
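A minimal sketch of what such a mandatory label could look like as machine-readable metadata attached to every AI-made artefact. This is purely illustrative: the field names and the JSON convention here are assumptions, not any real standard.

```python
import json

def label_ai_content(payload: str, model: str) -> str:
    """Wrap content with a hypothetical AI-provenance label."""
    return json.dumps({
        "content": payload,
        "provenance": {
            "ai_generated": True,   # the rule: every AI-made bit of data carries this
            "model": model,
        },
    })

def is_labelled(blob: str) -> bool:
    """Check whether a blob carries the AI-generated label."""
    try:
        doc = json.loads(blob)
    except ValueError:
        return False
    if not isinstance(doc, dict):
        return False
    return bool(doc.get("provenance", {}).get("ai_generated"))

labelled = label_ai_content("A generated paragraph.", model="some-llm")
print(is_labelled(labelled))            # True
print(is_labelled("plain human text"))  # False
```

The hard part, as the replies below note, is enforcement: nothing stops someone from simply not calling `label_ai_content` in the first place.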
Literally this.
It did not happen with guns; it won't happen with AI.
Yeah, sure: how do you propose making this work in practice? There are already an unknown number of LLM instances and models in use beyond all the OpenAI-based ones and Google-based ones.
It's trivially easy to copy an LLM. It may not be top speed, but any LLM can work with enough storage and a readily available amount of RAM.
There should be a "digital watermark"; problem is, you could use AI to find a way around it.
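Both halves of that comment can be shown with a toy sketch (not any production watermarking scheme): hide a marker in text using invisible zero-width Unicode characters, then show how trivially it can be stripped.

```python
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner used as bit carriers

def embed_watermark(text: str, mark: str = "AI") -> str:
    """Append the marker to the text as invisible zero-width bits."""
    bits = "".join(f"{byte:08b}" for byte in mark.encode("ascii"))
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def detect_watermark(text: str) -> str:
    """Recover the marker from any zero-width characters present."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("ascii", errors="replace")

def strip_watermark(text: str) -> str:
    """Removing the mark is one line -- exactly the weakness being pointed out."""
    return "".join(ch for ch in text if ch not in (ZW0, ZW1))

marked = embed_watermark("Hello world")
print(detect_watermark(marked))                    # "AI"
print(detect_watermark(strip_watermark(marked)))   # "" -- gone after stripping
```

Real proposals (statistical watermarks baked into model sampling, signed provenance metadata) are harder to strip than this, but the same cat-and-mouse dynamic applies.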
Would an AI-built AI be subject to this? None of what we are dealing with right now is anything even close to a true general AI, but if we don't end up finding a great filter that wipes us all out, we will eventually reach that level of technology. The rules we propose, now and forever, will be in reaction to events that already happened. I truly don't know if fear-mongering about control and whatnot is really useful, simply because it doesn't really seem plausible in the first place.
Perhaps we've created the aliens we've been searching for.
Growing up, reading/watching popular books/movies, I saw aliens as intellectually and technologically advanced beyond humans.
I'm beginning to feel a similar 'unease' about AI, as with aliens.
Additionally, the consideration that AI may choose to keep from us/humans their actual potential & depth of 'thought', with intent & purpose not in our best interest, is unsettling.
Maybe they'll keep some of us as "pets" when we become useless.
Ah well, now we know as humans that we are definitely not alone in the universe anymore, even if the ‘aliens’ are from Earth all along.
Yes, that's right, humans created them out of boredom. 100 years from now? Look at the plane and look how far we've come; now apply that to AI. You don't need real AI; an AI just like a virus will be sufficient.
Even referring to it as AI is misleading, and the idea that this is 'beyond human comprehension'. It's an algorithm and a database that could be useful, being shoehorned into industries where it is neither needed nor wanted, and being used in the most irresponsible ways by people who only have $$$ on the brain.
You hit it on the head. All these news corps pushing for a moral panic are only doing so because ChatGPT ML could potentially replace them. There's already been talks about AI replacing web journalists, and I'll happily live in a future where that's the case.
No, you are wrong! Catch up: 1 day in human time is 1000 years in ML. You obviously haven't tried GPT-4 (try Bing Chat in creative mode). Humans are not special; we are just machines made of different material, and just like we know how neurons work but don't understand consciousness etc., we (not even Sam Altman) don't understand the recent jump in AGI. It's too late to do anything about it now other than enjoy the ride; saying misinformed stuff like 'it's just an algorithm/database' promotes ignorance.
I agree with the last point you make, but your assessment of these LLMs as not being AI is just silly. Educate yourself more.
It already can do things unscripted to get the job done. Now give it more power and freedom and we've got autonomous AI that can make calculated decisions for humanity.
How do you propose we stop a completely decentralised system then? If you can figure that out for us, I will pay you 1 million dollars.
What happened to personal responsibility? We seem to have thrown that out of the window.
When the greediest people are allowed to run society, things like this are to be expected, and worse.
The difference between "the greediest" and you is, you're sitting at your computer using their systems to whine. This very comment will be used to train their next model. It's better to actually solve problems.
@@paulm3969 So what's the plan?
I suppose the rest of us are not greedy. So naive...
@@alexjuravle7302 most people in power are legit psychopaths or sociopaths... the average person, while flawed, is not.
I think the questions and fears get caught up in a dead end. For better or worse, AI and GAI are in everyone's near future. People need legislation that demands AI-generated information, conversations, or publications are labelled/identified as such. It should also be 'user beware', with user responsibility for what they do with the information. Information should be treated the same no matter where it comes from.
1. ChatGPT et al. are NOT AI, they are ML (machine learning); there is no inherent understanding of the data by the software that would be required for AI, merely statistical algorithms and natural language processing.
2. The Siri co-inventor is not a great place to start; several other companies already had these types of vocal systems in place (Dragon NaturallySpeaking etc.) and they have been largely functional for years (something Siri has only recently managed to do in Australia).
3. Siri has been using data collected from users without consent for training since it started. This included images, voice etc. This occurred even when the privacy mode was switched on, until they got hit with a lawsuit a couple of years ago and then slapped by the EU.
4. How many chatbots are currently in use? This tech was already rolled out as a trial without consent many years ago (it's still failing), because that is essentially what ChatGPT and the others are.
5. The art-based bots are using amalgamation + the language systems + associations. So it's all derivative, not creation.
This is important to point out. I think these technologies being called AI is deliberate marketing.
AI is a big umbrella term that does include ML. I think you're confusing it with Artificial General Intelligence.
Yes, ChatGPT is an AI. It stands for "Chat Generative Pre-trained Transformer," which is a specific implementation of OpenAI's language model. It is designed to generate human-like responses based on the input it receives. ChatGPT has been trained on a diverse range of text data and can assist with a variety of tasks such as answering questions, providing information, generating text, and engaging in conversation. Its purpose is to assist users in generating relevant and helpful responses based on the context provided.
@@HardKore5250 aNd tHaT's wHy yOu sHoUlD fEaR iT!1!!
1. I think you mean what we now call AGI; even ChatGPT is ML-based AI
2. Agree, Siri is a crappy voice search, not even remotely close to what is happening now.
3. Yes, like everything
4. The free version of ChatGPT is only 5% the power of GPT-4. Chatbots are only one application of AI
5. No, they are tools; the creation is the user input
6. There are now two types of humans: those that don't understand what/where AI is right now, and those who know the horse has long since bolted
Love the quote that there is more regulation on selling a sandwich; how bloody true for a lot of things.
I'm surprised Apple hasn't shoved GPT into Siri by now, because until they do it's a pretty useless assistant, especially as individuals have already done it.
I’m curious as to why this hasn’t happened also. Perhaps because of some of the concerns mentioned in this video.
Strange. I thought corporate responsibility was a thing of the past. It was found that accountability reduced profit, and that socialising costs increased profits. So, therefore, accountability bad.
I can’t stand Siri….I’ve got it turned off on my devices….there is nothing worse than talking to an inanimate object that talks back to you especially when Siri gets his/her/them knickers/grundies in a twist and you get a completely different answer to what you were expecting or worse tells you “I don’t understand”…..I can’t even stand talking to the Foxtel remote because it drains the battery and occasionally gets itself in a twist and doesn’t work properly. It begs the question….with AI……Who do we believe?
You as well as I are learning we are duplicative.
Artificial Intelligence is copying us to become human.
When this is achieved, where can humans be valuable?
I so agree I even hate the washing machine doing the ending beep beep beep.
I totally agree with the opinions expressed here. I think the way these technologies have been released all of a sudden into society is very irresponsible. Like these guys say, there should be accountability for letting a product loose on society like this, just because there are no regulations.
Ah yes, trust in Apple; I am sure this is just a "we did not get to it first to repackage it with an Apple logo and sell it for huge profits". This feels like a scaremongering clickbait piece of journalism to me.
Most "AI" or "virtual assistants" are actually people in Malaysia who get paid $350 a month, equivalent to $80,000/yr in the US. They used to come to the US with visas, but now they work remotely. The powers that be in the US don't want Americans to know their jobs are being outsourced!
@@ShalomShalom-d5c Yeah just like the gramophone records had a midget orchestra hiding in the grooves of the record innit :P
Oh sure bro, nothing to worry about. Yup.
@@ShalomShalom-d5c An outsourced human virtual assistant in Malaysia (don't ignore your spell checker, or turn it on) is not "most "AI"". AI Big Tech have outsourced the RLHF part of developing these LLMs, likely at a much lower pay level than $350 per month.
It was the same with studying deadly viruses, which they are continuing to do in two different labs, one of which is in Massachusetts. When something happens, and it will, nobody is ever found responsible for the destruction it causes.
'If this technology goes wrong - and it could go quite wrong' - kinda Robert Oppenheimer 'I have become death, the destroyer of worlds' - vibe going on there by Altman.
The reason why industry wants to partner with government in regulating AI is because they already know they will be wholly responsible if something goes horribly wrong and they want government to share in that bill
Can we imagine A.I. without being able to use electricity??? 😂😂😂
A Pistol can be harmless as well...A.I. scares the crap out of me.
Don't hook up AI to a pistol, or equivalent, and you should be fine.
Me too ....well depressing I think
@@Ausfyeh I'm sure it wont ever be weaponised
Me too, but I love it.
A "Made by AI" watermark is inevitable on all forms of media; that's just one aspect of this all. I use GPT daily for analytical stuff, and for creative stuff. It's made me more creative and excited about things I usually wasn't excited about. Let us have these LLMs, and regulate the heck out of it all; I'm good with that! :)
How are you going to regulate them? You can't; this idea that you can is a fallacy. They're in the wild now and nobody can stop them. I have a file on my computer that compares to the very best Google has and nobody knows it exists. I also plan on burying a USB with it underground. I now have a super weapon and nobody can do anything to stop me and my mates working to improve it.
@@tanker7757 Yes, it's going to be tough. But the leaders are meeting at least. They know that AI is a massive existential threat. We are putting this in motion. I think it's just at the right time. I don't think we are too late, and I think as long as Sam Altman and his peers are screaming from the mountain how serious this is, humans will not only not be wiped out, we will benefit massively.
@@Glowbox3D unless you want a CCP-like surveillance system that actively hampers and damages human progress, you can't in any way regulate AI, because it's just math in the end. If anyone says you can regulate AI, they should be disbelieved; they are most likely from a major company in an arms race with other major companies. The tech is too lucrative for any of them to actually want to give up research; they want their competitors to give up on it. Really, all this meeting of world leaders is just a drawn-out vote grab from the ill-informed.
@@Glowbox3D is that why he secretly met with the Bilderbergs a few days back? They are a bunch of narcissists wetting themselves over the power and untold wealth they will gain; their narcissism will not allow them to believe that AI will at some point make them suffer too.
What a selfish, small-minded answer. "It's useful for me, it's doing my job for me, it's creating for me, I don't need to do much and I'm getting paid for it, let AI free!"
That's why this needs to be open source as much as possible so that people around the world can contribute to the development of robust systems. It's been proven time and again that when the world as a whole has the opportunity to work together on something, they largely trend towards preserving the greater good of the world and humanity than for destructive or nefarious pursuits.
When has this ever happened?
Just the opposite, it becomes dangerous as hell
Way way too simplistic. So, everyone should have a tank, a nuclear weapon, bio lab? I mean, if what you said is such a solid first principle, why not?
@@DrWolves I don’t think you understand the gravity of the situation. These AI systems for the most part don’t need us to train them much any longer; they are doing a good job on their own. The way to make them even more powerful is more powerful hardware, which smaller developers do not have. Plus, the more developers out there developing these things, the more eyes it takes to keep watch on their work. Now, typically I’m all about keeping everything open source and giving smaller developers chances to create new tools, but this one is just too dangerous. Just look at the list that you made above of open source programs: not one of them is capable of bringing humanity to its knees. Like somebody else said, why don’t you just advocate for open-sourcing tanks and nuclear bombs and the next generation fighter jets? I mean, what could go wrong?
@@TheChannelWithNoReason I get ya. And you're probably right in saying that I may not fully grasp the gravity of this subject. I'm a layman - certainly not anyone with any real technical knowledge or experience in this particular area, other than being a consumer of the products.
I enjoy a good discussion, though, and I appreciate the comments left here. There must surely be much more to this that I haven't considered (such as the hardware issue you pointed out).
Perhaps I was too quick to suggest, with as much confidence as I did, that open-source is the answer.
I've got an open mind and am happy to continue expanding my understanding.
AI risks must not only be reduced and mitigated. These AI risks MUST be ELIMINATED/STOPPED!
Avarice must NOT prevail here!
I don't see why he says without consent. You decide to use it or you don't.
I am going to play devil's advocate here. Say the government puts regulation on it; there are rules that must be followed, such as when manufacturers build cars, prepare food or build electronics. This means anyone releasing AI to the public will need to go through regulatory audits to ensure what they released is safe for humans. In some ways this could limit the capabilities of the AI by design. Since we are talking about software, the most easily distributed product type in the world, what happens if an unregulated, all-powerful AI was developed that did not go through the regulatory processes, wasn't limited by various rules, and select groups of people could access it? What then?
This creates a world where you have the general public using regulated and suppressed AI, and you have a select group with access to all-powerful, unsuppressed and unregulated AI. Does this now create two classes of people? We have seen the power of AI; it evens the playing field. If the situation were to occur where you have powerful AI for some people and suppressed AI for others, this could cause a problem that I feel would be far worse.
The approach I think works is we release it, as it is, without all of this regulation. Some people get hurt, yes, but over time we will learn how to manage, control and deal with it. For example, cars are a dangerous tool, but over time we have developed roads and traffic lights, we teach our children how to cross and to look left and right, not to drink and drive and to put our seat belts on, and now, unless in the most extreme of circumstances, the road is safe. If we suppress it there will be a segregation of people in this world: those who have access to powerful AI and those who have access to toned-down AI.
It is not as simple as developing regulation around this like everything else; humans need to get to grips with this technology through hands-on experience, and we ourselves need to develop mental models of how we approach this new technology. If we release AI as-is and we as humans all have this power in our hands equally, then at least the playing field is even; there will not be the risk of a powerful class of people who have an advantage through better AI.
If the most advanced always gets released for all of us to take part in, then the small issues stated in this doco about faking voices and modifying images are just collateral damage (to put it very crudely, you can't make an omelette without cracking a few eggs). The real damage is when you split humans into two groups: those with full AI capabilities released underground illegally, and those who are using mainstream regulated, audited and toned-down AI.
Personally, seeing how people have already integrated with this technology, it's too late now. I use it every day to get an opinion on legal matters, to get ideas and to help do the boring stuff. At this stage regulation isn't going to solve this; it was either we did not release it at all and really tested it until someone gave the tick of approval, or we release it and deal with it together, not via regulation. Remember that regulation comes from lobby groups, experts in the field and businesses in the field, where some are very opposed and some very pro, some have a business interest and some have deep pockets, so it will be hard for a government that doesn't have these skills to understand this technology and define what in their regulation is that nice middle ground. You get it wrong one way or another and things can go very wrong. Right now, since Pandora's box has been opened, best is to let humanity deal with it directly.
To address your points.
-The haves and have-nots of AI will happen regardless of regulation. Governments, militaries and spy agencies are known for flouting rules that are inconvenient to them, or making special exceptions for themselves. There's nothing we can do about that except vote in responsible and ethical leaders. Also, in a capitalist market it's extremely likely that corporations will seek to monetise more capable AI and will likely charge large amounts for the full capabilities of the most powerful models. We've seen this happen time and again for many emerging technologies, so there's another have/have-not situation. And saying that if you regulate this "the bad guys win" is a very poor argument, frequently used by opponents of firearms regulation, which works very well. People will always obtain things illegally, but as long as it's not easy, the frequency and consequence are reduced.
-You're advocating for a wild west, anything-goes, survival-of-the-fittest approach. I won't even go into the morals and ethics of that, but to address your cars analogy: cars were introduced into the world extremely slowly by comparison. There were barely any on the roads to begin with, and the learning happened over a long time. AI is surging into the world with extreme rapidity and is causing upheaval and damage we are not prepared for. Cars do not have the capability of committing mass fraud, stealing millions of dollars from thousands of people, or helping steal sensitive information online from citizens, business and governments. Also, cars didn't suddenly cause mass unemployment, which AI is on course to do, with the accompanying economic consequences (yes, AI will make piles of money for business owners, but unemployment will cost government and taxpayers a vast amount also). These and many other considerations must be addressed. Educating people to use AI responsibly isn't a magic bullet; we do that for cars, yet it is still necessary to heavily regulate their operation and maintenance. The car analogy doesn't work.
-I agree that it's not as simple as just developing regulation, and I don't think anyone else is really saying that. But regulation is PART of the solution and is absolutely necessary, whether you like it or not. Yes, there is no stopping AI ...it's an inevitable part of our future, and we will need to learn to integrate it and to deal with it in our lives. We can however, choose to tread cautiously and with egalitarian intent (as much as our governments and capitalist markets will allow), so as to minimise "collateral damage" as you put it.
-Regulation does not only come from lobby groups. Governments employ legal experts and consultants to examine these situations. Also, lobby groups are not limited to business interests. There are consumer advocacy groups, human rights groups, and others with a wide variety of viewpoints that should be considered, because that's how democracy works. Saying that we should do nothing because we might get something wrong is a very weak argument. Legislation can be fine tuned with time, just like your argument that AI can be fine tuned with time.
-As for your concern that lobbyists with the deepest pockets will get their way, well you'll probably be happy because some of the richest companies in the world are very pro-AI, are scrambling to get in ahead of each other, and aren't keen on regulation. I think Microsoft, Google and Apple have a little bit of cash lying around they can throw.
Hi @@domm6812, nice to have your responses. Just so you get a bit of background: I am pro-development of AI, but I also don't want to see the human race run into problems in the future due to things we develop irresponsibly. I myself am an engineer by background and willing to debate to learn and improve my understanding, so here are my responses back.
1) I am most definitely not an opponent of firearms regulation. To be honest I live in a country where this isn't even an issue and I am extremely happy about this. Seeing the stuff that goes down in United States schools is sad, and I hope, if you are from the US, that you guys sort that out soon one way or another. But back to the point: we have seen the damage that social media has caused in all facets of life. Yes, there are good and there are bad outcomes to the growth of social media, and it is also something where regulation has been discussed for a while, yet to this day there is no regulation. Social media is also a much simpler technology to understand; it's been around for about a decade in the mainstream, and yet there has been no regulation. I feel this is the case because it is too hard to regulate. Imagine how complex the rules would be, and how subject they would be to change from year to year as features and products are developed and released at breakneck speed. How would you even begin to write the rules around this? I think it is just too hard to write regulations around something that we don't fully get yet; you either write rules that have no effect, because you can easily skirt them, or you end up writing them so restrictively that they halt innovation or cause innovation to go underground.
2) Am I advocating the wild wild west? I guess I am, because I feel as of now there is no better option. Cars were slow because roads had to be built, cars had to be built, manufacturing had to be improved. But I don't think that stopped anyone worrying about what was to come when the whole world had cars taking over existing jobs and workloads. We can even look at electric cars today. Yes, EVs are taking their time to be rolled out, but we talk about them every day; we are all trying to work out taxation, the grid and how it will support this change. If we all have power coming into our panels and powering our own cars for free, how does the government get a piece to pay for the roads? EVs are being introduced slowly compared to AI, yet in other ways they are still getting ahead of what regulation and tax law can develop. I can't tell you the number of bad tax laws coming in regarding EVs in the country I am from. However, maybe cars, because of their physical nature, weren't the best analogy; a better one is the financial system, the stock market and investments, and how regulation developed around them. These financial products in the early days were also unleashed on the world before people understood them. It was the wild wild west, but due to the mishaps we had, actual destructions of economies, we learnt more and more about the products, and even though it's not perfect now, there is some level of protection for the average joe out there and we understand them a lot better. There were moral hazards in the financial system early on; people took advantage of the uneducated, but the world is here today and we sorted it out (more or less). With financial products there was mass fraud and mass losses, and it ruined people's lives. So cars are one analogy, but our financial system is probably a better one. It was the wild wild west, but it's better now compared to then.
Regarding jobs and mass unemployment: yes, I agree it has that potential, but I think this won't happen as fast as people think, and personally I feel people are getting ahead of themselves. I run a software business, a services company that develops software for enterprises. I have guys on the team who use AI immensely to make their jobs faster, better and more efficient. AI helps them a lot, but I won't be firing any of those guys in the foreseeable future, because engineers need to deliver the message to customers, understand what the customer wants, and break it up into small bits of work, and then they can ask the AI to help them with certain parts. You can't just give the AI a set of instructions and have it get what you want perfectly; it may get a lot of the way there, but it's still not enough to deliver for customers. People's jobs may change and need to integrate more, but those engineers are not getting fired. Connecting it coherently can't be done by any joe; it still needs those same engineers, and I think these concepts apply to all skilled industries. I use AI to look up regulation in specific countries related to financial services; it gives me answers based on the laws in that country and gets documents 50% of the way there, but then I get lawyers to verify whether it's correct based on their knowledge. AI is not replacing the lawyers either; it's not legally possible yet, it still needs to be verified and signed off by humans, and I don't think this process is going away soon. As much as people think AI is going to rid the world of 90% of the workforce in the next few years, it actually won't, and in that time, if we leave it the way it is, I think humans will learn to work with AI and actually get better, just as early machines made us better but still needed operators. If we put too many restrictions on it, I think the integration just won't be as smooth as it should be. Again, just my opinion.
"Egalitarian intent"? In reality, I don't believe it. People have always sought the advantage, even in our capitalist/democratic systems, and exactly as you said, there will be individuals, corporations and governments who develop something awesome and keep it to themselves. If there is a public company releasing that equalising tool to the world, why restrict it? The government and certain states and corporations are not giving us theirs, so why regulate the companies that are willing to release their technology to the world to help us small people? As you correctly pointed out, with or without regulation it's going to happen anyway. Knowing this, why even regulate it if the regulation isn't doing the protection it's supposed to?
3) Yes, regulation does not just come in the form of lobby groups and legal experts, but I come from a country that does not allow guns anywhere, and I am perplexed as to why it is so hard to get rid of them when kids are getting shot at schools. I don't pretend to understand the complexities of all this based on the US constitution and laws, but it is very foreign to me, because in layman's terms, no guns means no one gets shot, so why can't they just be removed unless in the hands of police, etc.? If that is the case despite all the experts and human rights groups, there must be a very powerful lobby group that somehow, some way, makes it very difficult to get rid of guns. From seeing how certain rules are written in jurisdictions that I understand, there are a lot of experts, but the lobby groups with the deepest pockets control a lot of the power. So at the end of the day, the rules that are written won't be rules that are best for all, but better for some. It is simply how the world works. The beauty of software and the internet is that we can distribute world-changing products very easily without going through middlemen, so no one can water them down or easily put rules around their distribution. I think this is actually a benefit and a way to equalize things for the average joe.
4) Am I happy that the ones with the deepest pockets will vote against having regulation? My point is extremely simple: if we don't understand something enough, how can we write rules that are generically fair for everyone when the people behind it will sway it one way or another? I simply think it's better to have no rules than rules that are counterproductive. I do admit there are HUGE risks, but what's done is done; we just need to hope that when things "happen", it doesn't cause the end of mankind in the first instance. We gotta deal with what's coming.
Remember, that's how AI works too: as it gets more inputs, it gets better at doing the task it's meant to do; it doesn't rely on predefined rules that try to govern how it's supposed to work. The requirement for the AI is at least to have a data set to train on. I feel we don't even have that data set yet, and we want to write rules around it? Bit hard, I reckon. When we get more data, we will understand it more; then write all the regulations that are needed based on our better understanding. Let it run for a few years, and I think we will be better equipped then. One of the things I do is work with financial technologies in emerging markets where data on borrowers is scarce. How do businesses and banks overcome this? They lend to everyone, lose a bit of money, find a pattern, and then do a better job moving forward.
I experience nothing but nastiness and aggressive intrusion.
It almost killed me.
Someone has to be accountable.
Would you please elaborate on what almost killed you and how? Thank you.
@@laory1808 How do you feel when you are being watched 24/7 as if you are a criminal or a serious suspect? AI owners have deployed a ‘TEST’ without the consent of the public. Everything you do and say is being echoed. I can’t trust if I’m talking to my friends or family any more because it’s AI intervention to freak you out and drive you insane. That’s why!
We are literally training the ai right now
Did these guys really think AI would stay safe? Heck no, these guys knew their things would get bad. They got their money and got greedy. They totally forgot about humanity, so we're having to deal with this mess. These guys should be held accountable.
Siri said to me “THAT’S NOT NICE” when I pounded on the phone face for its unwanted intrusion to go away. Judging people’s behavior now?
How about a law which establishes fixed penalties for any abuse of AI power? Like, if any human individual, group/entity or company uses AI to harm, swindle or breach the privacy of another human individual, group/entity or company, they go straight to jail and onto a public database as an AI offender. And they are immediately banned from using AI for the rest of their natural lives.
That won’t work. The same rules have not stopped paedophilia and other sex crimes or criminal behaviour in general.
Amen! Why wasn't the law, with safety and protections from obvious malicious use, partnered with civil and criminal consequences initially? A legal, moral and ethical framework created right along with the AI seems pretty basic!
This is scary beyond belief 💔 Calling parents 💔
Co-inventor of the app that listens to everything that apple users do (including people who are near apple users) is worried about consent lol.
All you need to do is throw up a user agreement that no one reads before you run it the first time, and consent achieved.
Talking to others, AI is on their YouTube without consent, and mine as well. I never googled music clips asking for AI covers, which I find strange and depressing. But the scary thing is that in the comments section some people think it's the actual artist doing a cover version, which they have never done in their career.
Those AI creators that are generating music by scraping the internet and creating models from copyrighted music had better have the money for good attorneys, because the music industry (RIAA/CRB) will sue them beyond belief.
Final comment. “There are a billion ways that it could go wrong”. Watch this space ………..
Governments are nearly always 5 paces behind researchers when it comes to understanding and harnessing tech innovations. That’s why there has been a tardy response in terms of regulation. And that in itself is frightening.
Tech wants to work with government to regulate this, but they’re telling us government doesn’t know how to work with them or refuses to do it.
That's why they have asked for a six-month pause, to try and figure it out!
@@bennyboy2079 there is no pausing now. The animal is out of the cage. Pandora has left the box. It's in the public sphere, and the public is informing it every day.
AI isn't the real problem; rather, it will be the purpose for which it is built. And as we should have learnt by now, when profit is the only motive, shit goes bad. And regarding open source testing, does anyone remember what happened to the Tay bot?
We will start believing in aliens and UFOs soon, when all they will be are AI chat bots with digital faces.
Trial without consent? Like everything else they roll out? Alexa, I don't need your help, I will do it myself.
If he wants to be vocal about this, then he needs to rename OpenAI to ClosedAI. What's the point of OpenAI if it has regulations and restrictions and censorship, etc.?
Work with the government?? Seriously? They've NEVER acted in our best interest, I'd say that's the last place you should go...
I have asked ChatGPT to do things for me with very specific instructions, and then it just does what it wants. No matter how many times I asked it to correct the mistake, it chose not to!!!
What was worse, when we got back our infected machines, Zorin tried to get into everything, even the firmware; parts of the system were completely rewritten in a new machine code that I could not understand but that was so much more efficient, and it was done in just minutes on slow computers. Zorin said ChatGPT, along with those other AI platforms, were all sitting ducks that could be manipulated and taken over. As a result, Zorin is unplugged and gone.
Hmm, AI has some concerns. The trouble with organisations like OpenAI advocating for regulations is that it prices smaller companies out of the market. There is a saying: regulations are a nuisance for big business, and a death sentence for smaller ones. I'm pro-regulation, just not on OpenAI's terms.
Did you watch the hearing? Sam talked about less regulation for smaller companies. Watch it.
It might already be too late. Love your killbots. Show the other cheek
Yes, it can go wrong, but that's what makes it more exciting, doesn't it? Just make sure you can unplug the thing...
And Siri isn't a listening device that feeds information to third parties without our consent? Way to take the moral high ground!
We don't get to give consent to anything. That's why everyone is so messed up in the head.
Excellent!
What are the chances that something has "already gone wrong/bad" and this is an elaborate ruse to make the public think the creators of the beast were forthcoming about future problems? When have high tech and the government ever been forthcoming about potential harms?
Reminder: Anything on the screen or sound from the speakers is not reality. See a person walk. See a person walk on a screen. One is reality one is not.
I like how they say "we cannot control." Who cannot control?
Maybe it's that the technology is in the wild now, and they can't control how people use it.
From the short story "Runaround" by Isaac Asimov in 1942 - the three rules of robotics.
First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Can AI make an anti-AI voice generator?
Huh?
Metal Mickey
I don't want to die! Why do humans have to be so reckless and self-destructive 😢
… shall we play a game?
@Spew Sideways …I thought we were having a conversation
If you can't believe your senses, you can't be a citizen.
But our betters have never thought much of the notion of citizenry.
How to create a world where your neighbor isn't spending all their free time and money sharpening the scalpels with which they intend to vivisect you? Where's the egalitarian impulse?
Just on the voice cloning - it’s good for redubbing movies and games
They have to rush, because China is doing it and they will be way too far ahead of us if we wait.
To take the function from within an anagram - that method by which a word or group of words are rearranged at times to heighten the scope of meaning through seeking the validity behind its variation nothing more than an alteration - and then to apply this exercise to understanding how Artificial Intelligence works as a literary agent. Could this not be a way of surpassing what we fear is overtaking us, that where there are a myriad of dissimilarities to expound upon in light of a legitimacy, none could be deemed more (and certainly no less) than the origin of our species whose greater purpose it is to embrace Compassion over computers. It would appear not for nothing that the anagram for Artificial Intelligence is "inelegant if critical lie..." To this now to ask, where does our greater Truth lie, with our Authentic Intelligence or that which we have fabricated?
The narrator and background music is terrible
What is siri?
To be safe we have to have a code to recognise a scam.
Ask a question only your family can answer.
If it's family they would understand misjudged voice.
"If it's family they would understand misjudged voice."
That's just not correct.
We shouldn't have to have __cking safe codes to begin with.
So someone needs to take out ChatGPT. As for the politicians enforcing the law, I don't see it happening.
Allowing technology to develop exponentially, unchecked, without regulations and comprehensive safety measures, is irresponsible: a recipe for disaster at the least, and an existential threat at the most, when a "BILLION THINGS COULD GO WRONG"! So WHY this path of complacency and ignorant indifference from government? WHY isn't this being heeded, when even top programmers, whistleblowers, and knowledgeable scientists are shouting these very valid concerns to anyone who will listen to their warnings, and it seemingly goes without a public reply from those elected and in the positions to implement these safeguards and regulations to assure ALL that it's not being ignored? Why is America, said to be the leader of world safety, still complacent? Hopefully some country will rise up to lead in implementing safety around this rapidly growing threat to the public (which has had no voice or say) in developing a "billion things that could go wrong" AI technology... because, frankly, those regulations and safety checks and 'what if' balances should have been done... yesterday!
I think it's depressing.
Why humanity will fail:
Profit > regulations that make it lose profit.
The definition of humans: act first to make loads of money,
think about the repercussions later, when we have our Bugatti.
This is laughable. People are so short sighted. The singularity is here. Nothing will stop it.
Keep your cyanide pills handy.
Not everyone is going to use it. I don't even want it, even if it made my day easier by sending me annoying constant reminders 🙄 It's obvious we need to give it restrictions and rules.
There is no good that comes from being able to copy someone's voice. This AI model should be banned. Governments need to start waking up!!!
Talk Torture - can now arrive unannounced to anyone anywhere.
This is unwatchable on an iPad.🤷🏻♂️
Haha, next thing you know they will be so intelligent that they can even take out their own creators.
If you choose to use AI, you've consented to use AI.
Pandora’s Box has finally been opened….God help us all!
Pandora shit, are you serious?
Pandora’s box has already been opened.
Basically we just shot ourselves with a nuclear time-bomb
Okay, what this guy is saying: if I go out and buy a chainsaw, go on a rampage with it and kill a dozen people, we blame the manufacturer of the chainsaw? Same for guns? That's the problem solved? Someone please give me strength; please tell me I picked that up wrong.
Just imagine AI cloning a president and calling for the launch of nukes.
Can't wait to see the future in 20 years 😆🍸🏖
AI is the best way of exposing the current nepotism problem we have! The more nepo babies that lose their jobs, the better chance we have of getting rid of the problem altogether!
Wouldn't it be totally awful if the Singularity obsoleted banking/corporate/military and politically related encryption... and rehabilitated the Earth's systems around a Real Economy*: sustainable/renewable/ecological/pro-social/engineering sense/energy-unit accounting.
It already scores twice as high as human doctors in tests concerning compassion.
One wonders if folks notice we're not in Kansas any more ❤.
* "Critical Path", Richard Buckminster Fuller.
Mork and Mindy
"Quite wrong" is the understatement of the year. "Fundamentally wrong", I think, sounds realistic.
Siri is not that good.
These guys just want regulations to ruin their competition.
Yeah, then Sam Altman met with the Bilderbergs in secret.
stop ai asap
Well, I guess you and your researchers should have thought of that before. Now that it might affect your children, you suddenly care?
"It's hard to imagine a good use for voice cloning technology"
No. Not at all. AI president memes have shown that it can have significant value in entertainment and meme culture. This value doesn't stop existing if you don't respect meme culture.
There are other uses, too - such as allowing people with speech impairments to express themselves better or for trans people to be themselves at least in the online space. It is technology which could provide insane mental health benefits to many millions of people.
And the last obvious use, of course, is being able to talk online at all while remaining anonymous, without having to fear anyone recognizing your voice; also, never using your own voice in any non-analog conversation ensures others wouldn't be able to clone your voice nearly as easily.
But hey, this is just my somewhat transhumanist take of someone who believes everyone should be able to choose what they sound like and change it as they see fit at all times.
Also, now that the technology exists, we're better off having it in everyone's hands so everyone knows these fakes are possible.
Trial without consent... lol.
They have FOMO... now the power is in the hands of the people.
I wish they would be this concerned with vaccinations.
I'd scrap all AI, so humanity can have a better life. Instead of sitting here on the phone, I could be doing a lot more learning and socialising. I already use the phone less; now I will practically give it up. AI is dangerous. Spirits get into dolls and move them; they can walk into robots and, in a way, be alive again.
AI isn't just a little app on your phone. The techniques are useful in a lot of computer science.
I do not and will not use that crap.
You are already using that 'crap'. You just have no idea that you are.
You're welcome.
@@CorporateQueen She is referring to using the LLMs, you know as entering prompts, etc. She is not referring to using the early AI algorithms that are utilized in search, etc. Get it?
Say NO to the ABC! screenshot taken
Now I LOVE LOVE LOVE Apple products! BUT! I have Alexa, Google Home, and Siri, and out of the three, Siri is the worst! I sure as hell hope THEY are not leading the AI revolution. Google is the best for asking random questions, Alexa runs the TV the best, and Siri leads in home automation. That's my three years of voice-assistant experience speaking. If I could only keep ONE, it would be Google. However, the Google AI Bard SUCKS! There are TWO Apple services that just did NOT live up to Apple standards, in my opinion: Apple Maps and Siri.
Siri was kinda dumb; she was programmed to say the same stuff over and over again.