We will use it to design manufacturing processes that create raw materials or final products autonomously. Natural intelligence has gotten us very far: we can fly, we can create vaccines. I think we should learn from natural intelligence before we start creating our own.
We should use AI to call out people's BS, unless that AI is designed to do BS, in which case we'll let the non-BS AI duke it out with BS AI, and watch the confrontation with beer, wings, fries, dips, and some occasional celery sticks.
Given enough training data it can do anything. The human brain evolved into what it is now, and AI evolves quite similarly. Different training models and approaches will teach them different things.
At first, I came here to learn and listen to science and space stuff. But after like 100 episodes, I'm excited for Chuck. I know Neil and the comment section always pick on him, but that dude is so smart. I don't base smartness on degrees and how much you know; I base it on not being afraid to ask questions, and on asking the right questions. He's just good at being curious.
I agree. I also feel that stand up comedians in general are naturally smart people. It takes a very unique kind of brain to come up with a funny line on the spot.
Clearly one of the best episodes on this channel. Matt is so articulate in his thoughts and so precise when he speaks. I like his clarity about social concepts. Chuck and Gary ask such insightful questions too. I just love seeing experts talk about things they are experts in.
Sometimes I wish StarTalk would hire an animator to illustrate, in a simple way, what some explanations look like. I've seen it a few times, but if all explainers contained animations, I think StarTalk could gain so many more followers and views. Just a thought.
Regarding the John Lennon AI story: McCartney has clarified that AI was merely used to isolate the vocal from an existing Lennon demo recording, in order to clean it up for a new song. Lennon's voice was not "synthesized".
That's correct; his voice was already on the demo. They just isolated it so it would remain in the song. Two songs were done that way: "Free as a Bird" and "Real Love."
Neil & Co. need to do more episodes on this subject. They certainly have a cohort of scientists who could offer different points of view on it.
As a physicist and systems architect for statistical analysis on large-scale data, it has been interesting to see the progression of machine learning over the last few years. The part that concerns me is whether there is a point where computers actually "learn" on their own, coming to decisions they were not programmed for. To me, that would be the major change between machine learning and artificial intelligence (although I'm sure there is a better definition out there). If that point comes, computers will be able to show us things in data that we didn't think of, which is exciting, but it is also a bit of a scary tipping point.
That's what the AI singularity is: the starting point, when it starts to learn by itself and make decisions that weren't programmed. Once that happens, we don't know what CAN happen next. You might look into the famous paperclip maximizer analogy. A lot of people thought about this very subject decades ago, when all we had were expert systems and AI was mere science fiction.
@bw-g2539 Maybe. Or if they remove the "can't learn from interactions" and "no memory beyond the current session" built-in limitations, which are obvious safeguards. OpenAI doesn't want to be held responsible for ChatGPT starting to write weird stuff, and I can understand that. Just allowing it to gather more information online (Auto-GPT) has already produced some freaky results.
I think we need better terminology since capitalism has co-opted the term artificial intelligence to mean any output that appears coherent. When a machine intelligence becomes conscious is, to me, when we have actual artificial intelligence. Until then it is just some clever trickery.
This was great, and Matt Ginsberg offered sharp and knowledgeable insight into AI and other topics. I'd like to ask if you could also have experts on who are knowledgeable about AI risk and who are outside the AI industry, people who look into potential problems with AI in and of itself, like Max Tegmark, Geoffrey Hinton, Yoshua Bengio, Eliezer Yudkowsky, or others. Thanks for the great episode!
AI is replacing us. I'm an illustrator, and since the development of AI "art image" apps, clients NO longer want to go directly to a real illustrator. They prefer to save money and not pay anything: just grab the images of the illustrator or artist you like, feed them into the app, and it does the job for you, free and in seconds, when going to a real illustrator would take days and money to create a piece of art or any type of illustration. This seems great to users and companies, but for us illustrators and artists it means the end of work commissions. Just in recent weeks I have lost several clients who decided to keep working with the apps rather than with me. The real problem here is that the AI needs us artists to come up with the styles; the machine then copies the artists' styles and makes them obsolete, without paying any kind of retribution to the real artists it feeds on. Does this sound fair to the art community? Meanwhile these companies are making millions of dollars just by getting people to use their products, products developed WITHOUT considering artists' rights to their work and images. There should be laws that protect us artists from companies and users robbing image rights: if a company wants to use my images, it should ask whether I want to lend them, and if I don't want to participate, it should respect that, or pay royalties for their use. You have to understand that the artists' images are THE FUEL of these apps (a fuel that isn't being paid for or recognized; it has been taken for granted). Without them, the apps could NOT produce the quality and rich variety of art images they produce now.
Here is a suggestion: next time you do an AI podcast episode, invite the people whose jobs are being affected (artists, writers, actors, voice actors, illustrators) and ask them how they are doing. You will get a much more complete picture of AI's impact on real life right now.
I just want to say that I'm very thankful for this discourse and Star Talk. I wish this was available when I was growing up only 20 years ago. This is amazing.
If the web had been there 20 years ago, I could've saved time and money doing what I love instead of going to school at all, which was my plan, but I had to follow the usual path.
@marsrideroneofficial Just imagine if we'd had all the resources and infinite insight of the web back then. However, I'm grateful for the time outside making mud pies, catching frogs, and chasing the cattle at granddaddy's farm. 😊
The best scene in the movie WarGames is when the main character asks the computer whether the military mobilization for WW3 is a game or real, and the computer answers: "What's the difference?" That is a problem with AI that needs to be worked out.
As a huge Beatles fan and a music producer: as far as I know they did NOT “sample” or create an AI model of Lennon’s voice. What I have heard (read) Paul and Ringo say is that machine learning was used to separate noise, instruments (piano) and voice so that a whole production could be built around the usable elements. Huge difference. Plus, they did that (the separation of sounds) for the Let It Be movie and Revolver album re-mix. And what they did (finishing a song) is not that different from what was done in 1995 by the band. Super nerdy comment I just had to make 😂
This was amazing; I could have listened for longer! It's good to hear different perspectives on this topic, especially after finishing a short film on the dangers of AI ourselves. Neil deGrasse Tyson is a legend in this field!
Nazareth University in Rochester, NY now has a full AI Bachelor of Science degree. AI is here to stay, so it might be best to learn about it and how to use it responsibly.
My biggest fear of AI is that humans won’t have time to adapt. AI growth is exponential. Humans need time to adapt to change and AI is about to change the world the way cell phones did in half the time. And much bigger impacts. I worry we won’t adapt in time.
We want more, we want more, we want more. This debate is so good that there are so many questions left in the air without an answer. 1/5 plays? Why aren't they hiring you already? After hearing this, I'd hire you for all my bids, with a commission.
"Short Term Bumps" and "Problems are Opportunities in Disguise" are two great phrases. Basically, nearly all groundbreaking developments end up with collateral damage, especially early on, but things eventually even back out. This should not prevent us from making those strides, or push us to regress to lesser times.
What I need to add, as an auditor of almost 40 years, is that rigid, uncircumventable controls should be implemented to mitigate the risk of AI running amok. This has to be done BEFORE AI is implemented in any system; otherwise AI could find ways to circumvent them (which is basically the point of AI).
Gee, and here the AI developers weren't even bothering with controls... or rules? It sounds like you're describing rules, not inputs (controls). Don't worry, humanity, the auditor has the solution for the technology sector: Bureaucracy.
@@SupachargedGaming The controls required are neither over input nor over processing, but rather over output: that is, over the new ideas the AI comes up with. To use a very mundane example: if a medical device advises that a patient's paracetamol dose should be replaced by, say, acetylsalicylic acid, there should be a control that first checks whether the patient is allergic to aspirin. Scoff if you must, but auditors aid all sectors (even technology) in self-preservation.🤓
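A minimal sketch of what such an output-side control could look like in code. The drug names, the patient record structure, and the rule itself are all hypothetical, purely to illustrate the idea:

```python
# Sketch of an output-side control: the model's proposed substitution is
# checked against patient data before it reaches a clinician or device.
# All names, fields, and rules here are hypothetical.
def recommend_substitution(patient: dict, proposed_drug: str) -> str:
    # The control runs on the OUTPUT, regardless of how the AI reached it.
    if proposed_drug in patient["allergies"]:
        return f"BLOCKED: patient is allergic to {proposed_drug}"
    return f"OK: substitute {proposed_drug}"

patient = {"name": "A. Smith", "allergies": {"aspirin"}}
print(recommend_substitution(patient, "aspirin"))    # the control blocks it
print(recommend_substitution(patient, "ibuprofen"))  # passes the control
```

The point is that the check lives outside the model: whatever the AI proposes, the control runs deterministically before the recommendation reaches anyone.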
So, Neil disagreed with the surprise scenario. Let's point out something really quick. Deep learning models are considered universal function approximators. When they train, they learn to approximate the probability distribution of the training dataset. A model does not learn to identify distributions outside its training set. Much like y = x^2: if you put in an x of any type, it will try to output x^2. If the label was supposed to be x^3, the neural network alone won't be able to classify that appropriately. Now, with that said, there are anomaly detectors that use Variational Autoencoders (VAEs). These are models that try to replicate their input. You can look at the reconstruction error to find anomalous inputs that don't match the training distribution. So if I have a model trained on just cats, it will only know how to reconstruct inputs into cats. If you give it a car, it will build a car out of cat "parts" or some other noise. A large reconstruction error on such an output can be flagged as anomalous. There are other methods, like recalibration, which analyzes the confidence outputs of a classifier and rescales the confidence values to fit a particular context or probability. That makes it possible to push low-confidence samples toward low-confidence outputs and flag those outputs as anomalous. So we're both right and wrong here. AI can be used as an anomaly detector, but the model alone cannot tell you about the anomaly. It needs to be designed specifically to identify anomalies, not just trained and relied upon as-is.
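A rough sketch of the reconstruction-error idea, using a linear autoencoder (which is just PCA) on synthetic data instead of a real VAE on cat photos. The dimensions, noise level, and data are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "in-distribution" data: points near a 1-D line inside 5-D space,
# a stand-in for the "cats" manifold described in the comment above.
basis = rng.normal(size=(5, 1))
train = rng.normal(size=(200, 1)) @ basis.T + 0.05 * rng.normal(size=(200, 5))

# A linear autoencoder is just PCA: keep the top principal component.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
component = vt[:1]  # shape (1, 5): acts as both encoder and decoder

def reconstruction_error(x):
    code = (x - mean) @ component.T          # encode to 1-D
    recon = code @ component + mean          # decode back to 5-D
    return float(np.linalg.norm(x - recon))  # distance to the learned manifold

in_dist = train[0]                  # a "cat": lies near the manifold
anomaly = 3.0 * rng.normal(size=5)  # a "car": a random off-manifold point

print(reconstruction_error(in_dist))  # small
print(reconstruction_error(anomaly))  # much larger: flag as anomalous
```

The model never "knows" it saw a car; it only reconstructs badly, and the detection logic is the separate thresholding step built around it.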
At the end of the day, every advanced and progressive scientific and technological invention and innovation, every gadget, every micro to macro tool, should always be for the betterment of society: human-friendly, transparent, transformative, and sustainable. It should aim to strengthen people mentally, physically, and spiritually, to solve human complexities and contradictions, and to advance human civilization in every way, through ever more developed and progressive versions of art, literature, and culture.
If ever there was an interview that raises my red flags, this is it. StarTalk did a great job; this guy just wants to deflect questions, dance around difficult answers, and pat himself on the back. I'm a fan of AI, and I want AGI, but I do not want a monopoly like Google getting it first, and this guy is doing a great job confirming that for me.
Literally most AI experts talking about this stuff keep repeating the same thing: "Oh, don't worry about being replaced, it will only enhance us." Yeah right, total BS.
@@alfrilysencarnacion2085 They'll do whatever saves them the most money in order to give more to their shareholders and line the pockets of their executives. They don't care; they just want money. Right now they're using AI as a marketing ploy, "Our AI-enhanced product will improve your life, whatever whatever," but with GPT-4 and others coming soon, it really is capable of replacing jobs. In fact, quite a lot of what I've done in recent jobs could be replaced with AI. It's not if, it's when, and when is coming soon.
@@enadegheeghaghe6369 Have you searched Google? What you see is where they want you to look. You need to look far and wide to get what you really want. They work with these companies to put them first.
21:00 I feel like I thought of a good analogy. You have an AI camera looking at a piece of paper with an infinite number of shapes drawn on it (the paper representing spacetime and the shapes being things in that reality). The paper is infinitely big, with every possible shape there can be, even new shapes. But no matter what, the AI wouldn't be able to find a four-dimensional shape. At this point AI doesn't "think" the way we consider thinking. It's not going to get creative and go out of the box in any way that hasn't been programmed. That would require an AI that's able to self-program its own original processes, I guess... but even that is relatively archaic, because the AI would have to make some kind of conscious choice about what it's programming for itself on the spur of the moment. Like, that AI sports play-caller isn't randomly one day going to ask what its purpose is and then freak out. That would be like worrying that a rose you're growing might grow fingers instead of thorns.
I think we need more education. My father, born in 1921, had 12 years of school. I, born in 1960, had 12 years. My son, born in 1989, had 12 years. Today we still only go up to grade 12 in public schools. There is so much more to learn than 100 years ago, but we never extended the public education system. We should be at least at grade 14 by now.
“No matter what the finance minister and her spokespeople say, the market has spoken - the human nation’s credit rating is falling like a stone, while 01’s currency is climbing without stopping for breath. With headlines like that, the money markets have no choice but to…” “The leaders of men, their power waning, refused to cooperate with the fledgling nation, wishing rather that the world be divided.” -The Animatrix, Second Renaissance
I'm a high-level controls engineer. Robotics has a long way to go; however, factories are currently performing machine learning, and we are using AI to implement lean manufacturing techniques. At the same time, I'm trying to expose ancient art from North America. Manifest destiny is still in full effect. The indigenous population made a whole lot more than arrowheads; they made art that goes unrecognized by modern humans.
For real, Chuck was the best part of the show. That Queen came prepared to talk about sports, encouraging a nerd fight!!!, moral ramifications, and finding their way in or out of the closet, as they see fit. Thank you for one of your best shows: Chuck, and the two nerds. Thank you for asking questions about machine learning, and how it can impact all of us. And also, all the creative fields that are being disrupted because of it... Rules need to be established that put the actual living beings first.
I love watching StarTalk; however, I get annoyed with the frequency of interrupting and talking over each other. (Might be a pet peeve; I'm working on it.) I would like to see a difference in future releases. I've got 2 months before it's time to renew my Patreon subscription.
As an artist, I am incensed by the idea that my work can be sampled, without compensation or accreditation, to enable an AI to produce new art that mimics mine almost perfectly, which someone else can then sell. It is conceptually abhorrent, particularly if it puts me out of a job. This is one of the precise issues the Writers Guild is protesting right now, btw...
But the issue is, it is already too late. Once we know HOW to do something, the genie cannot be put back in the bottle. We cannot unlearn it and say, "You know what? This technology is forbidden knowledge. Let's pretend we never invented it." We have to adapt to the change in our society, just as with photography, the motion picture, radio, television, automation, video games, the internet, smart devices, VR, etc. The debates happening now have happened with EVERY one of these inventions; did you know that? AI is no different. Also, as pointed out, even if we were to massively regulate it, or (extreme example) outright ban and outlaw AI, other countries won't. We live in a global society thanks to the internet, so our regulations alone won't do much. So we have to find a way to cope and integrate AI into our lives.
@@river_acheron Whew, this is way back. So, you make good points, but there are counterpoints I want to emphasize here. 1) "AI" isn't "Artificial Intelligence"; the term was co-opted. The ability to learn was just ONE metric of almost a dozen for "AI", and it's the ONLY one these systems have. They also cannot teach themselves; they must be taught (fed data). It's a marketing scam to drum up investors. I call it "AI" for "Automatic Interpolation", which is more accurate and less sensational. 2) You're right, it's here to stay, but that doesn't mean artists should be shafted and told their work is fair game for big corporations. It's actually fairly easy to legislate regulations on the attainment and application of training data (some, like Adobe, have already implemented this). "Anarcho-capitalism" is just theft. 3) If only the US and UK pass these regulations, that represents such a massive market share that even foreign countries will comply in order to access our markets; it's been done time and again with almost every international industry. 4) Being compensated and credited for one's work is a fair ask, and a matter of settled law for decades, *centuries* in some cases. It's also well within the abilities of these mega-corporations; that they want to whine about it is their problem. 5) I don't think many folks are calling for outright bans on the tech anymore. It's not "Don't use it"; it's "Use it responsibly and fairly" that's being asked now.
Great discussion! I used to watch Star Talk on National Geographic and am glad to have discovered this channel. I just need to learn to type faster with my thumbs. Yes I am a boomer.
Yes, this was a comprehensive conversation on this topic... since the early days of the DALL-E copyright complaints there have been many interesting YouTube conversations, but this has been the best one yet... imo
Make it a felony to misrepresent or convey a deep fake as real. Hold all media social or otherwise responsible. Have a 6 month grace period where warnings will be sent out. Enforce it enthusiastically. You can't stop people from making them. You can stop them from being distributed.
Librarians are having similar discussions under the heading of Information Literacy. It's important and getting more so every day. Thanks for this great episode.
The REALLY SCARY part of this is how DISCONNECTED from REALITY (no offence intended) the people developing these technologies are. Statements like: -Cars are cheaper now (literally at a time of record prices and markups) -We can trust info if the source is a news agency (most news agencies literally take their news from TikTok and viral videos) And especially... -Yeah, it's always been "challenging" for society to keep up with technology; it's important that society distinguish the apparent changes (proceeds to take ZERO responsibility for his role in developing these technologies) -I am thrilled I get to spend my productive time working on technical problems and I don't have to solve the social problems (🥶😱 these are the people shaping the future)
You named all the red flags I heard and then some. The overarching pattern here is that all these people at the forefront of AI development talk about ways to increase profits. Of course this guy is designing an AI to call plays in football. Of course people are designing AI to make decisions in the stock market. They think they're doing humanity a favor. To the contrary: we haven't even figured out how to allocate food. We have 9 million people starving to death every year worldwide, and in America we have 700k with heart disease. Profit motives drive the poor allocation of resources, and here we are, once again exacerbating the problem with more tools. AI could make incredible medical discoveries. But is that what we will use it for? That would require universal healthcare; otherwise it once again becomes a technology to increase profits at the expense of prosperity. We can't even get a hold of the divide-and-conquer strategies driven by radio, television, and the internet, and we think we are responsible enough for AI? If an alien civilization were trickling technological advancements to us, they would cut us off until we proved ourselves worthy. I hope that's the case. I hope we got their attention with Hiroshima and Nagasaki. I hope they're watching over us. Call it God... I don't care. We need help.
Our recent view of "news outlets" brings home these discussions about trusting sources. I see too many people citing sources that aren't real sources. People who reference "Wikipedia" are one example.
I agree with you to some extent. Society is what every advancement was supposed to be made for, to let people live a happier life, but these ruthlessly ambitious technological advancements are being made while neglecting the actual society. The social sciences are always looked down upon, while the ultimate reality is that all these advancements are means to an END, and that end is human happiness and longer survivability. I am making my life in social science so that I can devise ways to solve the actual problems these people are ignorantly creating. Capitalism is now turning out to be more bad than good. Sometimes I feel poor people are just going to be exterminated in the times to come.
It is ironic that this gentleman talks about the need for trusted sources and yet Google played a huge part in the decline of accredited, quality journalism
This guy raises my red flags left and right. He's deflecting questions, like the one about the morality of bringing John Lennon back to life. For one, he works for a monopoly, and on his own he's not doing a great job of gaining my trust, that's for sure.
@@Nefville Yeah, we don't need AI puppets of dead artists doing poor impressions for the continued consolidation of profit by billionaires, who we know are the ones that will exploit dead artists' AI reanimations. I wonder how the surviving families would feel about a fake image or voice of a loved one being exploited. It's sickening, it's fake, and it can never fully replace a human artist. I don't trust any corporation. They are ALL inherently greedy and care for nothing but their own enrichment. Profits dominate their entire existence.
AI in and of itself is not bad. It's literally just like anything else, or any other tool used by people. It's people that are good or bad. Good people will use it for good things. Bad people will use it for bad things.
@Wis_Dom - Well, I didn't distinctly say "no tool is bad." However, I can see how that could be inferred from my statement. Weapons as a whole could easily be said to be bad things, but again it can be argued either way depending on their end use. For nukes, the tool can be the bomb itself or the energy knowledge contained within it.
Philosophy: the study of the fundamental nature of knowledge, reality, and existence. My takeaway from this debate is: why can't we create a philosophy AI to balance the truths and lies and govern all the outputs generated by other AIs?
The Calculated Year As Of This Writing Is 02052, QT or On Adapted Unified Field Theory To Maintain Compatibility With Religions With 12 Month Years is 02053, AUFT To Keep Time Moving, And Use Quantum Physics, Which Enables Quantum Computing, The 12 Negative Time Zones And 12 Positive Time Zones Have To Be Converted Into 20 Positive Time Zones or 24 Positive Time Zones (For Adapted Unified Field Theory), from 0-19 or 0-23. It would also be great if you can do a special on Planetary Thermodynamics that explain our current planetary temperature based upon our distance between the earth and the sun... it should be simple for everyone to understand why we have summer and winter, but few appear to grasp it and think a magical force must be forming rain clouds and tsunamis for no reason.
Trusted sources are what I choose. I've been saying this for years. Anyone can post on YouTube or other social media sites. Anyone can make their own website and make it look really professional. I trust sites that I know as "mainstream" and am skeptical of EVERYTHING else. Whatever a non-trusted source says, I try to verify with trusted sources.
I have a question for Matt about his punt prediction model. I'm curious how, over time, any of the recommendations/predictions have changed. I'm guessing that when it was first developed, the data was based on old-school thinking about when to punt. But now that you see more teams implementing your suggested approach, have the outcomes changed at all?
The decision is made by this big recursive computation, trying to maximize the chances of winning based on statistics about the outcomes of various plays. Those statistics (outcomes of specific plays) are likely pretty stable even as team philosophies change.
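A toy sketch of that idea: pick whichever action maximizes expected win probability, given outcome statistics. The numbers below are invented placeholders, not the real model's data, and the real system recurses over the whole rest of the game rather than a single step:

```python
# Each action maps to (success probability, win prob if it succeeds,
# win prob if it fails). All numbers are invented placeholders.
outcomes = {
    "punt":      (1.00, 0.38, 0.38),  # "safe": a fixed field-position value
    "go_for_it": (0.50, 0.55, 0.25),  # risky: big upside, big downside
}

def win_probability(action: str) -> float:
    p, if_success, if_fail = outcomes[action]
    return p * if_success + (1 - p) * if_fail

# Recommend whichever action maximizes expected win probability.
best = max(outcomes, key=win_probability)
print(best, win_probability(best))
```

Because the inputs are per-play outcome statistics rather than coaching conventions, the recommendation only shifts if the underlying play outcomes shift, which is the point the reply above makes.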
Education is everything, it is the only weapon against ignorance and complacency in order to arm ourselves for the future. Therefore we need to make education accessible to everyone around the world. Easy!
The Harris Rogan cinematic universe we didn’t know we needed . Alex Jones is a nut bar, like some of his guests joe is just interested in learning about their perspective for better or for worse, allowing the listener to be a bug on the wall. Congrats on the Emmy! Was here since Taylor ice cream machines
When I first began to watch this podcast, my opinion of Chuck Nice was not very high, and I didn't understand the inclusion of a comedian in the show. However, I have changed my opinion and determined he is one of the most insightful, well-spoken, and intelligent voices on the internet. I agree with Chuck: it is a problem of education. Teaching basic logic should be a fundamental part of early childhood education. I have a small sample size in my own life, where I taught my son Occam's Razor at a very young age, 6. Not the complicated methodology, but simple methods to look at the world and make decisions based on simple rules. He is now 17 and I have paid for it in spades when he calls his old man on his own BS. 🤣 I have created a monster, but at least he is not fooled by the media machines of the internet. The simplest answer is usually the correct one.
The problem is that if a coach gives commands, how likely is it that the player will do it, or can do it? What's his stamina, how long has he been on the field, or better, can he depend on the rest of the field?
If individuals were to recognize that their own brains are prone to bias and deception, it could foster a greater sense of self-awareness and critical thinking. People might become more cautious in their judgments, more open to different perspectives, and more willing to question their own beliefs. This heightened awareness of cognitive biases, such as confirmation bias or the tendency to favor information that supports preexisting beliefs, could lead to a more balanced and objective approach to decision-making. As a result, individuals may become more empathetic and understanding of others, recognizing that everyone is subject to the same cognitive limitations. This could lead to increased cooperation, reduced conflicts, and a stronger sense of community. People might engage in constructive dialogue, seeking common ground and shared understanding instead of perpetuating divisive ideologies. That's part of a theory I'm working on.
The difficulty of solving chaotic problems (the 3-body problem, for example) isn't computing power or time; it's error or rounding in the initial-state parameters. It doesn't matter how powerful the computer is or how long you run the simulation: you will end up with a diverging result simply because the tools to measure the initial state are not good enough. They will never be good enough. We don't even have the decimal accuracy in our computers, and never will; we already run into floating-point errors in non-chaotic problems.
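A quick way to see this, assuming nothing beyond standard Python floats: iterate the chaotic logistic map from two initial states that differ by far less than any instrument could measure, and watch the trajectories diverge:

```python
# Iterate the chaotic logistic map x -> r*x*(1-x) from two starting points
# that differ by 1e-12, far below any realistic measurement precision.
r = 4.0                   # parameter value in the fully chaotic regime
x, y = 0.2, 0.2 + 1e-12

max_gap = 0.0
for step in range(100):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

print(max_gap)  # the trajectories completely decorrelate within ~40 steps
```

The error roughly doubles each step, so a 1e-12 uncertainty reaches order 1 in a few dozen iterations, regardless of how fast the computer is.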
Nice talk; as always, a pleasure to watch. What I really don't understand are these contradictions between experts: some saying "run for your life," "stop AI development, it will destroy us," "we are doomed," and others saying "AI is not smart like us, doesn't know what reality is, it just predicts some words," "it often just hallucinates and gives false information," "we should call it Artificial Idiocy," and so on. And these are experts, highly educated in exact science fields, from whom we expect clear evidence and answers. If they are not capable of giving the general public a unanimously accepted, accurate, and solid answer, they will only contribute to the rise of ambiguity and uncertainty.
ChatGPT's (3.5) answer to the question "Is there a correlation between the phases of the moon and the amount of chicken eaten in Denmark?": "As an AI language model, I don't have access to real-time data, and my knowledge is up to date only until September 2021. Up to that point, there was no scientific evidence or study indicating a correlation between the phases of the moon and the amount of chicken eaten in Denmark or any other country. Correlation studies require careful analysis of data and statistical methods to determine whether there is a relationship between two variables. If there has been any research conducted on this topic after September 2021, I wouldn't be aware of it. In general, claims of correlations between seemingly unrelated events (like moon phases and chicken consumption) should be treated with skepticism unless supported by rigorous scientific studies with statistically significant results. Otherwise, such claims may be mere coincidences or unrelated factors influencing the observations."
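That caution is easy to demonstrate: with small samples, sizable correlations between completely unrelated series appear by chance alone. A small sketch (the 12-point "monthly" framing and the trial count are arbitrary choices):

```python
import random

random.seed(1)

# Pearson correlation, computed by hand to keep this self-contained.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Draw many pairs of unrelated 12-point series ("monthly moon phase" vs.
# "monthly chicken consumption") and record the largest |r| found by chance.
best = max(
    abs(pearson([random.random() for _ in range(12)],
                [random.random() for _ in range(12)]))
    for _ in range(1000)
)
print(best)  # sizable, even though both series are pure noise
```

This is exactly why "statistically significant results" and rigorous methods matter: with enough unrelated variables, something will always correlate.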
Neil sir, I have a question as a student. In your earlier video about time travel and the twin paradox, if we move at the speed of light, then time will differ for the two observers depending on their positions. But what would happen if one observer is on Earth and one is revolving around the Earth at close to the speed of light?
Using AI to create a video, a recording of someone's voice, an image, etc. should only be allowed in the privacy of one's home, and the result should not be distributed, sold, or posted on the internet without the permission of the original person or their estate. So if someone wants to use John Lennon's voice, image, etc. to create a new song, they can do it all they want at home for their own personal enjoyment or research, but they shouldn't be allowed to make copies, or post or sell copies or the original on the open market, without written permission from John Lennon or his estate, and royalties must be paid if it is allowed and it sells. The same goes for anyone, famous or not. AI should NOT be allowed to create any form of news. PERIOD. Any time AI is used publicly, there should be some form of watermark or label so the viewer or listener knows ahead of time that the content is AI-generated and can't be trusted to be valid, truthful, etc.
May I suggest two short stories by Ray Bradbury about the replacing of humans with technology. The first is "Marionettes Incorporated", a cautionary tale wherein we think we can control the AI but unleash our replacements, who are not just satisfied to serve but plot to take our place. The second is "I Sing the Body Electric". Here AI does serve us well, but we become too fond of what we create. Has what we built in our likeness become capable of feelings, and how do we deal with it?
What we need is a look at AI and machine learning technology in astrophysics and future space advancement, such as considering merging man and machine in order to endure interstellar space travel, advancing civilization on Mars and the Earth-like moons of Jupiter, getting to Alpha Centauri while enduring a 1,000-to-10,000-year journey, etc., etc.
Okay, I usually never comment here, but this episode was so good that I had to. A really intriguing, interesting, and engaging episode; I was sad when it finished.
To help identify fakes, creators should start signing their content. From the camera and creation tools themselves, it should be an option (i.e., using digital certificates), so a chain of sources can be established. We're never going to get 100% of the fakes to be properly watermarked (as said in the video, it depends on the creator's good intentions), but we can get trusted sources to digitally sign (in essence, watermark) their content going forward, because they have incentives to do so (like protecting ownership/copyright, and showing it comes from them, to put their reputation behind it). Anything not signed would be considered derivative and/or unverified, and consumers should know/learn to take it as such.

Then consumer apps can indicate who signed the content you're consuming. There will be a (relatively) small amount of content watermarked from its source (which can be presented as factual, opinion, or AI-generated, according to the source). It is up to the consumer to decide if they believe it. And any inquiring consumer can easily track authentic content to its source (not to mention that it makes it possible for authorities to track the chain upstream). If the chain is broken, the content has to be treated with less veracity than properly signed content (and/or you have to find a better source that is unbroken).

When the courts and government get involved, there would be a chain of evidence (making libel laws sting again, catching those purposely lying). And consumers should be encouraged to find original sources when making arguments for their conclusions: they seldom do it now, since anybody can write whatever they like on the Internet (and present it as a good source, and disown anything that becomes disproven). But if they (and their sources) get rated, with consequences when your past arguments divorce from reality, then we might get to having meaningful discourse.
I'm thinking along those same lines. There will be a huge incentive to digitally sign your work. Possibly public-key encryption? Blockchain might also make sense.
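The signing-and-verifying idea in these comments can be sketched roughly in code. This is a hypothetical toy: real provenance schemes (such as the C2PA content-credentials effort) use public-key certificates, whereas the HMAC used below requires a shared secret and only stands in for a real signature.

```python
import hashlib
import hmac

# Hypothetical sketch: a creator "signs" content so consumers can verify it
# hasn't been altered since signing. CREATOR_KEY is a placeholder secret;
# a real system would use an asymmetric key pair and a certificate chain.
CREATOR_KEY = b"creator-secret-key"

def sign_content(content: bytes) -> str:
    """Return a hex signature binding the creator's key to the content."""
    return hmac.new(CREATOR_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Check that the signature matches; tampered content fails."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, signature)

original = b"Photo taken by Jane, 2023-07-01"
sig = sign_content(original)

assert verify_content(original, sig)             # intact chain verifies
assert not verify_content(b"Edited photo", sig)  # tampering breaks the chain
```

The point the commenters make survives the simplification: anything whose signature does not verify is treated as derivative or unverified, and consumer apps could surface that status automatically.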
What do you think we'll use AI for as it continues to develop?
Do physical labor most efficiently
We will use it to design manufacturing processes that will create raw materials or final products autonomously.
Natural intelligence is something that has gotten us very far, we are able to fly, create vaccines, I think we should learn from natural intelligence before we start creating our own.
We should use AI to call out people's BS, unless that AI is designed to do BS, in which case we'll let the non-BS AI duke it out with BS AI, and watch the confrontation with beer, wings, fries, dips, and some occasional celery sticks.
Given enough training data it can do anything. Human brain evolved into what it is now and AI evolves quite similarly. Different training models and approaches will teach them different things.
Mapping asteroids with threat determination would be good.
At first, I came here to learn and listen to science space stuff. But after like 100 episodes, I'm excited for Chuck. I know Neil and the comment section always pick on him, but that dude is so smart. I don't base smartness on degrees and how much you know; I think it's about not being afraid to ask questions and asking the right questions. He's just good at being curious.
He is good at asking the question that we are all thinking and wanting to ask ourselves.
Wtf are you talking about? He's pretty smart; it's his ROLE to make it comical… 😅
I agree. I also feel that stand up comedians in general are naturally smart people. It takes a very unique kind of brain to come up with a funny line on the spot.
You understand that education does not equal intelligence. I applaud you.
Also here for Chuck
The honesty, humor, and great energy between these guys make the whole video, every time!
The topics and conversation are great and move things forward. Also, Chuck is great at keeping it lively.
Chuck came PREPARED for this conversation! contagious enthusiasm, as usual. 👍
I've rarely if ever seen Chuck as serious as he is in parts of this episode.
I liked it a lot 😁
So serious for real
Clearly one of the best episodes on this channel. Matt is so articulate in his thoughts and so precise when he speaks. I like his clarity about social concepts. Chuck and Gary, too, are asking such insightful questions.
Just love to see experts talk about the things they're experts in.
Sometimes I wish StarTalk would hire an animator to animate, in a simple way, what some explanations look like. I've seen it a few times, but if all explainers contained animations, I think StarTalk could gain so many more followers and views. Just a thought.
Honestly, this could be good. I don't want it to be done infographics-style, though... if you make something cheap, it will feel cheap.
I've always said this. At least now, with AI, animators have an easier time...
Regarding the John Lennon AI story: McCartney has clarified that AI was merely used to isolate the vocal from an existing Lennon demo recording, in order to clean it up for a new song. Lennon's voice was not "synthesized".
That's correct, his voice was already on the demo. They just isolated it so his voice would remain in the song. There were 2 songs that were done that way: Free as a Bird and Real Love.
Neil & Co. need to do more episodes on this subject. They certainly have a cohort of scientists that could offer various different points of view on the subject.
I remember Stephen Hawking saying that he thinks AI will be dangerous.
As a physicist and systems architect for statistical analysis on large-scale data, it has been interesting to see the progression of machine learning over the last few years. The part that does concern me is whether there is a point where computers actually "learn" on their own, coming to decisions that were not programmed for them. This, to me, would be the major change between machine learning and artificial intelligence (although I'm sure there is a better definition out there). If that point comes, then computers will be able to show us new things in data that we didn't think of, which is exciting, but it is also a bit of a scary tipping point.
That's what the AI singularity is: the starting point, actually, when it starts to learn by itself and make decisions that weren't programmed.
Once that happens, we don't know what CAN happen next.
You might look into the famous paperclip maximizer analogy. A lot of people thought about this very subject decades ago, when all we had were expert systems and AI was mere science fiction.
I feel like this will come with quantum computers
@bw-g2539 Maybe. Or if they remove the "can't learn from interactions" and "no memory beyond the current session" built-in limitations, which are obvious safeguards. OpenAI doesn't want to be held responsible for ChatGPT starting to write weird stuff; that I can understand.
Just allowing it to gather more information online (Auto-GPT) has already produced some freaky results.
AI is a scam and you all bought into it. Real AI is sentient not just a complicated program.
I think we need better terminology since capitalism has co-opted the term artificial intelligence to mean any output that appears coherent. When a machine intelligence becomes conscious is, to me, when we have actual artificial intelligence. Until then it is just some clever trickery.
My goodness, Neil, Chuck, and Gary, this episode relieved a terrible amount of anxiety for me. Bless you guys ❤❤❤
We need to accept that failures are a great way to learn.
This was great and Matt Ginsberg offered sharp and knowledgeable insight into AI and other topics. I'd like to ask if you could also have experts on who are knowledgeable about AI risk who are outside of the AI industry, people who look into potential problems with AI in of itself like Max Tegmark, Geoffrey Hinton, Yoshua Bengio, Eliezer Yudkowsky, or others. Thanks for the great episode!
This may be your best show that I’ve seen. Really important subject matter and serious but entertaining discussion.
AI is replacing us. I'm an illustrator, and since the development of AI "art image" apps, clients NO longer want to go directly to a real illustrator. They prefer to save money and not pay anything: just grab the images of the illustrator or artist you like, feed them into the app, and it will do the job for you, free and in seconds, when going to a real illustrator would take days and money to create a piece of art or any type of illustration. This seems great to users and companies, but for us illustrators it means the end of work commissions. Just in the past weeks I have lost several clients who decided to continue working with the apps rather than working with me.

The real problem here is that the AI needs us, the artists, to come up with the styles; the machine then copies the artists' styles and makes them obsolete, without paying any kind of compensation to the real artists it feeds from. Does this sound fair to the art community? Meanwhile, these companies are making millions of dollars just by getting people to use their products. Products that are being developed WITHOUT considering the artists' rights to their work and images.

There should be laws that protect us artists from companies and users robbing us of our image rights. If the companies want to use my images, they should ask whether I want to lend them my images, and if I don't want to participate, they should respect that, or pay royalties for the use of my images. You have to understand: the images of the artists are THE FUEL of these apps (a fuel that isn't being paid for or recognized; it has been taken for granted). Without them, they could NOT produce the quality and rich variety of art images that they produce now.
Here is a suggestion: next time you do an AI podcast episode, invite the people whose jobs are being affected (artists, writers, actors, voice actors, illustrators) and ask them how they are doing. You will get a much more complete picture of AI's impact on real life right now.
I just want to say that I'm very thankful for this discourse and Star Talk. I wish this was available when I was growing up only 20 years ago. This is amazing.
If the WWW had been there 20 years ago, I could've saved time and money doing what I love and not gone to school at all, which was my plan, but I had to follow the usual path.
@marsrideroneofficial Just imagine if we'd had all the resources and infinite insight of the WWW back then. However, I'm grateful for the outside time making mud pies, catching frogs, and chasing the cattle at granddaddy's farm. 😊
The best scene from the movie WarGames is when the main character asks the computer whether the military mobilization for WWIII was a game or real, and the computer answered: "What is the difference?" That is a problem with AI that needs to be worked out.
As a huge Beatles fan and a music producer: as far as I know they did NOT “sample” or create an AI model of Lennon’s voice. What I have heard (read) Paul and Ringo say is that machine learning was used to separate noise, instruments (piano) and voice so that a whole production could be built around the usable elements. Huge difference. Plus, they did that (the separation of sounds) for the Let It Be movie and Revolver album re-mix. And what they did (finishing a song) is not that different from what was done in 1995 by the band. Super nerdy comment I just had to make 😂
Chuck is so incredibly smart ! I love him and his quick witted quips. Love this show so much !
We watched him get smarter over the years by hanging out with the right crowd :)
The humor and pacing really help me digest these complex topics better. Such a good recipe you guys have on this channel.
The camera doesn't lie - but photographers do.
What a great interview!!!!! Communication and education will set us all FREE!!!
This was amazing, I could have listened for longer! It's good to hear different perspectives on this topic, especially after finishing a short film on the dangers of AI ourselves. Neil deGrasse Tyson is a legend in this field!
Nazareth University in Rochester, NY now has a full AI Bachelor of Science degree. AI is here to stay; it might be best to learn about it and how to use it responsibly.
My biggest fear of AI is that humans won’t have time to adapt. AI growth is exponential. Humans need time to adapt to change and AI is about to change the world the way cell phones did in half the time. And much bigger impacts. I worry we won’t adapt in time.
I love how Startalk has only a general direction of conversation and the conversation just flows from there.
Amazing show. What are the headphones Matt's using? Anyone recognize them?
We want more, we want more, we want more.
This debate is so good that it leaves so many questions in the air without answers.
1/5 plays? Why aren't they hiring you already? After hearing this, I'd hire you for all my bids, with a commission.
Such an interesting episode. Love it, it really makes you think 🥰
"Short Term Bumps" and "Problems are Opportunities in Disguise" are two great phrases. Basically, nearly all ground breaking developments end up with collateral damage, especially early on, but they eventually even themselves back out. This should not prevent us from making those strides or to regress to lesser times.
You are a legend. Thank you for sharing knowledge and meaningful insights. Lucky to be in this age.
Chuck is ON POINT !!
What I need to add, as an auditor of almost 40 years, is that rigid, uncircumventable controls should be implemented to mitigate the risk of AI running amok. This has to be done BEFORE AI is implemented in any system; otherwise AI could find ways to circumvent them (which is basically the point of AI).
Gee, and here the AI developers weren't even bothering with controls... or rules? It sounds like you're describing rules, not inputs (controls). Don't worry, humanity, the auditor has the solution for the technology sector: Bureaucracy.
@@SupachargedGaming The controls required are neither over input nor processing, but rather over output: that is, over what new ideas the AI is coming up with. To use a very mundane example: if a medical device advises that a patient's paracetamol dose should be replaced by, say, acetylsalicylic acid, there should be a control to first check that the patient is not allergic to aspirin. Scoff if you must, but auditors aid all sectors (even technology) in self-preservation. 🤓
So, Neil disagreed with the surprise situation. Let's point out something really quick. Deep learning models are considered to be universal function approximators. When they train, they are training to approximate the probability distribution of the training dataset. It does not learn to identify distributions outside of its training set. Much like y=x^2, if you put in an X of any type, it will try to output X^2. If the label was supposed to be X^3, the neural network alone won't be able to classify that appropriately.
Now with that said, there are anomaly detectors which use Variational Autoencoders (VAE). These are models which try to replicate the input. You can look at the reconstruction error on this to try to find anomalous inputs which don't match the model inputs. So if I have a model trained on just cats, it will only know how to reconstruct inputs into cats. If you give it a car, it will build a car with cat "parts" or some other noise. Looking at the reconstruction error on such an output can potentially be flagged as anomalous.
There are other methods involving recalibration, which analyze the confidence outputs of a classifier and rescale the confidence values to fit a particular context or probability. There is potential to push low-confidence samples to low-confidence outputs, and flag these low-confidence outputs as anomalous.
So we're both right and wrong here. AI can be used as anomaly detectors, but the AI itself alone cannot tell you about the anomaly. It needs to be designed in such a fashion to identify these anomalies, and not just rely on a model itself as trained.
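The reconstruction-error idea in this thread can be sketched with a toy stand-in. This is hypothetical: a real VAE is a trained neural network, whereas the "model" below just memorizes the mean of its training data, so its "reconstruction" of any input is that mean. The principle is the same, though: inputs unlike the training data reconstruct poorly, and the large error flags them as anomalous.

```python
import math

def train(samples):
    """A trivial 'autoencoder' stand-in: memorize the per-dimension mean."""
    n = len(samples)
    dims = len(samples[0])
    return [sum(s[i] for s in samples) / n for i in range(dims)]

def reconstruction_error(model, x):
    """Euclidean distance between the input and its 'reconstruction'."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(model, x)))

# Train on "cat-like" feature vectors clustered near (1, 1).
cats = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (1.0, 1.0)]
model = train(cats)
threshold = 1.0  # chosen by eye for this toy data

cat_input = (1.05, 0.95)  # in-distribution: reconstructs well
car_input = (5.0, -3.0)   # out-of-distribution: reconstructs badly

assert reconstruction_error(model, cat_input) < threshold  # not flagged
assert reconstruction_error(model, car_input) > threshold  # flagged anomalous
```

As the comment says, the anomaly signal comes from the detector's design (comparing input to reconstruction), not from the classifier itself knowing anything about cars.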
This was a really insightful podcast. It gave me a whole new realm to explore. It also gave me a point of view I wouldn't have thought of myself.
At the end of the day, every advanced and progressive scientific and technological invention and innovation, every gadget, every micro-to-macro tool should always be for the betterment of the people of society: human-friendly, transparent, transformative, and sustainable. They should always aim to strengthen people mentally, physically, and spiritually, to resolve human complexities and contradictions, and to advance human civilization in every way, through every developed and progressive form of art, literature, and culture.
Great show, love the interaction with all participants.
If ever there was an interview that raises my red flags, this is it. Startalk did a great job, this guy just wants to deflect questions, dance around difficult answers and pat himself on the back. I'm a fan of AI, I want AGI but I do not want a monopoly like Google getting it first and this guy is doing a great job confirming that for me.
There are dozens of companies working on different types of AI. Where are you getting this Google AI monopoly stuff from?
@@enadegheeghaghe6369 Google is a monopoly already. Not in AI and I don't want them to become that either.
Literally most AI experts talking about this stuff keep repeating the same thing: "oh, don't worry about being replaced, it will only enhance us." Yeah right, total BS.
@@alfrilysencarnacion2085 They'll do whatever saves them the most money in order to give more to their shareholders and line the pockets of their executives. They don't care, they just want money. Right now they're using AI as a marketing ploy: "Our AI-enhanced product will improve your life, whatever whatever." But with GPT-4 and others coming soon, it really is capable of replacing jobs. In fact, quite a lot of what I've done in recent jobs could be replaced with AI. It's not if, it's when, and when is coming soon.
@@enadegheeghaghe6369 Have you searched Google? What you see is where they want you to look. You need to look far and wide to get what you really want. They work with these companies to put them first.
Do not interrupt them, please!
21:00 I feel like I thought of a good analogy.
You have an AI camera looking at a piece of paper with an infinite number of shapes drawn on it (the paper representing spacetime, and the shapes being things in that reality).
The paper is infinitely big, with every possible shape there can be, even new shapes.
But no matter what, the AI wouldn't be able to find a four-dimensional shape.
At this point, AI doesn't "think" the way we consider thinking. It's not going to get creative and go out of the box in any way that hasn't been programmed.
That would require an AI that's able to self-program its own original processes, I guess… but even that is relatively archaic, because the AI would have to make some kind of conscious choice about what it's programming for itself on the spur of the moment.
Like, that AI sports play-caller isn't randomly one day going to ask what its purpose is and then freak out.
That would be like worrying that the rose you're growing might grow fingers instead of thorns.
I think we need more education.
My father, born in 1921, had 12 years of school.
I, born in 1960, had 12 years.
My son, born in 1989, had 12 years.
Today, public schools still only go up to grade 12.
There is so much more to learn than 100 years ago, but we never extended the public education system.
We should be at least at grade 14 by now.
Keep being awesome Neil!!!
Mr. Tyson, thank you for allowing me an exponentially enlightening grasp of a future reality. Seriously, thank you, sir.
Hurray for Chuck. Please make nice with him. He mixes humor with profound knowledge. I find the whole thing so mind expanding!
Neil, you truly are amazing at getting a hard subject across to the rest of us. I keep sharing episodes with my daughter.
It is very comforting that there are super intelligent guys like Matt around, but it is very worrisome that he works at Google ...
“No matter what the finance minister and her spokespeople say, the market has spoken - the human nation’s credit rating is falling like a stone, while 01’s currency is climbing without stopping for breath. With headlines like that, the money markets have no choice but to…”
“The leaders of men, their power waning, refused to cooperate with the fledgling nation, wishing rather that the world be divided.”
-The Animatrix, Second Renaissance
Animatrix badass.
I'm a high-level controls engineer. Robotics has a long way to go; however, factories are currently performing machine learning, and we are using AI to implement lean manufacturing techniques. At the same time, I'm trying to expose ancient art from North America. Manifest destiny is still in full effect: the indigenous population made a whole lot more than arrowheads, they made art that goes unrecognized by modern humans.
For real, Chuck was the best part of the show. That queen came prepared to talk about sports (encouraging a nerd fight!!!), moral ramifications, and finding their way in or out of the closet as they see fit. Thank you for one of your best shows: Chuck and the two nerds. Thank you for asking questions about machine learning and how it can impact all of us.
And also, all the creative fields that are being disrupted because of it... Rules need to be established that put the actual living beings first.
😢😅😢😅😢😅😅😅😅😅😅
The two nerds 😭
Queen?
@@TheRealSkeletor when she or he came out of the closet
@@dboii54 Who?
I love watching StarTalk; however, I get annoyed with the frequency of interrupting and talking over each other. (Might be a pet peeve; I'm working on it.) I would like to see a difference in future releases. I've got 2 months before it's time to renew my Patreon subscription.
The initial appearance of A.I. (Artificial Intelligence) has a profound impact on data privacy, responsibility/accountability, and job displacement.
"Can your program deflate the ball" wasn't a throwaway joke. What Neil was really asking was, "Can the program Kobayashi Maru a situation?"
As Stephen Fry once said, "Just because we have elevators and escalators, doesn't mean we've stopped using stairs."
Or something like that 😅
When computers/machines first started taking people's jobs, the smarter ones learned to program and/or repair those computers/machines.
I was very into this chat, then it went to sports and I fell asleep. Then I saw it moved on. Thank goodness for pause and go back 💚😎💚
Man I absolutely love both Chuck’s and Garry’s questions
This man's mind is like a machine.
He has the capacity to think in ways and terms that most of us never will.
Bro's actually AI
As an artist, I am incensed by the idea that my work can be sampled, without compensation or accreditation, to enable an AI to produce new art that mimics mine almost perfectly, which someone else can then sell. It is conceptually abhorrent, particularly if it puts me out of a job.
This is one of the precise issues the Writers Guild is protesting right now, btw...
But the issue is, it is already too late. Once we know HOW to do something, the genie cannot be put back in the bottle. We cannot unlearn it and say, "You know what? This technology is forbidden knowledge. Let's pretend we never invented it." We have to adapt to the change in our society, just like with photography, the motion picture, radio, television, automation, video games, the internet, smart devices, VR, etc. The debates happening now have happened with EVERY one of these inventions, did you know that? AI is no different.
Also, as pointed out, even if we were to massively regulate it, or (extreme example) outright ban and outlaw AI, other countries won't. We live in a global society thanks to the internet, so our regulations alone won't really do anything.
So we have to find a way to cope, and integrate AI into our lives.
@@river_acheron whew, this is way back.
So, you make good points, but there are counterpoints I want to emphasize here.
1) AI isn't "Artificial Intelligence"; the term was co-opted. The ability to learn was just ONE metric of almost a dozen for "AI", and it's the ONLY one these systems have. They also cannot teach themselves; they must be taught (fed data). It's a marketing scam to drum up investors.
I call it "AI" for "Automatic Interpolation", which is more accurate and less sensational.
2) You're right, it's here to stay, but that doesn't mean artists should be shafted and told their work is fair game for big corporations. It's actually fairly easy to legislate regulations on the attainment and application of training data (some, like Adobe, have already implemented this). "Anarcho-capitalism" is just theft.
3) If only the US and UK pass these regulations, that represents such a massive market share that even foreign countries will comply in order to access our markets; it's been done time and again with almost every international industry.
4) Being compensated and credited for one's work is a fair ask, and a matter of settled law for decades, *centuries* in some cases. It's also well within the abilities of these mega-corporations; that they want to whine about it is their problem.
5) I don't think many folks are calling for outright bans on the tech anymore. It's not "don't use it"; it's "use it responsibly and fairly" that's being asked now.
He got me at "more Beatles songs are an undeniably good thing". Love it! 😁
Is artificial intelligence more worrying than natural incompetence?
“No, no, no. Quicker, easier, more seductive”
;)
When it facilitates more natural incompetence, then yes. But that's a hypothetical; I still believe it will reach equilibrium even if it's misused.
Great discussion! I used to watch Star Talk on National Geographic and am glad to have discovered this channel. I just need to learn to type faster with my thumbs. Yes I am a boomer.
Hopefully AI helps us realize we have some humanity left.
Yes, this was a comprehensive conversation on this topic. Since the early days of the DALL-E copyright complaints there have been many interesting YouTube conversations, but this has been the best one yet... imo
Make it a felony to misrepresent or convey a deep fake as real. Hold all media social or otherwise responsible. Have a 6 month grace period where warnings will be sent out. Enforce it enthusiastically. You can't stop people from making them. You can stop them from being distributed.
Librarians are having similar discussions under the heading of Information Literacy. It's important and getting more so every day. Thanks for this great episode.
The REALLY SCARY part of this is... how DISCONNECTED from REALITY (no offence intended) the people developing these technologies are. Statements like:
- Cars are cheaper now (literally at a time of record prices and markups)
- We can trust info if the source is a news agency (most news agencies literally taking their news from TikTok and viral videos)
And especially...
- Yeah, it's always been "challenging" for society to keep up with technology; it's important that society distinguish the apparent changes (proceeds to take ZERO responsibility for his role developing these technologies)
- I am thrilled I get to spend my productive time working on technical problems and I don't have to solve the social problems (🥶😱 these are the people shaping the future)
You named all the red flags I heard, and then some. The overarching pattern here is that all these people at the forefront of AI development talk about ways to increase profits. Of course this guy is designing an AI to call plays in football. Of course people are designing AI to make decisions in the stock market. They think they're doing humanity a favor. To the contrary, we haven't even figured out how to allocate food: we have 9 million people starving to death every year worldwide, and in America we have 700k with heart disease. The profit motive drives the poor allocation of resources, and here we are, once again exacerbating the problem with more tools.
AI could make incredible medical discoveries. But is that what we will use it for? That would require universal healthcare, otherwise it once again becomes a technology to increase profits at the expense of prosperity. We can't even get ahold of the divide and conquer strategies that are driven by radios, television, and internet, and we think we are responsible enough for AI?
If an alien civilization was trickling technological advancements to us, they would cut us off until we prove ourselves worthy. I hope that's the case. I hope we got their attention with Hiroshima and Nagasaki. I hope they're watching over us. Call it god... I don't care. We need help.
Our recent experience of "news outlets" brings home these discussions of trusting sources. I see too many people citing sources that aren't real. People who reference "Wikipedia" are one example.
I agree to some extent with you.
Society is what every advancement was supposed to be made for, to let people live a happier life, but these ruthlessly ambitious technological advancements are being made while neglecting the actual society.
The social sciences are always looked down upon, while the ultimate reality is that all these advancements are means to an END, and that end is human happiness and longer survivability.
I am making my life in social science so that I can devise ways to solve the actual problems that these people are ignorantly creating.
Capitalism is now turning out to be more bad than good. Sometimes I feel poor people are just going to be exterminated in the times ahead.
I feel like this conversation was just starting as it ended! Didn't want it to end!
"Trusted sources". Who watches the watchers?
Chuck has not missed this entire episode!!!!!
It is ironic that this gentleman talks about the need for trusted sources and yet Google played a huge part in the decline of accredited, quality journalism
This guy raises my red flags left and right. He's deflecting questions, like about the morality of bringing John Lennon back to life. For one, he works for a monopoly, but on his own he's not doing a great job of gaining my trust, that's for sure.
@@Nefville Yeah, we don't need AI puppets of dead artists doing poor impressions for the continued profit consolidation of billionaires, who we know are the ones that will exploit dead-artist AI reanimations. Wonder how the surviving family would feel about a fake image or voice of a loved one being exploited. It's sickening, it's fake, and it can never fully replace a human artist.
I don't trust any corporation. They are ALL inherently greedy and care for nothing but their enrichment. Profits dominate their entire existence.
Would be interested in an episode dedicated to Oppenheimer. It may get bumped by the YouTube algo. Cheers and stay healthy, guys!
AI in and of itself is not bad. It's literally just like anything else, or any other tool used by people. It's people that are good or bad. Good people will use it for good things. Bad people will use it for bad things.
Spot on 👍
@Wis_Dom - Well, I didn't distinctly say "no tool is bad." However, I can see how that could be inferred from my statement. Weapons as a whole could easily be said to be bad things, but again it can be argued either way depending on their end use.
For nukes, the tool can be the bomb itself or the knowledge of the energy contained within it.
Philosophy: the study of the fundamental nature of knowledge, reality, and existence. My takeaway from this debate is: why can't we create a philosophy AI to balance truth and lies and govern all the outputs generated by other AIs?
Who chooses the trusted sources?
Chuck is a master at articulating the questions that must be asked!
The Calculated Year As Of This Writing Is 02052, QT or On Adapted Unified Field Theory To Maintain Compatibility With Religions With 12 Month Years is 02053, AUFT
To Keep Time Moving, And Use Quantum Physics, Which Enables Quantum Computing, The 12 Negative Time Zones And 12 Positive Time Zones Have To Be Converted Into 20 Positive Time Zones or 24 Positive Time Zones (For Adapted Unified Field Theory), from 0-19 or 0-23.
It would also be great if you could do a special on planetary thermodynamics that explains our current planetary temperature based on the distance between the Earth and the Sun... it should be simple for everyone to understand why we have summer and winter, but few appear to grasp it and think a magical force must be forming rain clouds and tsunamis for no reason.
Trusted sources are what I choose. I've been saying this for years. Anyone can post on YouTube or other social media sites. Anyone can make their own website and make it look really professional. I trust sites that I know as "mainstream" and am skeptical of EVERYTHING else. Whatever a non-trusted source says, I try to verify with trusted sources.
I have a question for Matt about his punt prediction model. Curious how, over time, any of the recommendations/predictions have changed? I'm guessing that when it was first developed, the data was based on old-school thinking about when to punt. But now that you see more teams implementing your suggested approach, have the outcomes changed at all?
The decision is made by this big recursive computation, trying to maximize the chances of winning based on statistics about the outcomes of various plays. Those statistics (outcomes of specific plays) are likely pretty stable even as team philosophies change.
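To make the expected-value idea concrete, here is a toy sketch of a punt vs. go-for-it comparison. All the probabilities and point values below are made up for illustration; the actual model described here recursively maximizes win probability over full game states, which is far more involved.

```python
# Toy 4th-and-short decision: compare expected points for each option.
# Every number here is an assumed placeholder, not real NFL data.
p_convert = 0.55      # assumed chance of converting 4th-and-short
ev_convert = 2.1      # assumed expected points if the drive continues
ev_turnover = -1.5    # assumed expected points after a failed attempt
ev_punt = -0.4        # assumed expected points after an average punt

# Expected points of going for it: weighted average of the two outcomes.
ev_go = p_convert * ev_convert + (1 - p_convert) * ev_turnover

print(f"go for it: {ev_go:.2f}, punt: {ev_punt:.2f}")
```

With these (made-up) inputs, going for it comes out ahead; the real model plugs play-outcome statistics into a recursion over game states, but the comparison at each decision point has this same shape.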
The number of commercials interrupting this video is ridiculous.
Education is everything, it is the only weapon against ignorance and complacency in order to arm ourselves for the future. Therefore we need to make education accessible to everyone around the world. Easy!
The Harris-Rogan cinematic universe we didn't know we needed. Alex Jones is a nut bar; like some of his guests, Joe is just interested in learning about their perspective, for better or for worse, allowing the listener to be a fly on the wall. Congrats on the Emmy! Been here since the Taylor ice cream machines.
When I first began to watch this podcast, my opinion of Chuck Nice was not very high and I didn't understand the inclusion of a comedian on the show. However, I have changed my opinion and determined he is one of the most insightful, well-spoken, and intelligent voices on the internet. I agree with Chuck, it is a problem of education. Teaching basic logic should be a fundamental part of early childhood education. I have a small sample size in my own life: I taught my son Occam's Razor at a very young age, 6. Not the complicated methodology, but simple methods to look at the world and make decisions based on simple rules. He is now 17 and I have paid for it in spades when he calls his old man on his own BS. 🤣 I have created a monster, but at least he is not fooled by the media machines of the internet. The simplest answer is usually the correct one.
The problem is that if a coach gives commands, how likely is it that the player will do it, or can do it? What's his stamina, how long has he been on the field, and can he depend on the rest of the team?
Hopefully we can get Matt over again soon. I would love a deeper dive into ML and its effect on society.
So far I only have an AI garden plant identifier but I'm very excited about expanding that into other uses.
GOT'DANG IT!!! This was such a good episode!!! 😀
I like what Matt said:
"Problems are opportunities in disguise"
If individuals were to recognize that their own brains are prone to bias and deception, it could foster a greater sense of self-awareness and critical thinking. People might become more cautious in their judgments, more open to different perspectives, and more willing to question their own beliefs. This heightened awareness of cognitive biases, such as confirmation bias or the tendency to favor information that supports preexisting beliefs, could lead to a more balanced and objective approach to decision-making.
As a result, individuals may become more empathetic and understanding of others, recognizing that everyone is subject to the same cognitive limitations. This could lead to increased cooperation, reduced conflicts, and a stronger sense of community. People might engage in constructive dialogue, seeking common ground and shared understanding instead of perpetuating divisive ideologies. That's part of a theory I'm working on.
The difficulty of solving chaotic problems (the three-body problem, for example) isn't computing power or time, it's error or rounding in the initial state parameters. It doesn't matter how powerful the computer is or how long you run the simulation; you will end up with a diverging result simply because the tools to measure the initial state are not good enough, and they will never be good enough. We don't even have the decimal accuracy in our computers and never will; we run into floating-point errors in non-chaotic problems already.
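You can see this sensitivity in a few lines of code. Below, two runs of the logistic map (a standard toy chaotic system, not the three-body problem itself) start a mere 1e-12 apart, far below any realistic measurement precision, and quickly become unrelated:

```python
# Two trajectories of the chaotic logistic map x_{n+1} = 4x(1-x),
# started 1e-12 apart -- below any realistic measurement accuracy.
def logistic(x, steps):
    out = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        out.append(x)
    return out

a = logistic(0.2, 60)
b = logistic(0.2 + 1e-12, 60)

# The gap grows roughly exponentially (doubling per step on average)
# until the two trajectories are completely unrelated.
divergence = [abs(x - y) for x, y in zip(a, b)]
print(f"step 1 gap: {divergence[0]:.2e}, max gap: {max(divergence):.2f}")
```

The initial gap is invisibly small, yet within a few dozen iterations the trajectories differ by order 1, which is the whole point: no amount of compute fixes an initial-state error in a chaotic system.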
18:39 Chuck actually succeeds in interrupting Neil. A rare moment.
Chuck, that is the best thing I have heard anybody say to a big audience. Thank you!
good video! you all are my favorite
Nice talk, as always a pleasure to watch.
What I really don't understand are these contradictions between experts: some saying "run for your life", "stop AI development, it will destroy us", "we are doomed", and others saying "AI is not smart like us, it doesn't know what reality is, it just predicts words", "It often just hallucinates and gives false information", "We should call it Artificial Idiocy", and so on.
And these are experts, highly educated in exact-science fields, from which we expect clear evidence and answers. If they are not capable of giving a unanimously accepted, accurate, and solid answer to the general public, they will only contribute to rising ambiguity and uncertainty.
ChatGPT's (3.5) answer to the question: Is there a correlation between the phases of the moon and the amount of chicken eaten in Denmark?
"As an AI language model, I don't have access to real-time data, and my knowledge is up to date only until September 2021. Up to that point, there was no scientific evidence or study indicating a correlation between the phases of the moon and the amount of chicken eaten in Denmark or any other country.
Correlation studies require careful analysis of data and statistical methods to determine whether there is a relationship between two variables. If there has been any research conducted on this topic after September 2021, I wouldn't be aware of it.
In general, claims of correlations between seemingly unrelated events (like moon phases and chicken consumption) should be treated with skepticism unless supported by rigorous scientific studies with statistically significant results. Otherwise, such claims may be mere coincidences or unrelated factors influencing the observations."
I wish I could be as intelligent and as funny as these two gentlemen are, at the same time.
Neil sir, I have a question as a student.
From your earlier video about time travel and the twin paradox: if we move at the speed of light, then depending on the position of each observer, time will pass differently for the two of them.
But what would happen if one observer is on Earth and one is revolving around the Earth at close to the speed of light?
As far as using AI to create a video, a recording of someone's voice, an image, etc.: it should only be allowed in the privacy of one's home, and should not be distributed, sold, or posted on the internet without the permission of the original person or their estate. So, if someone wants to use John Lennon's voice, image, etc. and create a new song, they can do it all they want at home for their own personal enjoyment or research, but they shouldn't be allowed to make copies, or post or sell copies or the original on the open market, without written permission from John Lennon's estate, and royalties must be paid if it's allowed and it sells.
The same goes for anyone, famous or not.
AI should NOT be allowed to create any form of news. PERIOD.
Any time AI is used publicly, there should be some form of watermark or label so that the viewer or listener of that AI-generated content knows ahead of time that it is AI-based and can't be assumed to be valid or truthful.
May I suggest two short stories by Ray Bradbury about the replacement of humans with technology. The first is "Marionettes, Inc.", a cautionary tale wherein we think we can control the AI but unleash our replacements, who are not satisfied just to serve but plot to take our place. The second is "I Sing the Body Electric". Here the AI does serve us well, but we become too fond of what we create. Has what we built in our likeness become capable of feelings, and how do we deal with it?
Neil Thank you! You the GOAT!
Great vid! It gave me so many ideas I ended up taking notes!
What we need is a look at AI and machine learning technology in astrophysics and future space advancement, such as considering merging man and machine in order to endure interstellar space travel, advancing civilization on Mars and the Earth-like moons of Jupiter, getting to Alpha Centauri while enduring a one-to-ten-thousand-year journey, etc.
Okay, I usually never comment here, but this episode was so good that I had to. A really intriguing, interesting, and engaging episode; I was sad when it finished.
To help identify fakes, creators should start signing their content. From the camera and creation tools themselves, it should be an option (i.e., using digital certificates), so a chain of sources can be established. We're never going to get 100% of the fakes to be properly watermarked (as said in the video, it depends on the creator's good intentions), but we can get trusted sources to digitally sign (in essence, watermark) their content going forward because they have incentives to do so (like protect ownership/copyright, and show it comes from them, to put their reputation behind them). Anything not signed would be considered derivative and/or unverified, and consumers should know/learn to take them as such.
Then consumer apps can indicate who signed the content you're consuming. There will be a (relatively) small amount of content watermarked from its source (which can be presented as factual, opinion, or AI-generated, according to the source). It is up to the consumer to decide if they believe it. And any inquiring consumer can easily track authentic content to its source (not to mention that it makes it possible for authorities to track the chain upstream). If the chain is broken, the content has to be treated with less veracity than properly signed content (and/or you have to find a better source whose chain is unbroken). When the courts and government get involved, there would be a chain of evidence (making libel laws sting again, catching those purposely lying). And consumers should be encouraged to find original sources when making arguments for their conclusions: they seldom do it now, since anybody can write whatever they like on the internet (and present it as a good source, and disown anything that becomes disproven). But if they (and their sources) get rated, with consequences when your past arguments divorce from reality, then we might get to having meaningful discourse.
I'm thinking along those same lines. There will be a huge incentive to digitally sign your work. Possibly public-key encryption? Blockchain might also make sense.
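Here is a toy sketch of the signing-chain idea. A real system would use public-key signatures with certificates (X.509 etc.), so anyone could verify content without holding a secret; HMAC is used below only as a self-contained stand-in, and all the keys, names, and payloads are made up:

```python
import hashlib
import hmac

# Toy chain-of-custody sketch. HMAC is a stand-in for real public-key
# signatures; keys and payloads here are entirely made up.
def sign(key: bytes, payload: bytes) -> str:
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

camera_key = b"camera-secret"   # would be the camera's private key
editor_key = b"editor-secret"   # would be the news outlet's private key

# The capture device signs the raw content at creation time.
photo = b"raw image bytes ..."
camera_sig = sign(camera_key, photo)

# The editor signs the derived content *plus* the camera's signature,
# extending the chain rather than replacing it.
edited = b"cropped image bytes ..."
editor_sig = sign(editor_key, edited + camera_sig.encode())

# A consumer app that trusts the signers' keys can verify each link;
# a missing or failed link marks the content as unverified/derivative.
camera_ok = hmac.compare_digest(camera_sig, sign(camera_key, photo))
print(f"camera link verified: {camera_ok}")
```

The design point matches the comment above: each hop in the distribution chain signs what it received plus what it produced, so a break anywhere downgrades the content to "unverified" rather than silently passing it off as authentic.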