There is more info on Professor Hewitt's blog here:
professorhewitt.blogspot.com/
I suppose the answer to the question he posed at the end, "Who's going to do it?", might be: OpenAI?
dope
I seem to be really unaware of the big changes that are approaching in the field of intelligent systems. Not that I don't know about the success of ML, but he seems to be hinting at something else (probably not at odds), something based on the actor model. Does anyone know where I can start looking into this?
It's about an inherently concurrent (parallel) model of computation that is capable of dealing with inconsistencies. See this paper for details: hal.archives-ouvertes.fr/hal-01566393v14/document
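If it helps, here is a minimal toy actor in Python just to make the model concrete. This is my own sketch, not anything from the paper: each actor owns private state and a mailbox and handles one message at a time, while concurrency comes from many actors running independently.

```python
import queue
import threading
import time

class Actor:
    """Toy actor: private state, a mailbox, one message at a time."""

    def __init__(self, behavior):
        self._mailbox = queue.Queue()
        self._behavior = behavior  # callable(actor, message)
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        # Asynchronous send: the sender never blocks on the receiver.
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()
            # Messages are handled one at a time, so actor-local
            # state needs no locks.
            self._behavior(self, message)

def make_counter():
    count = 0
    def behavior(actor, message):
        nonlocal count
        count += message
        print("count =", count)
    return behavior

counter = Actor(make_counter())
counter.send(1)
counter.send(2)
time.sleep(0.1)  # let the daemon thread drain the mailbox before exit
```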
There is a longer video here:
ruclips.net/video/l1wMFd2dHCE/видео.html
Can anyone provide pointers/examples on the difficulties of dealing with inconsistencies? I'm lacking the knowledge and terminology, but anyway: is it too simplistic to think an actor should bifurcate on encountering an inconsistency, and that there could be an economy of sorts for messages, so that those finding the greatest reach/acceptance/least contradiction reinforce their originating actor? This might also imply the merging/grouping of actors resulting from separate bifurcations. Hmmm, sounds like back-propagation.
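To make my own hand-waving a bit more concrete, here is roughly what I mean as a toy Python sketch. The class name, the bifurcation rule, and the scoring "economy" are all invented by me for illustration:

```python
class HypothesisActor:
    """Toy actor holding a set of beliefs plus a crude score."""

    def __init__(self, beliefs, score=1.0):
        self.beliefs = dict(beliefs)  # proposition -> truth value
        self.score = score            # stand-in for the message "economy"

    def receive(self, proposition, value):
        if proposition not in self.beliefs:
            self.beliefs[proposition] = value
            return [self]
        if self.beliefs[proposition] == value:
            self.score += 1.0         # reinforcement: the message agrees
            return [self]
        # Inconsistency: bifurcate into one child per alternative,
        # each paying a penalty for carrying a contradiction.
        # (Merging branches that later converge is left out here.)
        kept = HypothesisActor(self.beliefs, self.score * 0.5)
        flipped = HypothesisActor({**self.beliefs, proposition: value},
                                  self.score * 0.5)
        return [kept, flipped]

# Usage: feed contradictory reports, keep the best-scoring branch.
actors = [HypothesisActor({})]
for prop, val in [("wet", True), ("wet", True), ("wet", False)]:
    actors = [child for a in actors for child in a.receive(prop, val)]
best = max(actors, key=lambda a: a.score)
print(len(actors), "branches; best beliefs:", best.beliefs)
```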
ruclips.net/video/l1wMFd2dHCE/видео.html
I think the issue with inconsistency is how an actor "realises" that there is an inconsistency, assuming we are talking about data inconsistency. I'd classify this into two major groups: (1) the data is inconsistent for the actor itself, in which case the actor can't resolve it and should fail (crash, signal failure). In this case I don't really see how the actor could bifurcate... (2) the inconsistency is on a much higher level (maybe only visible to the user). In this case the actor would need global knowledge to decide whether the data is inconsistent, so the actor is not self-contained. Although if you can assign a probability to the data processed by the actor, then creating a network of actors that converges to a solution would look very similar to back-propagation.
If we are not talking about data inconsistency, then I have no idea how to decide between two possible outcomes of such a network. I don't know. An interesting research topic. Maybe properly defining (I mean mathematically) what an actor is and what inconsistency means would help.
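As a toy illustration of case (1): a self-contained actor whose only notion of consistency is its own local invariant. The names and the invariant here are mine, purely for illustration:

```python
class Failure(Exception):
    """Signal that an actor hit an inconsistency it cannot resolve."""

class BalanceActor:
    """Local invariant: the balance must never go negative."""

    def __init__(self):
        self.balance = 0

    def receive(self, delta):
        if self.balance + delta < 0:
            # The actor has no global view with which to repair this,
            # so it can only fail and let a supervisor decide.
            raise Failure(f"inconsistent update: {self.balance} {delta:+}")
        self.balance += delta

actor = BalanceActor()
actor.receive(5)
try:
    actor.receive(-10)
except Failure as e:
    print("actor failed:", e)
```

Case (2) is exactly where nothing like this can be raised locally, because the violated invariant lives outside any single actor.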
@@briancannard7335 Thanks, weekend reading, though I think it'll take me much longer to comprehend ;-)
@@olikasg Yes, looks like Brian has supplied Carl's paper to describe things mathematically (and a video, I just realised). My knowledge of logic is so thin I have to (over?) simplify my thoughts: someone replied with colours as an example but then must have deleted it. A pity; the example at least highlighted, as you state, that detection of inconsistency requires **at least** a frame of reference and/or expected value(s). Though simple, it also highlights that something can be inconsistent at one level yet still be consistent in another (related) context. That seems like the foundation, but it still feels materially different from a system that can detect contradictory viewpoints and do something productive with them. I suppose political voting is a better example, especially as the outcome, which way to vote, may sometimes have to be decided by a coin toss, after which the "system" is free to return to the "undecided" state :-)
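To pin down the "inconsistent at one level, consistent in another" point, here's a tiny toy in Python, entirely my own construction: the same pair of reports contradict each other when we ignore their source, but not when the source is kept as part of the claim.

```python
def consistent(statements, frame):
    """Consistent in a frame if no frame-mapped proposition is
    asserted with two different values."""
    seen = {}
    for prop, value in statements:
        key = frame(prop)
        if key in seen and seen[key] != value:
            return False
        seen[key] = value
    return True

reports = [("door_open@sensor_A", True), ("door_open@sensor_B", False)]

def merge_sources(prop):
    return prop.split("@")[0]   # frame 1: ignore which sensor reported

def keep_sources(prop):
    return prop                 # frame 2: the source is part of the claim

print(consistent(reports, merge_sources))  # False: the sensors contradict
print(consistent(reports, keep_sources))   # True: each claim stands alone
```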
There is more info available here:
Video of Stanford Seminar: Building and Deploying Scalable Intelligent Systems by 2025
professorhewitt.blogspot.com/2019/01/video-of-stanford-seminar-building-and.html