Hill Climbing begins at (26:00)
thank you
You're welcome!
Thank you!
thank you brother
appreciate
There is quite a difference in choice of vocabulary among scholars; for example, in this lecture the professor used: hill climbing = gradient ascent (descent); best-first search = greedy search; beam = branching factor; telephone pole = vanishing/decaying gradient; ridge problem = saddle/singular points.
I think he is one of the best professors; these lectures are fun...
mr winston is so epic. I've been trying to understand these searches for a long time and now he makes it so easy within 2 minutes.
What is the difference between enqueued list and extended list ?
@@aidenigelson9826 It seems that the enqueued list is the list of possible paths so far (a queue or stack) and the extended list is a list recording nodes that have been visited. All the search methods discussed use the enqueued list. The extended list is used to avoid revisiting nodes visited before.
@@aidenigelson9826 Extending means taking something from the queue, like (S B), and extending it by a node connected to the last element, e.g. (S B A), then putting it back in the queue. When he talks about turning the extended list on/off, he's talking about keeping track of whether we've extended a given node before. If we have, then we can just forget about it.
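To make the enqueued/extended distinction concrete, here is a minimal sketch in Python. The graph, node names, and edges are my own invention, not the lecture's exact map: each queue entry is a whole path, "extending" pops a path and appends each neighbor of its last node, and the extended set skips nodes that were extended before.

```python
from collections import deque

# Hypothetical graph; the lecture's actual map differs.
GRAPH = {
    'S': ['A', 'B'],
    'A': ['B', 'D'],
    'B': ['A', 'C'],
    'C': ['E'],
    'D': ['G'],
    'E': [],
    'G': [],
}

def bfs(start, goal):
    queue = deque([[start]])   # enqueued list: whole paths, e.g. ('S', 'B')
    extended = set()           # extended list: nodes already extended
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        if node in extended:   # the extended list turned "on"
            continue
        extended.add(node)
        for nbr in GRAPH[node]:
            if nbr not in path:            # never bite your own tail
                queue.append(path + [nbr])
    return None
```

With the extended set disabled, the search would re-extend B a second time via (S A B); skipping it is exactly the wasted work the extended list avoids.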
R.I.P Patrick Winston
Beam Search begins at 31:46
Hill Climbing Search: At 27:53 why does the instructor say that Node B is closer to the goal than A? From the route map, given the weights, I understand that A is just 8 units away from the goal, i.e. S -> A -> D -> G = 11 units, and via node B it is S -> B -> A -> D -> G = 17 units. So why would hill climbing search take the node B path?
Rahul Dev Mishra He clears it up in the next lecture. He was talking about heuristic distance, not edge distance. In this case, the direct distance that a "bird would fly" is shorter from B to G, than it is from A to G.
Hill climbing is a search method with a heuristic.
@@ififif31 which lecture?
@@pratikkulkar8128 5. Search: Optional, Branch and Bound A* (3:50)
@@ififif31 Thank you for your reply! But I still want to ask one more question: in the video you mentioned, he talked about heuristic distance, but I'm still confused about how to compute it. Did he teach this in his next lecture?
At 40:26 things are confusing. "Down" can specifically mean the south direction, especially if you think of a road: the cars going in the south direction are the cars that are going down. Read that way, "takes you down over contour lines" is the same as "takes you in the south direction over contour lines", and "takes you down over contour lines in every direction" would be correct except for one special case, the south direction.
For 2010, that last demo was amazing!
Beam Search: At 32:24, how is Node B connected to Node G ? There is no path drawn from B to G on the map ?
It must be from B to C. This is a mistake I think.
Yes. It's C, not G
Why, at the Hill Climbing explanation, does the professor say that the closest node to the goal is B, which he says has distance 6 (why 6?), while node A has distance 7+?
Isn't A closer to S with 3, and B with 5?
Hill Climbing is not aware of the paths that nodes are connected to. Instead it asks how far away is a node from the goal based upon a straight-line distance from the node to the goal. So, in context to your question, Node B is closer because if you drew a straight line between B and G it'd be roughly a distance of 6 units while A to G is roughly 7 units away if you drew a straight line between them. Although to us we know that A is obviously closer than B, from the perspective of an AI with no knowledge of the given path, B would seem much closer. I hope that helps for you or any future viewer who had the same question as you.
Austin Ewens ahh I was wondering that too, I didn't know it was because the distance wasn't measured in node paths. Makes sense now, thanks!
Austin Ewens I want to know where he got those numbers from? If the straight-line distance is just an educated guess, what did he base his guess on?
The numbers are just arbitrary cost values. He could break out a ruler and make exact measurements of lines, or he can draw some line and give it a cost value to make things easier. Did that help at all? If not give me the time bracket (ex: 1:23 - 1:24) of what part you're confused about.
Austin Ewens Okay, so it is defined by the programmer then. Thanks. :) Crystal Clear.
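A tiny sketch of what this thread settles on: the heuristic is a straight-line ("as the bird flies") distance, independent of edge weights. The coordinates below are invented so that B comes out 6 units from G and A about 7, matching the numbers above; they are not measurements from the lecture's map.

```python
import math

# Invented coordinates, chosen only so that B->G = 6 and A->G = 7,
# matching the numbers in the reply above; NOT the lecture's actual map.
POS = {'S': (0, 0), 'A': (10, 7), 'B': (4, 0), 'G': (10, 0)}

def straight_line(node, goal='G'):
    """Heuristic: straight-line distance, ignoring edge weights entirely."""
    (x1, y1), (x2, y2) = POS[node], POS[goal]
    return math.hypot(x2 - x1, y2 - y1)

# Hill climbing compares children only by this heuristic value:
best = min(['A', 'B'], key=straight_line)  # picks 'B' (6.0 < 7.0)
```

This is exactly the situation in the question: B wins on the heuristic even though the edge-weight path through A is shorter.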
That florida training guy literally slept half of the lecture lol
I noticed that too. :-) Must have pulled an all nighter.
he could have brought a pillow with him , lol
Helped me understand a lot thanks!
Although, at 16:55, on enqueueing the paths in the front of the queue, doesn't that violate the FIFO property of a queue?
Gorev Minzis He's talking in abstraction. You would implement a stack if you went the other way.
You could always use an ArrayDeque in Java
@@kevinpacheco8169 What is the difference between enqueued list and extended list ?
Implementation would be a stack; this is LIFO behavior
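As the replies say, the abstract "queue" of paths becomes a stack (LIFO) for depth-first search and a true FIFO queue for breadth-first; a deque gives both with the same loop. A sketch with an invented toy graph (not the lecture's map):

```python
from collections import deque

# Toy graph, an assumption for illustration only.
GRAPH = {'S': ['A', 'B'], 'A': ['D'], 'B': ['C'], 'C': [], 'D': ['G'], 'G': []}

def search(start, goal, depth_first):
    paths = deque([[start]])
    while paths:
        # DFS pops the newest path (LIFO, a stack);
        # BFS pops the oldest path (FIFO, a queue).
        path = paths.pop() if depth_first else paths.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nbr in GRAPH[node]:
            if nbr not in path:
                paths.append(path + [nbr])
    return None
```

The only difference between the two searches is which end of the deque is popped, which is why the lecture can describe both with one "queue of paths" abstraction.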
This video is very inspiring. I mean no technicality here. I mean it with life.
For example, when he said that the DFS and BFS are "incredibly stupid" because of that "silly mistake" they make of exploring some node that has been already explored. It's nice to learn that some mistakes in our life (maybe very small mistakes, silly ones) when omitted, our life is "enhanced" dramatically as the speed of reaching the goal has increased as seen in the video.
Speaking of which, the speed of reaching the goal has also increased after traversing through the nodes that are generally closer (to the goal) and leaving those other nodes behind. That's very similar to the human behavior when we leave the life distractions behind, putting only our goal in life just before our eyes, moving along the path that help us reach the goal help us reach the goal faster.
Well said.
wow dude you got some philosophy here
Great insight Ahmed
I'm sure you are the best teacher in the world! thanks for explanation!
Do I need to learn programming before learning this?
No, but at least data structure concepts are needed.
Beam Search starts at (31:40)
Great lecture...there could be an improvement on the blackboard switching algorithm : ) maybe 1,2,3 switch to move boards...
I still can't understand why, in Hill Climbing, when he drew the representational tree, he took B instead of A, and why he assumed that the longer path, beginning with B, is the optimal one. I have searched a lot about hill climbing and still don't get that search. Please, is there another video where I can understand this concept in detail?
There is another response on this video that explains it but in essence, the hill climbing search is a more efficient depth first search. If you look at the amount of extensions that A would have to take to get to G it would be 6 extensions (S->A->B->C->E then back up to A -> D -> G since it exhausted the first branch). Taking B would give you 4 extensions (S -> B -> A -> D -> G and staying within the first branch). The professor also said that Hill Climbing only works if you have a heuristic that will help determine if you are close to the goal or not. This diagram also assumes we are starting the search "left" to "right".
I have a doubt, in the beam search, is there a subroutine involved in determining the best node that is nearer to the goal node?
My question is how do we get a useful heuristic? Do we actually plot these places out in euclidean space and figure out the distance between the node you're at and the end node?
It would be really amazing if you could share the source code for testing all the algorithms. The virtual class is very useful and very illustrative.
Did the professor mean extended list when he wrote on the blackboard at 25:15?
"This is a particularly cute problem in high dimensional space but I'll illustrate it here in only two."
*acute
Wow! This is an incredible lecture! I start to have some idea of how Siri works!
9:42. Isn't the backtracking idea contrary to the rule of not biting one's tail ?
No; but when going back, you can't re-search an already searched node (in the tree).
Absolutely great lecture. Does anyone know what the "Macbeth reading" program is called and whether its mechanism is described somewhere?
Tadek HT We did not see any reference to a "macbeth reading" program in the course materials... but lecture does list two search related materials in 6.01SC. They might be of some help. See the "Related Resources" tab on the video lecture 4 on MIT OpenCourseWare (ocw.mit.edu/6-034F10). Hope this helps.
13:10 Sir, does breadth first search require a high calculation time compared to depth first search?
How is B closer to goal node than A ? I am sure I am missing something, just unable to understand
Look at it from a heuristic POV. If you had to decide which of the two options takes you closer to the goal, you would likely use a heuristic function that returns the direct distance between G and each of A and B. So if the A–G direct distance < the B–G direct distance (not counting edge distances, just straight-line distance), we know to take A.
39:05
I don't get the 2nd drawback of hill climbing.
He said telephone pole problem.
What is that?
I did not know this problem as the telephone pole problem, but that should not matter. Imagine a hill with a plateau on it, so that no possible move (or rather decision) brings you closer to the maximum/goal or further away. In this case you would not know what to do. Two solutions, which I know, are either to make a "random" decision or to use the beam technique (for a person on a hill this could be a difficult task). There are probably many more solutions.
I hope everything was correct, understandable, helpful and still relevant.
Thank you so much !
It is a problem when there is a plateau, an almost flat space or even a flat place, and the highest point has an extreme difference in slope only in a small space, like a telephone pole. Hence, with this algorithm you won't be able to get to the highest point unless you are really close to the pole and finally find that it is the highest spot.
I hope it helps
en.wikipedia.org/wiki/Hill_climbing
It is like a plateau, where every path seems to have no increase/decrease in altitude.
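A one-dimensional sketch of the telephone-pole/plateau problem described above (the terrain values are invented for illustration): on the flat stretch no neighbor is strictly higher, so the climber halts unless it happens to start right next to the spike.

```python
# One-dimensional terrain with a plateau; index 6 is the "telephone pole".
heights = [1, 2, 3, 3, 3, 3, 9]

def hill_climb(i):
    """Greedy climb: move to the best neighbor while it is strictly higher."""
    while True:
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(heights)]
        best = max(nbrs, key=lambda j: heights[j])
        if heights[best] <= heights[i]:  # no strictly higher neighbor
            return i                     # stuck: plateau edge or maximum
        i = best
```

Starting at index 0 the climber stalls at the edge of the plateau (index 2, height 3), never reaching the pole at height 9; starting adjacent to the pole (index 5) it does find it.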
Beam search (31:38)
Love Patrick Winston so much. are there any more courses given by Dr.
Can't believe that guy just took his shoe off at 29:00
i know right? And he was learning hill climbing!
I c wot u did there
@@coolshibs88 but he was on the ground
18:52 Doesn't DFS use a stack and BFS use a queue?
Regarding optimized search: I take it for granted that we already know the distance to the goal from each node for the optimizations. Do beam and hill climbing search types still produce faster results if that information is not already available? My guess is they are not applicable in this case, because our "distance" verification itself would have to do extra traversals, i.e. making it have more time complexity.
+Lance Bryant After some thought, I guess there are two types of efficiency here: traveling-time efficiency and computational time. I take it we do not care about the computational time so much as about the result leading to the most optimal path that would actually be traveled.
@@LanceBryantGrigg could you tell me one thing? How is A not closer to the goal while B is?
What's the difference between hill climbing search and best-first search? To me both seem to be doing the same thing.
If the hill climbing algorithm can be outfitted with a backtracking feature, why then could it ever get stuck at a local maximum? Can't it just back up and look for the global maximum?
Because it has no awareness of whether or not there is some other maximum in the system.
"Knowing" that it's at a maximum is contingent upon there being no other ways to go "up". In other words, every direction you go takes you "down".
If you imagine two hills separated from one another, one taller than the other, and you imagine that you are at the top of the shorter hill, then no matter which direction you go you have to go down, even to get to the larger hill.
+Mares Fillies Backtracking would allow you to find the global maximum only if you already knew what the global maximum was. If you didn't know what the global maximum was, then you'd effectively be doing a british museum search instead.
Hill climbing doesn't need to know what the global maximum is in advance, but with the caveat that it may not take you there. There are many cases where it would be successful however. Perhaps you know what the goal looks like, but not where it is.
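The two-hills picture in this thread can be sketched in one dimension (the heights are invented): from the left slope the climber tops out on the shorter hill and stops, because every neighbor goes down and it has no way of knowing a taller hill exists elsewhere.

```python
# Two hills: the left peak (height 5) is local, the right (height 9) global.
heights = [1, 3, 5, 3, 1, 4, 9, 4]

def hill_climb(i):
    """Climb while some neighbor is strictly higher; stop at any peak."""
    while True:
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(heights)]
        up = [j for j in nbrs if heights[j] > heights[i]]
        if not up:
            return i  # every direction goes down: a maximum, maybe only local
        i = max(up, key=lambda j: heights[j])
```

Which peak you reach depends entirely on where you start, which is why backtracking alone cannot certify that a peak is global.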
How can a program make a decision based on if it's closer to the goal if it doesn't know the best path? Is it based on a straight line distance?
The tree that was given must already contain how far each point is from the goal.
First of all: thank you for these terrific lectures!
Question: with hill climbing and beam search, information is used, namely the distance to the target. But to calculate that distance you'd already need to know the best path to the target. So how can you know the distance to the target? The only thing you could do is calculate the direct distance through Pythagoras, not using the distances written near the paths. Correct?
But then it should be correctly drawn or presented.
A combination of a system like hill-climbing or beam with a stupid one would give:
a) a superfast answer with a reasonably good path, probably already the best.
b) the best path later on if there is a better solution in the end.
I'm new to this stuff, but I think the program calculates the distance to the target as the crow flies, without minding the paths. Otherwise it wouldn't have chosen the path B as the nearest one (27:52). I don't know how it'd calculate such a thing if there wasn't a map, since without map it wouldn't be able to count pixels or somehow understand the shortest distance...
Because the map or graph is a weighted graph, and the weight is what is used as the distance between two nodes.
What is the reason behind saying that the hill climbing algorithm uses backtracking and a queue? We don't go back in hill climbing, so there can't be any backtracking. And because backtracking isn't needed, we don't need to save our paths in a queue. Am I wrong?
I don't see what he means by 6 at B and 7 at A in the hill climbing part.
This lecture helped me a lot in building a robot. Thanks a lot.
41:15 Is that guy on the second row aisle seat dozing?
ya once he wakes up around 36:43
@@oudarjyasensarma4199 The professor walked over to him and rubbed the iron bar as if to congratulate him on waking up.
does that blackboard still do that thing ?
Outstanding lecture, wonderful teaching skills. I missed best-first search here; it was mentioned and an abstract idea was given. Also, the tree in beam search should have been a bit more complex to cover all aspects of it. Overall a very nice explanation, I am lovin' it!!!
What is the difference between enqueued list and extended list ?
Why would you need a queue in Depth First search ? Doesn't the recursion stack handle everything ?
Yes , I realized that it was possible to implement an iterative version of Depth First Search using a stack ... Thanks for your answer
In beam search, his explanation does not seem to match the graph layout; e.g., B has no direct link to G, yet he lays that out in the tree.
Where can we find this illustration program? Genesis and Pathfinder?
So a beam search with a width of 1 is hill climbing?
Yes
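A sketch of that relationship (the graph and heuristic values are invented): beam search keeps only the `width` best paths at each level, ranked by the heuristic of their last node. With width 1 only the single most promising path survives each level, which is hill climbing's behavior.

```python
# Toy graph and made-up straight-line heuristic values to the goal G.
GRAPH = {'S': ['A', 'B'], 'A': ['C'], 'B': ['D'], 'C': [], 'D': ['G'], 'G': []}
H = {'S': 10, 'A': 7, 'B': 6, 'C': 5, 'D': 3, 'G': 0}

def beam_search(start, goal, width):
    level = [[start]]
    while level:
        for path in level:
            if path[-1] == goal:
                return path
        children = [p + [n] for p in level for n in GRAPH[p[-1]] if n not in p]
        # Keep only the `width` most promising paths at this level.
        level = sorted(children, key=lambda p: H[p[-1]])[:width]
    return None
```

Note that with a narrow beam a dead end can still strand the search; widening the beam trades speed for that safety.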
is that a X200 thinkpad the professor is using?
Thank you professor, great work!
what program was the prof using to simulate the search?
Chloe Duan I believe he created it himself
Thanks for your nice description
RIP
How can we humans figure out the best path at first glance, just using our eyes, without any complex processing?
+Bruce Shung We are using complex processing. So complex we don't understand it yet. He mentioned that in the video.
Did I misunderstand the question?
I don't really get the last problem (the contour problem) of Hill Climbing. Can anyone explain it for me? Thanks in advance.
Euhm, I'm not good at explaining things through words, so I made this pic to explain it visually a bit: ibb.co/n5XUuv I hope you can see it. The thing is just that with the contour lines drawn as they are, one has only a pretty limited number of directions to go in to get higher (only the directions that lead you closer to the highest contour, which is the smallest one in the top right). And by going north, west, south or east, you go down, because as you can see (check my pic out, it really helps) you move towards a lower contour line, which means you go down. Because you go down in those four directions, it's easy to presume that you're on the highest point... Okay, I'm sorry for the messy explanation, this is exactly why I made a picture. I hope it helps a little, and excuse me for my poor English hehe. :P
I thought that was the local maximum problem? And I'm really grateful that you spent time making the picture for me. You are so enthusiastic.
Hmmm, no because then you would actually have to be on a maximum, just not the biggest one. In that situation you would go lower in any direction you go, because you really are on a top point (just not the highest one). In the contour problem you can actually go higher, the directions in which one must go to go higher is just very limited. If you don't try out all directions you might miss the direction that leads higher, which is the problem. So in the contour problem you could actually go higher (although it might lead to a local maximum), instead of the local maximum problem where you think you reached the real maximum (but you didn't). And no problem! :)
Thank you for the pic, it was really helpful to understand.
@@joked3202 Thanks for the picture. It really clarified the notion.
oh what i wouldn't give to be in that class just for once....
5:37 snakes actually
great lecture
This is soooo great, thanks for sharing
Good Professor.
Why isn't there lecture 20
Prof. Winston requested it not to be published. No explanation was given.
@@mitocw Understood.
which software was that
The classroom is very nice. We don't have such a good classroom.
Good lecture, sir. I have one doubt: I understood the logic behind this problem, but how do I write a program for it? I don't know Java, but I do know C, C++ and C#. How should I start writing this program?
tahaaalam C# and Java are very close.
Are the programs made in Java?
Much of the material in 6.034 is reinforced by on-line artificial-intelligence demonstrations developed by us or otherwise available on the web. Those demonstrations developed by us are provided via the easy-to-use Java Web Start mechanism, which comes with the Java Runtime Environment, the so-called JRE. For more information, see the course on MIT OpenCourseWare: ocw.mit.edu/6-034F10
@@mitocw What is the difference between enqueued list and extended list ?
The microphone is too high up on his collar. The sound of static we are hearing is the microphone rubbing up against his skin. Might be the least important part of the video, but if it bothers you as it does me, it helps to know what's actually causing that sound. Great video otherwise.
Good Lecture Sir...
Very helpful...
He's funny sometimes, like how he found the AI system's talking annoying and pulled the plug xD
How could hill climbing backtrack?
It seems there is no backtracking, so it is more efficient than BFS.
19:40
He keeps implying that vision is somehow specially tied to intelligence...doesn't the existence of intelligent blind people (or other blind organisms) fully refute this idea? Clearly there is something more general going on that has literally no dependence upon vision.
It's actually the opposite: he's saying that intelligence (as in making smart decisions fast) is tied to vision, but it's very clear that it could be any other sense. It's not about the sense of vision itself, but about how senses in general can provide heuristic information that greatly speeds up the decision making process, yielding results that aren't necessarily the best but are still very good.
completed 4th lesson
"Ph.D. in physics after 3rd post-doc", lol
08:17 20:42 nice
All these algorithms just to find the G.
Awesome!!!
thank you!
That first guy in the second row from the right is sleeping 🤣
We don't stop when we hit G ;)
Does this guy have internet explorer on his taskbar oof
From Sankho Kun
Like a true troll
I wonder why a big mind like this professor says such derogatory and insulting things about cab drivers. really disgusting.
what a boring lecture!
Is this sarcasm ?
what program does he use in the video?
miftah Ihsan Do you mean Eclipse, Java, or the program he wrote?
What mapping software is that?
Which is the software used in the video?