FAQ and corrections in this comment! 1) Check out today's sponsor at brilliant.org/alphaphoenix/ for 20% off! 2) Yes, there will be a part 2 video, but no, it's not coming out next. These programming videos are monstrous to edit because they're all animations, and I can't stand to look at this project anymore. Hopefully the next video is something fun with the high speed camera! 3) I'd love to see more people take a crack at this! Someone on Patreon wrote a fantastic algorithm this week that could take loads of steps backwards on a glider, but apparently struggled with the larger "play button" pattern I was using. 4) A LOT of people are complaining about the complexity section, but I chose my words very carefully there. I said "NP complete is AT WORST exponential". I also said "there's no KNOWN algorithm" to solve NP-hard questions in polynomial time. 5) AI would be awful for this problem. Neural nets like smooth parameter spaces, and they're good at finding "good" solutions, not "exact" solutions. 6) A lot of people are pointing out that there can be more than one GoL board that leads to any given desired GoL board. On the surface it feels like this would make the problem easier, because there are more solutions to find, but it's also this property that guarantees you will have "Eden" game boards that cannot be produced by a previous step. 7) Tile solver: a bunch of people have suggested pre-solving tiles, and I love this idea - it's one I tossed around too late down my path of pixel-by-pixel SAT. You're trading precompute time and memory to build an enormous library: find every way to back up a 3x3 area and save them, then catalog, for a tile used to back up a particular state, which tiles could be adjacent (this will chop the long list down to a shorter but still quite long list), then build a solution from tiles. Maybe by making tiles from tiles iteratively you could precompute solutions for quite large patterns. This may be an interesting way to cut down run-time complexity pretty massively. I didn't initially go this way because I felt it would be difficult to keep the simulation small while taking multiple steps backwards, but that could be sorted by your tile selection method/biases (a SAT solver would be faster, but it's not easily biasable).
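To make point 7 concrete, here's a rough sketch of the very first step of that library build (my own toy illustration, not anyone's actual code): enumerate all 2^9 possible 3x3 tiles and bucket them by the centre cell they produce one step forward. The counts fall straight out of the B3/S23 rule.

```python
from itertools import product

def centre_next(tile):
    """tile is a 9-tuple, row-major; index 4 is the centre cell."""
    n = sum(tile) - tile[4]                 # live neighbours of the centre
    return 1 if n == 3 or (tile[4] == 1 and n == 2) else 0

# Group every 3x3 tile by the centre value it evolves into
library = {0: [], 1: []}
for tile in product((0, 1), repeat=9):
    library[centre_next(tile)].append(tile)

print(len(library[1]), len(library[0]))     # 140 tiles yield a live centre, 372 a dead one
```

The adjacency catalog and the tiles-of-tiles iteration are the hard (and interesting) parts on top of this.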
Garden of Eden patterns have been proven to exist, so in the worst case the game is irreversible. It may be hard to find examples of Gardens of Eden --- oh, it seems you found some!
Minor correction: "NP", as you say at one point, stands for "nondeterministic polynomial", which does _not_ mean "not polynomial". We're "pretty sure" that NP-complete problems cannot be done in polynomial time, but proving it is literally a million-dollar problem, and it's not _known_ to be impossible to do in polynomial time.
It's not really important to the eventual purpose of the video, but I got caught up by the map at 17:30. I assume the edges wrap, which would account for the odd shape of the ring, but a lot of the top points on the ring are unbalanced despite being of equal height to full solutions next to them. I also don't see any space for solutions that are 3/4 one density and 1/4 of the other. In the end it's either just an imperfect example that demonstrates what it needs to just fine and I shouldn't be thinking about it this hard, or I'm missing something glaring, which is equally possible.
Can you say what that library was, and maybe share a link? Python, I'm assuming? Great video! I watch everything P vs NP related and all of your videos, so today was a great watch.
@Windows__2000 Hashing is a fundamental principle of privacy on the internet, and in computing in general. Hashing is crazy simple in one direction, while being essentially intractable in reverse. Large-prime encryption methods work in a similar way. However, the encryption is easily reversible if you have the private prime key; computing the private key from public information is believed to be intractable. Gotta love hard problems - P vs NP is one of the Millennium Problems, after all.
Actually it's not as simple as that. Entropy can be decreased locally at the expense of an even bigger global increase. So it could be that you'd be able to reorder GoL at the expense of some other increase of entropy. (Not the case with GoL, however).
On step -1 of solving the cube you could only rotate slices along a single spatial direction, on step -2 along another (or the same), etc. This kind of decreases the field of possibilities.
"we did it not because it was easy, but because we thought it would be easy" (Naive developer discovering that the algorithm they are trying to reverse is not bijective)
Bijective or not doesn't necessarily make much of a difference for how hard it is to compute. E.g. the mapping between positive integers and their prime factorisations is bijective, but one direction is hard to compute.
Bijectivity doesn't imply ease of computability in each direction, but lack of bijectivity might imply a lack of ease of computability to find the inverse image, for some general enough class of functions?
@@imsleepy620 I don't think that's true either. As a simple example, consider the problem "for a source and a sink node, find the shortest path between them". There may be multiple shortest paths, but it's solved easily. The inverse "given a shortest path, find the source and the sink" is completely trivial, and has only one solution.
Computer scientist here. If you're just using the SAT solver built into Z3 (as opposed to SMT constraints), you should really be using a specialized SAT solver like CaDiCaL or Kissat instead. They're much, much faster. >1 week for one SAT solve at the problem scale you're dealing with is almost always unnecessary. The annual SAT competition does a good job keeping track of what's currently fastest. If you're actually using SMT constraints in Z3 (like you were previously with integers), you should almost certainly switch to CVC5. It is also much faster than Z3 in almost all cases (save for some theories that you won't use). Also, you almost certainly should not be doing a manual local search over multiple steps like you are now. It should be very straightforward and fairly efficient to encode multiple steps of your constraints at once into SAT (or SMT). Then you don't implement manual backtracking (which I'm assuming is what you're doing currently with your "edens") - that's already what DPLL does (and hence what CDCL emulates). I'm not sure what your constraint formulation looks like, but also see if you can add any "symmetry breaking constraints". It shouldn't be too hard to find good resources on why these are useful and what they look like. Not super important, but the thing you're calling a "phase space" is what would be called a "reconfiguration graph" in my field, since it's a graph over the space of configurations whose edges correspond to a certain type of "reconfiguration move" (in this case, flipping the colour of a grid point). There are some much cooler reconfiguration graphs. Fun fact: the simplex method for linear programming is a greedy reconfiguration-graph traversal.
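For anyone curious what "encode the constraints into SAT" can look like, here's a minimal single-step sketch in Z3's Python API. This is a toy illustration, not the video's code: the board size, the blinker target, and the dead-border assumption are all invented for the example.

```python
from z3 import Bool, Solver, PbEq, And, Or, Not, is_true, sat

W = H = 8                                        # toy board; cells outside it are assumed dead
target = [[0] * W for _ in range(H)]
target[3][3] = target[3][4] = target[3][5] = 1   # desired NEXT state: a blinker

prev = [[Bool(f"p_{r}_{c}") for c in range(W)] for r in range(H)]
s = Solver()

for r in range(H):
    for c in range(W):
        nbrs = [prev[r + dr][c + dc]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0) and 0 <= r + dr < H and 0 <= c + dc < W]
        born = PbEq([(b, 1) for b in nbrs], 3)                  # exactly 3 live neighbours
        survives = And(prev[r][c], PbEq([(b, 1) for b in nbrs], 2))
        alive_next = Or(born, survives)                         # the forward B3/S23 rule
        s.add(alive_next if target[r][c] else Not(alive_next))  # must match the target

if s.check() == sat:
    m = s.model()
    for row in prev:
        print("".join("#" if is_true(m.eval(b, model_completion=True)) else "." for b in row))
```

Stacking more steps just means adding another grid of Bools per step and repeating the same rule between consecutive grids; the solver then backtracks through all steps at once instead of you doing it manually.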
The most successful algorithm I’ve tried takes a number of steps backwards at once - as many as I can until it hits a computational limit of multiple days. I’d be very interested to know if there’s a solver optimized for inequalities - my best-performing single-step iterative solver was seeking to leave as few live cells as possible.
@@AlphaPhoenixChannel You're much further along than I thought if you have a multi-step implementation, nice! For inequalities, ILP solvers are often recommended, but in my experience they don't perform very well on these sorts of highly combinatorial problems. Z3 has an optimizer built-in that I'm sure you've already found, but it is extremely slow, even on much smaller problems than what you're dealing with. Since actually finding the absolute minimum is actually a very strong constraint (requiring either exhaustion or a lower bound, or some combination of the two), you might get better results by relaxing it to some heuristic bound on the number of living cells - say, at most 10% more living cells than what you started with. Any chance you're planning to publish your code? I'm curious to see if there are low-hanging improvements. Also, if academic publications in overlapping math/cs ever interest you, this is exactly the kind of thing the folks at jcdcggg really like.
@@procedurecall4219 I had luck with the SMT solver using an inequality to limit the number of live cells and slowly climbing with that limit until I got a solve. It didn’t work well (ground to a halt) for multi-step solves where I tried to limit the number of live cells in the final step, and I couldn’t come up with a way to limit the number of live cells once I went full Boolean (aside from just giving it a small bounding box)
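One shape that live-cell limit can take even in a fully Boolean encoding - an untested sketch reusing `prev` and `s` from the Z3 snippet further up this thread - is a pseudo-Boolean constraint, which counts true Booleans directly:

```python
from z3 import PbLe

all_cells = [b for row in prev for b in row]   # `prev` and `s` from the sketch above
s.add(PbLe([(b, 1) for b in all_cells], 12))   # at most 12 live cells; the cap of 12 is arbitrary
```

You can then ratchet the cap up or down between solves, the way the SMT version did. Z3 treats pseudo-Boolean constraints specially rather than as general integer arithmetic, which is usually cheaper.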
During the video I considered "what if you just checked every possible combination?", but it turns out that with a 10 by 10 board, if each combination takes one microsecond, it'll take 40 quadrillion years to check them all.
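The arithmetic behind that number, as a quick sanity check:

```python
states = 2 ** 100                                # every configuration of a 10x10 board
years = states * 1e-6 / (60 * 60 * 24 * 365)     # at one microsecond per check
print(f"{years:.1e}")                            # ~4.0e16, i.e. about 40 quadrillion years
```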
That's the weird thing about problems like this. There's almost always a solution that's intuitive. "Well of COURSE it would work like this!" Then you try it out and it fails almost immediately and you're left scratching your head like "how? how did that possibly fail??" I have to admit, it's that fact that keeps me endlessly fascinated even about math I can barely comprehend :D
That false solution thing is also the mechanism behind a lot of "Why didn't evolution lead to X!" conversations. You basically never will go down a hill, even if the evolutionary mountain on the other side is four times as high. Creates these feedback loops where animals keep terrible systems because in order to improve they'd have to "unwind" first.
Yep, it's also what causes convergence, as certain local maxima are really massive in the phase space of selective pressure solutions, and thus regardless of how inefficient they might be, they continuously pull in more species. (Crabs aren't the ultimate body plan for most niches, but they're a Very attractive one due to how much their traits reinforce each other)
My favorite fact is the laryngeal nerve needing to make a detour down past the heart before going to the neck. It'd be nice for the nerve to connect the right way from the start, but there's no practical way to disconnect and reconnect elsewhere during embryo development; making it longer is way easier, which resulted in the poor giraffe not being able to make proper sounds.
Yeah, the common one I hear is: apart from flagella, why don't we see more wheels in nature for movement? Everything that is only partway to a rotating wheel is bad - it requires walking down too far in phase space relative to the competition.
Another problem which is NP-hard is placing conveyor belts in Factorio. Someone made a modified SAT solver that just makes custom belt balancers for you.
Been playing Factorio for a while now, and I think I've mastered or at least understood most systems, but compact belt balancers are just black magic to me. I just grab a blueprint online and pray to the Omnissiah it keeps working fine.
I’m a software engineer, and I used to work with data that required n-dimensional arrays. At first I was really confused trying to imagine a hypercube of cubes in my brain. As you’ve noted, it’s extremely difficult. One day I was staring across my office at my file cabinet, and I realized that when dealing with arrays using grids & cubes are just an abstraction. There’s no reason to conceptualize arrays in dimensions, it’s much easier to imagine them as containers. Like squares -> grid -> cube -> drawer -> cabinet -> storage room -> building -> etc. It’s not a complex shift in metaphor, but for me it made the notion of distance between cells much easier to understand. I use the same logic when thinking about configuration space. Containers of containers of containers of containers. It’s surprisingly effective!
Same for me. I wondered how to solve a place-time problem, and while thinking about higher dimensions I stumbled upon the realization that it's simply a matter of simplifying the complex system and separating each of the dimensions until we get something the human brain can manage. Basically shadow projections. I have it noted somewhere (as this was a passion project more than anything), but transforming each dimension to a lower one takes some amount of calculation, translating between dimensions is a matter of reversing the calculations, and communicating between the created layers of lower-dimensional objects is a matter of finding the correct path. Somehow I thought about optimal path-cost adjustment, but sadly it was quite some time ago and I need to revise my notes.
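The file-cabinet metaphor a couple of comments up maps directly onto code; for instance (a toy NumPy illustration, not anything from the video):

```python
import numpy as np

# building[room][cabinet][drawer][folder]: a 4-D array is just nested containers
a = np.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)
assert a[1][2][3][4] == a[1, 2, 3, 4]   # nested lookup and 4-D coordinates name the same cell
```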
I get SO frustrated with other people's videos because they DON'T understand the problem they're looking at to the depth you do and they don't explain it. Thank you for making the first video I've seen in a long time that lacks the hand waving of incomprehension.
I really love how gerrymandering is so much more computationally tractable than reversing GoL states. Like, oh, making this cool pattern pop out? Insanely hard, computationally intensive. Abusing loopholes in a democratic system, leading to political corruption? Yeah dude, it's a walk in the park.
The reason it is easier is because we are just looking for a "good" solution; we are okay with reaching false summits with our gradient-ascent algorithm. With reversing GoL, we are specifically looking for a *perfect* solution. So yeah, as it turns out, election rigging is computationally easy! :D
@@Dong_Harvey Well, I'm convinced. No need for any context for your remark or data or that pesky "proof". Just a bold, clear, and very inaccurate statement. I'm SOLD!!!
The N always stands for Nondeterministic. NP problems are those whose solutions can be *verified* in polynomial time, and the NP-hard problems that are also in NP are the NP-complete ones. It's an open problem (with a million-dollar prize!) to prove or disprove that all the problems verifiable in polynomial time can be decided in polynomial time. - So, polynomials are still very much involved! (Note: the qualifier of "NP-hard in NP" is necessary, because there are undecidable problems that are NP-hard: the Halting problem is famously undecidable, but it's trivially easy to reduce any NP problem to the halting problem: simply prepend a string representing a Turing machine that solves that NP problem. Likewise, if the N actually stood for "non" then NP would include the halting problem as well.)
While that's *true* (and I would assume AlphaPhoenix knows it is), I read that "Not Polynomial" more as a simplification for viewers who might be less involved in computer science; as all simplifications do, it leaves out some detail, but since we're not here to solve a millennium problem, we might as well treat NP problems as not solvable in polynomial time.
@@3211learza The problem with using "Not" instead of "Nondeterministic" is that you actually make a statement that's kind of false if you use an equivalent definition of NP: the class of problems decided by a nondeterministic Turing machine in polynomial time. To say it's not polynomial is wrong without being more specific.
Heartbreaking: someone has solved your non-trivial problem but used completely different terminology you were unaware of, meaning that if you had just known what you didn't know you didn't know, you would have saved yourself endless headache.
@@Axolotine There are people who have spent years writing custom solvers for problems that are easily handled by SAT solvers. It is the worst-marketed field of computer science.
I'm incredibly chuffed at the recent trend of not deciding that "this video idea takes way too long to arrive at the final stage to be worth a video" and instead adapting it to "this incredibly simple looking thing took me incredible effort, let me explain to you the surprising reasons why". A lot of my favourite channels have been putting out partial successes or outright failures and creating absolutely wonderful pieces of content.
24:10 to be completely precise, it is not known if NP-complete problems require non-polynomial time to solve (but it is widely believed to require exponential time and all current SAT solvers are running in exponential time (with some heuristics to hopefully be fast in easy cases)). Just one of those open problems in computer science.
In addition, although the entire class of SAT problems is NP-complete, that does not mean any one instance of a SAT problem can not be solved in polynomial time. In fact, it's quite easy to come up with subsets of SAT problems that can be solved in polynomial time (e.g. those that can be separated into smaller, fixed-size SAT problems because variables do not overlap). So it is very well possible that the problem of finding the previous state in game of life, expressed as a SAT problem, could be solved in polynomial time, even if it was proven that the entire class of SAT problems could not be solved in polynomial time.
@@Frank01985 reversing game of life is NP-hard. You can do Turing machines in game of life, hence you can also do boolean circuits in game of life, and if you try to reverse those you're essentially solving SAT.
@@jonasmaier6631 That makes sense when considering reversing more than one step. I wonder if this still holds for reversing just one step into one feasible solution (instead of enumerating all solutions). A more complex boolean circuit needs more GoL steps to compute, so I could see it being the case that reversing one step can be done in polynomial time, while the combinatorial explosion of possibilities that must be searched across multiple reverse steps is what makes that infeasible in polynomial time.
@@Frank01985 Yeah he reduced game of life into SAT, which doesn't say anything about game of life's complexity. But I don't think the point of the video was to show reversing game of life is NP-hard. Other people pointed him to the SAT solver already knowing the problem was NP-hard. I think solving even one time step is NP-hard. If it was solvable in polynomial time, I don't see why you couldn't just iteratively solve it to get the desired starting configuration "n time steps ago" (I could be wrong though).
At 10:00 I noticed something interesting. Traversing diagonally across a face always flips the same two pixels. Look at the front face: the top two pixels change along each diagonal, but the bottom one is never affected. Along the top face, diagonals flip the left and bottom pixels, but never the right one. Similarly, traversing the 3D diagonal flips all 3 pixels, a 4D diagonal would flip 4 pixels, etc. This means a path of length sqrt(n) always flips n pixels, making plotting a path much easier to calculate. You could imagine a large image with huge chunks of pixels being the wrong color. Instead of navigating through configuration space in cardinal directions only, you could take a shortcut by pre-calculating and caching the diagonal that corresponds to flipping that entire chunk. Maybe divide the whole pixel array into smaller square chunks for faster processing, etc, etc.
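Said with numbers: the diagonal's length is just the square root of the Hamming distance between the two boards. A tiny sketch of that bookkeeping (hypothetical helper, not from the video):

```python
def hamming(a, b):
    """Number of pixel flips between two flattened boards."""
    return sum(x != y for x, y in zip(a, b))

a = [0, 1, 1, 0]
b = [1, 1, 0, 0]
n = hamming(a, b)       # 2 pixels differ...
diagonal = n ** 0.5     # ...so the hypercube shortcut has length sqrt(2)
```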
Also, AFAIK Z3 doesn't have any progress bar - did you find that at all anxiety-inducing? Like, if you had stopped the solve 1.9 weeks after starting, maybe you would have missed a solution that you could have found with just one more day of solving.
I don't even remember what it's solving right now, but there's been one going for like the past month that I haven't wanted to shut down. It's got to be close... right??
It's the nature of the fact that the solving involves random guessing - you can't know how close to a solution you are. Some pure SAT solvers like Kissat can give you numbers on how many variables they have left, which essentially shows you the search space that remains, but even that isn't very useful, because often the remaining search space could take until the heat death of the universe to complete.
It has been shown that restarting after some (steadily increasing) time is optimal for certain SAT-solving algorithms. Probably Z3 already does this, though.
Reversing a (cryptographically strong) hash is kinda even worse, because there's no metric of how close you are, the only real algorithm is an exhaustive search of the inputs
Did you hear about Golly during this investigation? It runs cellular automata extremely quickly because of a method called Hashlife: it memoizes computed blocks to advance larger and larger patterns more quickly than evaluating the rules cell by cell. You can use it to look backwards by looking forwards - unless there's an *incredibly* improbable situation that can never happen under a certain-sized starting condition, it will find it.
I was wondering if you could do something like Hashlife too. It would start with all the ways to reverse a 2*2 configuration to get some 4*4 configurations, then try following back only the parts in the end image and matching the overlaps together to build up the ways to run the needed 4*4 configurations backwards 2 steps to get 8*8 configurations, then match up the edges of those (by the recursive hash) etc. I expect the hash tables would soon get impractically huge though.
For much of the video, I was wondering why there would not be a way to precompute all (k+1)*(k+1) automata as a way to stitch together a k*k solution. If the 2^((k+1)^2) possibilities end up being too large for memory, then a variation of rainbow tables could be used, albeit with the problem that many states evolve to "everything dead", and for large k there are many states that are translations of each other that should ideally be coalesced.
It would indeed help a lot for reversing large configurations when you have some canonical preceding configuration (e.g. there are a lot of gliders in the figure and you make their ancestor another glider), but it wouldn't help with the hard part of the problem which is finding some ancestor in the first place.
Have you looked at zero-suppressed decision diagrams? They "break down" (by running out of memory) more often than SAT solvers, but when they work they enable you to ask questions about the entire space of solutions (instead of just getting one of them, as SAT would, making you generate them one by one with extra constraints). For example, you can ask how many solutions there are, get a list of how many solutions exist for each number of "live" cells, or find the solutions with the most or fewest live cells.
A ZDD is an efficient* way to encode solutions to a boolean function, but you still need to calculate the solutions, which I think is a significant portion of the difficulty here. *ZDDs are efficient in that they encode the solutions in the smallest possible data structure which, because of boolean logic, also turns out to be maximally efficient to search/query (assuming you can figure out how to write an efficient algorithm to query it for what you want to know, which is part of what SAT helps with in the first place). However, just because it's maximally efficient doesn't mean a complete solution set (or even a sufficiently interesting subset of solutions) will fit within storage/RAM constraints, or be traversable in a reasonable amount of time. The solutions the SAT solver finds could be used to build up a ZDD, perhaps incrementally, for efficient searching later? There's a software package called CUDD that helps with working with ZDDs. Anyway, it's interesting to think about how it might be useful in this context. A few caveats may limit its usefulness: * Because building a ZDD presupposes having solutions to encode into it, if finding the raw solutions is the primary difficulty, then a ZDD's usefulness may be limited. * And while a ZDD is a highly efficient data structure, whether you can express the precise query you want in a form that runs efficiently depends on whether you can reuse existing efficient queries or figure out how to craft your own, which can be non-trivial.
@@Mikee512 You wouldn't create the ZDD by finding all the solutions and then converting them to a ZDD though (you could, but it's not efficient, as you identified), you'd build it by melding together ZDDs representing individual constraints
Oh yeah! My favorite Blokus tile is the pentomino that's, like, a 2x2 square with another square attached to it. I don't really think it's the *coolest* one but, it's pretty unique all things considered so it ends up being my favorite
This reminds me of something. A while ago a buddy and I made something similar using Langton's ant. We used multiple ants at different starting positions to "encrypt" a message. The world the ants reside in was basically a 2D-grid representation of the input message bytes. They then would do their thing for many iterations. Just a fun project, nothing serious, but it was nice to see your message emerge from "random" noise :D Never thought about how one could crack it, but it would not surprise me if it's relatively easy lol
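For anyone who hasn't met it, the forward primitive that project is built on is tiny (a from-memory sketch of the standard ant rules, not the commenter's actual code):

```python
def ant_step(grid, x, y, d):
    """One Langton's ant move: turn right on white (0), left on black (1), flip, advance."""
    d = (d + 1) % 4 if grid[y][x] == 0 else (d - 1) % 4   # d: 0=up, 1=right, 2=down, 3=left
    grid[y][x] ^= 1                                       # flip the visited cell
    dx, dy = [(0, -1), (1, 0), (0, 1), (-1, 0)][d]
    return x + dx, y + dy, d
```

Running this forward is trivial; as with GoL, the interesting question is how hard it is to run a multi-ant version backwards.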
Thanks Brian, I love this video so much! I didn't know something like Z3 existed. This is like telling me "Did you know: if you wiggle your arms like that, you can fly?"
It's funny, because my first instinct when I saw this was "Oh, this is basically minesweeper, that's an NP complete problem we are pretty good at solving, you should just convert it to that to solve it" and after messing around with configuration space for a bit, making me worried I was doing it wrong, you eventually decided "this is basically the SAT problem, I'll just convert it to that to solve it". I basically had the solution right off the bat, but we picked different NP-complete problems to use. I wonder if one is better/more efficient?
That's the thing about NP-Completeness, they're all basically the same problem just phrased a little differently. If you could solve one in polynomial time, you could apply the same algorithms to all of the problems in NP-Complete.
There has probably been a lot more work put into SAT solvers. They have competitions and investment from industries that use it. Also most computer scientists are familiar with boolean logic, and can probably figure out how to encode their problem into it. I wouldn't even know where to begin encoding something into minesweeper.
Awesome video and explanation!! This reminds me of my research area of inverse problems, where one aims to reconstruct the object of interest from indirect measurements. As an example, in Computed Tomography (CT), we reconstruct the image of the imaged target from x-rays projected through the target. The information about the inner structures is encoded into the measured x-rays, which have been attenuated proportionally to the attenuation of different tissues (very high in bones, for example, and lower in soft tissues). Commonly, the inverse problem is formulated using a model for the x-ray attenuation, known as the forward model, and the sought-after image is obtained by iteratively minimising the difference between the measured data and the predicted data produced by applying the forward model to the estimated image. Often, however, the problems are ill-posed, meaning that the solution might be non-unique, have many local maxima/minima, or noise can have a detrimental effect on the solution. In these cases we often employ regularisation, which imposes assumptions on the solution (such as smoothing) and makes the problem less ill-posed and more feasible to solve. Although this problem is completely different to the ones I'm accustomed to, there are many parallels. The forward model is the algorithm that runs the game of life, the indirect measurements (measured x-rays in CT) are the starting point, the final sought-after image (the CT image) is the final image you are after, and problems with optimisation (local minima/maxima) are common. Very cool video!
The game of life is Turing complete - assuming you have enough grid space... does that mean solving this is a limited-space halting problem? Also, a solution to SAT: "I can't get no satisfaction 'cause I try and I try and I try" - description of brute-force SAT solving strategies [Jagger, Richards, et al.; published on "Out of Our Heads", 1965, available online]
> the game of life is Turing complete This only matters if you're running the game of life forwards for an indefinite number of steps (& an unbounded grid space). Trying to reverse a finite game of life configuration by a single step is in NP, so it can be solved using a SAT solver, as practically demonstrated in this video.
This is going backwards, not forwards. Comparing it to the halting problem, this is more like looking at the "end state" of the halting problem, in this case a program which never halts, and asking what input gets there.
It's not the halting problem. It's related to the existence of one-way functions, but the "reverse game of life" problem might be easy even if one way functions exist. The reason is that reductions rely on encoding Turing machines into the game of life, but you can reverse into states that aren't encoded Turing machines, which then won't give you the inverse of the one-way function.
5 minutes into the video and I was screaming at the screen USE SAT SOLVERS! Glad you went there eventually, and I hope you had a wonderful eureka moment :)
I saw the thumbnail, thought about it for about a minute, and then said "this is just SAT isn't it" Boolean variables (cells) with constraints, and you want to see if there is a satisfying assignment to those variables.
This has given me a really good understanding of configuration space and p vs. np problems. I’ve always wondered about p vs. np problems but I could never find an explanation I found intuitive. Also, I can’t imagine just how useful configuration space will be to me, being able to think and compute higher dimensions! Thank you for this video, it’s awesome!
4:00 It's impossible to compute THE previous state. Easy example: an empty board. Was the previous state an empty board too? 1 cell? 2 cells aligned but spaced by 2 empty ones? Or was the space between them 3? 4? 99999? Graham's number? TREE(3)?????? We can't know.
The main lesson from this video seems to be that you should use the right tool for the job 😄 SAT problems are "too discrete" for a gradient-descent algorithm. Arbitrary changes don't result in predictable movement through the result space, as there's no real continuous output of a SAT solver that can tell you how close you are to a solution. The number of requirements met is not a useful metric here: a single requirement may prevent you from solving a problem even if you hit all the other requirements. So any notion of 'derivative' or 'gradient' doesn't really make sense for these types of problems. A couple of times you referred to the configuration space as 'continuous', but it really isn't continuous at all. Even in the city example you're dealing with discrete cells and boolean assignments. The parameters you used to divide the city may have been continuous, but the output is discrete, which should've been the first hint that gradient descent may not have been the best tool for the job. The parameter space also excluded many valid solutions, such as a checkerboard pattern, that would've evenly divided the city. Unless you're certain that your simplifications in parameter space don't exclude the valid solution you're looking for, that's almost always a bad idea. (Though I understand that was a simplified example to make a point 🙂)
True, it's not continuous, but it is at least Lipschitz. The checkerboard pattern isn't simply connected, however. Among the simply connected solutions, I would have liked to see those containing either 1 or 3 city quadrants.
'SAT solvers' in general seem like some of the most useful frameworks humans have come up with for formulating problems. Most of the 'work' in any model or problem solving is just formulating the problem in a way that computers can apply their 'dumb' brute force processing to, there's something very deep about this reformulation that I think we've been uncovering ever since Turing lit the spark 80 years ago.
I once heard a very clever computer scientist say: we programmers do not choose what we work on because it is easy. We choose it because we thought it would be easy!
FINALLY, an answer to the burning question I've been asking myself for decades: whether it's more difficult to reverse Conway's Game of Life or to gerrymander the state of North Carolina
3:47 As Brian found out, that's a specious question. The fact that the GoL has stable loops in the finite-state machine means that there's no deterministic way to find a unique pattern that would result in anything in particular. For example, the standard solid 2×2 block configuration could arise as the next step from a very large number of permutations of states in even just the surrounding few cells, or… another 2×2 block. … 33:12 👍 This video is speeding into NP-completeness territory. … 23:30 👍 34:19 Yeah, that makes sense, because the target pattern will be made from a LARGER pattern, which itself will be made from a larger pattern, and so on. You might get lucky and have a target pattern (or intermediate pattern) that can be made with gliders and spinners and such, which can reduce the size of the parent pattern, but most patterns will usually be made from physically larger, non-repetitive patterns.
That is extremely abstract, but I love it. Tickles my brain. I don't get this sort of thing at my work (site reliability engineer), but this reminds me of the cool science classes I loved in higher ed. I love these vids - thank you for making them :)
Just did a university homework assignment on local search yesterday. The algorithm that selects the best next step based on a heuristic from a neighborhood of nodes is called the hill-climbing algorithm. Even if finding the optimal solution is NP-complete, finding *a* solution might still be tractable with a metaheuristic like hill climbing with random restarts. But of course there is no guarantee that a solution is found in a reasonable time, especially if the solution space is very sparse (which seems to be the case here). Not trying to say that anything is wrong in the video - I just find it cool that this video popped into my feed with perfect timing and wanted to share.
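The whole metaheuristic fits in a few lines (a generic sketch; `random_state`, `neighbours`, and `score` are problem-specific placeholders):

```python
import random  # noqa: F401  (a real `random_state` would use it)

def hill_climb(random_state, neighbours, score, restarts=100):
    """Greedy ascent with random restarts; returns the best local maximum found."""
    best = None
    for _ in range(restarts):
        x = random_state()
        while True:
            nxt = max(neighbours(x), key=score, default=None)
            if nxt is None or score(nxt) <= score(x):
                break                       # stuck on a (possibly false) summit
            x = nxt
        if best is None or score(x) > score(best):
            best = x
    return best
```

As the video shows, the sparser the true summits are, the more restarts you burn before one lands in the right basin.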
Around the 7:30 mark, when you are explaining Configuration Space, the music is quite loud, to the degree that I think it becomes an obstacle to cognition. Love your videos, not hating on you or them, just wanted to mention this small spot that I think can be improved.
6:03 This seems similar to antiderivatives, where data (any constant term) is lost when taking the derivative and can't be retrieved, which is why an antiderivative includes a symbol representing an arbitrary constant, such as C. To make it more concrete: generally, if a*b = c, then c/b = a. But if you multiply a number by 0 it becomes 0, and you can't use 0/0 to return it to its original value.
This is also why 0/0 is undefined rather than 0, 1, or infinity. Even when you look at the rules: 0 divided by any number is 0; any number divided by itself is 1. And you can make an equation that uses limits to basically "prove" that 0/0 = any number.
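The constant-loss point above, in symbols:

```latex
\frac{d}{dx}\left(x^{2} + C\right) = 2x \quad \text{for every } C,
\qquad\text{so}\qquad
\int 2x \, dx = x^{2} + C .
```

Every choice of C collapses to the same derivative, just as many GoL boards collapse to the same next step.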
I've found it's very common for Redditors to over-generalise their knowledge. They might have read "solving P for all configurations of X is impossible in general" and will translate that, in their mind, to "solving P for any configuration of X is impossible". This video is a great example of the fact that 99% of the time you don't need to solve the general problem to find a solution that satisfies a more constrained subset of the problem!
Well, power to you! Discovering that your question is equivalent to SAT was already further than I would have gotten. When someone asks me “can you go backwards in the algorithm”, my usual answer is “no” without even thinking about it.
To me this sounds like a textbook inverse problem: how do we take a current state and work our way backwards to find an initial state, given that the initial state has pretty much infinite possibilities and there are infinite solutions? Inverse problems are pretty heavily studied, at least in geophysics, and hence there are some solid algorithms for them. That might be another approach to this problem, and it could make it much faster.
Oh when I started the video I thought "I hope he's using a SAT solver" and I wasn't disappointed hehe :-) 24:35 NP-hard doesn't mean "not polynomial time" and NP-complete doesn't mean "it can be solved in exponential time". I know this isn't a CS channel but I wish you were a little more precise at explaining these concepts the first time for people who might not already be familiar with them. The visualisation of (CDCL) SAT solvers as crossing out whole swathes of solution space where only suboptimal solutions exist, is really good! Thanks for this insight.
I know those aren't the rigorous definitions, but they are true statements. I went back and forth a LOT about that segment, trying to say what needed to be said relevant to this video (and not too much) without being incorrect.
@@AlphaPhoenixChannel Well, they're not entirely true, no. NP-hard doesn't imply "not polynomial time", it only implies "we don't know a polynomial time algorithm to solve this (and probably never will)". NP-complete does imply "can be solved in exponential time" but so does NP. So does P. etc. It just doesn't feel like a good explanation for why SAT in particular can be used to solve this problem. I think a better explanation would talk about how NP problems have easy to verify solutions, and how this problem has easy to verify solutions (just run the solution forward one step & check that it matches the desired output), and then link that to a brief discussion of NP-complete & SAT solvers (which you kinda already have in the video).
@@typeswitch Meh, I think that for all intents and purposes NP-hard does kind of imply "not polynomial time". Sure, it doesn't technically imply "we can prove that it's not polynomial time", but "there is no polynomial-time algorithm" and "there is no known polynomial-time algorithm and it's almost certain that there never will be" are basically equivalent in colloquial use (and I'd argue they are definitely equivalent in the Bayesian sense).
I saw a *really* cool video recently that rephrased NP vs P as "Can you reverse every polynomial algorithm in polynomial time" The moment I saw the title I went "of course running it backwards is hard, it's NP!"
@@typeswitch The verbiage I used to describe NP complete in the video was “at worst exponential time”, which is a very precise way to phrase it. Claiming that NP is “not polynomial time” in the graphic and saying “no known algorithm to solve in polynomial time” is also I think a very precise way to phrase it. The graphic annotation is practical and the part I said out loud was precise
When you opened with this video and started to explain the project, my initial thought was, oh this is like backtracking algorithms to solve sudoku, and then immediately was reminded of SAT solvers like Z3.
There's a lot of overlap with the gradient ascent stuff and evolutionary biology. Comes into play a lot in genetic engineering, specifically enzyme engineering
My approach would be to cache e.g. all 8x8 regions and what their future looks like in 2 steps - that would be only the central 4x4 regions, because we don't want noise propagating in from other regions. Now you cut your goal state into 4x4 blocks, look up all associated 8x8 regions, and try to find a solution where the source regions overlap correctly. Still hard, but you can jump over several intermediate steps, which should be a huge time saver.
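Scaled down so the table actually fits in memory, the idea looks like this (a hypothetical sketch: 4x4 tiles and one step, so 2^16 keys; the 8x8 two-step version above would need 2^64 keys, so it would want hashing or canonicalisation tricks rather than a flat dict):

```python
from itertools import product

def life_step_cell(grid, r, c):
    """Forward B3/S23 rule for one interior cell of a small grid."""
    n = sum(grid[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1)) - grid[r][c]
    return 1 if n == 3 or (grid[r][c] and n == 2) else 0

# Map every 4x4 tile to the 2x2 core it fully determines one step later
cache = {}
for bits in product((0, 1), repeat=16):
    grid = [bits[i * 4:(i + 1) * 4] for i in range(4)]
    core = tuple(life_step_cell(grid, r, c) for r in (1, 2) for c in (1, 2))
    cache[bits] = core
```

Reversing then means searching `cache` for overlapping source tiles whose cores stitch into the goal state.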
Nobody knows that NP-complete problems cannot be solved in polynomial time. It is generally considered unlikely that they could be, but we do not know any reason that NP-complete problems cannot be solved in polynomial time; that is, NP may be no larger than P, which of course would make it equal to P (trivially, all P problems are also nondeterministic-P). All that "NP-complete" means is that a problem can be solved nondeterministically in polynomial time (that is, it is in NP), and also that it is maximally hard: any NP problem can be reduced to it with at most polynomial slow-down. The typical example of such a problem is, of course, SAT. It's no coincidence that we use SAT solvers to attack NP problems - SAT being NP-hard exactly means that a SAT solver can solve any NP problem without significant slow-down. We also know many other NP-complete problems, i.e. many problems to which all NP problems (including each other) can be reduced; it's just that SAT is quite probably the simplest one to target in practice.

Secondly (but similarly), EXP (problems solvable in exponential time) is weakly larger than NP, and it is suspected to be strictly larger. Put another way, NP is closely related to polynomial time (it's about a more general notion of polynomial time than P); it is not really about exponential time per se, although it is trivial that NP problems can be solved in exponential time by the naive brute-force algorithm of checking all possible solutions. But note that P problems, or indeed any easy problems, can also be solved in exponential time - trivial in the same way. That doesn't necessarily mean exponential time is the fastest we can do for NP problems, and it doesn't mean that all exponential-time problems can be verified / solved nondeterministically in polynomial time (they probably can't all be).
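In symbols, the known containments (standard results, not claims from the video):

```latex
P \;\subseteq\; NP \;\subseteq\; EXP,
\qquad
P \;\subsetneq\; EXP \quad \text{(time hierarchy theorem)} .
```

Since P is strictly smaller than EXP, at least one of the two containments must be strict; we just don't know which one.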
I made a “Pong Shader” where going through either goal just incremented the entire screen to the location where the score was displayed correctly. It wasn’t actually allocated. The score was just a set of rules defining its coordinate display.
You wouldn’t believe how hard I got downvoted for saying that SHA isn’t reversible but people are trying after someone on Reddit told me GoL wasn’t reversible
@@AlphaPhoenixChannel SHA is reversible - by brute force _lovingly slaps GPU compute server_ That's why you salt your strings. You can reasonably brute force one rainbow table, but you can't brute force all of them.
@@hammerth1421 It's not reversible. At best you can take your pick of any of the infinitely many inputs that could have produced that SHA. Unless the input length is limited - but even then, once you reach about half the length of your hash output, you're very likely to hit a collision.
Have you considered a wave-function collapse in causality space? Populate the 2D prediction list with all possible 3x3 starter grids for the current state, solve for overlap in the 3x3 grids (as graphs), and then run wave-function collapse from a random starting point to see if it has a solution.
Might actually work! But on the other hand WFC gets itself into situations where it needs to backtrack multiple times with a lot of rule sets, especially complicated ones, so I'm not too optimistic. Maybe a larger overlap area could do the trick? Maybe you could even use dynamic programming for that? I think I'd have to test it out
I worked on a similar problem, the difference being that my solution did not have to be exact, just similar. I used genetic programming but didn't get anywhere. Maybe I will revisit it with new ideas.
I love your explanations and visualizations! Your passion is clear! I've been trying to improve my intuition of latent/configuration spaces, and explanations like this are always welcome!
I know it ain't gonna help for doing many steps in parallel, but in general it looks like it could be GPU-accelerated... Is this something Z3 already leverages? Or at least maybe SIMD?
@@GerinoMorn if you're literally just trying all the solutions, that's parallelization with zero loss. they can each work on a subset. the problem is that you double the number of cores you need each time you add a pixel to the solution space
@AlphaPhoenixChannel But the 16,000 CUDA cores in an RTX 4090 can't hurt. I'm not sure if the ~2.2 GHz clock would slow you down too much. With the 5090 and friends likely coming at some point in the not-too-distant future, more compute is on the way.
Because something noisy devolves into something periodic and self-sustaining, it cannot be "undone" using an algorithm. It is similar to square-rooting a number: there are two possible answers to that one, but for this one, there could be an indefinite number of solutions.
So, non-injective functions... now the fun part... This can be applied to JPEG compression as well, which is an esoteric paper I want to write one day. Some configurations can't be reached, so they can only ever be an initial setup - meaning there is some order to this too.
@@AlphaPhoenixChannel Think about it this way: there are fewer possible compressed images than uncompressed sources, so you eventually have to map two slightly different images to the same compressed one. It's sort of a form of noise reduction. JPEG isn't deterministic, since you can choose the quantization tables arbitrarily (which means some JPEGs look better than others, even if they are the same resulting file size/entropy), but you should be able to do the reverse and find a whole set of images that compress to the same result. Now one fun idea would be to figure out all of these (I assume mostly noise). But the more interesting idea would be finding a more appropriate compressed image you want... and then finding its input. You might think that all compressed versions are the best possible ones, but that's not quite true. Things like text compressed with JPEG look horrible, but by adding extra lines you can improve how the compressed result looks by targeting a specific compression table. Maybe think of it like the extreme patterns that were needed in near-UV lithography? Or people putting thin cavities in their 3D models so the slicer adds internal walls for additional support. Since JPEG is meant to be visually lossless, the mapping won't be uniform.
6:15 my intuition would be this is because going forward there is only one possible branch to proceed, but going backwards there could be multiple branches that lead to the same result. The fact that the pattern you wish to design repeats infinitely in the future should be the first giveaway that there should be multiple paths of possibility into the past - otherwise the only way to generate an infinitely repeating pattern would be to start with an infinitely repeating pattern.
YOU underestimated this? My lecturing fingers were all ready to go for nothing. For everyone else: it's the exact thesis of Stephen Wolfram's entire life that you can't do this (I'm a fan, but poor guy, he'd never heard of Church-Turing, or long division, or the weather ;)
I remember when my work gave me a task: construct a Revit macro which finds the most efficient way to make an assembly of a target length (or as close to it as possible) given a set of pre-manufactured pieces of set lengths, and place them in the model. Come to find out, after a few hours of work and some searching, that this is a known NP-complete problem called the subset-sum problem (or a close derivative of it). It was both infinitely frustrating to find out the problem you've been struggling to optimize is computationally intractable, and relieving to know that it is not your fault you cannot find a solution. Both feelings hit when you realize all you can deliver is your incomplete product, with a note that says "do not use this if the length is greater than x, or you will spend years processing."
Loved the video. I love optimization, so I guess that is no surprise. But I think you failed to explain (the first step of) why the rules in Conway's game of life are so hard to run backwards. It seems like if you have a live pixel you could calculate one step back for the 8 pixels around it. And even if you get 2 or 3 possible configurations that could have birthed that pixel, you could just make a "tree" and explore all solutions. That might end up with millions of configurations to be calculated, depending on the number of steps back, but at least all the configurations you are trying will return to your target solution. Basically what I am trying to say is that you "fail" to show the viewers what you explain from 5:51 to 6:23. Not saying that I could do any better 😂. I know this to be true, but honestly I do not "understand" in detail why it is true. And even if I did, I don't think it is possible to explain in a youtube comment without being able to show simple examples. I know it would have made the video even longer, but as the video is now, the viewer just has to accept this as being true, and I think that is a little disappointing considering how well you explain the solution space over the next 15 minutes. To be fair, I don't remember ever seeing a satisfying explanation; I only have a "feel" for it from trying the problem myself. (I tried finding starting solutions that would just keep going, and quickly realized it was a much harder problem than I could have ever imagined, and I haven't touched the game since. A burnt child dreads the fire 😂)
I think a great way to deal with NP-complete problems is to use a neural net conditioned on fitness, but check it against the same neural net conditioned instead on a novelty search, and then train it based on that. That way, it puts equal priority on trying new things and on getting to the goal. It's like adding random noise, except the noise is smart.
\*chuckles* "You would think there's a different set of rules that advances it backwards in time..." No... no, I wouldn't. Welcome to The Arrow of Time. Also the reason why, even though integration and differentiation are basically inverse operations to each other, you cannot actually recover the full detail by differentiating and then integrating (you lose information in the form of any constant terms, which all go to zero in the derivative). Also the reason I think physics needs to let go of either: the conservation of information, or locality.
The fun bit is that reality is almost the opposite, i.e. it is much easier to 'predict' the past than the future. There are multiple possible pasts, but not many, whereas the number of possible futures is near infinite even on very short time scales. This is why we remember the past but not the future. So a life form in Conway's game of life would actually 'live' in the opposite time direction from us and 'remember' what we see as its future.
Eh? I don’t think that’s correct… Maybe you are referring to non-determinism? At the least, I’m fairly sure that entities in GoL can’t “remember the future”, because there is a communication speed limit in GoL , limiting how quickly one thing can influence another thing. You can’t respond ahead of time to a glider that would hit you in 100000 steps because the information that it is headed towards you hasn’t reached you yet. To remember something, one must be able to do something differently because of that remembering. As for the non-determinism, I think that like, it probably balances out equally as far as how undetermined the past and future are? Or… Hm. If you have some prior distribution over the state at time 0 ( i.e. P(S_0)) , and have a conditional probability distribution of P(S_t | S_{t-1} ) for each t… hm, I guess we could describe the amount of uncertainty as the entropy of the distribution? So, I guess the question is about how H(S_t | S_{t-1} , S_0) relates to H(S_{t-1} | S_t , S_0 ) ? Err.. what exactly was the question I’m trying to answer???
@@drdca8263 I could be thinking about it wrong, but in GoL, the future is absolutely determined. It can be predicted with 100% accuracy. Whereas the past is not. There are many possible past states that could have lead to the current one. In real life it is past determinism (the ability to accurately predict what states lead to the current one) that allows us to remember. But on further thought I can't actually think of a mechanism for 'recording the future' in GoL so maybe I am wrong. Maybe past determinism allows for memory but isn't the only requirement.
I am fairly sure though that a highly complex entity in GoL could calculate what its future would be, but could not possibly calculate what its past was, although keeping a 'memory' of the past in a finite region might actually be a possibility. Although on small scales it is really hard to predict the prior states, on large scales it might actually be much easier for specific configurations.
@@TimothyWhiteheadzm The future states in GoL are completely determined by the current entire state while the past states aren't, yes, but a finite-sized being in GoL can't know the entire state of the GoL world. In reality, there is the idea that time evolves under unitary time evolution, in which case both the past and future are determined by the state at any one moment… except possibly for non-determinism in measurements. There is disagreement about how to interpret measurements. ("Is it fundamentally non-deterministic, or is the non-determinism just a subjective thing?") But the projections involved… I think they might produce the same kind of uncertainty about the past as they create about the future? Like, if something is in state (|0>+|1>)/sqrt(2) and it is measured in the basis |0>,|1>, there's a 50% chance of it ending up in state |0> and a 50% chance of |1>. But suppose from the current state you know that it was just measured whether something was in state |0> or state |1>, and the result was |1>; even suppose you know that there was a 50:50 chance there. Well, from that, you can't tell if the state before the measurement was (|0>+|1>)/sqrt(2) or (|0>-|1>)/sqrt(2) (or (|0>+i|1>)/sqrt(2), for that matter). With unitary time evolution, "information is conserved"; I suspect that the non-determinism of measurement has to essentially "erase" the same amount of information as it "creates".
@@TimothyWhiteheadzm Huh, your third comment didn't show up for me until now. YouTube is weird. If the GoL world is infinite (or merely much bigger than the entity living in it) and the entity's internal state can be influenced from the outside, then it cannot know its future internal states, because it cannot know what will influence it in the future. Different things far away which it doesn't know about would influence it in different ways, resulting in different futures (even though only one of the possibilities for the far-away part of the current state is actually true, and so what will happen is determined.)
Wavefunction collapse algorithms are essentially a greedy algorithm with backtracking. Given the same SAT problem, take the variable with the fewest still-admissible states, pick one, and repeat from there. When you have potentially many valid solutions and just need to find one (such as when you're doing an aesthetic tiling), then this works out pretty well. The typical example is tiling a game board, where your new cell to fit might have water on one side, a forest on another, and you have several options to choose from. However, Conway's game of unlife does not have that property; it's very constrained since a cell can be only alive or dead. A cell will either be forced by its neighbours (it _must_ be alive or dead, with only one option), or it has two still-unresolved outcomes. There's no obvious way to choose which cell is next-best to consider, so the greedy approach of wavefunction collapse doesn't help very much.
@@Majromax Yes, fair. But what if you take the 3 by 3 grid around a cell as a possible cell state? I think this could work, but I don't have the courage to try it out.
@@Majromax But isn't that the goal? Overlapping grids mean they depend on each other. If you start comparing two overlapping grids, you'll find there are only so many states these grids can take such that both remain consistent with their cells' present states, either dead or alive. If you repeat this for every overlap (in case there exists a prior state to the current state of the cellular automaton), you'd be left with a distribution of possible states for every grid. Now you can start collapsing.
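A bare skeleton of the collapse loop this thread describes (generic and untested; `propagate` and the domain sets stand in for a real constraint model, and a real implementation would backtrack rather than give up):

```python
import random

def collapse(domains, propagate):
    """domains: dict cell -> set of still-admissible states."""
    while any(len(d) > 1 for d in domains.values()):
        # greedy choice: the undecided cell with the fewest admissible states
        cell = min((c for c in domains if len(domains[c]) > 1),
                   key=lambda c: len(domains[c]))
        domains[cell] = {random.choice(sorted(domains[cell]))}
        if not propagate(domains):        # a contradiction: some domain emptied
            return None                   # real WFC would backtrack here
    return {c: next(iter(d)) for c, d in domains.items()}
```

As noted above, GoL's binary cells give the greedy heuristic very little to grip, which is why the pure SAT route tends to win here.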
Thanks dude, you're helping me a lot with this video. Thanks to you I've now found a few new paths into a logic problem I've had for some years now.
As always, this strikes such a good balance: complex, simplified, but not so much that all the nuance of how it's complex is lost. Can't wait for the follow-up!
13:06 Whenever I have to think about 4D space, I like to think of it as a thickness to our 2D shells. Let me explain: You and I see 3D space, but if you look at an object, we really only see a 2D "texture map" wrapped around the 3D object. Counting the removal of layers (like you're looking past them) as a step, you now have 4D movement: up/down, left/right, forward/backward, and in/out! Another way to reach this, via induction: A 1D person can only see a 0D point in the direction they are facing. A 2D person can take a sweep of 0D points to see a 1D line warped in 2D space. A 3D person (Hi!) can take a sweep of 1D lines to produce a 2D image (the texture map) warped in 3D space. A 4D person must, then, take a sweep of 2D images to produce a 3D object warped in 4D space. If you convert the 2D layer you can see, plus a bunch of layers under it (generated with imagination), into a stack and then flip through it rapidly, you are probably seeing what a 4D person sees.
13:00 thank you so much! I have watched countless videos of people explaining higher dimensional spaces over the years and this finally clicked for me!
Rule 110 is much more impressive to me. Conway's Game of Life has 2D nearest-neighbour rules, which are much less constrained than the 1D nearest neighbours of Rule 110. Yet Rule 110 is not only Turing complete, it's also P-complete! You almost literally can't compute anything much better than Rule 110 can, with all its limitations!
The SATisfiable section reminded me of one of my favorite board games, called Décorum. Essentially, it's a party game where everyone is roommates. Each player has specific needs to be happy, and you take turns changing the furniture to make yourself happy, all while simultaneously keeping everyone else happy. Here's the problem: every move you make, your roommates can only say if they like it or hate it. No reason why. I'd highly suggest giving it a try if you have any friends.
Z3 sounds like a modern version of the genetic algorithm concept we covered in AI back when I took the course ~30 years ago. GAs were mighty slick. You needed to express the problem as a topography and then provide a function that, given a point on the topography, would indicate how good a fit it was to the desired solution. The GA did the work of traversing the topography and avoiding foothills. Mainly it started with a set of random guesses and then did a combination of "walking uphill" and creating new guesses by combining a couple of the more promising existing guesses (thus the "genetic" part of the name) that might be even more promising. It was, at the time, one of the best and most flexible ways to solve the foothill problem.
Amazing video! I've been playing with the Game of Life a lot, generating chords used in my music, and now I know I can decide that I want the last chord to be one specific chord, which is really useful!! Now I don't know if I will, because it seems to be really hard, but… maybe one day 😅
About 10 years back, a friend and I installed a light system in a building where I worked that used each window as one pixel (16x6 windows on a corner of the building). We'd actually first made one of these back in 2007 on a dormitory in our university days, but I digress. The later system, which ran for a few years, allowed not only predefined animations but also plugins: small programs running the display for a set time, like 20s or so. We even had a competition where people sent in their plugins, and some were really good in visual terms. I contributed a few of my own, and one of them was a Game of Life ;-) I think someone also sent one in with a different colour scheme etc.
Thank you for this fascinating journey into the world of NP problems; I've now started to understand what the NP problem is about. And your determination to search for a solution in this "dark room", where it isn't even known whether there's a "cat" in it, is admirable! Good luck and strength to keep up these kinds of searches, and thanks for the video!
This was so cool! I took a course in uni that included SAT solvers and I always thought they were really cool, but this is the first time I've seen someone using one in the wild. I didn't really understand how you encoded the pixels into numbers, with rules and everything, but I really liked the video! Subscribed :)
3:55 It shouldn't be easy, especially when you consider patterns that disappear into nothingness, meaning an infinite number of patterns can lead to the exact same result.
Your "configuration hyperspace" reminds me of concepts I was grappling with about 20 years ago when I started learning about bioinformatics. The solution space for some of the problems are stupidly big with way too many dimensions. After working with the math for a while, I started getting a feel for imagining hyperspaces with many dimensions, similar to what one would do when you simply imagine objects in 3 dimensional space. The mind bending thing that happened, was that I started having dreams in hyper dimensional spaces... No halucinagenic drug could ever produce the dreams that math made me experience. Be careful when doing math kids!
I had a feeling that was coming. Just because you find a state which becomes your goal when played forward doesn’t mean that state has a valid prior state which can create it in turn! It’s a whole additional layer of phase space to work through… Nested phase spaces. Ouch!
Why couldn’t you invert the rules and check the possibilities that meet those rules?
Bro just made a whole series about entropy and thermodynamics and now he’s surprised that reversing something like Game of Life is hard? :D
It is kinda unintuitive that a system can be deterministic in only one direction...
@@Windows__2000 Not really, if there are multiple ways to undo a step but only one way to go forward.
Or just create a measure of entropy
@@Windows__2000 A little bit, but it's the idea behind cryptographic hashing and RNGs and a lot of CS stuff.
@Windows__2000 Hashing is the fundamental principle of privacy on the internet, and in computers in general. Hashing is crazy simple in one direction, while being practically impossible to reverse.
Large-prime encryption methods work in a similar way. However, the encryption is easily reversible if you have the private prime key. Computing the private key (factoring a huge number) is believed to be infeasible, though it's not actually known to be NP-hard.
Gotta love NP problems; P vs NP is literally one of the Millennium Problems.
Actually it's not as simple as that. Entropy can be decreased locally at the expense of an even bigger global increase. So it could, in principle, be that you'd be able to rewind GoL at the expense of some other increase in entropy. (Not the case with GoL, however.)
Surprised you didn't show a solved Rubik's Cube and ask the viewer to try to figure out what unsolved state it began from
lol I love that
On step -1 of solving the cube you could only rotate slices along a single spatial direction, on step -2 along another (or the same), etc. This kinda decreases the field of possibilities.
Perfect analogy!
@@DeadJDona But a Rubik's Cube solve could've taken 20+ moves
@@HolmesHobbies Is it? Every starting state is a valid one for a solved Rubik's Cube.
"we did it not because it was easy, but because we thought it would be easy"
(Naive developer discovering that the algorithm they are trying to reverse is not bijective)
Bijective or not doesn't necessarily make much of a difference for how hard it is to compute.
E.g. mapping between positive integers and their prime factorisations is bijective, but one direction is hard to compute.
Bijectivity doesn't imply ease of computation in each direction, but might a lack of bijectivity imply that the inverse image is hard to find, for some general enough class of functions?
@@imsleepy620 I don't think that's true either. As a simple example, consider the problem "for a source and a sink node, find the shortest path between them". There may be multiple shortest paths, but it's solved easily. The inverse "given a shortest path, find the source and the sink" is completely trivial, and has only one solution.
@@gabrielfonseca1642 You're right, revisiting my comment I realize I was definitely wrong.
Computer scientist here.
If you're just using the SAT solver built into Z3 (as opposed to SMT constraints), you should really instead be using a specialized SAT solver like CaDiCaL or Kissat. They're much, much faster. >1 week for one SAT solve at the problem scale you're dealing with is almost always unnecessary. The annual SAT competition does a good job keeping track of what's currently fastest.
If you're actually using SMT constraints in Z3 (like you were previously with integers), you should almost certainly switch to CVC5. It is also much faster than Z3 in almost all cases (save for some theories that you won't use).
Also, you almost certainly should not be doing a manual local search for multiple steps like you are now. It should be very straightforward and fairly efficient to encode multiple steps of your constraints at once into SAT (or SMT); a rough sketch of a one-step encoding follows this comment. Then you don't implement manual backtracking (which I'm assuming is what you're doing currently with your "edens") - that's already what DPLL does (and hence what CDCL emulates).
I'm not sure what your constraint formulation looks like, but also see if you can add any "symmetry breaking constraints". It shouldn't be too hard to find good resources on why these are useful and what they look like.
Not super important, but the thing you're calling a "phase space" is what would be called a "reconfiguration graph" in my field, since it's a graph over the space of configurations whose edges correspond to a certain type of "reconfiguration move" (in this case, flipping the colour of a grid point). There are some much cooler reconfiguration graphs. Fun fact: the simplex method for linear programming is a greedy reconfiguration-graph traversal.
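To make the encoding suggestion above concrete, here is a rough one-step sketch in Z3's Python API (this is not the video's actual code; the board size, the assumed-dead border, and the blinker target are all made up for illustration). Chaining more "prev" layers in the same solver gives multiple steps in a single solve:

from z3 import Bool, Solver, And, Or, AtLeast, AtMost, sat

N = 8  # small board; cells outside it are assumed dead
prev = [[Bool(f"p_{x}_{y}") for y in range(N)] for x in range(N)]

def neighbours(x, y):
    return [prev[i][j] for i in range(x - 1, x + 2) for j in range(y - 1, y + 2)
            if (i, j) != (x, y) and 0 <= i < N and 0 <= j < N]

def alive_next(x, y):
    ns = neighbours(x, y)
    exactly2 = And(AtLeast(*ns, 2), AtMost(*ns, 2))
    exactly3 = And(AtLeast(*ns, 3), AtMost(*ns, 3))
    # Conway's rules: born with exactly 3 neighbours, survives with 2 or 3.
    return Or(exactly3, And(prev[x][y], exactly2))

target = [[0] * N for _ in range(N)]
target[3][2] = target[3][3] = target[3][4] = 1  # goal state: a blinker

s = Solver()
for x in range(N):
    for y in range(N):
        s.add(alive_next(x, y) == bool(target[x][y]))
print(s.model() if s.check() == sat else "Eden (no one-step ancestor)")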
The most successful algorithm I’ve tried takes a number of steps backwards at once - as many as I can until it hits a computational limit of multiple days. I’d be very interested to know if there’s a solver optimized for inequalities - my best-performing single-step iterative solver was seeking to leave as few live cells as possible.
I know it’s not kosher, but I colloquially call phase spaces, configuration spaces, and parameter spaces the same thing
@@AlphaPhoenixChannel You're much further along than I thought if you have a multi-step implementation, nice!
For inequalities, ILP solvers are often recommended, but in my experience they don't perform very well on these sorts of highly combinatorial problems. Z3 has an optimizer built-in that I'm sure you've already found, but it is extremely slow, even on much smaller problems than what you're dealing with.
Since actually finding the absolute minimum is a very strong constraint (requiring either exhaustion or a lower bound, or some combination of the two), you might get better results by relaxing it to some heuristic bound on the number of living cells - say, at most 10% more living cells than what you started with.
Any chance you're planning to publish your code? I'm curious to see if there are low-hanging improvements.
Also, if academic publications in overlapping math/cs ever interest you, this is exactly the kind of thing the folks at jcdcggg really like.
@@procedurecall4219 I had luck with the SMT solver using an inequality to limit the number of live cells and slowly climbing with that limit until I got a solve. It didn’t work well (ground to a halt) for multi-step solves where I tried to limit the number of live cells in the final step, and I couldn’t come up with a way to limit the number of live cells once I went full Boolean (aside from just giving it a small bounding box)
That 10% more limit is startlingly close to what I was actually able to attain lol
"Conways Game of Death" completely fair name because it kills the developer
Covid 19 killed the developer (Conway)
He survived this round.
Agreed.
During the video I considered "what if you just checked every possible combination", but it turns out that with a 10 by 10 board, if each combination takes one microsecond, it'll take 40 quadrillion years to check them all.
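For anyone checking the arithmetic: a 10 by 10 board has 2^100 ≈ 1.3×10^30 configurations; at one per microsecond that's about 1.3×10^24 seconds, and at roughly 3.2×10^7 seconds per year that works out to about 4×10^16, i.e. 40 quadrillion years.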
That's the weird thing about problems like this. There's almost always a solution that's intuitive. "Well of COURSE it would work like this!"
Then you try it out and it fails almost immediately and you're left scratching your head like "how? how did that possibly fail??"
I have to admit, it's that fact that keeps me endlessly fascinated even about math I can barely comprehend :D
That false-solution thing is also the mechanism behind a lot of "Why didn't evolution lead to X!" conversations. You basically never go down a hill, even if the evolutionary mountain on the other side is four times as high. This creates feedback loops where animals keep terrible systems, because in order to improve they'd have to "unwind" first.
Meteors are the "unwind"
The maths term is a "local maximum", which might not be as high as the "global maximum" (the largest peak)
Yep, it's also what causes convergence, as certain local maxima are really massive in the phase space of selective pressure solutions, and thus regardless of how inefficient they might be, they continuously pull in more species.
(Crabs aren't the ultimate body plan for most niches, but they're a Very attractive one due to how much their traits reinforce each other)
My favorite fact is the laryngeal nerve needing to make a detour down around the heart before going back up to the neck. It'd be nice for the nerve to connect the right way from the start, but there's no practical way to disconnect it and reconnect it elsewhere during embryo development. Making it longer is way easier, which resulted in the poor giraffe not being able to make proper sounds.
Yeah, the common one I hear is: apart from flagella, why don't we see more wheels in nature for movement? Everything that is not-quite-a-rotating-wheel is bad, so getting there requires walking down too far in phase space relative to the competition.
Another problem which is NP-hard is placing conveyor belts in Factorio.
Someone made a modified SAT solver that just makes custom belt balancers for you.
got a link by chance?👀
@@DackelDelay I concur
link plz
Been playing Factorio for a while now and I think I've mastered, or at least understood, most systems, but compact belt balancers are just black magic to me. I just grab a blueprint online and pray to the Omnissiah it keeps working fine.
Dude I NEED that
I'm a software engineer, and I used to work with data that required n-dimensional arrays. At first I was really confused trying to imagine a hypercube of cubes in my brain. As you've noted, it's extremely difficult. One day I was staring across my office at my file cabinet, and I realized that when dealing with arrays, grids & cubes are just an abstraction.
There’s no reason to conceptualize arrays in dimensions, it’s much easier to imagine them as containers. Like squares -> grid -> cube -> drawer -> cabinet -> storage room -> building -> etc. It’s not a complex shift in metaphor, but for me it made the notion of distance between cells much easier to understand.
I use the same logic when thinking about configuration space. Containers of containers of containers of containers. It’s surprisingly effective!
Same for me. I wondered how to solve a place-time problem, and while thinking about higher dimensions I stumbled upon the realization that it is simply a matter of simplifying the complex system and separating each of the dimensions until we get something the human brain can manage to comprehend. Basically shadow projections. I have it noted somewhere (as this was a passion project more than anything), but transforming each dimension to a lower one takes some amount of calculation, translating between dimensions is a matter of reversing the calculations, and communicating between the created layers of lower-dimensional objects is a matter of finding the correct path. Somehow I thought about optimal path-cost adjustment, but sadly it was quite some time ago and I need to revise my notes.
I get SO frustrated with other people's videos because they DON'T understand the problem they're looking at to the depth you do and they don't explain it. Thank you for making the first video I've seen in a long time that lacks the hand waving of incomprehension.
I really love how gerrymandering is so much easier computationally than reversing GoL states.
Like, oh, making this cool pattern pop out? insanely hard, computationally intensive.
Abusing loopholes in a democratic system leading to political corruption? yeah dude, it's a walk in the park
the reason it is easier is because we are just looking for a "good" solution; we are okay with reaching false summits with our gradient ascent algorithm
with reversing GOL, we are specifically looking for a *perfect* solution
so yeah, as it turns out, election rigging is computationally easy! :D
Relevant XKCD: 1425
Get rid of the "representative" part, nobody can represent a group
@@Dong_Harvey Well, I'm convinced. No need for any context for your remark or data or that pesky "proof". Just a bold, clear, and very inaccurate statement.
I'm SOLD!!!
The N always stands for Nondeterministic. NP is the class of problems whose solutions can be *verified* in polynomial time, and the NP-complete problems are the NP-hard ones that are also in NP. It's an open problem (with a million dollar prize!) to prove or disprove whether all the problems verifiable in polynomial time can also be decided in polynomial time. So, polynomials are still very much involved!
(Note: the qualifier of "NP-hard in NP" is necessary, because there are undecidable problems that are NP-hard: the Halting problem is famously undecidable, but it's trivially easy to reduce any NP problem to the halting problem: simply prepend a string representing a Turing machine that solves that NP problem. Likewise, if the N actually stood for "non" then the class would include the halting problem as well.)
While that's *true* (and I would assume AlphaPhoenix knows it is), I read that "Not Polynomial" more as a simplification for viewers who might be less involved in computer science; as all simplifications do, it leaves out some detail, but since we're not here to solve a Millennium Problem, we might as well treat NP problems as not solvable in polynomial time.
It's easier to just say NP-complete, and only say NP-hard if you actually mean to include EXP and beyond.
@@3211learza The problem with using "Not" instead of "Nondeterministic" is that you actually make a statement that's kind of false if you use an equivalent definition of NP: the class of problems decided by a nondeterministic Turing machine in polynomial time. To say it's not polynomial is wrong without being more specific.
Heartbreaking: someone has solved your non-trivial problem but used completely different terminology you were unaware of, meaning that if you'd just known what you didn't know you didn't know, you would have saved yourself endless headache.
Care to enlighten?
@@AlphaPhoenixChannel I mean to say, I didn't know about SAT solvers and it could have saved me a lot of time lol
@@Axolotine There are people who have spent years writing custom solvers for problems that are easily handled by SAT solvers. It is the worst-marketed field of computer science.
I'm incredibly chuffed at the recent trend of not deciding that "this video idea takes way too long to arrive at the final stage to be worth a video" and instead adapting it to "this incredibly simple looking thing took me incredible effort, let me explain to you the surprising reasons why".
A lot of my favourite channels have been putting out partial successes or outright failures and creating absolutely wonderful pieces of content.
24:10 To be completely precise, it is not known whether NP-complete problems require super-polynomial time to solve (though it is widely believed that they require exponential time, and all current SAT solvers run in worst-case exponential time, with heuristics to hopefully be fast in easy cases). Just one of those open problems in computer science.
Arguably it's _the_ open problem in computer science. It's the most famous one: P=NP.
In addition, although the entire class of SAT problems is NP-complete, that does not mean any one instance of a SAT problem cannot be solved in polynomial time. In fact, it's quite easy to come up with subsets of SAT problems that can be solved in polynomial time (e.g. those that can be separated into smaller, fixed-size SAT problems because variables do not overlap).
So it is very well possible that the problem of finding the previous state in game of life, expressed as a SAT problem, could be solved in polynomial time, even if it was proven that the entire class of SAT problems could not be solved in polynomial time.
@@Frank01985 reversing game of life is NP-hard. You can do Turing machines in game of life, hence you can also do boolean circuits in game of life, and if you try to reverse those you're essentially solving SAT.
@@jonasmaier6631 That makes sense when considering reversing more than one step. I wonder if this still holds for reversing just one step into one feasible solution (instead of enumerating all solutions). A boolean circuit will need more steps to compute a more complex circuit. I could see it being the case that reversing one step can be done in polynomial time, but considering the combinatorial explosion of all possibilities that need to be searched for multiple reverse steps, that that is not feasible in polynomial time.
@@Frank01985 Yeah he reduced game of life into SAT, which doesn't say anything about game of life's complexity. But I don't think the point of the video was to show reversing game of life is NP-hard. Other people pointed him to the SAT solver already knowing the problem was NP-hard. I think solving even one time step is NP-hard. If it was solvable in polynomial time, I don't see why you couldn't just iteratively solve it to get the desired starting configuration "n time steps ago" (I could be wrong though).
At 10:00 I noticed something interesting. Traversing diagonally across a face always flips the same two pixels. Look at the front face: the top two pixels change along each diagonal, but the bottom one is never affected. Along the top face, diagonals flip the left and bottom pixels, but never the right one. Similarly, traversing the 3D diagonal flips all 3 pixels. A 4D diagonal would flip 4 pixels, etc. This means a path of length sqrt(n) always flips n pixels, making a path much easier to plot.
You could imagine a large image with huge chunks of pixels being the wrong color. Instead of navigating through configuration space in cardinal directions only, you could take a shortcut by pre-calculating and caching the diagonal that corresponds to flipping that entire chunk. Maybe divide the whole pixel array into smaller square chunks for faster processing, etc, etc.
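A minimal sketch of that caching idea, with the packing and sizes made up purely for illustration: treat a precomputed chunk-flip as a single XOR mask, so one "diagonal move" flips the whole chunk at once.

def chunk_mask(width, x0, y0, x1, y1):
    # Bitmask with 1s over the rectangle [x0, x1) x [y0, y1).
    mask = 0
    for x in range(x0, x1):
        for y in range(y0, y1):
            mask |= 1 << (x * width + y)
    return mask

board = 0b0000_1111_0000_1111     # a 4x4 board packed row-by-row into an int
move = chunk_mask(4, 0, 0, 2, 2)  # cached "diagonal" for the top-left 2x2 chunk
board ^= move                     # one move of length sqrt(4) = 2 that flips 4 pixels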
Also, AFAIK Z3 doesn't have any progress bar. Did you find that at all anxiety-inducing? Like, if you had stopped the solve 1.9 weeks after starting it, maybe you would have missed a solution that you could have found with just one more day of solving.
I don't even remember what it's solving right now, but there's been one going for like the past month that I haven't wanted to shut down. It's got to be close... right??
It's in the nature of the fact that the solving involves random guessing: you can't know how close to a solution you are. Some pure SAT solvers like Kissat can give you numbers on how many variables they have left, which essentially shows you the search space that still remains, but even that isn't very useful, because that remaining search space could often take until the heat death of the universe to complete.
99% of z3 users terminate the program right before they find a solution
@@ivanovtv9817 *right before being a relative term in relation to the end of time itself
It has been shown that restarting after some (steadily increasing) time is optimal for certain SAT-solving algorithms. Probably Z3 already does this, though.
It's like trying to reverse a hash.
What season did Barry Allen finally defeat the Reverse Hash?
Reversing a (cryptographically strong) hash is kinda even worse, because there's no metric of how close you are, the only real algorithm is an exhaustive search of the inputs
I searched whether simple hashes are threatened by quantum computing, and apparently not as much as asymmetric keys
@@deltamico I like adding color codes for an added layer of security. My favorite is hash browns
Imagine using a game-of-life simulation as a hash function. It would perform terribly, but it would be super cool XD
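For fun, here is what a toy version of that joke might look like (emphatically not cryptographically sound; the board size, round count, and packing below are all arbitrary choices):

def gol_step(cells, size):
    # One Game-of-Life step on a bounded size x size board (outside counts as dead).
    nxt = set()
    for x in range(size):
        for y in range(size):
            n = sum((i, j) in cells
                    for i in range(x - 1, x + 2) for j in range(y - 1, y + 2)
                    if (i, j) != (x, y))
            if n == 3 or (n == 2 and (x, y) in cells):
                nxt.add((x, y))
    return nxt

def gol_hash(data: bytes, size=16, rounds=32):
    bits = ''.join(f'{b:08b}' for b in data)
    cells = {(i // size, i % size) for i, bit in enumerate(bits[:size * size]) if bit == '1'}
    for _ in range(rounds):
        cells = gol_step(cells, size)
    return sum(1 << (x * size + y) for x, y in cells)  # pack the final board into an int

It "performs terribly" exactly as predicted: many inputs quickly settle into the same still lifes or die out entirely, so collisions abound.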
Did you hear about Golly during this investigation? It runs cellular automata extremely quickly because of a method called Hashlife: it memoizes computed blocks to advance larger and larger patterns far faster than evaluating the rules cell by cell. You can use it to look backwards by looking forwards; unless there's an *incredibly* improbable situation that can never happen under a certain sized starting condition, it will find it.
I was wondering if you could do something like Hashlife too. It would start with all the ways to reverse a 2*2 configuration to get some 4*4 configurations, then try following back only the parts in the end image and matching the overlaps together to build up the ways to run the needed 4*4 configurations backwards 2 steps to get 8*8 configurations, then match up the edges of those (by the recursive hash) etc. I expect the hash tables would soon get impractically huge though.
For much of the video, I was wondering why there would not be a way to precompute all (k+1)×(k+1) automata as a way to stitch together a k×k solution. If the 2^((k+1)^2) table ends up being too large for memory, then a variation of rainbow tables could be used, albeit with the problem that many states evolve to "everything dead", and for large k there are many states that are translations of each other that should ideally be coalesced. (A small-scale sketch of building such a table follows this thread.)
It would indeed help a lot for reversing large configurations when you have some canonical preceding configuration (e.g. there are a lot of gliders in the figure and you make their ancestor another glider), but it wouldn't help with the hard part of the problem which is finding some ancestor in the first place.
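Here's a brute-force, small-scale sketch of the table-building idea from this thread: 4x4 blocks bucketed by the 2x2 centre they step forward to. The full 8x8 -> 4x4 version would need 2^64 entries, hence the rainbow-table-style tricks mentioned above.

from itertools import product

def centre_after_step(block):
    # One GoL step of a 4x4 block; only the inner 2x2 is reliable, since the
    # border cells depend on unknown neighbours outside the block.
    out = []
    for x in (1, 2):
        for y in (1, 2):
            n = sum(block[i][j] for i in range(x - 1, x + 2)
                    for j in range(y - 1, y + 2)) - block[x][y]
            out.append(1 if n == 3 or (n == 2 and block[x][y]) else 0)
    return tuple(out)

preimages = {}
for bits in product((0, 1), repeat=16):  # all 65,536 4x4 blocks
    block = tuple(tuple(bits[r * 4:(r + 1) * 4]) for r in range(4))
    preimages.setdefault(centre_after_step(block), []).append(block)
# preimages now maps each 2x2 patch to every 4x4 block that produces it;
# the hard part, as noted above, is stitching overlapping blocks consistently.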
Have you looked at zero-suppressed decision diagrams? They "break down" (by running out of memory) more often than SAT solvers, but when they work they enable you to ask questions about the entire space of solutions (instead of just getting one of them, as SAT would, making you generate them one-by-one with extra constraints). For example, you can ask how many solutions there are, get a list of how many solutions exist for each number of "live" cells, or find the solutions with the most or fewest live cells.
A ZDD is an efficient* way to encode solutions to a boolean function, but you still need to calculate the solutions, which I think is a significant portion of the difficulty here.
*ZDDs are efficient in that they encode the solutions in the smallest possible datastructure which, because of boolean logic, also turns out to be maximally efficient to search/query (assuming you can figure out how to write an efficient algorithm to query it for what you want to know, which is part of what SAT helps with in the first place). However, just because it's maximally efficient doesn't mean a complete solution set (or even a sufficiently interesting subset of solutions) will fit within storage/RAM constraints, or be traversable in a reasonable amount of time.
The solutions the SAT solver finds could be used to build up a ZDD, perhaps incrementally, for efficient searching later? There's a software package called CUDD that helps working with ZDDs. Anyway, it's interesting to think about how it might be useful in this context. A few caveats may limit its usefulness:
* Because building a ZDD presupposes having solutions to encode into it, if finding the raw solutions is the primary difficulty, then ZDD's usefulness may be limited
* And while a ZDD is a highly efficient datastructure, whether you can express the precise query you want to ask in a form that runs efficiently, depends on whether you can reuse existing efficient queries or figure out how to craft your own, which can be non-trivial.
@@Mikee512 You wouldn't create the ZDD by finding all the solutions and then converting them to a ZDD, though (you could, but it's not efficient, as you identified); you'd build it by melding together ZDDs representing the individual constraints.
Oh yeah! My favorite Blokus tile is the pentomino that's, like, a 2x2 square with another square attached to it. I don't really think it's the *coolest* one but, it's pretty unique all things considered so it ends up being my favorite
I always thought that was kind of a weird lump shape. Granted, when you can use it perfectly to flatten a rough wall, it's extremely satisfying.
so the P-pentomino.
Also, the "r" mentioned earlier in the video is actually more commonly called the F-pentomino
@fuuryuuSKK It's almost always referred to as the R-pentomino in the context of Conway's Game of Life, as Conway had his own names for the pentominoes.
I gotta go with the C-pentomino
@@GDTerietto Pretty sure that one's commonly called the U?
This reminds me of something. A while ago a buddy and I made something similar using Langton's ant. We used multiple ants at different starting positions to "encrypt" a message.
The world the ants reside in was basically a 2D-grid representation of the input message bytes. They then would do their thing for many iterations.
Just a fun project, nothing serious. But it was nice to see your message emerge from “random” noise :D
Never thought about how one could crack it, but it would not surprise me if it's relatively easy lol
That's amazing, how did you implement it?
You know you've watched too much This Old Tony when the music at 7:09 reminds you of TIG welding and drilling holes
Thanks Brian, I love this video so much! I didn't know something like Z3 existed. This is like telling me "Did you know: if you wiggle your arms like that, you can fly?"
It's funny, because my first instinct when I saw this was "Oh, this is basically Minesweeper, that's an NP-complete problem we are pretty good at solving, you should just convert it to that to solve it", and after messing around with configuration space for a bit (making me worried I was doing it wrong), you eventually decided "this is basically the SAT problem, I'll just convert it to that to solve it". I basically had the solution right off the bat, but we picked different NP-complete problems to use. I wonder if one is better/more efficient?
That's the thing about NP-Completeness, they're all basically the same problem just phrased a little differently. If you could solve one in polynomial time, you could apply the same algorithms to all of the problems in NP-Complete.
There has probably been a lot more work put into SAT solvers. They have competitions and investment from industries that use it. Also most computer scientists are familiar with boolean logic, and can probably figure out how to encode their problem into it. I wouldn't even know where to begin encoding something into minesweeper.
Awesome video and explanation!! This reminds me of my research area of inverse problems, where one aims to reconstruct the object of interest from indirect measurements.
As an example, in Computed Tomography (CT), we reconstruct the image of the imaged target from x-rays projected through it. The information about the inner structures is encoded into the measured x-rays, which have been attenuated in proportion to the attenuation of the different tissues (very high in bones, for example, and lower in soft tissues). Commonly, the inverse problem is formulated using a model for the x-ray attenuation, known as the forward model, and the sought-after image is obtained by iteratively minimising the difference between the measured data and the predicted data produced by applying the forward model to the estimated image. Often, however, the problems are ill-posed, meaning that the solution might be non-unique, have many local maxima/minima, or be badly affected by noise. In these cases we often employ regularisation, which imposes assumptions on the solution (such as smoothness) and makes the problem less ill-posed and more feasible to solve.
Although this problem is completely different from the ones I'm accustomed to, there are many parallels. The forward model is the algorithm that runs the Game of Life, the indirect measurements (the measured x-rays in CT) are the pattern you start from, the sought-after image (the CT image) is the board you are looking for, and problems with optimisation (local minima/maxima) are common. Very cool video!
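For readers who haven't met inverse problems before, here is a toy version of the iterative scheme described above: recover x from measurements y = A @ x by gradient descent on the data misfit, plus a small Tikhonov regularisation term since the problem is ill-posed. All sizes and constants are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 64))   # forward model, e.g. discretised projections
x_true = rng.random(64)
y = A @ x_true                  # the measured data

lam, step = 0.1, 1e-3
x = np.zeros(64)
for _ in range(5000):
    grad = A.T @ (A @ x - y) + lam * x  # misfit gradient + regulariser
    x -= step * grad
# x is now a regularised estimate of x_true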
The Game of Life is Turing complete - assuming you have enough grid space... does that mean solving this is a limited-space halting problem?
Also, a solution to SAT: "I can't get no satisfaction, 'cause I try and I try and I try" - description of brute-force SAT solving strategies [Jagger, Richards, et al.; published on "Out of Our Heads", 1965, available online]
What a great application of lyrics 😂
> the game of life is Turing complete
This only matters if you're running the game of life forwards for an indefinite number of steps (& an unbounded grid space). Trying to reverse a finite game of life configuration by a single step is in NP, so it can be solved using a SAT solver, as practically demonstrated in this video.
This is going backwards, not forwards. Comparing it to the halting problem, this is more like looking at the "end state" of the halting problem, in this case a program which never halts, and asking what input gets there.
It's not the halting problem. It's related to the existence of one-way functions, but the "reverse game of life" problem might be easy even if one way functions exist.
The reason is that reductions rely on encoding Turing machines into the game of life, but you can reverse into states that aren't encoded Turing machines, which then won't give you the inverse of the one-way function.
27:48 In practice we encode such "sum" constraints for SAT solving in a different way that's much more succinct; see Tseitin's encoding (tiny example below this thread)
Oooooh
I tried to find something like this and didn’t know what to google! Thanks
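For anyone else googling: the core trick is introducing a fresh variable for a subformula (the same trick that cardinality/"sum" encodings like sequential counters are built on). A tiny hand-worked example, with integers as DIMACS-style literals (positive = variable, negative = its negation):

a, b, t = 1, 2, 3
clauses = [
    (-t, a),      # t implies a
    (-t, b),      # t implies b
    (-a, -b, t),  # (a and b) implies t
]
# These three clauses force t <-> (a AND b). Doing this for every gate turns
# an arbitrary circuit into CNF with only linear blowup, instead of the
# exponential blowup you'd get from distributing ANDs over ORs.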
5 minutes into the video I was screaming at the screen USE SAT SOLVERS! Glad you went there eventually, and I hope you had a wonderful eureka moment :)
I saw the thumbnail, thought about it for about a minute, and then said "this is just SAT, isn't it": boolean variables (the cells) with constraints, and you want to see if there is a satisfying assignment to those variables.
This has given me a really good understanding of configuration space and p vs. np problems. I’ve always wondered about p vs. np problems but I could never find an explanation I found intuitive. Also, I can’t imagine just how useful configuration space will be to me, being able to think and compute higher dimensions! Thank you for this video, it’s awesome!
4:00 It's impossible to compute THE previous state. Easy example: an empty board. Was the previous state an empty board too? 1 cell? 2 cells aligned but spaced by 2 empty ones? Or was the space between them 3? 4? 99999? Graham's number? TREE(3)?????? We can't know.
Good example
33:33
"Eden found" did the computer just die and ascend to heaven?..
The main lesson from this video seems to be that you should use the right tool for the job 😄
SAT problems are "too discrete" for a gradient descent algorithm. Arbitrary changes don't result in predictable increases in the result space as there's no real continuous output of a SAT-solver that can tell you how close you are to a solution. The number of requirements met is not a useful metric here, a single requirement may prevent you from solving a problem even if you hit all other requirements. So any notion of 'derivative' or 'gradient' doesn't really make sense for these types of problems.
A couple of times you referred to the configuration space as "continuous", but it really isn't continuous at all. Even in the city example you're dealing with discrete cells and boolean assignments. The parameters you used to divide the city may have been continuous, but the output is discrete, which should've been the first hint that gradient descent may not have been the best tool for the job. The parameter space also excluded many valid solutions, such as a checkerboard pattern, that would've evenly divided the city. Unless you're certain that your simplifications in parameter space don't exclude the valid solution you're looking for, that's almost always a bad idea (though I understand that was a simplified example to make a point 🙂).
I really wanted to see someone try a neural network approach to this. I feel like "deep intuition" may help traverse the messy configuration space.
True, it's not continuous, but it is at least Lipschitz.
The checkerboard pattern isn't simply connected, however. Among the simply connected solutions, I would have liked to see those containing either 1 or 3 city quadrants.
'SAT solvers' in general seem like some of the most useful frameworks humans have come up with for formulating problems. Most of the 'work' in any model or problem-solving is just formulating the problem in a way that computers can apply their 'dumb' brute-force processing to; there's something very deep about this reformulation that I think we've been uncovering ever since Turing lit the spark 80 years ago.
I once heard a very clever computer scientist say:
We programmers do not choose what we work on because it is easy. We choose it because we thought it would be easy!
FINALLY, an answer to the burning question I've been asking myself for decades: whether it's more difficult to reverse Conway's Game of Life or to gerrymander the state of North Carolina
I bet reddit still thinks it can't be done after watching this video.
"He didn't ACTUALLY do it, he simply ..."
3:47 As Brian found out, that's a specious question. The fact that GoL has stable loops in its finite-state machine means there's no deterministic way to find a unique pattern that would produce any particular result. For example, the standard solid 2×2 block configuration could arise as the next step from a very large number of permutations of states of even just the surrounding few cells, or… another 2×2 block. … 33:12 👍 This video is speeding into NP-completeness territory. … 23:30 👍
34:19 Yeah, that makes sense, because the target pattern will be made from a LARGER pattern, which itself will be made from a larger pattern, and so on. You might get lucky and have a target pattern (or intermediate pattern) that can be made with gliders and spinners and such, which can reduce the size of the parent pattern, but most patterns will usually be made from physically larger non-repetitive patterns.
That is extremely abstract, but I love it. Tickles my brain. I don't get this sort of thing at my work (site reliability engineer), but this reminds me of the cool science classes I loved in higher ed. I love these vids, thank you for making them :)
Just did a university homework on local search yesterday. The algorithm that selects the best next step based on a heuristic from a neighborhood of nodes is called the hill climbing algorithm. Even if finding the optimal solution is NP-complete, finding a solution might not be, with a metaheuristic like hill climbing with random restarts (sketched after this comment). But of course there is no guarantee that a solution is found in a reasonable time, especially if the solution space is very sparse (which seems to be the case here).
Not trying to say that anything is wrong in the video, just find it cool that this video popped in my feed with perfect timing and wanted to share.
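A minimal version of hill climbing with random restarts, since it's quick to write down (the bit-flip neighbourhood and the score function here are just stand-ins):

import random

def hill_climb(n_bits, score, restarts=100):
    best, best_s = None, float("-inf")
    for _ in range(restarts):  # random restarts help escape local maxima
        state = [random.randint(0, 1) for _ in range(n_bits)]
        s = score(state)
        improved = True
        while improved:
            improved = False
            for i in range(n_bits):  # neighbourhood: single bit flips
                state[i] ^= 1
                s2 = score(state)
                if s2 > s:
                    s, improved = s2, True  # keep the improving flip
                else:
                    state[i] ^= 1           # undo it
        if s > best_s:
            best, best_s = list(state), s
    return best

As noted above, with a solution space as sparse as reverse-GoL, the score landscape gives almost no gradient to climb, which is why the SAT approach wins here.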
Around the 7:30 mark, when you are explaining Configuration Space, the music is quite loud, to the degree that I think it becomes an obstacle to cognition. Love your videos, not hating on you or them, just wanted to mention this small spot that I think can be improved.
grr, hate that
Yep. music behind an explanation is really distracting. I didn’t enjoy that segment.
26:06 Hexagons *are* the Bestagons after all.
Videos like this can destroy somebody's entire year!
I'm glad you were able to set the project down every now and then.
Putting Alice and Bob outside of a cryptography problem should be considered a war crime
😁
You sound like Mallory!
Using exile as punishment is now a crime under international law (similar to stripping Alice and Bob of their citizenship to Cryptography land).
They know what they did. Or maybe did not.
6:03 This seems similar to antiderivatives, where data (namely any constant term) is lost when taking the derivative and cannot be retrieved, which is why with antiderivatives you add a symbol representing an arbitrary constant, such as c. To make it more concrete: generally, if a*b=c then c/b = a. But if you multiply a number by 0 it becomes 0, and you can't use 0/0 to return it to its original value.
This is also why 0/0 is undefined rather than 0, 1, or infinity.
Even when you look at the rules:
0 divided by any number is 0
Any number divided by itself is 1
And you can make an equation that uses limits to basically "prove" that 0/0 = any number
I've found it's very common for Redditors to over-generalise their knowledge. They might have read "solving P for all configurations of X is impossible in general" and will translate that, in their mind, to "solving P for any configuration of X is impossible". This video is a great example of the fact that 99% of the time you don't need to solve the general problem to find a solution that satisfies a more constrained subset of the problem!
Yeah, it's Reddit.
A bunch of idiots with slightly more knowledge than most.
I guess that's known as "over-fitting"
Well, power to you! Discovering that your question is equivalent to SAT was already further than I would have gotten. When someone asks me “can you go backwards in the algorithm”, my usual answer is “no” without even thinking about it.
0:52 This is called the F-pentomino, and its mirror image is called the E-pentomino
26:05 by hexagonalizing the banquet table problem you've made it CGP-complete.
To me this sounds like a perfect inverse problem: how do we take a current state and work our way backwards to find an initial state, given that the initial state has pretty much infinite possibilities and there are infinitely many solutions. Inverse problems are pretty heavily studied, at least in geophysics, and hence there are some solid algorithms for them. That might be another approach which could make this problem much faster to solve.
My first software job was at the company that convinced Microsoft to open-source Z3 :) we used it to find bugs in aerospace designs
Oh when I started the video I thought "I hope he's using a SAT solver" and I wasn't disappointed hehe :-)
24:35 NP-hard doesn't mean "not polynomial time" and NP-complete doesn't mean "it can be solved in exponential time". I know this isn't a CS channel but I wish you were a little more precise at explaining these concepts the first time for people who might not already be familiar with them.
The visualisation of (CDCL) SAT solvers as crossing out whole swathes of solution space where only suboptimal solutions exist, is really good! Thanks for this insight.
I know those aren't the rigorous definitions, but they are true statements. I went back and forth a LOT about that segment trying to say what needed to be said relevant to this video (and not too much), but not be incorrect.
Edit:
The verbiage I used to describe NP complete in the video was “at worst exponential time”, which is a very precise way to phrase it.
Claiming that NP is “not polynomial time” in the graphic and saying “no known algorithm to solve in polynomial time” is also I think a very precise way to phrase it. The graphic annotation is practical and the part I said out loud was exact
@@AlphaPhoenixChannel Well, they're not entirely true, no. NP-hard doesn't imply "not polynomial time", it only implies "we don't know a polynomial time algorithm to solve this (and probably never will)". NP-complete does imply "can be solved in exponential time" but so does NP. So does P. etc. It just doesn't feel like a good explanation for why SAT in particular can be used to solve this problem. I think a better explanation would talk about how NP problems have easy to verify solutions, and how this problem has easy to verify solutions (just run the solution forward one step & check that it matches the desired output), and then link that to a brief discussion of NP-complete & SAT solvers (which you kinda already have in the video).
@@typeswitch Meh, I think that for all intents and purposes NP-hard does kind of imply "not polynomial time".
Sure, it doesn't technically imply "we can prove that it's not polynomial time", but "there is no polynomial time algorithm" and "there is no known polynomial time algorithm and it's almost certain that there never will be" are basically equivalent in colloquial use (and I'd argue they are definitely equivalent in the Bayesian sense).
I saw a *really* cool video recently that rephrased NP vs P as "Can you reverse every polynomial algorithm in polynomial time"
The moment I saw the title I went "of course running it backwards is hard, it's NP!"
When you opened with this video and started to explain the project, my initial thought was, oh this is like backtracking algorithms to solve sudoku, and then immediately was reminded of SAT solvers like Z3.
As much as I love math(s), the tables problem makes me want to say, "Shut up and eat."
There's a lot of overlap with the gradient ascent stuff and evolutionary biology. Comes into play a lot in genetic engineering, specifically enzyme engineering
Maybe this is why entropy and causality go in one direction, and we call it time.
My approach would be to cache e.g. all 8x8 regions and what their future looks like two steps later - that would be only the central 4x4 region, because we don't want noise propagating in from outside the region (each step shrinks the trustworthy area by one cell per side). Now you cut your goal state into 4x4 blocks, look up all the associated 8x8 regions, and try to find a combination whose overlaps agree. Still hard, but you can jump over several intermediate steps, which should be a huge time saver.
Nobody knows that NP-complete problems cannot be solved in polynomial time. It is generally considered unlikely that they could be, but we do not know of any reason why not; that is, NP may be no larger than P, which of course would make it equal to P (trivially, all P problems are also nondeterministic-polynomial). All that "NP complete" means is that the problem can be solved non-deterministically in polynomial time (that is, it is in NP), and also that it is maximally hard. A maximally hard problem is one that any NP problem can be reduced to (with at most polynomial slow-down); that is what "NP hard" means. The typical example of such a problem is, of course, SAT. It's no coincidence that we use SAT solvers to solve all sorts of NP problems - SAT being NP-hard exactly means that a SAT solver can solve any NP problem without significant slow-down. We also know many other NP-complete problems, i.e. many problems in NP that every NP problem can be reduced to, including each other. It's just that SAT is quite probably the simplest.
Secondly (but similarly), EXP (problems solvable in exponential time) is weakly larger than NP, and it is suspected to be strictly larger. Put another way, NP is closely related to polynomial time (it's about a more general notion of polynomial time than P); it is not really about exponential time per se, although it is trivial that NP problems can be solved in exponential time by the naive brute-force algorithm of checking all possible solutions. But note that P problems, or indeed any easy problems, can also trivially be solved in exponential time. None of that means exponential time is the fastest we can do for NP problems, and it doesn't mean that all exponential-time problems can be verified / solved non-deterministically in polynomial time (they probably can't all be).
I made a "Pong shader" where going through either goal just incremented the entire screen to the state where the score was displayed correctly. The score wasn't actually allocated anywhere; it was just a set of rules defining its coordinate display.
This reminds me of trying to get a file from its hash.
You wouldn't believe how hard I got downvoted for saying "SHA isn't reversible, but people are trying" after someone on Reddit told me GoL wasn't reversible
@@AlphaPhoenixChannel SHA is reversible - by brute force _lovingly slaps GPU compute server_
That's why you salt your strings. You can reasonably brute force one rainbow table, but you can't brute force all of them.
@@hammerth1421 It's not reversible. At best you can take your pick of any of the infinitely many inputs that could have produced that SHA.
Unless the input length is limited; but even then, once you reach about half of the length of your hash output, you're very likely to hit a collision.
@@AlphaPhoenixChannel I expect your problem is that you use "reversible" like a _physicist,_ and math people hate that.
Have you considered a wavefunction collapse in causality space (populate the 2D prediction list with all possible 3x3 starter grids for the current state), solve for overlap in the 3x3 grids (as graphs), and then run wavefunction collapse from a random starting point to see if it has a solution?
Might actually work! But on the other hand, WFC gets itself into situations where it needs to backtrack multiple times with a lot of rule sets, especially complicated ones, so I'm not too optimistic. Maybe a larger overlap area could do the trick? Maybe you could even use dynamic programming for that? I think I'd have to test it out.
I worked on a similar problem, the difference being that my solution did not have to be exact, just similar. I used genetic programming, but didn't get anywhere. Maybe I will revisit it with new ideas.
I love your explanations and visualizations! Your passion is clear! I've been trying to improve my intuition of latent/configuration spaces, and explanations like this are always welcome!
2^36 states × 1 ms ≈ 6.9×10^7 seconds ≈ 800 days; divided by 16 cores that's ≈ 50 days, so about 2 months. Just saying.
Best case, don't forget parallelization overhead.
And the fact we’ve got a LOT more than 36 pixels to consider
I know it ain't gonna help for doing many steps in parallel, but in general it looks like this could be GPU-accelerated... Is this something Z3 already leverages? Or at least maybe SIMD?
@@GerinoMorn If you're literally just trying all the solutions, that's parallelization with zero loss: each core can work on a subset. The problem is that you double the number of cores you need each time you add a pixel to the solution space.
@AlphaPhoenixChannel But the 16,000 CUDA cores in an RTX 4090 can't hurt. I'm not sure if the ~2.2 GHz would slow you down too much. With the 5090 and friends likely arriving at some point in the not-too-distant future, more compute is coming.
Because something noisy devolves into something periodic and self-sustaining, it cannot be "undone" using an algorithm. It's like square-rooting a number, where there are two possible answers - except here there could be an indefinite number of solutions.
So, injective functions... now the fun part... this can be applied to JPEG compression as well.
Which is an esoteric paper I want to write one day.
Some configurations can't be reached, so they can only ever be an initial setup, meaning there is some order to this too.
hahahaha I didn't know there were jpeg Edens
@@AlphaPhoenixChannel Think about it this way: there are fewer possible compressed images than uncompressed sources.
So you eventually have to map two slightly different images to the same compressed one. It's sorta a form of noise reduction.
Jepg isn't deterministic, since you can chose the quantization tables arbitrarily (which means some jpegs look better than others, even if they are the same resulting file size/entropy). but you should be able to do the reverse and find a whole set of images that compress to the same result.
Now one fun idea would be to figure out all of these (I assume mostly noise). But the more interesting idea would be finding a more appropriate comrpessed image you want.... And then finding it's input.
You might think that all compressed versions are the best possible ones, but that's not quite true. Things like text compressed with JPEG look horrible. But by adding extra lines, you can improve how the compressed result looks by targeting a specific quantization table. Maybe think of it like the extreme patterns that were needed in near-UV lithography? Or people putting thin cavities in their 3D models so the slicer adds internal walls for additional support.
Since JPEG is meant to be visually lossless, the mapping won't be uniform.
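A toy sketch of where the many-to-one step happens: after the DCT, each coefficient is divided by a quantization-table entry and rounded, so nearby inputs collapse to one stored value (the divisor and coefficient values below are made up for illustration):

```python
q = 16  # one entry of a hypothetical luminance quantization table

def quantize(coeff: float) -> int:
    # JPEG stores round(coefficient / table entry); the remainder is lost
    return round(coeff / q)

print(quantize(116.0), quantize(118.0))  # both -> 7: two inputs, one output
print(7 * q)                             # decoding returns 112 either way
```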
incredible work on this video, you have made a major, meaningful contribution to educating the world about this!
Gotta love redditors. Never ask reddit for advice with anything if you want to be optimistic about it in any way.
6:15 My intuition would be that this is because, going forward, there is only one possible branch to proceed down, but going backwards there could be multiple branches that lead to the same result.
The fact that the pattern you wish to design repeats infinitely in the future should be the first giveaway that there should be multiple paths of possibility into the past - otherwise the only way to generate an infinitely repeating pattern would be to start with an infinitely repeating pattern.
YOU underestimated this? My lecturing fingers were all ready to go for nothing. For everyone else, it's the exact thesis of Stephen Wolfram's entire life that you can't do this (I'm a fan, but poor guy, he'd never heard of Church-Turing, or long division, or the weather ;)
I think I've been nerd sniped on whether or not Brian's comment has a dangling parenthesis.
I remember when my work gave me a task: construct a Revit macro that finds the most efficient way to build an assembly of a target length (or as close to it as possible) from a set of pre-manufactured pieces of set lengths, and place them in the model.
Come to find out, after a few hours of work and some searching, that this is a known NP-complete problem called the subset-sum problem (or a close derivative of it).
It was both infinitely frustrating to find out that the problem you've been struggling to optimize is computationally intractable, and relieving to know that it is not your fault you cannot find a solution. And it is both when you realize all you can deliver is your incomplete product, with a note that says "do not use this if the length is greater than x, or you will spend years processing."
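Worth noting that subset-sum has a pseudo-polynomial dynamic program, so it's only hopeless when the target (in integer units) gets big. A minimal sketch, with made-up piece lengths:

```python
def closest_subset_sum(pieces: list[int], target: int) -> int:
    # Classic subset-sum DP: track every total reachable without
    # exceeding the target, using each piece at most once.
    reachable = {0}
    for p in pieces:
        reachable |= {s + p for s in reachable if s + p <= target}
    return max(reachable)  # best achievable length not exceeding target

print(closest_subset_sum([30, 45, 60, 90], 100))  # -> 90
```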
Loved the video. I love optimization so I guess that is no surprise.
But I think you failed to explain (the first step of) why the rules in Conway's Game of Life are so hard to run backwards.
It seems like, if you have a live pixel, you could calculate one step back for the 8 pixels around it. And even if you get 2 or 3 possible configurations that could have birthed that pixel, you could just make a "tree" and explore all solutions. That might end up with millions of configurations to be calculated, depending on the number of steps back, but at least all the configurations you try will return to your target solution.
Basically, what I am trying to say is that you "fail" to show the viewers what you explain from 5:51 to 6:23.
Not saying that I could do any better 😂. I know this to be true, but honestly I do not "understand" in detail why it is true. And even if I did, I don't think it would be possible to explain in a YouTube comment without being able to show simple examples.
I know it would have made the video even longer, but as the video stands, the viewer just has to accept this as being true, and I think that is a little disappointing considering how well you explain the solution space over the next 15 minutes.
To be fair, I don't remember ever seeing a satisfying explanation; I only have a "feel" for it from trying the problem myself. (I tried finding starting solutions that would just keep going, quickly realized it was a much harder problem than I could ever have imagined, and have not touched the game since. The burnt child dreads the fire 😂)
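For anyone who wants that "feel" without writing much code, here is a minimal brute-force version of the tree search described a few comments up: enumerate every candidate board and keep the ones that step forward into the target. The wrapping edges and the tiny 4x4 board are simplifications to keep it runnable:

```python
from itertools import product

N = 4  # board side; there are 2**(N*N) candidates, so keep this tiny

def step(board):
    # One forward step of Conway's rules on an N x N torus
    nxt = []
    for r in range(N):
        row = []
        for c in range(N):
            live = sum(board[(r + dr) % N][(c + dc) % N]
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if (dr, dc) != (0, 0))
            row.append(1 if live == 3 or (board[r][c] and live == 2) else 0)
        nxt.append(tuple(row))
    return tuple(nxt)

def predecessors(target):
    # Try all 2**(N*N) boards; yield the ones that evolve into target
    for bits in product((0, 1), repeat=N * N):
        cand = tuple(tuple(bits[r * N + c] for c in range(N)) for r in range(N))
        if step(cand) == target:
            yield cand

empty = tuple((0,) * N for _ in range(N))
print(sum(1 for _ in predecessors(empty)))  # how many boards die in one step
```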
I think a great way to deal with NP-complete problems is to use a neural net conditioned on fitness, but check it against the same neural net conditioned on a novelty search instead, and then train it based on that. That way, it puts equal priority on trying new things and on getting to the goal. It's like adding random noise, except the noise is smart.
Bestagonal, I immediately paused. 26:05
haha, nice!
We all share a common space in the Internet ✨
Even with high entropy!
Same, came here to see if anyone else heard it.
\*chuckles* "You would think there's a different set of rules that advances it backwards in time..."
No... no, I wouldn't.
Welcome to The Arrow of Time.
Also the reason why, even though integration and differentiation are basically inverse operations of each other, you cannot actually recover the full detail by differentiating and then integrating (you lose information in the form of any constant terms, which all go to zero in the derivative).
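A one-line worked example of the lost constant:

$$\frac{d}{dx}\left(x^{2}+5\right) = 2x = \frac{d}{dx}\left(x^{2}\right), \qquad \int 2x\,dx = x^{2}+C$$

Both originals map to the same derivative, so integrating can only hand back the family x^2 + C; the 5 is unrecoverable.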
Also the reason I think physics needs to let go of either the conservation of information or locality.
The fun bit is that reality is almost the opposite, i.e. it is much easier to "predict" the past than the future. There are multiple possible pasts, but not many, whereas the number of possible futures is near infinite even on very short time scales. This is why we remember the past but not the future. So a life form in Conway's Game of Life would actually "live" in the opposite time direction from us and "remember" what we see as its future.
Eh? I don’t think that’s correct…
Maybe you are referring to non-determinism?
At the least, I'm fairly sure that entities in GoL can't "remember the future", because there is a communication speed limit in GoL, limiting how quickly one thing can influence another thing. You can't respond ahead of time to a glider that would hit you in 100000 steps, because the information that it is headed towards you hasn't reached you yet. To remember something, one must be able to do something differently because of that remembering.
As for the non-determinism, I think that like, it probably balances out equally as far as how undetermined the past and future are?
Or…
Hm.
If you have some prior distribution over the state at time 0 (i.e. P(S_0)), and a conditional probability distribution P(S_t | S_{t-1}) for each t…
hm, I guess we could describe the amount of uncertainty as the entropy of the distribution?
So, I guess the question is how H(S_t | S_{t-1}, S_0) relates to H(S_{t-1} | S_t, S_0)?
Err.. what exactly was the question I’m trying to answer???
@@drdca8263 I could be thinking about it wrong, but in GoL the future is absolutely determined. It can be predicted with 100% accuracy, whereas the past is not: there are many possible past states that could have led to the current one. In real life, it is past determinism (the ability to accurately infer what states led to the current one) that allows us to remember. But on further thought, I can't actually think of a mechanism for "recording the future" in GoL, so maybe I am wrong. Maybe past determinism allows for memory but isn't the only requirement.
I am fairly sure though that a highly complex entity in GoL could calculate what its future would be, but could not possibly calculate what its past was, although keeping a 'memory' of the past in a finite region might actually be a possibility. Although on small scales it is really hard to predict the prior states, on large scales it might actually be much easier for specific configurations.
@@TimothyWhiteheadzm The future states in GoL are completely determined by the current entire state while the past states aren’t, yes, but a finite sized being in GoL can’t know the entire state of the GoL world.
In reality, there is the idea that time evolves under unitary time evolution, in which case both the past and future are determined by the state at any one moment… except possibly for non-determinism in measurements.
There is disagreement about how to interpret measurements. (“Is it fundamentally non-deterministic, or is the non-determinism just a subjective thing?”)
But, the projections involved…
I think they might produce the same kind of uncertainty about the past as they create about the future?
Like, if something is in state (|0>+|1>)/sqrt(2) and it is measured in the basis |0>, |1>, there's a 50% chance of it ending up in state |0> and a 50% chance of |1>,
… but, suppose from the current state you know that it was just measured whether something was in state |0> or state |1> , and the result was |1>,
even suppose you know that there was a 50:50 chance there.
Well, from that, you can't tell if the state before the measurement was (|0>+|1>)/sqrt(2) or (|0>-|1>)/sqrt(2) (or (|0>+i|1>)/sqrt(2), for that matter).
With unitary time evolution, “information is conserved”,
I suspect that the non-determinism of measurement has to essentially “erase” the same amount of information as it “creates”.
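A quick numerical check of that point (a small numpy sketch; the two states are the ones from the example above):

```python
import numpy as np

# (|0>+|1>)/sqrt(2) and (|0>-|1>)/sqrt(2) as amplitude vectors
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)

for state in (plus, minus):
    probs = np.abs(state) ** 2  # Born rule: outcome probability = |amplitude|^2
    print(probs)                # [0.5 0.5] both times: the phase is erased
```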
@@TimothyWhiteheadzm Huh, your third comment didn't show up for me until now. YouTube is weird.
If the GoL world is infinite (or merely much bigger than the entity living in it) and the entity’s internal state can be influenced from the outside, then it cannot know its future internal states, because it cannot know what will influence it in the future. Different things far away which it doesn’t know about would influence it in different ways, resulting in different futures (even though only one of the possibilities for the far-away part of the current state is actually true, and so what will happen is determined.)
This is a captivating topic and an amazing presentation, thanks for putting in so much effort!
Am I mistaken, or can't you solve this problem with a wave function collapse algorithm?
Wavefunction collapse algorithms are essentially a greedy algorithm with backtracking. Given the same SAT problem, take the variable with the fewest still-admissible states, pick one, and repeat from there. When you have potentially many valid solutions and just need to find one (such as when you're doing an aesthetic tiling), then this works out pretty well. The typical example is tiling a game board, where your new cell to fit might have water on one side, a forest on another, and you have several options to choose from.
However, Conway's game of unlife does not have that property; it's very constrained since a cell can be only alive or dead. A cell will either be forced by its neighbours (it _must_ be alive or dead, with only one option), or it has two still-unresolved outcomes. There's no obvious way to choose which cell is next-best to consider, so the greedy approach of wavefunction collapse doesn't help very much.
@@Majromax Yes, fair. But what if you take the 3x3 grid around a cell as a possible cell state? I think this could work, but I don't have the courage to try it out.
@@magie3490 Your problem then is that large grids would be overlapping, so you're not solving independent constraint problems.
@@Majromax But isn't that the goal? Overlapping grids mean they depend on each other, meaning if you start comparing two overlapping grids, you'll find out that there are only so many states these grids can take such that both are still able to produce their cell's present state, either dead or alive. If you repeat this for every overlap, then, in case there exists a pre-state to the current state of the cellular automaton, you'd be left with a distribution of possible states for every grid. Now you can start collapsing.
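For comparison, here is what handing those same constraints straight to a solver can look like. A minimal sketch of one backward step in Z3's Python API (Z3 is the solver mentioned elsewhere in the thread; this particular encoding is my own guess, it treats everything outside the window as dead, and the blinker target is just an example):

```python
# Ask Z3 for any predecessor of a small target board.
from z3 import Bool, Solver, Sum, If, Or, And, Not, sat, is_true

N = 8
target = [[0] * N for _ in range(N)]
target[3][3] = target[3][4] = target[3][5] = 1  # horizontal blinker

# One boolean per cell of the unknown previous board
cells = [[Bool(f"c_{r}_{c}") for c in range(N)] for r in range(N)]

s = Solver()
for r in range(N):
    for c in range(N):
        live = Sum([If(cells[r + dr][c + dc], 1, 0)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0)
                    and 0 <= r + dr < N and 0 <= c + dc < N])
        # Conway's rule: alive next step iff 3 neighbours, or alive with 2
        alive_next = Or(live == 3, And(cells[r][c], live == 2))
        s.add(alive_next if target[r][c] else Not(alive_next))

if s.check() == sat:
    m = s.model()
    for row in cells:
        print("".join("#" if is_true(m.evaluate(cell)) else "." for cell in row))
```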
Thanks dude, you are helping me a lot with this video. Thanks to you, I have now found a few new paths toward solving a logic problem I've had for some years now.
It's actually simple: just reverse the animation. 🙈 ... joking
As always, this strikes such a good balance: complex, but simplified, and not so much that all the nuance of how it's complex is lost. Can't wait for the follow-up!
No wonder it took so long, you used Python!
13:06 Whenever I have to think about 4D space, I like to think of it as a thickness to our 2D shells. Let me explain:
You and I see 3D space, but when you look at an object, you really only see a 2D "texture map" wrapped around the 3D object. Counting the removal of layers (like you're looking past them) as a step, you now have 4D movement: up/down, left/right, forward/backward, and in/out!
Another way to reach this, from an induction method:
A 1D person can only see a 0D point in the direction they are facing.
A 2D person can take a sweep of 0D points to see a 1D line warped in 2D space.
A 3D person (Hi!) can take a sweep of 1D lines to produce a 2D image (the texture map) warped in 3D space.
A 4D person must, then, take a sweep of 2D images to produce a 3D object warped in 4D space.
If you convert the 2D layer you can see, plus a bunch of layers under it (generated with imagination), into a stack, and then flip through it rapidly, you are probably seeing something close to what a 4D person sees.
13:00 thank you so much! I have watched countless videos of people explaining higher dimensional spaces over the years and this finally clicked for me!
Rule 110 is much more impressive to me.
Conway's Game of Life has 2D nearest-neighbor rules, which is much less constrained than the 1D nearest neighbors of Rule 110. Yet Rule 110 is not only Turing complete, it's also P-complete!
Despite all its limitations, you almost literally can't compute anything much more efficiently than Rule 110 can!
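For reference, the entire update rule fits in a few lines; the rule number's binary digits are the lookup table (the ring size and seed below are arbitrary choices):

```python
def rule110_step(row: tuple[int, ...]) -> tuple[int, ...]:
    # New cell = bit (left*4 + self*2 + right) of the number 110
    n = len(row)
    return tuple(
        (110 >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
        for i in range(n)
    )

row = (0,) * 20 + (1,)  # a single live cell on a 21-cell ring
for _ in range(10):
    print("".join(".#"[x] for x in row))
    row = rule110_step(row)
```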
It was an interesting thing to discover that "Blokus tile" pattern as a kid.
The SATisfiable section reminded me of one of my favorite board games. It's called Décorum. Essentially, it's a party game where everyone is roommates. Each player has specific needs for them to be happy, and so you take turns changing the furniture to make yourself happy, all while simultaneously keeping everyone else happy. Here's the problem: every move you make, your roommates can only say if they like it or hate it. No reason why. I'd highly suggest giving it a try if you have any friends.
Z3 sounds like a modern version of the genetic algorithm concept we covered in AI back when I took the course ~30 years ago. GAs were mighty slick. You needed to express the problem as a topography and then provide a function that, given a point on the topography, would indicate how good a fit it was to the desired solution. The GA did the work of traversing the topography and avoiding foothills. Mainly, it started with a set of random guesses and then did a combination of "walking uphill" and creating new guesses by combining a couple of the more promising existing guesses (thus the "genetic" part of the name) that might be even more promising. It was, at the time, one of the best and most flexible ways to solve the foothill problem.
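(Strictly speaking, Z3 is a deterministic SAT/SMT solver rather than a stochastic search, but for anyone curious, the GA loop described above looks roughly like this sketch; the bitstring encoding and the one-max fitness are placeholder choices:)

```python
import random

def evolve(fitness, n_bits=32, pop_size=50, generations=200):
    # Population of random bitstring guesses
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # "walk uphill": rank by fitness
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n_bits)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_bits)       # occasional point mutation
            child[i] ^= random.random() < 0.1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits ("one-max"); should converge to all ones
print(sum(evolve(lambda bits: sum(bits))))
```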
You make some of the best content on the platform. Incredibly informative without being dry
Amazing video! I've been playing with the Game of Life a lot to generate chords used in my music, and now I know that I can decide that I want the last chord to be one specific chord, which is really useful!! Now I don't know if I will, because it seems to be really hard, but… maybe one day 😅
About 10 years back, a friend and I installed a light system in a building I worked in that used each window as one pixel (16x6 windows on a corner of the building). We had actually first made one of these in 2007, on a dormitory back in our university days, but I digress. The later system, which ran for a few years, allowed not only predefined animations but also plugins: small programs running the display for a set time, like 20 s or so. We even had a competition where people sent in their plugins, and some were really good in visual terms. I contributed a few of my own, and one of these was a Game of Life ;-) I think there was also one sent in by someone with a different color scheme, etc.
I have no memory how I stumbled on your channel but I am very, very glad I did. Keep these coming, they make my brain satisfied.
Thank you for this fascinating journey into the world of NP problems; I now understand what the problem with NP is. And your determination to find a solution in that "dark room", where it isn't even known whether there's a "cat" inside, delights me! Good luck and strength to continue searches like these, and thank you for the video!
Seriously underrated YouTube channel. Most of this stuff is way above my intelligence, but I can't stop watching these videos. Super interesting stuff!
This was so cool! I took a course at uni that included SAT solvers, and I always thought they were really cool, but this is the first time I've seen someone use one in the wild. I didn't really understand how you encoded the pixels into numbers, with rules and everything, but I really liked the video! Subscribed :)
3:55 It shouldn't be easy, especially when you consider patterns that disappear into nothingness, meaning an infinite number of patterns can lead to the exact same result.
Your "configuration hyperspace" reminds me of concepts I was grappling with about 20 years ago when I started learning about bioinformatics. The solution space for some of the problems are stupidly big with way too many dimensions. After working with the math for a while, I started getting a feel for imagining hyperspaces with many dimensions, similar to what one would do when you simply imagine objects in 3 dimensional space. The mind bending thing that happened, was that I started having dreams in hyper dimensional spaces... No halucinagenic drug could ever produce the dreams that math made me experience. Be careful when doing math kids!
I had a feeling that was coming. Just because you find a state which becomes your goal when played forward doesn’t mean that state has a valid prior state which can create it in turn! It’s a whole additional layer of phase space to work through… Nested phase spaces. Ouch!
3:47 to 4:00 reminds me of a banner I've seen recently, which reads: "We do this not because it is easy, but because we thought it would be easy". Lol