"Babe wake up, new Paul Merrell procedural generation video just dropped."
lolol.
Nice to finally meet you! Great research, thank you.
Your work is amazing, and your results are impressive!
Impressive work!
Really inspiring stuff!
Your method is very ingenious and it definitely seems like it worked out most of the positive/generative part. All that really remains is figuring out how to model the constraints/rules that steer/converge the generative part into something desirable. Keep up the good work! You're definitely on the right track to something incredibly useful!
In Control Theory there are the dual concepts of Positive and Negative Feedback, all you need is the negative/regulating part.
Thank you!
@@PaulMerrellResearch You're welcome!
Insta-sub, thanks for your great work!
I love that you're using differential geometry to determine which primitives are valid candidates for each other. Takes me back to the turning a sphere inside out days!
Nice! Now I'm curious: what do you mean by turning a sphere inside out?
Just remember to use your powers for good instead of evil.
very cool!
This is great stuff. It reminds me loosely of a class I took at UC Berkeley's School of Environmental Design back in the late 1980s, taught by Horst Rittel. It was more of a seminar on Design Theory, actually, and not what I was expecting when I signed up. I expected the material to cover famous Architects and their philosophies/theories of designing buildings and so forth. After all, you generally look to the range of historical precedents for inspiration.

But Rittel had a completely unconventional approach (at the time). He described a method for solving design problems using a "morphological box" that would essentially list all of the various possible design elements in an array of rows stacked vertically. So, one row would list possible floor plans, another would list window styles, another would list roof profiles, etc. Then, the designer would essentially run through and evaluate all possible combinations, discarding the ones that were nonsensical and grading the ones that could work. Presumably, since all possibilities would be considered, the best options would percolate to the top.

At the time, as a student of design, I had a real problem with this approach, not only because it seemed incredibly tedious for all but the simplest of design problems, but also because it ignored artistic inspiration and individual creativity. But, in retrospect, I think Rittel was simply decades ahead of his time. With the power of modern computers, and clever algorithms such as yours, I suspect approaching design in such a way is more practical than ever, even for complicated scenarios.

I think the type of automation you are investigating is going to be a game changer in a variety of design fields very soon. Providing mechanisms for a designer to influence the automation, directing it in some manner, seems to be a key element that could make this kind of thing very useful. I'll be honest, I thought Rittel was kind of out of his mind.
His approach would be so burdensome to implement on paper for a real-world situation that I felt a little hostile toward it; I mean, it was almost absurd. Nobody in Architecture would actually work that way. But now, with the computer at hand to do the heavy lifting of sorting through the variations, it makes a lot more sense.
It's interesting to look back at the history of this and see that a lot of important theoretical work was done before it was practical to actually use it. I have another paper that focuses specifically on residential architecture and on optimizing those designs: paulmerrell.org/wp-content/uploads/2021/06/floorplan-final.pdf. Certainly, you can see in my work that I agree with the basic philosophy of trying out a huge number of possible combinations. However, I will quibble with the idea of trying "all possible combinations". Even today, this is still impractical for most problems. The number of possible combinations explodes combinatorially, even for many small problems.
@@PaulMerrellResearch Awesome. This paper looks really interesting, and certainly useful for real estate developers and architects, and game developers/filmmakers, too. You are right about the combinatorial explosion, and of course, one of the reasons one might hire a particular architect is that they have the experience, insights, and taste that cut through and avoid all the options that aren't "right", and ideally they produce masterpieces with their signature vibe. I remember reading that Frank Lloyd Wright hurriedly sketched out Fallingwater in a couple of hours because the client was suddenly in town and wanted to drop by and see the progress; Wright hadn't even started working on it since being commissioned months earlier. LOL. But such is the skill of a master who can generate a design in the mind's eye quickly. BTW, how much of your system relates to fractals, if at all? Fractals were a hot topic 25-30 years ago, and maybe they have a renewed practicality here.
@marshallross3373 This is closely related to fractals. One of the main features of fractals is that they are self-similar. Grammars are good at producing self-similar shapes. Here's an example where I use my technique to create something similar to a Koch Snowflake: imgur.com/a/opF0iof. And I could probably get it to work on several other fractal patterns.
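To make the grammar-fractal connection concrete, here is a minimal sketch (not the method from the video, just a classic string-rewriting L-system) showing how a single rewriting rule generates Koch-style self-similarity:

```python
# Minimal L-system sketch: one Koch-curve rewriting rule.
# "F" means "draw forward"; "+"/"-" mean turn by 60 degrees.
# Each iteration replaces every segment with four smaller ones,
# which is exactly the self-similar structure fractals exhibit.

RULES = {"F": "F+F--F+F"}  # the single rewriting rule

def rewrite(s: str, iterations: int) -> str:
    """Apply the grammar rule to every symbol, repeatedly."""
    for _ in range(iterations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

curve = rewrite("F", 2)
print(curve.count("F"))  # 16 segments after two iterations (4^2)
```

Interpreting the resulting string with a turtle-graphics renderer draws one side of the Koch snowflake.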
@@PaulMerrellResearch Very cool. It is amazing what can be generated from a small set of primitives and grammars. Thanks for sharing your research and explorations. I'll definitely dig deeper on this subject. Really fascinating stuff.
Beautiful : }
Your work is very inspiring! One additional thing you could work out in the future is space proportions. Some spaces inside or outside of your generated objects become too thin, for example the too-narrow spaces between skyscrapers, but there are examples in almost every object you presented. If you could obtain constraints on minimum spacing and then generate with them, the final results would be much more believable. Thank you for your work and this presentation; I hope to use this knowledge in the near future.
Thanks, yes, I completely agree. The spacing and proportions are really important and certainly this is something that can be improved.
Sick!
This is awesome! I am curious how this compares to the 3d performance of the model synthesis algorithm (I think this was NP hard and required back tracking), and if you could compare the two algorithms and the models they can create. A lot of this still goes over my head - but I still find this fascinating for game dev. Thanks for the content!
@tenthlegionstudios1343 The main difference is that model synthesis uses tiles on a grid while my new method is not tied to a grid. Model synthesis kind of looks like a video game where everything is tiled. But the real world is mostly not on a grid, and my new method can create more natural objects. However, if you are dealing with tiled objects, model synthesis has some advantages.
Also, my new method works by making small incremental changes. Each time we apply a rule we make another incremental change and produce another valid shape. But model synthesis works by trying to fill a large space without running into any contradictions. The problem is NP-hard and when you make a mistake you may have to backtrack quite a bit. In my new method you can also run into contradictions, but just within one incremental step. You may have edges that intersect. And actually it has been shown that deciding if a graph can be drawn without self-intersection is an NP-hard problem. But this is much less of a problem because we are making small incremental changes. If we have any difficulty applying a rule we can discard it and try another one. So we don't need to backtrack very far.
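The incremental discard-and-retry strategy described above can be sketched roughly like this (hypothetical code with toy rules and a toy validity check, not the actual implementation):

```python
import random

# Hypothetical sketch of the incremental strategy: each step tries to
# apply one rule; if the result is invalid (e.g. edges would intersect),
# the change is simply discarded and another rule is tried. No deep
# backtracking is needed because every accepted step leaves a valid shape.

def grow_shape(shape, rules, is_valid, steps, rng=random.Random(0)):
    for _ in range(steps):
        candidates = list(rules)
        rng.shuffle(candidates)
        for rule in candidates:
            new_shape = rule(shape)      # one incremental change
            if is_valid(new_shape):      # e.g. no self-intersections
                shape = new_shape        # accept and continue
                break
        # if every rule fails, we keep the current (still valid) shape
    return shape

# Toy demo: "shape" is a number, rules add amounts, validity caps it at 10.
rules = [lambda s: s + 1, lambda s: s + 3, lambda s: s + 100]
result = grow_shape(0, rules, lambda s: s <= 10, steps=5)
print(result)  # some valid value between 5 and 10
```

The key property is that a failed rule application costs only one discarded attempt, never a cascade of undone work.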
This is so cool! I'd heard of WFC before and it's cool to see a more robust method. Looking forward to more videos about this.
this is amazing
One even could say, that C. Alexander pioneered the concept of "generative grammar" in relation to architectural/geographic auto-modeling, which the methods you describe elaborate further...
It makes sense to call it "grammar". It is common to speak of a design "language", or, per Christopher Alexander, a "Pattern Language" related to Architecture. It makes perfect sense to me to invoke "grammar" to refer to the underlying combinatorial logic.
That's an interesting thought to tie it to Alexander's Pattern Language, but the term "grammar" originated from Chomsky's hierarchy of formal grammars. Chomsky was a linguist and was thinking about this in the context of noun phrases and verb phrases. Most computer scientists are familiar with regular, context-free, and context-sensitive grammars. Graph grammars were invented in the late 60s, so I believe this was independent of Alexander's pioneering work.
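For readers less familiar with the terminology, a context-free grammar rewrites each nonterminal independently of its surroundings. A tiny illustrative example (hypothetical, string-based rather than graph-based):

```python
import random

# A tiny context-free grammar in the Chomsky tradition: each rule
# rewrites one nonterminal regardless of surrounding symbols.
# Graph grammars generalize this idea from strings to graphs.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["architect"], ["building"]],
    "V":  [["designs"]],
}

def derive(symbol, rng):
    """Expand a symbol into a list of terminal words."""
    if symbol not in GRAMMAR:          # terminal symbol
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    return [w for s in production for w in derive(s, rng)]

print(" ".join(derive("S", random.Random(0))))
# e.g. "the architect designs the building"
```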
Im SO excited to see you upload! I gotta turn that notification bell on.
Keep up the good work mate :)
In the 3D model case, how do you decide where to cut the model to generate the primitives? Both in terms of position, orientation, and scale. In the example at 15:38, the choice seems to follow a grid.
@ValerioBelcamino-m5j Every edge is cut in half (see 6:49). While many of the examples look like they're on a grid, this was just done for simplicity. This method works fine for shapes at any angles. You asked about position, orientation, and scale. The edges can stretch to any length (see 3:34) and the primitives can go in any position. So the position and scale of the cut are irrelevant. The orientation matters for determining the edge label (see 4:29).
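A rough sketch of the cutting step described here, under the assumption that an edge label is just a quantized orientation (the actual labeling scheme in the paper may differ):

```python
import math

# Hypothetical sketch: split every edge at its midpoint and label
# each half-edge by its orientation (angle), quantized so that
# matching half-edges receive matching labels.

def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def edge_label(a, b, bins=16):
    """Quantize the edge direction into one of `bins` orientation labels."""
    angle = math.atan2(b[1] - a[1], b[0] - a[0]) % (2 * math.pi)
    return int(angle / (2 * math.pi) * bins) % bins

def cut_edge(a, b):
    """Split edge (a, b) into two labeled half-edges at the midpoint."""
    m = midpoint(a, b)
    return [(a, m, edge_label(a, m)), (m, b, edge_label(m, b))]

halves = cut_edge((0.0, 0.0), (2.0, 0.0))
print(halves)  # both halves share the same label since they're collinear
```

Because the label depends only on orientation and not on position or scale, the cut can land anywhere along the edge without changing which primitives are compatible.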
How did you manage to do so many visuals? It's incredible.
Thanks! It took a lot of work. Hopefully you were able to follow from the video how I made the shapes, but I also applied a lot of textures and decorations to the edges, etc. and then rendered the 3D shapes in 3DS Max. I have some more results and will be making some more videos soon.
@@PaulMerrellResearch Oh, no I don't really understand your research :). I actually meant the visualization of concepts, f.ex at 9:24, 10:27, etc.
Seems like every single sentence is animated, and the animation syncs up really well with what you're saying.
Must be an insane amount of work to do that in something like PowerPoint.
Oh, yes that also took a good amount of work. You couldn't do something like this in PowerPoint. I recorded all the audio and then animated it so it matched the audio. For someone who does a lot of 3D animation it's not difficult, but definitely more advanced than PowerPoint.
Should put the enterprise into this to generate new starships!
Really interesting, and I'm very pleased to know that your work on this is ongoing. I recently implemented a version of wave function collapse for a game I'm working on. Along the way, I discovered model synthesis. It's my understanding that model synthesis is more attuned to 3d generation and I switched to something more like that in my approach, since I was working in 3d.
Since then I haven't done much on the project, but this video has reinvigorated my interest and I'm looking forward to implementing this new iteration. The main stumbling block for my work was the time taken for the generation. I was wondering if you had any insight into this aspect, in particular in cases with large tilesets (or many 'primitives' as I understand in the new version). I looked briefly into parallel implementations as well and was wondering if this was something you'd considered.
Keep up the great work, I'd be interested in a closer look at how you used graph theory and category theory in your work too, in case you're looking for video ideas :D
@julius9055 Can you tell me a little more about the tilesets where the generation is taking a long time? I've looked at many tilesets and they seem to work reasonably fast, although I haven't done exhaustive tests, and for some real-time applications it might be slow. The Escheresque tileset is one of my more complicated ones with 100+ different tiles, and I can generate a 100 x 100 x 10 = 100,000 voxel output in a few minutes.
Is the problem related to it failing and restarting often? If you're not using my strategy of modifying the output in parts, you would have that problem. That was part of the original Model Synthesis paper. For some reason it was not copied into the Wave Function Collapse code.
I'm hoping to make more videos on lots of related topics, but I don't know that you'll be seeing me dig into category theory too much. Honestly, I could use some good videos explaining that stuff to me. I find it confusing, and I don't think it's terribly necessary for understanding my work.
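The "modifying the output in parts" strategy mentioned above might be sketched roughly as follows (hypothetical names and a toy synthesis step; the real algorithm is in the original Model Synthesis paper):

```python
# Rough sketch of "modifying the output in parts": instead of
# synthesizing the whole grid at once (where one contradiction can
# force a full restart), re-synthesize one small region at a time.
# A failure then only discards that region's attempt, never the
# whole output.

def synthesize_in_parts(grid, regions, synthesize_region, max_retries=10):
    for region in regions:               # e.g. overlapping blocks of cells
        for _ in range(max_retries):
            attempt = synthesize_region(grid, region)
            if attempt is not None:      # None signals a contradiction
                grid = attempt
                break
        # on repeated failure, leave the region as-is and move on
    return grid

# Toy demo: regions are index ranges; "synthesis" fills them with 1s.
def fill_region(grid, region):
    lo, hi = region
    return grid[:lo] + [1] * (hi - lo) + grid[hi:]

out = synthesize_in_parts([0] * 8, [(0, 4), (2, 6)], fill_region)
print(out)  # [1, 1, 1, 1, 1, 1, 0, 0]
```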
@PaulMerrellResearch thanks for the response. To be honest, I'd need to look back into it since it's been a few months. From my recollection though, my method was already a simplified version of your work and I'm sure I've missed some of the speed improvements that you've employed. I know for a fact that I haven't implemented a version of the modifying of the output in parts so that'll be the first port of call when I start work on this again.
Great algorithm!
Thank you!
Amazing work !
is it possible to adjust the primitive cutting step to control the "granularity" of the analysis of the input?
Yes, absolutely, you can control the granularity. Remember the part where you can fill in the faces with different colors. You can do the exact same thing with the edges. You can assign different colors. This gives you a lot of control over how the primitives can be glued together. You can force certain primitives to be glued together, meaning the final shape will consist of larger, more granular pieces.
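The edge-coloring idea can be sketched with a toy representation (hypothetical, not the actual data structures): primitives may only be glued along edges whose colors match, so giving two pieces a shared unique color forces them to appear together as one larger unit:

```python
# Toy sketch of edge coloring for granularity control: two primitives
# can be glued only along a color they have in common. A color shared
# by exactly two primitives forces them to always join, yielding
# larger, coarser building blocks.

def gluable(p, q):
    """Colors along which the two primitives may be glued."""
    return sorted(set(p["edge_colors"]) & set(q["edge_colors"]))

base = {"edge_colors": {"red"}}
mid  = {"edge_colors": {"red", "blue"}}
roof = {"edge_colors": {"blue"}}

print(gluable(base, mid))   # ['red']
print(gluable(base, roof))  # [] : these can never touch directly
```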
Oh and @@ItsJustAstronomical is also me. It's my other account. Sometimes I forget to switch them.
WAO
Have you thought of turning this into a product like City Engine by Pascal Mueller?
@chloesun1873 I have thought about it. I would need a team of people to help me out and I haven't found that yet.
@@PaulMerrellResearch I'm quite interested in turning this into an MVP in my free time...
Super interesting! I've read through a lot of your work on model synthesis (and implemented something similar myself but found creating matching components in 3d was not easy at all), was just wondering about the performance characteristics of the algorithm for 3d. Is this something you see being used during runtime? Or a time-saving step when doing modeling? How does it compare to model synthesis?
The 3D shapes shown took about 1 - 1.5 minutes to generate. Whether that's fast enough depends on the application. It's fine if you just need to build a virtual world once and then you spend lots of time exploring it. I'm sure the performance can be improved. It shouldn't be difficult to parallelize it. You could have multiple processes applying rules, you would just need to make sure they apply to different parts of the shape and don't interfere with one another.
Model Synthesis requires the input shape to be made up of tiles that neatly fit on a grid, requiring it to be rectilinear in some sense. The new approach doesn't use tiles at all. It can handle meshes at any angles. It can create curved, organic shapes.
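The suggested parallelization, where workers apply rules to disjoint parts of the shape, might look roughly like this toy sketch (hypothetical structure, not the actual implementation):

```python
import threading

# Toy sketch of the parallelization idea: several workers apply rules
# concurrently, but each locks the region of the shape it modifies so
# that two workers never interfere with the same part.

class Shape:
    def __init__(self, n_regions):
        self.regions = [0] * n_regions
        self.locks = [threading.Lock() for _ in range(n_regions)]

    def apply_rule(self, region, delta):
        with self.locks[region]:        # lock only the touched region
            self.regions[region] += delta

def worker(shape, region, steps):
    for _ in range(steps):
        shape.apply_rule(region, 1)

shape = Shape(n_regions=4)
threads = [threading.Thread(target=worker, args=(shape, r, 1000))
           for r in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(shape.regions)  # [1000, 1000, 1000, 1000]
```

Per-region locks are the simplest way to guarantee that concurrent rule applications cannot produce conflicting changes to the same part of the shape.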
This is fantastic!
I wonder if there's a way to get around the issue of edges getting unnaturally long in the output. My instinct is to put some sort of length constraint on edges, but the edges are lost in the reconstruction process. Maybe you could tag edges in the input, and match those tags in the output using the angle/adjacent region info. I wonder about multi-stage generation (eg. marking edges as possible spots for windows/doors) or the ability to mark a region with a minimum area constraint.
There actually is a constraint on the edge lengths. They have a minimum and maximum length, although I may not have set it correctly everywhere. It is necessary to do something like this. The alternative would be to learn the right edge lengths from training data, but the whole point is to not require the user to provide a lot of training data.
A multi-stage generation is certainly an interesting possibility.
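A hypothetical version of the min/max edge-length check described above (illustrative band values, not the paper's actual constraint):

```python
import math

# Sketch of the edge-length constraint: a rule application is only
# accepted if every resulting edge stays within an allowed length
# band, preventing unnaturally long (or thin) edges in the output.

MIN_LEN, MAX_LEN = 0.5, 4.0   # illustrative values

def edges_valid(vertices, edges):
    """Check every edge (i, j) against the length band."""
    return all(MIN_LEN <= math.dist(vertices[i], vertices[j]) <= MAX_LEN
               for i, j in edges)

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
loop = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(edges_valid(square, loop))              # True: all edges length 2

stretched = [(0, 0), (9, 0), (9, 2), (0, 2)]  # one edge of length 9
print(edges_valid(stretched, loop))           # False: exceeds MAX_LEN
```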
Question: I see a lot of this being applied to map creation, but could you apply it to things like enemy creation, character creation, etc.? How granular of an application space does it have? How much would it have to be changed to work in applications outside of map/level creation?
The idea is built around the concept of geometric similarity. Basically, this can work on any shapes that are self-similar: buildings, trees, flowers, terrain, space stations, factories, etc. All the things that procedural modeling works on right now. It doesn't matter how big or small the objects are. But I don't know that it would be good for characters, as they aren't really self-similar.
@@PaulMerrellResearch So its good for things i can break down to more arbitrary shapes?
@@thegoldenatlas753 I'm not sure what you mean by break down into more arbitrary shapes. It's good for objects that are self-similar, meaning they contain repeated patterns. A fractal pattern is the ultimate example of this. A tree is also a good example. It repeats the same leaf pattern and the same branch pattern over and over.
@@PaulMerrellResearch Ahhhh, ok, that makes sense. I'm definitely still curious what I could make with this, so I'm gonna explore implementing it in Rust/Bevy to see what it can do in 3D.
This honestly reminds me of how AI works, but it feels like the formalized math that an AI would be trying to approximate, which is genuinely amazing. I'm very glad I found this.
here: future matrix architect
Do you have a discord channel?
No, should I get on discord?
@@ItsJustAstronomical Absolutely! It's where most developers I know are and it's a good place for your community to interact. Think skype meets forums.
Why bro sound like 3blue1brown lmao 😮
graph grammars for neural networks. lol