Shedding light on Quake I and II lightmapping

  • Published: 25 Nov 2024

Comments • 119

  • @hughjanes4883 11 months ago +96

    I've tried to research the Quake rendering engine, but I never figured out the lighting! I hope this illuminates my problems; it seems like a bright idea to look into this; these are my shining examples of puns, I'm sorry

    • @hylianbran7273 11 months ago +4

      You made me laugh.

    • @hughjanes4883 11 months ago +1

      @@hylianbran7273 Good to see I brightened someone's day

    • @purple781 11 months ago +8

      you must feel enlightened after watching this video, don't you?

  • @SleepyAdam 11 months ago +6

    As cool as RTX and games with weather effects and different times of day and a lot of the more simulative lighting effects of modern gaming are, there is an exactness and artistry to baked lightmaps that I really adore.

  • @buzinaocara 11 months ago +21

    I've seen the baked lightmaps from Quake explained so many times, but I never found a detailed explanation of the process for computing the dynamic light sources from explosions and debris... That's what I've always been most curious about.

  • @q6manic 11 months ago +55

    It's amazing how you are able to create visualisations for all of these processes. Love all your videos!

  • @PeterLawrenceYT 11 months ago +3

    Holy Quake, this video editing is phenomenal!

  • @23Scadu 11 months ago +45

    Excellent video, and superbly visualized! I used to make a lot of Quake and Quake II maps back in the day, so I already had a decent understanding of how the lighting worked in practice, but getting some insight into the specifics is really fascinating.

    • @Under_the_Iceberg 11 months ago

      Same. I made a few Q1 levels using TheForge/WorldCraft back in the day, although I was a teenager just messing around and never made anything that I released to anyone else. I remember when the Tenebrae engine was released, and I was able to make some interesting levels by playing around with the updated lighting and bump mapping effects, even if the game was basically a glorified slideshow for me.

  • @Capewearer 11 months ago +2

    What an amazing analysis of Quake lighting! It would be good if you made a new video exploring the topic a bit deeper, because Quake 1 lightstyles and the Quake 3 light grid deserve attention. There are a lot of additional features made by the community itself, like the custom Warsow/Warfork FBSP format (a Quake 3 BSP fork) with high-res lightmaps and lightstyles, Darkplaces/QFusion realtime lighting, RBDoom-3-BFG light probes, etc.

  • @KokoRicky 11 months ago +6

    It's worth noting that Jurassic Park was released just three years prior to Quake, and its renders lack bounce lighting... yet a video game was simulating it not long after. Quake was way ahead!

  • @AgsmaJustAgsma 11 months ago +5

    12:42 The combination of lightmapping and vertex coloring makes this map showcase absolutely sublime. It feels like a completely lost art that stands the test of time.

  • @timquestionmark 11 months ago +23

    Very cool 👍👍
    Would love to see you talk about GoldSrc and Source too, and how they iterated on the techniques you've covered from Quake

  • @Artoooooor 11 months ago +5

    10:34 That map is beautiful. And precalculated lightmaps were so good because they allowed for later improvements.

  • @YourCRTube 11 months ago +1

    So, this is why the Q2 Remaster looked so authentic!

  • @uis246 11 months ago +8

    Thanks a lot for your series on Quake map compilation and rendering. It is so useful.

  • @Darkcrafter07 11 months ago +2

    I was just thinking about this a few hours ago, and then this comes out. Thank you!

  • @avensCL 11 months ago +3

    Extraordinary, as always

  • @AdamPlaver 11 months ago +1

    The radiosity method of calculating lighting is one of the few lighting methods that are physically accurate. The modern version of radiosity used for this is called Radiance.

  • @golarac6433 11 months ago +2

    I love your videos, really great job on explaining and creating those visualizations.

  • @mankrip- 11 months ago +3

    Excellent video, with a lot of food for thought. Thanks.

  • @GroovyReject 11 months ago +1

    Your vids are always rad, and the effort that goes into the editing and your research is insane. (It's been about half a year ago but) Thanks again for making the tools/scripts you did for porting Q1 demos to Blender available to the public (plus resources for them) and helping me out with using them at the time. I haven't done a whole lot with them (like a test and a short cringe music video I made as another test) but your work still fascinates me, and I always seem to circle back around to wanting to use your demo importing tools for a video. Please keep up making these cool videos and overall research, I genuinely feel like I learn something new whenever you upload.

  • @SilverstringsMusings 11 months ago +2

    I always love seeing your videos.

  • @averesenso 11 months ago +9

    Apparently Valve improved upon this in the Source engine with "Radiosity Normal Mapping".
    As far as I could understand, light maps are no longer baked per face; instead they calculate three directions of light, so that the lighting of a face of any orientation is easily computed at runtime as a function of the normal, the three light maps, and the albedo. The three directions of light are stored on disk, taking triple the size of a light map of the same resolution. The advantage, I think, could be being able to tweak the parameters of the lighting function at runtime, though I don't know if that's the actual motivation.

    • @badsectoracula 11 months ago +5

      A way to think of it is that instead of a single lightmap, it calculates three lightmaps. Each lightmap is made the same way as you'd make regular lightmaps, but instead of using the surface normal you use a normal calculated from the surface normal, rotated a bit to look in some other direction - and when you look at all three normals (that is, the normals used for each lightmap), they are all perpendicular to each other. Then when you're rendering you just mix the three lightmap values based on the normalmap's normal. Valve published a paper back then that mentions exactly how they calculated these normals (you could use any three normals as long as they look away from the surface and are perpendicular to each other, but Valve already did that part, so there's no reason not to use theirs :-P).
      I implemented this some years ago in an older engine of mine (if you check my channel you can find a video titled "Directional lightmaps" from 7 years ago). It works remarkably well - the video has zero dynamic lights; even the specular is precalculated by averaging the light directions and storing the XYZ values of the direction normal in the alpha channels of the lightmaps. This was a hack though, and I could certainly do a better job with it, like ignoring lights that don't contribute much and taking the surface color into account.
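The three-basis blend described above can be sketched as follows. The basis constants are the well-known Half-Life 2 tangent-space basis, but the weighting used here (squared, clamped dot products, renormalized) is just one common variant, not necessarily Valve's exact shader math:

```c
#include <math.h>

/* The three Half-Life 2 basis vectors in tangent space (z = surface normal).
 * They are mutually orthogonal and all tilted away from the surface. */
static const float rnm_basis[3][3] = {
    { -0.40824829f, -0.70710678f, 0.57735027f },  /* (-1/sqrt6, -1/sqrt2, 1/sqrt3) */
    { -0.40824829f,  0.70710678f, 0.57735027f },  /* (-1/sqrt6,  1/sqrt2, 1/sqrt3) */
    {  0.81649658f,  0.0f,        0.57735027f },  /* ( sqrt(2/3), 0,      1/sqrt3) */
};

/* Blend three directional lightmap samples using a tangent-space normal.
 * lm[i] is the lightmap value baked against basis vector i; n is the
 * normal-mapped surface normal. Weights are squared clamped dot products,
 * renormalized so a flat normal (0, 0, 1) reproduces the average lighting. */
float rnm_blend(const float lm[3], const float n[3])
{
    float w[3], sum = 0.0f, out = 0.0f;
    for (int i = 0; i < 3; i++) {
        float d = rnm_basis[i][0] * n[0] + rnm_basis[i][1] * n[1] + rnm_basis[i][2] * n[2];
        if (d < 0.0f)
            d = 0.0f;            /* basis vector faces away from this normal */
        w[i] = d * d;
        sum += w[i];
    }
    if (sum <= 0.0f)
        return 0.0f;
    for (int i = 0; i < 3; i++)
        out += lm[i] * (w[i] / sum);
    return out;
}
```

A flat normal weights all three maps equally, which is what makes the bake and the runtime mix agree on unperturbed surfaces.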

    • @jal051 11 months ago +1

      Current Q3 map compilers support it, under the shorthand name 'deluxemaps'. Games like Warsow or Nexuiz make use of them, but Q3 itself doesn't support rendering them, so they aren't of much use there.

  • @OfflineOffie 11 months ago +1

    Amazing topic once again! It's great to learn about the techniques that were used to make Quake the way it is

  • @TheRealWalterClements 11 months ago

    John Carmack back at it again with revolutionary tech!

  • @3DSage 11 months ago

    man I love learning about this! :)

  • @glospiwniczaka 11 months ago

    Thank YouTube for recommending me this video!!
    This is such high quality!!

  • @average_ms-dos_enjoyer 11 months ago +10

    Well put together, though nothing about light styles? Flickering or pulsing lights in a static lightmap was always impressive to me

    • @MattsRamblings 11 months ago +8

      Yes, I left that one out for brevity, but I agree it adds a lot. In short, each face can have up to 4 separate lightmaps: one for the base light and one for each animated light that is near it. At runtime the individual lightmaps are scaled and added together.
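The scale-and-add step described above can be sketched like this. The function and the 8.8 fixed-point scale convention are illustrative, not the engine's exact code (the real version lives in the renderer's lightmap-building routine):

```c
#define MAXLIGHTMAPS 4

/* Combine a face's per-style lightmaps: each of up to four baked maps is
 * scaled by its style's current brightness and the results are summed,
 * then clamped. styles[] uses 255 as the "no more maps" marker, matching
 * the BSP format; style_scale[] holds each style's current brightness in
 * 8.8 fixed point (256 = full brightness). */
unsigned blend_face_lightmaps(const unsigned char *maps[MAXLIGHTMAPS],
                              const unsigned char styles[MAXLIGHTMAPS],
                              const unsigned style_scale[256],
                              int lumel)
{
    unsigned total = 0;
    for (int i = 0; i < MAXLIGHTMAPS && styles[i] != 255; i++)
        total += maps[i][lumel] * style_scale[styles[i]];
    total >>= 8;                       /* back from 8.8 fixed point */
    return total > 255 ? 255 : total;  /* clamp for an 8-bit target */
}
```

Because the per-style maps are baked separately, animating a light is just changing one entry of `style_scale` each frame; no lighting is recomputed.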

    • @mfaizsyahmi 2 months ago

      @@MattsRamblings It's also interesting how the light style patterns survive from Quake to GoldSource, to Source, to Source 2. Half-Life: Alyx has a flickering light pattern that is identical to Quake's.

  • @Willis-hm5vw 11 months ago +1

    I love your videos! I'm just commenting to help out the algorithm.

  • @pepe6666 11 months ago

    wow thats really amazing. thank you for going to the effort to explain this. the quality of your work is extreme

  • @Tigrou7777 11 months ago

    I was wondering how radiosity works in the Q1 engine, so I started looking into the game code, reading stuff here and there... until I found this video. Couldn't be happier.

  • @badsectoracula 11 months ago +1

    Nice video :-). Lightmaps are neat; I think even today they are very useful when the lighting doesn't really change, though obviously you need slightly more stuff than what Quake 1 used :-P.
    One thing I think needed a bit more explanation is the part about the samples moving toward the surface center, but someone else mentioned it (and you replied) already. Having implemented lightmapping a few times, this part was always annoying to get right (and Quake "cheats" a bit by having a solid world with no overlapping polygons - making a lightmapper for a generic polygon soup is trickier).

  •  11 months ago

    Finally another video of yours. Amazing as always! 🎉

  • @teiman 11 months ago +2

    Your videos are AMAZING

  • @jollygrapefruit786 11 months ago

    Look who's back!

  • @skope2055 11 months ago +2

    Great content!

  • @hansdietrich83 11 months ago +1

    Small correction at 2:54: affine is not a linear transformation, as it includes shifting the origin

  • @soulsphere9242 11 months ago +4

    Really good explanation. Thumbs up. Out of interest, were there any changes with respect to Quake 3? I briefly mapped for Quake 2 but ended up moving onto Unreal whose tooling was really much more intuitive and the editor could rebuild lightmaps in seconds. The 3D viewport was even real-time and you could drag lights around and immediately see the effect on the map, minus shadows, which required a lighting rebuild.

    • @laptopstuff8886 11 months ago +1

      Quake 3 had higher quality lightmaps, I think 128x128 by default. Also, it has overbright bits, which kind of simulates HDR lightmaps, but it's not as pretty. Years later Half-Life 2 used real HDR lightmaps, which are much nicer looking. Fun fact: you can actually increase the lightmap detail in Quake 3 using q3map2 and converting to external lightmaps. This makes some nice sharp edges on shadows.

  • @pmrd 11 months ago +1

    Love your videos! Great work!

  • @nicolamarchesan4597 9 months ago

    Amazing video

  • @TrentRobertson 11 months ago

    This was very entertaining and informative!

  • @Silikone 11 months ago +1

    Despite the impressiveness of QRAD at the time, the neglect of gamma correction is probably what led to its regression in Quake 3. Half-Life's QRAD derivative uses gamma-corrected lightmaps and looks much more realistic as a result.

  • @AwesTube 11 months ago

    Best channel ever

  • @kommanderkeen 11 months ago

    Thank you for sharing it! Love u dude

  • @matka5130 11 months ago

    Great stuff, thank you

  • @Deagle195 11 months ago +2

    You're the decino of Quake

    • @Capewearer 11 months ago

      Well, he is better than decino. Decino doesn't try to explain Doom's technical side, or explains it rarely (kudos to him for fixing Serious Sam's notorious bug).

  • @kerryhall 8 months ago

    Love your videos! Can I request a video discussing how Quake stores the actual polygon data used for drawing? Not so much the BSP, but more the surfaces/triangles/verts/polygons used for drawing the walls of a map, etc. How does this differ from other full-3D engines of the time (i.e. System Shock, Unreal, etc.)?

    • @charlieking7600 8 months ago +1

      From what I know, the Quake games store their surfaces not as polygons but as planes. The plane equation is A*x + B*y + C*z + D = 0, where (A, B, C) is the normal vector orthogonal to the surface, (x, y, z) is any point on the surface, and D is a coefficient determining the offset of the surface in 3D space. All surfaces with equal normal vectors (A, B, C) are parallel to each other.
      To store a surface in a .map file you only need 4 numbers (A, B, C, D); the polygons can then be created later, at the stage of compiling the map to a .bsp file.
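The plane form described above can be made concrete with a small sketch. Note that the Quake tools usually store a normal plus a distance `dist` with the convention dot(normal, point) = dist, so the D in the A*x + B*y + C*z + D = 0 form corresponds to -dist:

```c
/* A brush plane: unit normal (A, B, C) plus a distance from the origin. */
typedef struct {
    float normal[3];   /* (A, B, C), unit length */
    float dist;        /* distance from origin along the normal */
} plane_t;

/* Signed distance of a point from the plane: positive on the side the
 * normal faces, negative behind, zero on the plane itself. The BSP
 * compiler classifies geometry against planes with exactly this kind
 * of test. */
float plane_distance(const plane_t *p, const float pt[3])
{
    return p->normal[0] * pt[0]
         + p->normal[1] * pt[1]
         + p->normal[2] * pt[2]
         - p->dist;
}
```

Storing planes instead of vertices is what lets the compiler intersect brush faces exactly and generate the polygons itself.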

  • @nolram 11 months ago +2

    Amazing video! I'm curious if Valve's VRAD (specifically the one used in GoldSrc and also the one from the Source engine, the infamous VRAD) differs from this method.

    • @Capewearer 11 months ago +2

      Valve's VRAD is a derivative of the Quake 2 lightmapper. It differs strongly from it, because it calculates indirect lighting and stores that information in so-called "ambient cubes" (a simplified kind of spherical harmonics). These cubes allow dynamic objects to be shaded. Unlike the Quake 3 light grid, these cubes are distributed not on a 32 * 32 * 64 unit grid but at visleaf edges.
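A sketch of how such an ambient cube might be sampled: six colors, one per axis direction, blended by the squared components of the world-space normal. The blending scheme follows Valve's published description, but the struct layout and function here are illustrative, not Source's actual code:

```c
/* Six colors, one per axis direction, in the order +X, -X, +Y, -Y, +Z, -Z. */
typedef struct {
    float color[6][3];
} ambient_cube_t;

/* Blend the six face colors by the squared components of the unit normal n.
 * The squared components sum to 1, so the weights need no renormalizing;
 * the sign of each component picks the positive or negative face. */
void ambient_cube_sample(const ambient_cube_t *cube, const float n[3], float out[3])
{
    float n2[3]  = { n[0] * n[0], n[1] * n[1], n[2] * n[2] };
    int   side[3] = { n[0] < 0.0f, n[1] < 0.0f, n[2] < 0.0f };
    for (int c = 0; c < 3; c++)
        out[c] = n2[0] * cube->color[side[0] + 0][c]
               + n2[1] * cube->color[side[1] + 2][c]
               + n2[2] * cube->color[side[2] + 4][c];
}
```

Evaluating this per vertex (or per object) gives dynamic models soft directional ambient light for the cost of six stored colors per probe.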

  • @ryonagana 11 months ago +5

    I used to create Quake 2 maps,
    and this 10:12 always happened.
    I used ArghRad

  • @xxsemb 11 months ago +3

    It is not a bit like a second texture, it is a second texture.

  • @chinodesu3184 11 months ago +1

    Great video as always. Does anyone know the name of the music starting from 1:00?

  • @kleytman 4 months ago

    What is life without Quake? that's the question!

  • @ethanwasme4307 11 months ago

    I wish an engine would come out that supports every lighting and shading model ever made

    • @Capewearer 11 months ago

      It would be hell to debug such an engine. That's why engines are designed with customization in mind. E.g. the Open 3D Engine Atom renderer allows you to implement custom render passes.

  • @meanmole3212 11 months ago +3

    5:27 I implemented Quake-style lightmaps for my engine but never figured out how to solve this properly, and to be honest, even with your explanation I don't understand what is happening here.
    What does "any sample points that cannot see the face center" mean? Are they the sample points that are hidden inside the geometry where light cannot reach? You say that the sample points are shifted "towards the center of the face", but the points are moved only on the y-axis downward "towards" the center in the visualization.
    Is this process the same thing as replacing the hidden sample point values with the closest sample point values that are visible to the light? Basically like clamping the hidden samples to the first visible samples that exist at the visibility border? I remember thinking about this solution, but I'm not sure if I tried to implement it or got it to work.

    • @MattsRamblings 11 months ago +2

      Well spotted! The exact algorithm for shifting the hidden points is here: github.com/id-Software/Quake-Tools/blob/master/qutils/LIGHT/LTFACE.C#L270 . It looks like it shifts 8 units in the vertical axis first, then 8 units in the horizontal if the shifted point is still not visible. If after that it is still not visible, the whole thing repeats, for up to 6 iterations. I never got to the bottom of why this exact behavior was selected.
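The nudging loop described here can be sketched roughly as follows. The callback stands in for the tool's actual ray-cast visibility test, and the axis and iteration details are a simplification of the real LTFACE.C code linked above:

```c
typedef int (*visible_fn)(const float from[3], const float to[3]);

/* If a sample can't see the face midpoint, shift it 8 units along the
 * lightmap's "vertical" texture axis, then 8 along the "horizontal" one,
 * retrying up to 6 times (a rough sketch of LIGHT's behavior). */
void nudge_sample(float point[3], const float mid[3],
                  const float s_axis[3], const float t_axis[3],
                  visible_fn can_see)
{
    for (int iter = 0; iter < 6; iter++) {
        if (can_see(point, mid))
            return;                           /* sample is fine as-is */
        for (int c = 0; c < 3; c++)
            point[c] += 8.0f * t_axis[c];     /* vertical shift first */
        if (can_see(point, mid))
            return;
        for (int c = 0; c < 3; c++)
            point[c] += 8.0f * s_axis[c];     /* then horizontal */
    }
}

/* Toy visibility test for demonstration only: the midpoint becomes
 * visible once the sample's second coordinate drops below 10. */
int toy_can_see(const float from[3], const float to[3])
{
    (void)to;
    return from[1] < 10.0f;
}
```

8 units is half the lightmap sample spacing, so a nudge moves a hidden sample toward the interior of its own lumel rather than into a neighboring one.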

    • @meanmole3212 11 months ago +1

      @@MattsRamblings I see, that sounds very arbitrary but if it works most of the time and performs well I guess it makes sense. Thanks!

    • @Biel7318 11 months ago

      ​@@meanmole3212 @MattsRamblings I think the same is done in light grid systems that use probes (cube-maps): in order not to have probes inside a wall or at a wall edge (which would contaminate the final light calculations with wrong data), the probes are moved into valid space for rendering and for their final position, and if their position needs to be corrected a lot, I think they are discarded

    • @badsectoracula 11 months ago

      Yeah, that bit sounded weird to me too. I also implemented lightmaps years ago in an older engine, and this is something I faced. Personally I solved it in a different manner: when I take a sample I check if it is part of a polygon face (*any* polygon face - two polygons might share an edge in world space but use different lightmaps, and the sample might end up over another polygon) and whether that face has the same normal. The sample is ignored if there is no polygon for it (e.g. it lies outside the world) or if there is a polygon face with a different normal (failed at a corner, or there is another polygon covering it - e.g. imagine a box on a floor: the floor samples below the box would also have the box's faces on them). For each lightmap I also keep a mask of the lumels that had no samples contributing to them - after all lumels with at least one sample have been calculated, I go through the lumels with no samples, calculate the average color of the surrounding lumels that had at least one sample, and set their mask as if they had a sample. This is repeated until all lumels are set.
      Making a lightmapper that looks OK is easy, but fixing little issues like that is almost 90% of the work. No surprise that many lightmappers even to this day have edge cases (though these days you can bruteforce your way by calculating something like 32 or 64 rays per lumel, which hides a ton of edge case issues). Unreal Engine 1 had these "black corners" which weren't solved until UE2, and I remember reading environment artist tips about cutting the bottom of models in UE3/UDK to avoid the black outlines in floor lightmaps.
      These days I'd probably look into voxelizing the world and calculating lighting from that instead of using the geometry directly.
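The hole-filling pass described above (averaging already-valid neighbours into empty lumels, repeating until everything is covered) might look like this. It is a generic sketch, not code from any particular engine:

```c
/* lm is a w*h single-channel lightmap; mask[i] == 1 marks lumels that
 * received at least one sample. Empty lumels are filled with the average
 * of their valid 8-neighbours; fills from the current pass are marked 2
 * and only promoted to 1 between passes, so each pass grows the valid
 * region by exactly one ring. */
void dilate_lightmap(float *lm, unsigned char *mask, int w, int h)
{
    int filled;
    do {
        filled = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (mask[y * w + x])
                    continue;                 /* already has a value */
                float sum = 0.0f;
                int n = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int nx = x + dx, ny = y + dy;
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h)
                            continue;
                        if (mask[ny * w + nx] != 1)
                            continue;         /* skip empty and this-pass fills */
                        sum += lm[ny * w + nx];
                        n++;
                    }
                if (n > 0) {
                    lm[y * w + x] = sum / n;
                    mask[y * w + x] = 2;
                    filled = 1;
                }
            }
        for (int i = 0; i < w * h; i++)       /* promote this pass's fills */
            if (mask[i] == 2)
                mask[i] = 1;
    } while (filled);
}
```

The filled values never reach the eye directly, but they stop bilinear filtering from dragging black "no data" texels into visible lumels at face edges.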

    • @jal051 11 months ago

      You can precompute lightmaps with Blender, if that helps.

  • @TheUKNutter 11 months ago +1

    One video you could try to do is explain why you could have only 5 or so enemies on screen at once.

  • @Biel7318 11 months ago +2

    As far as I know, Quake 3 changed this system: it kept the final result of luxel textures but implemented it in a 3D-texture way, having a grid of points that keeps, per point, the direction and colour of all incoming fixed lights, in order to have the 3D models well lit by the map's fixed lights at all times?

    • @hemostick 11 months ago +2

      The 3D texture part is the lightgrid, which coincidentally was also added in the Q2 Remaster. Maybe we'll get another chapter explaining what changed with Q3 on that front - the Q3/idtech3 modding community at large also introduced some interesting light compiler development which surely must have influenced the ericw-tools we use today for Q1/2. I'm thinking of q3map2, with work by ydnar et al. introducing phong, refining bounces and filtering options, etc.

  • @davep8221 11 months ago +1

    "Dad, what's a SeeDee?"
    Why is the falloff linear vs 1/r^2?

  • @syntaxerorr 11 months ago

    I remember a console command that was something like r_drawflat = 1

  • @sinphy 11 months ago

    the fan maps are on the same level as the source maps, where you'll go "wait, that's [engine]?"

  • @just__khang 11 months ago

    very nice

  • @MolotovEcho 11 months ago

    Do you have any ideas, maybe some modern approach, to enhance the current light calculations in ericw-tools?

  • @ninjacat230 8 months ago

    I wonder if ericw-tools can be made to work in GoldSrc, or maybe ACTUAL Quake

  • @LinxOnlineGames 5 months ago

    Out of curiosity are the individual face lightmaps stitched together into an atlas? It seems like an expensive operation to bind a lightmap texture on a per-face draw call.

  • @aussieexpatwatches 11 months ago +1

    How are the dynamic objects lit?

    • @h4724-q6j 11 months ago +4

      Moving objects such as doors do not change lighting when they move; they'll look the same even when moving to an area with very different lighting. Flashing, flickering, pulsing, or otherwise changing lights are done with a "style" value that the mapper sets, each of which is essentially just a list of brightness values stored as a string of letters which are computed during compilation and cycled through at runtime (fun fact - Valve still uses these strings to this day, resulting in lights in Half-Life Alyx that flicker in exactly the same pattern as the ones in Quake). Lights that can be toggled on and off through ingame events work in a similar way. In order to reduce the amount of additional work the compiler has to do, there is a limit of 4 light styles (including toggled lights) per face, and ericw-tools also disables bounces on these lights by default.
      The Quake 1 and Quake 2 rereleases contain their own dynamic light system which is rendered separately and on top of the lightmap, which allows for crisper shadows that can move with things like doors and rotating objects.
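The "string of letters" encoding described above can be sketched like this. The (c - 'a') * 22 scale and the 10 Hz stepping match what the original engine does with its light style strings; the function itself is an illustrative sketch:

```c
#include <string.h>

/* Quake encodes each animated light style as a string of letters, e.g.
 * "mmnmmommommnonmmonqnmmo" for the classic flicker, where 'a' is off
 * and 'm' is normal brightness. The engine steps through the pattern
 * ten characters per second and scales each letter by 22, so 'm' maps
 * to 264, which the renderer treats as 1.0. */
int lightstyle_value(const char *style, float time_seconds)
{
    int len = (int)strlen(style);
    if (len == 0)
        return 264;                            /* no pattern: steady full brightness */
    int frame = (int)(time_seconds * 10.0f) % len;
    return (style[frame] - 'a') * 22;
}
```

Since the pattern is pure data, identical strings produce identical flicker in any engine that inherited them, which is why the Alyx lights mentioned above match Quake's exactly.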

    • @aussieexpatwatches 11 months ago

      @@h4724-q6j Great answer. I was actually also talking about, say, monsters in the middle of the room. Now I know they won't have very fancy lighting, but is the light at the center of the room precalculated in any way?

    • @h4724-q6j 11 months ago +3

      ​@@aussieexpatwatches Things like monsters take a light value from the world geometry directly below their origin. This excludes things like doors and moving platforms. This value is then combined with the vertex normals to make it lighter on one side and darker on the other, so it looks like there's some directional lighting.

    • @jal051 11 months ago +3

      Vertex lighting. It's a very basic method. There is a light grid precomputed at map compile time which subdivides the space into a 3D grid of points. Each point has a precomputed light direction. When lighting the model, the light direction is calculated from the grid and applied to the model's vertexes. It doesn't allow for 2 lights to be displayed on the model; it just averages them. This works the same for Quake 1, 2 and 3
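The model shading described in this thread (one sampled light level, tilted by each vertex normal against a shade direction) might be sketched like this. The 0.5 floor on back-facing vertices is illustrative; the real engine uses precomputed per-angle dot-product tables:

```c
/* Shade one model vertex from a single sampled light level. light_level
 * is the value sampled from the world under the entity (or from a light
 * grid), and shade_dir is the assumed light direction; vertices facing
 * away from it are darkened so the model doesn't look flat. */
float shade_vertex(float light_level, const float vertex_normal[3],
                   const float shade_dir[3])
{
    float d = vertex_normal[0] * shade_dir[0]
            + vertex_normal[1] * shade_dir[1]
            + vertex_normal[2] * shade_dir[2];
    /* full brightness on the lit side, fading to half on the far side */
    float scale = (d > 0.0f) ? 1.0f : 1.0f + 0.5f * d;
    return light_level * scale;
}
```

One sample plus one direction is all the per-entity state needed, which is why Quake-era model lighting is so cheap compared to the per-lumel world lighting.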

  • @Bodenman 11 months ago

    you got a cool profile picture

  • @Fantasticanations 5 months ago

    Nice chatgpt intro.

  • @relt_ 11 months ago

    somebody needs to bring this to the source engine

    • @Capewearer 11 months ago

      It's proprietary and closed-source; it's not worth working for free for Valve Inc.

    • @gustavodutra3633 11 months ago +1

      @@Capewearer Actually 🤓, the Source engine is technically open-source: you only pay if your mod is paid; if your mod is completely free, then the source code is there. It's obviously not the complete code that Valve uses, but it has a lot of things.
      And yes, this type of lighting is already in Source; there are other compilers that improve VRAD, such as Slammin' Source Tools, which adds ambient occlusion, support for more threads, an improved radiosity algorithm, faster lightmap compilation for static mesh models (displacement maps, for example), and alpha texture shadow support.

    • @Capewearer 11 months ago

      @@gustavodutra3633 Take a look at the license; you'll never own the engine sources unless you pay big money. It's not open-source but source-available, and you probably meant the SDK (which is not the whole engine).

  • @thoughts0utloud 11 months ago

    If you shed light, does that mean you dim it by shaking it off of you!?

  • @josiahjack455 11 months ago

    Wait wait wait... Quake uses the Lambertian model, reducing the brightness as the angle gets sharper??

    • @MattsRamblings 11 months ago +1

      It's not fully lambertian. The effect is actually scaled by a half - see the `scalecos` term here (which is fixed to 0.5): github.com/id-Software/Quake-Tools/blob/master/qutils/LIGHT/LTFACE.C#L410
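That `scalecos` term translates to a simple blend between constant shading and full Lambert:

```c
/* The angle falloff from LTFACE.C with scalecos fixed at 0.5: a blend
 * between "no angle falloff" (1.0) and full Lambert (angle_cos), so even
 * grazing light keeps half of its head-on contribution.
 * angle_cos = dot(surface normal, direction to the light). */
float quake_angle_scale(float angle_cos)
{
    const float scalecos = 0.5f;
    return (1.0f - scalecos) + scalecos * angle_cos;
}
```

Halving the Lambert term keeps steep surfaces from going pitch black, which suits a game lit mostly by small point lights.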

  • @averesenso 11 months ago

    How far is this method of bounce lighting from being "true" ray tracing?

    • @MLWJ1993 11 months ago +1

      It simply is tracing rays into a (simplified) scene. The only thing that matters is that you're doing it before shipping your game, and the results won't change at runtime, so it's static. That only matters for dynamic objects that can move around and should therefore change the lighting.

  • @redfoxbennaton 11 months ago +1

    Quake 2 did what Crysis did 10 years before its conception: radiosity.

  • @davymachinegun5130 4 months ago

    Source games still do this, as it is pretty much a glorified Quake engine fork :)

  • @gsestream 11 months ago

    why don't you just render per-pixel lightmaps with a simple GPU raster, i.e. the cubemap view from each lightmap pixel? no need to resort to CPU-only stuff

    • @hi-i-am-atan 11 months ago +2

      you're making some p. strong assumptions about how common and capable GPUs were back in the 90s
      the original release of Quake didn't even have GPU support; GLQuake was an update id released half a year after the fact

    • @GarrettInShadows 11 months ago

      OK zoomer, great plan! Remind me again, which GPU should we be using...in 1995?

    • @gsestream 11 months ago

      @@GarrettInShadows I'm not limiting myself to '95 CPUs; use Silicon Graphics workstations, or Cray supercomputer time, if you lacked it now or then, id

    • @gsestream 11 months ago

      @@hi-i-am-atan why do you assume that the task would need to be completed with a '95 CPU or GPU? well, you have those dual Voodoo2 cards, and Nvidia's first cards were starting to pop up, the ATI Rage Pro, etc.

    • @gsestream 11 months ago

      @@GarrettInShadows and Intel's CPU package for ray tracing supports which CPUs? there is something rotten here with the "needs to be this-and-this" attitude

  • @jal051 11 months ago

    I'm sorry, but Quake 2 didn't have radiosity lighting. It simply used an ambient (minimum light value) setting. Radiosity wasn't even present in the original q3map compiler; it was introduced in q3map2, a community-modified version of the q3map source code which was later adopted by id and is now maintained by Tim Willits.
    The editing work on your videos is always stunning!

  • @cesario04 11 months ago

    isn't it easier to use GoldSrc instead?

  • @Qwerasd 11 months ago +2

    The intro to this video is written by AI...

  • @Foxconnpc 11 months ago +1

    You shouldn't have generated the intro text with ChatGPT. It sounds like an alien trying to pass as a hooman

    • @averesenso 11 months ago

      how did you gather that?

    • @richardvlasek2445 11 months ago

      @@averesenso all machine-generated "press-release text" has a lot of similarities:
      things like overuse of adjectives and descriptors, "passionate community" when speaking about literally any group, at least one bad corporate-presentation pun, and it always ends with some variation of telling you to get ready for a journey into XYZ