OK, now this is some timing. I was watching your video on Bresenham per a friend's recommendation, and now that I'm trying to understand Xiaolin Wu you post a video on it.
Funnily enough I was checking out the HLP discord yesterday on a whim and saw your status. I immediately looked it up and now the goat makes a video on it XD
Antialiased explanation 😉
Nice and clean job at every angle.
Excellent job.
It seems to me that the opacity should not be the vertical/horizontal distance to the line but instead the distance to the line. That would probably produce smoother motion but also be computationally heavier. I might try to implement that, although someone has already most likely.
I was thinking the same... Maybe calculate a dot product to project the center of the pixel onto the line, and measure the distance to that?
Definitely heavier as we now have more multiplication and have to take a sqrt, but would be keen to see if it changes the profile of the drawn line.
I wonder what a box filter would produce, where you consider the horizontal distance as well. That should sit about in the middle between the dot-product and vertical-only approaches computationally.
Just use dist = |Ax + By + C| / sqrt(A*A + B*B); to reduce the calculation, precompute (A, B, C) / sqrt(A*A + B*B) = (A', B', C'), so that dist = |A'x + B'y + C'|.
This way you can use vector/SIMD operations on groups of points to be more efficient.
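A rough numpy sketch of that precomputation (the function name is made up, not from the video): normalise the implicit line equation once, and each pixel then costs a multiply-add and an abs, which vectorises well.

import numpy as np

def point_line_distances(px, py, A, B, C):
    # Perpendicular distance from points (px, py) to the line Ax + By + C = 0.
    # Dividing by sqrt(A^2 + B^2) once up front keeps the per-point work cheap.
    norm = np.hypot(A, B)
    Ap, Bp, Cp = A / norm, B / norm, C / norm
    return np.abs(Ap * px + Bp * py + Cp)   # px, py may be arrays of pixel centres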
I'd really appreciate a video on AA lines with thickness
I'm no expert, but a quick and dirty solution is probably just layering lines with half-integer distance between them: 1 line for width 1, 3 lines for width 2, 5 for width 3, etc.
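For what it's worth, a minimal Python sketch of that layering idea (draw_line here stands in for any single-pixel anti-aliased line routine, e.g. the one from the video):

import math

def draw_thick_line(x0, y0, x1, y1, width, draw_line):
    # Lay 2*width - 1 parallel lines, spaced 0.5 px apart along the line's normal,
    # centred on the original line: 1 line for width 1, 3 for width 2, 5 for width 3...
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return
    nx, ny = -dy / length, dx / length      # unit normal
    n = 2 * width - 1
    for i in range(n):
        offset = (i - (n - 1) / 2) * 0.5    # half-integer offsets
        draw_line(x0 + nx * offset, y0 + ny * offset,
                  x1 + nx * offset, y1 + ny * offset)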
There is also the probably more efficient DDA algorithm, which also uses floating point arithmetic. DDA stands for "digital differential analyzer". The derivatives are predetermined: add the correct slope/derivative to get to the next integer pixel. The difference from Xiaolin Wu is that only the pixels the line passes through are affected.
Bresenham's algorithm is very similar to DDA, it uses .5 as the error threshold.
DDA can also be done easily with scaled integers. Since it's a generic algorithm, it can be used for triangle rasterization as well. :)
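A bare-bones DDA sketch in Python, under the assumption that plot(x, y) sets one pixel (no clipping or endpoint handling here):

def dda_line(x0, y0, x1, y1, plot):
    # Step one pixel at a time along the major axis; the per-step increments
    # (the "derivatives") are computed once before the loop.
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        plot(round(x0), round(y0))
        return
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x0), float(y0)
    for _ in range(int(steps) + 1):
        plot(round(x), round(y))            # only the pixel the line passes through
        x += x_inc
        y += y_inc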
Can't wait for the next videos of this series!
I think there is something wrong with the opacity and blending, causing dark spots on the line. The alpha should be A^(1/2.2) (or 1-(1-A)^2.2, I'm not sure which), where A is the original alpha from 0 to 1.
Yes, I noticed it too
As the 2.2 exponent in your comment implies, this could be due to compositing in sRGB or another nonlinear color space. Blending in a linear color space and then converting to sRGB or Rec.709 might look better.
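Something like this sketch, using the plain 2.2 power curve (the exact sRGB transfer function also has a small linear segment near black):

def srgb_to_linear(c):
    return c ** 2.2                          # approximate sRGB decode

def linear_to_srgb(c):
    return c ** (1 / 2.2)                    # approximate sRGB encode

def blend_over(src, alpha, dst):
    # Composite src over dst in linear light, then re-encode.
    # src and dst are sRGB-encoded channel values in [0, 1].
    s, d = srgb_to_linear(src), srgb_to_linear(dst)
    return linear_to_srgb(alpha * s + (1 - alpha) * d)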
I would actually love a video on thick lines. :)
0:15 is this really a "perfectly anti-aliased" line?
It seems like a good approximation, but perfect anti-aliasing should use the orthogonal distance to the line, not just the delta in X or Y. That requires more compute power.
Hope you will make a video explaining an anti-aliased thick-line algorithm, because I haven't found a video about that at all.
With your comprehensive and solid explanations it would be awesome.
Nice style of explanation!
Thanks for the video. More vids about algorithms please
This channel has so much potential!! Amazing ❤️
There are two debatable assumptions here: first, that Bresenham looks worse than Xiaolin Wu (in my opinion Bresenham looks better); and more importantly, that it is worthwhile to draw single-pixel-wide lines. With modern displays having three or four hundred dots per inch, you can’t see a line one pixel wide.
A better approach is to draw lines that have a zero-width centre line and a non-zero width, by stroking a path using a circular or sometimes non-circular pen. Graphics systems compute the envelope of the line, which is the Minkowski sum of the pen and the zero-width line, and then fill it using a suitable anti-aliasing system.
With modern displays you'd use a GPU. But it's useful to know vaguely how this algorithm works.
And how do you think GPUs draw antialiased lines?
@@HoSza1 In the fragment shader, usually using a distance function, and fract for the opacity. They don't use Bresenham! A loop is exactly what you'd avoid!
@@tbird-z1r The essence of this algorithm is not a loop, but determining the proportion of the areas created by the line segment in the two pixels. You can do that in parallel without ever using a loop.
One pixel wide lines are still all over the place. The pixel is not as small as you think
This is a great idea: you can use the pixel matrix with float values, as if each pixel had its own sub-matrix of pixels.
This is a great way to gain resolution on the rounded corners of circles 😊😊😊
Good explanation of this kind of rendering trick!!
I'm making a graphics library right now, your videos really help.
In my opinion, the lines produced by your algorithm look stripy (dark and bright). Try pausing at 6:46, and you'll see what I mean.
I think you made a naive assumption that two pixels whose opacities sum to 1 will look as bright as a single pixel with full opacity, but this isn't true. The opacity of a pixel should be some non-linear mapping of the distance from the center of the pixel to the line.
Maybe try something like:
opacity = 1-numpy.float_power(distance_from_line, strength)
Where strength is set to some float between 1 and 2.
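Spelled out as a small self-contained sketch of that suggestion (the default strength is just a tuning guess, not something from the video):

import numpy as np

def coverage_to_opacity(distance_from_line, strength=1.8):
    # Map the 0..1 distance from the pixel centre to the line through a power
    # curve instead of using it linearly.
    d = np.clip(distance_from_line, 0.0, 1.0)
    return 1.0 - np.float_power(d, strength)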
One minor flaw - 45 degree lines will be about 30 percent thinner than horizontal or vertical lines, since the algorithm basically "shears" a square rather than rotating and stretching it.
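For the curious, the arithmetic behind that figure: the algorithm keeps the vertical cross-section at 1 px, so the perpendicular thickness of a 45 degree line is 1 * cos(45°) = 1/sqrt(2) ≈ 0.71 px, i.e. roughly 29 percent thinner than a horizontal or vertical line.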
Amazing video. One tip though when explaining math... use a lot of pauses to allow the words to transform from noise to comprehension in our minds. For example at this point: ruclips.net/video/f3Rs20k-hcI/видео.html when you say "the change in y", add a pause and then continue with "for one step in x". These two pieces of information are separate and have different meanings. Allow your listeners and viewers a chance to recognise the distinct meaning of each piece of information you're conveying.
As an OpenGL C++ developer, it's nice to find anti-aliasing techniques for non-textured objects. However, my gut feeling is that rendering many, MANY lines would take a significant performance hit due to all these for-loops to iterate through for each line. There is a quick and dirty method where you multiply the derivative for the slope of the line by 4.0 to thicken the line, but as with anything, I suppose you must benchmark it to really know...
An OpenGL developer should know that you aren’t just “forlooping” over the pixels, it’s parallelized with the GPU.
Gamma correction. I implemented this once. As I recall, I needed to add a gamma correction to the brightness to make it look nice.
Nice... That same guy also made a color quantization algorithm which I like a lot!
This method uses approximate distances from the line to the center of the pixel. There's going to be less distortion for a line that has nearly zero slope compared to a line that is nearly diagonal. I wonder how much better it would look to do a proper perpendicular distance from the line to the center point of the cell.
Note: it's possible to combine Bresenham's integer-only, multiply-free slope update computation with anti-aliasing à la Wu if you basically do 2 parallel Bresenham iterations per step of the main loop: one is for the position of the pixel to draw and controls the loop exit, while the other runs independently inside the loop and is responsible for the opacity. Both of these Bresenham parts need to be initialized differently though.
Note: these Bresenham algorithms can be further generalized to draw arbitrary polynomial curves (including the AA) if you subdivide the polynomial into screen-diagonal-bound parts and add root-finding (of these diagonals against the shape) before the main loops. In the main loop you then update the polynomial step using Newton's method of iterative polynomial differences (from Newton's divided-differences triangular scheme, except here we aren't dividing the differences because we want integer-only solutions, so we end up simply using two integers per resulting difference coefficient, updated in lock step, which is what Bresenham is all about).
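A loose Python sketch of the first note, roughly in the spirit of the classic fixed-point formulation; it only covers the 0 < slope < 1, x0 < x1 case, and plot(x, y, alpha) with alpha in 0..255 is an assumed helper, not something from the video:

def wu_line_fixed_point(x0, y0, x1, y1, plot, bits=8):
    # One integer accumulator carries the fractional y position in 16-bit fixed
    # point; its top bits double as the opacity split between the two candidate pixels.
    dx, dy = x1 - x0, y1 - y0
    err_adj = (dy << 16) // dx                 # fractional slope as fixed point
    err_acc = 0
    y = y0
    for x in range(x0, x1 + 1):
        frac = err_acc >> (16 - bits)          # distance below the upper pixel, 0..2^bits - 1
        plot(x, y, 255 - ((frac * 255) >> bits))   # nearer pixel gets the remainder
        plot(x, y + 1, (frac * 255) >> bits)       # farther pixel gets frac
        err_acc += err_adj
        if err_acc >= 1 << 16:                 # fractional part wrapped: step y
            err_acc -= 1 << 16
            y += 1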
It'd be neat if you looked into signed distance fields for fast anti-aliased vector graphics, it's quite versatile. I can give some resources if you want to look into it (or even discuss it)!
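For anyone curious, the core of the idea is tiny; a hypothetical numpy sketch of a segment distance field with a soft 1 px edge (all names here are made up):

import numpy as np

def segment_distance(px, py, ax, ay, bx, by):
    # Distance from points (px, py) to the segment (ax, ay)-(bx, by):
    # project onto the segment, clamp the parameter to [0, 1], then measure.
    abx, aby = bx - ax, by - ay
    t = np.clip(((px - ax) * abx + (py - ay) * aby) / (abx * abx + aby * aby), 0.0, 1.0)
    return np.hypot(px - (ax + t * abx), py - (ay + t * aby))

def stroke_alpha(px, py, ax, ay, bx, by, half_width=0.5):
    # Anti-aliased coverage for a stroke of the given half-width.
    return np.clip(half_width + 0.5 - segment_distance(px, py, ax, ay, bx, by), 0.0, 1.0)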
that was great. thank you!
This channel is so underrated. I've used Bresenham on the GBA but never heard of this one, but I bet it will look great.
What a nice video ! I didn't know this method 🙂
Nice. Do you plan to make a video on Bézier curves?
I think one can get a better result by assuming the line is actually a skinny rectangle, and calculating the percentage of each pixel it covers.
I believe I first saw this in the fall of 1989 at Apple (at the time of the Loma Prieta earthquake). I was reviewing code samples for the first issue of "develop", the new developer journal from Apple. Color QuickDraw was featured, and one of the papers was about color anti-aliasing techniques, particularly for rendering fonts IIRC. I wonder if Wu was involved. This was over a year before the first publication in Dr. Dobb's. (The editor of develop insisted on the lowercase d in the name.)
Still waiting for the Digital Differential Analyzer, or DDA; it could also be used with floats. Its most popular application would be in pseudo-3D raycasting, present in Wolfenstein 3D and other games from when machines weren't powerful enough to render true 3D.
fantastic series
7:25 you can also just check if the slope is greater than 1
amazing, hidden gem
Given things I've heard from Minute Physics about displaying brightness, wouldn't Xiaolin Wu look better if we took the square root of the distance instead of just the distance? My understanding is that this would smooth out some of the uglier non-uniformity of the lines produced.
Good idea thanks
Are you using something/some program to represent those pixels or are they just edited in?
Love it!
I wonder if there's a version of this that is tweaked for the edges of triangles.
Lol, the overlap distance is guaranteed to be 0.5, and I saw that the moment he talked about the horizontal overlap distance.
How would you draw thick lines?
Dude, I only found your channel today and was sad that there are so few videos; I guess God Almighty heard me.
I'm struggling with the end point of the line: if the end point is 0.1, 0.2, 0.3 or 0.4 above a pixel, I get a gap at the end.
You showed the partially finished result but not the end result.
Show how it looks completed.
For some reason that anti-aliasing is very annoying sometimes, especially when you're drawing a thin circle and try to fill it: it fills the entire thing.
Can we improve this algorithm to work only with integers?
I think yes: if alpha is in the range 0..255, we can multiply all distances by 256 and work in fixed point, e.g. 256 represents 1, 512 represents 2, 1024 represents 4... and the integer part comes from a fast right shift, e.g. 512 >> 8 = 2.
@@ivankramarenko Fixed point instead of floating point
I'm disappointed for two reasons. (It's nicely animated, no worries there.) First, the algorithm is simple: I was interested because it had a fancy name, but it turns out to be something I independently invented long ago, and probably many others did too. Second, you don't seem to apply sRGB correction, i.e. 50% + 50% ≠ 100%; in fact it's more like 69% + 69% = 100% in sRGB space. That's why in all your animations the line looks like it has gaps wherever two half-opaque pixels meet.
I thought we were going to see an integer only implementation, because that is the genius part of Bresenham. Sorry just a bit disappointed here.
Exactly it seems as comparing apples with burgers.
Don't expect too much from the masses...
The era of integer graphics is long gone. I did it in 1999, when you had to buy an 80387 math coprocessor to get hardware-assisted floating point operations.
Also, integer implementations rely heavily on divisions, as everything is scaled up to be divided later to fit on the canvas.
Integer graphics sucks.
@@Lemure_Noah true, but not my point here. The floating point variations are easy algorithms, where Bresenham is just very clever. I thought the author had upgraded Bresenham to anti-aliasing, but no such thing.
Well, there's no way to get an integer implementation, with fractional brightness values being the whole point.
Today's computing speed forgives this algorithm's performance, even in Python.
I remember learning graphics programming in the 90s with a 486. Efficiency was a top concern, and assembly code was commonly smuggled into code in other languages.
Nope. It was an extremely useful algorithm back in the day too. And the reason was simple: it provided excellent antialiased line quality with "speed that's close to that of non-antialiased Bresenham's line drawing", and it is not me making that claim, it's Michael Abrash, the man himself.
@@meandtheboyz4796 There was also Bresenham's run-length algorithm. I think that's what it was called, but I'm not 100% sure. There were modifications that completely avoided divisions for those old, slow CPUs.
I believe the N64 used a similar algorithm
Why the pixels above and below, not left and right?
It gets swapped once the line is going primarily upward. It's that way because it guarantees no overlapping pixels need to be drawn.
@@MilesIsReal That's reasonable, thanks!
The opacity is messed up. Lines look terrible, not "perfectly antialiased". Even if the opacity is corrected, the vertical/horizontal distance is just an approximation, per the triangle inequality. No DDA? Floats? Python?? C'mon! The video had a lot of potential for the way it is presented.
Hello!
nick wu!!
bro is really called shaolin wu. that's because wu tang is forever.
What a load of BS! 🎉
why is that?
The channel is called “Know BS”. 💩
BUT BRESENHAM LOOKS BETTER
i don’t think so
implementation is ass