Maybe you can also add -DNDEBUG to your compiler flags for the benchmark. Some implementations of std::vector will check bounds in std::vector::operator[] in debug mode, which is an extra step compared to raw arrays. If you switch on -DNDEBUG, the difference might be even smaller.
I think they only check bounds on Windows. In my experience they'll just overrun on Mac and Linux in debug mode all the same.
@@lordadamson GCC doesn't check bounds for the std::vector::operator []
In any case his benchmarks include the cost of memory allocations and rand() calls. This is like a joke video or something.
The [] operator seems to never check bounds; the .at() member function has it, I think.
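For reference, a minimal sketch of the difference (only .at() is guaranteed to check; indexing out of range with operator[] is undefined behavior):

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3};

    std::cout << v[2] << '\n';          // operator[]: no bounds check; v[10] here would be undefined behavior

    try {
        std::cout << v.at(10) << '\n';  // .at(): always checked, throws std::out_of_range
    } catch (const std::out_of_range& e) {
        std::cout << "out of range: " << e.what() << '\n';
    }
}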
@@noop9k Memory allocations are part of the use model, while rand is not. For heavy hitters we always use our own allocator in C to keep the performance in check.
I default to vector unless I know the size I need and that it will never change, but I try to be conscious of memory allocations in program design. Memory allocation is the only really slow part of vectors, so it helps to preallocate extra if you know you may need it, to minimize reallocations. With the non-contiguous containers you need to consider performance more carefully, but of course they offer much more flexibility. The extra program size comes from including the header, and it's a one-time cost.
Your code at 5:13 in the first loop is copying v into matrix[i]. This is slow because it requires allocating a new array and copying all the contents of v into it. Better options would be to either (1) resize and fill matrix[i], (2) matrix[i] = std::move(v), or (3) matrix[i].swap(v).
Moving or swapping just transfers ownership of the underlying array to the new vector. The advantage of swap is it works on C++98, while move semantics were added in C++11.
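A minimal sketch of the three options, assuming a fill loop roughly like the one in the video (ROWS, COLS, and the per-row fill are placeholders):

#include <utility>
#include <vector>

constexpr int ROWS = 1000, COLS = 1000;   // placeholder sizes

int main() {
    std::vector<std::vector<int>> matrix(ROWS);

    for (int i = 0; i < ROWS; ++i) {
        std::vector<int> v(COLS, i);      // stand-in for the per-row fill

        // (1) slow: allocates a new buffer and copies every element
        // matrix[i] = v;

        // (2) C++11: steals v's buffer, no copy
        matrix[i] = std::move(v);

        // (3) C++98 equivalent: swap buffers instead of copying
        // matrix[i].swap(v);
    }
}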
I agree. Having a vector outside of the loop and growing it with every loop iteration if it is too small might speed it up a little bit.
Good point. Thanks!
That should be optimized away at -O2.
FYI do a .reserve(x) in this kinda case. This is probably the optimization that the compiler did.
I'm glad you accepted the lack of significant timing difference between the different methods. I've seen several other channels that stop at the first difference/result they can see and call it a win, leading to the comment section getting furious about it (there's for instance a dozen of videos out there comparing C# with C++ and saying that C# is much faster because they forgot to turn on optimizations in C++). You've earned yourself a new subscriber :)
Wouldn't it be a better approach to measure the single operation you're trying to optimize (the matrix addition)? Right now you're measuring total time for setup (allocation and initialization), the operation (the matrix addition), and teardown (deallocation and application termination). I find when optimizing for speed it's best to linearize any logic (unroll any 2D loop into a 1D array) and to prefer the most likely outcome of a conditional operation at the start of a code section. Additionally, in the case of arithmetic operations, the use of vectorized instructions should squeeze out more performance.
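For what it's worth, a rough sketch of timing just the addition kernel with std::chrono, repeating it to get a stable number (the sizes and the kernel are placeholders, not the video's exact code):

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    constexpr std::size_t N = 512;              // placeholder matrix dimension
    constexpr int REPS = 100;                   // repeat to average out noise
    std::vector<int> a(N * N, 1), b(N * N, 2), c(N * N);

    auto start = std::chrono::steady_clock::now();
    for (int r = 0; r < REPS; ++r)
        for (std::size_t i = 0; i < N * N; ++i) // linearized 2D loop
            c[i] = a[i] + b[i];
    auto stop = std::chrono::steady_clock::now();

    std::chrono::duration<double, std::milli> per_rep = (stop - start) / REPS;
    std::printf("avg addition time: %.3f ms (checksum %d)\n", per_rep.count(), c[0]);
}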
Yes, his tests are downright silly.
Why would you compare the performance when they have different use cases?
I heard that std::vector is a bad idea and instead we should do the indexing ourselves.
What is your opinion on that?
This definitely is the case. If you look at the simple example of an int[][], access is super slow due to the double dereference and the number of cache misses (because the memory for each array in the 2nd index can be in a completely different location for each member of the 1st index). The same rule applies to a vector of vectors, suggesting it isn't a good idea either.
@@gvcallen what are cache misses?
@@ShivamJha00 The CPU automatically "caches" blocks of memory (in chunks of 64 bytes), meaning it stores them in the cache, which is much faster to access than normal RAM. So every time you read from memory, you are usually reading from cache first. But the cache is obviously much smaller, so the CPU needs to be smart about what it stores in the cache. This means that often you'll try to read from an address that isn't in cache, resulting in a "cache miss". The CPU then has to do the slower read from main memory. We want to write code whose data is stored close together so we avoid cache misses when possible.
If you need a matrix, use a matrix, not a vector of vectors. You can use a linearized vector as a base for your matrix implementation.
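Something like this, as a sketch: a row-major matrix backed by one flat std::vector, so element (r, c) lives at r * cols + c (the names are just illustrative):

#include <cstddef>
#include <vector>

// One contiguous allocation; row traversals are cache-friendly,
// and there is no per-row pointer to chase.
class Matrix {
public:
    Matrix(std::size_t rows, std::size_t cols)
        : rows_(rows), cols_(cols), data_(rows * cols) {}

    int&       operator()(std::size_t r, std::size_t c)       { return data_[r * cols_ + c]; }
    const int& operator()(std::size_t r, std::size_t c) const { return data_[r * cols_ + c]; }

private:
    std::size_t rows_, cols_;
    std::vector<int> data_;
};

int main() {
    Matrix m(3, 4);
    m(2, 3) = 42;              // no double dereference, no per-row allocation
    return m(2, 3) == 42 ? 0 : 1;
}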
I remember from my Data Structures & Algos class a few years back that when you use a vector you are essentially just resizing an array dynamically by doubling its size.
Yes. Under the hood, a vector is just a pointer to a heap-allocated array. The advantage of a regular C array can be that it's on the stack (so no dynamic memory allocation needed). If you know the size of your vector beforehand you can eliminate the need to resize by just reserving enough memory from the start.
@@taragnor so, there is std::array, which is the C++ equivalent to a fixed size raw C array.
@@muadrico Ah yeah. I forgot about that. I rarely use basic arrays for anything honestly.
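To see the growth policy in action, here's a tiny sketch that prints the capacity each time a push_back triggers a reallocation (the growth factor is implementation-defined: libstdc++ doubles, MSVC grows by about 1.5x):

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    std::size_t last_cap = 0;

    for (int i = 0; i < 1000; ++i) {
        v.push_back(i);
        if (v.capacity() != last_cap) {   // a reallocation just happened
            last_cap = v.capacity();
            std::cout << "size " << v.size() << " -> capacity " << last_cap << '\n';
        }
    }
}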
std::vector is life. std::vector is love. Just don't forget to .reserve() if you know how many elements you are going to put into it. Not reserving when you know the number of elements is basically a performance bug.
std::array is love when you know the size at compile-time. Something like a list of compile-time constants or configuration values.
I've heard that std::deque is cool: basically a bit slower than a vector but with faster insertions at the beginning. Do profile before using it; default to vector and profile the performance-sensitive parts.
std::map is bad; just default to std::unordered_map unless you know you need sorted keys, and look at third-party libraries for more specific implementations if you have a performance-sensitive use case. std::list is implemented fine, but it's just bad because linked lists are bad on modern computers.
A lot of the time when you know you're not constrained with memory but you need the performance, it helps preallocating space in a vector. You can do that using the reserve method, which allocates but does not initialize the memory.
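A quick sketch of the distinction between reserve and resize (reserve only allocates capacity; resize also constructs elements):

#include <cassert>
#include <vector>

int main() {
    std::vector<int> a;
    a.reserve(100);            // capacity >= 100, but still empty
    assert(a.size() == 0 && a.capacity() >= 100);
    a.push_back(1);            // no reallocation until the reserved capacity is exceeded

    std::vector<int> b;
    b.resize(100);             // 100 value-initialized (zeroed) ints
    assert(b.size() == 100 && b[99] == 0);
}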
Comparing vector with array doesn't make sense; it's like comparing malloc with a static array. Of course, the array is better in terms of performance, but the vector is more flexible.
All a vector is, is a pointer to some memory, the size of that memory, and the number of elements stored in that memory, with behavior to increase the size (generally by doubling) and reclaim it.
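Roughly like this, as a toy sketch (real implementations typically store three pointers, go through an allocator, and handle copying and moving properly, but the bookkeeping is the same idea):

#include <cstddef>
#include <cstdlib>
#include <cstring>

// Toy illustration of the state a vector of int carries around.
struct ToyVecInt {
    int*        data     = nullptr;  // pointer to the heap block
    std::size_t size     = 0;        // elements currently in use
    std::size_t capacity = 0;        // elements the block can hold

    void push_back(int x) {
        if (size == capacity) {      // out of room: grow by a constant factor
            std::size_t new_cap = capacity ? capacity * 2 : 4;
            int* new_data = static_cast<int*>(std::malloc(new_cap * sizeof(int)));
            if (size) std::memcpy(new_data, data, size * sizeof(int));
            std::free(data);
            data = new_data;
            capacity = new_cap;
        }
        data[size++] = x;
    }

    ~ToyVecInt() { std::free(data); }  // reclaim the block
};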
It would be nice if you used Google Benchmark for this! Or quick-bench, for that matter.
Thanks!
This goes to my question from that video: does all that space allocation cause a hit in execution speed? Now I know.
Thanks again.
Really though, if you code it right you should only be allocating space once. It's possible to set the vector to reserve a specific amount of space when you create it (or even after). If you know the projected size of your vector, you shouldn't be resizing it. And if you don't know the maximum number of elements, then you can't use a regular array anyway, you must use a vector (or some other structure capable of dynamic growth).
@@taragnor, yes, that's true. But sometimes you can't precisely predict the amount of memory the application will consume during execution and therefore you have to decide in development which data structure will be best. Dr. Sorber's example gives guidance on factors to consider.
That’s what I was really asking about.
Thanks for your comments. They too help.
Well, the first example is a little silly, which I guess is kinda the point; this is something a C++ newbie could do. You really need to know what each type is meant for: vector for when you don't know how much space you need, std::array for when you always know the size, etc. The vector has a little more overhead overall, but when accessing items it should result in the same or very similar assembly to C. Overhead is often from program size and having interrupts enabled.
Sir Jacob, for some reason when a word ends with the letter S, your mic makes a loud noise and it sounds annoying. Please try to fix this issue. Thanks for the great content!!
is rand() good enough for this benchmark?
yes
Why not allocate the same arrays and vectors in all versions and just not use the ones you don't want to check?
I haven’t watched your video completely yet but I can guess you are going to get very similar results or even results varying significantly as you change the total size, because this is not how you write stable benchmarks.
The main bottleneck here is memory bandwidth and possible cache aliasing. It is (likely) going to hide the perf differences between vectors, arrays etc.
These would be visible with much smaller arrays and large number of repetitions.
And you don’t really get stable measurements from single-shot runs in any case.
Update: I hadn't noticed that you are also measuring the speed of memory allocation, not just the performance of the actual algorithm.
And the rand() calls are also a part of the measurement.
This is silly and a waste of time.
The CPP variable in a Makefile is for the cpp pre-processor. The C++ compiler should be specified in CXX and its flags in CXXFLAGS. Also, it's better to mark "clear", "all" and "timing" as .PHONY.
My previous post was deleted, maybe too much code in a comment and some YouTube algorithm didn't like it? Any idea what's going on?
Channel owner can also delete comments ;)
@@noop9k It seems not in this case. My posts about make that I wrote on a few different channels were deleted as well :-/. Could be something in the makefile syntax that triggers the algorithm, who knows.
@@voytechj I sometimes have this exact situation with my comments silently deleted, no matter what content, until I change VPN server.
Thank you ❤️
Haven't seen you cover C++ before.
When I saw this video the first thought that came to my head was, "Why is this even a question?" The allocation process for arrays will be faster. It would either simply be allocated off the stack or allocated in memory with the program image. There is no comparison between those allocation methods and dynamic memory allocation. In terms of performance, a vector will be slower because of any overhead the design might have. I would suspect that at a minimum the vector would at least check if the index is in bounds. On the other hand, arrays are rather dumb objects with no built-in security features in C/C++. They will blindly trust the programmer not to destabilize the program. That isn't to say that vectors are useless. If you need to dynamically allocate an array of unknown size, they're perfectly fine. I personally never use them since I prefer allocating and managing the memory myself.
Theoretically vectors should be marginally slower. To run this test properly you need to page in the memory first, i.e. overwrite the array/vector multiple times with random values and then run the test. And yes, compiler optimizations are super important; without them, the code that lets you debug the STL is a huge overhead. I'm a kernel engineer so I don't use the STL or templates, but the whole idea behind them is that you get static checking, algorithm copying, etc. for free. So in theory you could create a template for a 2D matrix that the compiler reduces down to C, and then you won't... or shouldn't have performance differences. But yes, you would have to do the template coding to support this. For many specialized cases this is trivial.
Hi Dr. Sorber, can you start a C++ series? We have benefited greatly from your C series.
checkout the cherno: ruclips.net/video/18c3MTX0PK0/видео.html
@@herrdingenz6295 cherno is really the best
Hopefully, not. 💁♂️
Why don't you do some real comparison of your code for the different types (array vs. vector) on different compilers using Godbolt (like Jason Turner does)?!
Well, yes, slower, for the following reasons [I did not watch the video]:
1. Arrays are typically stack-allocated while a vector's elements live on the heap.
2. Adding elements to a vector beyond its current capacity requires allocating a new heap block and copying all existing elements into it; since capacity and size are usually not equal, growing a std::vector means a bigger heap request to the OS.
3. Allocating on the heap requires an OS heap-allocation request. On Windows the heap allocator is backed by a kernel object, meaning that to allocate some space on the heap you may have to jump from user space to kernel space and do lots of checking, like security checks and call-stack checks (to verify the request really comes from user space and not from an application running in kernel mode).
4. Windows does some tricks to reduce heap fragmentation, like virtual memory with Windows handles, sometimes moving allocated data to open room for the next allocation request, and this is costly.
5. In Windows 10 and later there is a new heap implementation called the Segment Heap; if enabled, another layer of tricks is added to handle heap allocation requests.
So, don't use the heap unless you have to. If you want to allocate dynamic memory on the stack, use alloca() or an assembly function like the one below. The following code only works on Windows due to the x64 Windows calling convention, which is not the same as on Linux, where the first integer argument is passed in rdi rather than rcx:
AllocateHeap proc
    pop  r11        ; save the return address before moving rsp
    sub  rsp, rcx   ; carve rcx bytes out of the caller's stack
    mov  rax, rsp   ; hand back a pointer to the new block
    push r11        ; put the return address back
    ret
AllocateHeap endp
Last warning: if you are using the stack, make sure you check for stack-overflow exceptions. This exception can be reached easily, especially in recursive functions [that's why I hate recursion, unless I have to use it, of course].
Jacob, your conclusions are a bit silly. The singlearr and singlevec tests both use static allocation. Why would you use a vector in this case? A vector uses more memory. Then you switch on the optimization; the compiler optimizes the code and makes it about the same speed. The conclusion should be: use the optimizer for faster code and smaller size. Comparing the code size instead of the data memory size is also silly. For a small program like this the majority of the code size is the C runtime.
I implemented a 256-bit encryption algorithm. After trying arrays and vectors extensively on files from 6 MB to 9 GB, I have to say there is absolutely no need to use arrays whatsoever, because when I try to ensure that the freeing and reallocation of array memory is secure, I have to write a class to handle the arrays. I watched one of the C++ talks by the founder; vector is really there to be a drop-in replacement for arrays, and it handles exactly what I wanted for arrays. Also, be sure to preallocate and preload things; asking the OS for memory can be slow, more so with vectors.
The interesting thing to look at for vector would be time for realloc and memory fragmentation.
Vectors use contiguous memory allocation because they are implemented using arrays, of course.
@@mitya , right, but over time you'll be realloc'ing all over the place, making malloc slower, no?
@@paherbst524 If you know how many elements your vector will hold at most, you can use the reserve() function right after declaring the vector to reserve enough space for it from the beginning. It will not reallocate unless that capacity is exhausted; then it will reallocate again, but your program will not die LOL. With stack-based arrays, on the other hand, once the array becomes full there is no way to extend its capacity, and the program will run into problems.
@@paherbst524 The same is true for regular C arrays, I guess.
I can tell that C++ isn't your thing, that's not really how you should use vectors 😅
Please be more specific. Are you concerned that I'm using a vector of vectors? Critiques without explanations or recommendations aren't very helpful to other viewers.
@@JacobSorber A big mistake of yours:
Instead of writing:
std::vector<std::vector<int>> matrix(ROWS);
you should write:
std::vector<std::vector<int>> matrix(ROWS, std::vector<int>(COLS));
That way the matrix vector and its internal elements (of type std::vector<int>) are allocated all at once. The program won't need to reallocate matrix every time you want to push another std::vector into it; you won't even need to push anything. So much, much faster.
@Jacob Sorber Really? Come on, be honest like L. Torvalds and tell us frankly that you hate C++. You surely know that vector is not intended to be used the way you used it. Furthermore, I am sure you know how to properly benchmark as well. However, your C-related content is great.
@@muadrico you are right about C++. It's 💩
@@PavitraGolchha Nope.
Dynamic allocation is slow in general
It's definitely slower than not doing an allocation. I don't get the point of this video.
I don't know why this is recommended but I know that vectors are what accelerator controls
I think you might be thinking of mathematical vectors, as in members of the set {(x,y,z) : x, y, and z are real numbers}.
This video means vectors as lists of variable (finite) length, e.g. [683, -73, 74, …, -82] or ['g', '9', ' ', '€']. The first is a vector of integers and the second is a vector of characters.
@@gregoryfenn1462 Oh, then it makes even less sense that this got recommended.
answer: no
No.
why
You should not use a vector of vectors; use something appropriate. A C++ programmer should know that. Also, this comparison does not look like a representative, valid benchmark (for several reasons). Looks like you are ranting about C++ lately 🤔. Really disappointing, because as far as C goes, I have enjoyed your content so far.
How is this a rant? It's a simple comparison (never meant to be a definitive or scientific comparison), and the take-away (if there is one) is that the cost is insignificant. It's not anti-C++ in any way.
@@JacobSorber You answered your own question in that very same comment. Thanks. You made the comparison in an improper way, therefore you can't conclude anything from it, but you didn't explain that in the first place. Inexperienced programmers take your conclusion at face value and now think C is better than C++ and std::vector is bad, so they use plain C arrays instead. IMHO, I don't think that was your intention, but you need to be more sensitive about such things.
With "ranting" I refer to your latest videos in general, and I really don't get what you are up to, especially because until recently I really liked (and still like) your content.
A vector is extremely important and the correct answer for representing a graph.
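For example, an adjacency list is just a vector of per-node neighbor vectors. A tiny sketch (the node count and edges here are made up):

#include <iostream>
#include <vector>

int main() {
    const int nodes = 4;
    std::vector<std::vector<int>> adj(nodes);   // adj[u] = neighbors of u

    adj[0] = {1, 2};
    adj[1] = {2};
    adj[2] = {0, 3};
    adj[3] = {3};

    for (int u = 0; u < nodes; ++u) {
        std::cout << u << ":";
        for (int v : adj[u]) std::cout << ' ' << v;
        std::cout << '\n';
    }
}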
UwU
UwU
UwU
UwU
@@0xdead982 UwU++;
@@michaelespinoza4562 OwO = !UwU;