"Geometric growth behavior makes large vectors impossible unless the maximum size is known ahead of time" - as far as I can tell, this is false. Appending n items to an empty vector would perform O(log n) allocations, and O(n) move constructions and destructions, so overall there's only a linear amortized overhead, same as a deque.
The example from the video was: if you have 4GB of RAM and a 2GB vector that you need to add one more element to, doubling the reserved size will take up all the memory. For this kind of "large", there is a real problem (modulo, maybe, the overcommit capability of the OS).
Exactly. But it's actually worse than that, because you can't (typically) grow memory in place, so if your growth factor is 2, you need space for a new vector of twice the size, so temporarily you need three times the memory used by the current vector.
See my answer to Eric Norige below about the growth issue. As for the number of moves, it is O(n), but for typical implementations it will be between 2 and 3 times n, which might matter, especially if your elements do not benefit from move semantics. So pre-reserving is always desirable if you know the final size (or know a reasonable upper bound).
> you can't (typically) grow memory in place
Even if the low-level allocator/memory manager supports in-place reallocations, std::vector wouldn't be able to use this feature simply because std::allocator's interface doesn't provide any equivalent to realloc() from the C standard library.
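Not from the talk, just a minimal sketch of the pre-reserving suggestion above (the function name and element type are made up for illustration):

#include <cstddef>
#include <string>
#include <vector>

// Build a vector when the final size is known: one reserve, no geometric regrowth,
// so no extra allocations and no moves/copies of already-inserted elements.
std::vector<std::string> make_names(std::size_t n) {
    std::vector<std::string> v;
    v.reserve(n);                                   // single allocation up front
    for (std::size_t i = 0; i < n; ++i)
        v.push_back("name-" + std::to_string(i));   // never triggers a reallocation
    return v;
}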
Set speed to 1.25 for best experience.
Thank you
Efficiency you mean :)
True dat. Still feels sluggish.
x1.5
Very well said, Sir. Very thoughtful talk. Excellent examples in the slides.
Would 43:04 actually be optimal? I wonder if we might get a strlen call due to the indirection.
That was quite interesting, for instance the good usage of parameters and return values.
Good presentation.
Alan looks like a hybrid between Bryan Cranston and Kiefer Sutherland
Was short string optimization not prohibited by the latest standard?
No. Copy-on-write was prohibited from C++11 on.
@SebastianHasler Thanks, I couldn't remember.
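A quick way to see SSO in action (hedged: the exact inline capacity is implementation-defined; 15 or 22 are just typical values on 64-bit libstdc++/libc++):

#include <iostream>
#include <string>

int main() {
    std::string s = "short";   // typically stored inline in the string object itself, no heap allocation
    std::cout << "size: " << s.size()
              << ", capacity: " << s.capacity()
              << ", sizeof(std::string): " << sizeof(std::string) << '\n';
}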
Interesting intro
"Geometric growth behavior makes large vectors impossible unless the maximum size is known ahead of time" - as far as I can tell, this is false. Appending n items to an empty vector would perform O(log n) allocations, and O(n) move constructions and destructions, so overall there's only a linear amortized overhead, same as a deque.
The example from the video was if you have 4GB ram and 2GB vector that you need to add one more element to, the doubling of reserved size will take up all memory. For this kind of "large", there is a real problem (maybe modulo overcommit capability of OS).
Exactly. But it's actually worse than that, because you can't (typically) grow memory in place, so if your growth factor is 2, you need space for a new vector of twice the size, so temporarily you need three times the memory used by the current vector.
See my answer to Eric Norige below about the growth issue. As for the number of moves, it is O(n), but for typical implementations it will be between 2 and 3 times n, which might matter, especially if your elements do not benefit from move semantics. So pre-reserving is always desirable if you know the final size (or know a reasonable upper bound).
> you can't (typically) grow memory in place
Even if the low-level allocator/memory manager supports in-place reallocations, std::vector wouldn't be able to use this feature simply because std::allocator's interface doesn't provide any equivalent to realloc() from the C standard library.
Get a glass