* std::variant generates a vtable for each call to std::visit
* One way of dealing with an open set of operations together with an open set of types is to use type erasure, and that is what Sean Parent actually ended up with. You can bind by value (smart pointer) or by reference. You may have a class template that implements the interface, as in the talk. You can also design your vtable on your own with raw function pointers, if you want to control memory layout. You can even use some data-oriented design, with parallel arrays for function pointers and object pointers.
This is another great talk. The disambiguation of CRTP in terms of static interface and mixin is very helpful, and all of it was presented very clearly. The end is actually a bit of a cliffhanger that is resolved in another recent talk by Klaus at cpponsea (ruclips.net/video/m3UmABVf55g/видео.html), where he cites a "value-based OO" solution based on type erasure as a possible alternative to virtual functions.
Good talk, but tbh reaching for dynamic polymorphism and all forms of type erasure is just a lazy design decision. We all know (or should know) what problem we're trying to solve, what types we're using, and what time/space constraints we have, so just do it. Future-proofing brings nothing to the table, especially when there are performance pessimizations already baked in. There's no point pretending that allocating and writing to memory is the same as creating a file and writing to it just because the types have the same interface. Type erasure is premature pessimization: you know the type, but you're willingly forgetting it. The obvious design choice is to have N vectors of specific types instead of a vector of a base class with N types deriving from it. Abstract only if there's no other option.
Did the crowd really moan when he said virtual functions?