Dependency Injection | Prime Reacts
- Published: 20 Nov 2024
- Recorded live on twitch, GET IN
/ theprimeagen
Link to reviewed video: • Dependency Injection, ...
Posted by: CodeAesthetic | / @codeaesthetic
MY MAIN YT CHANNEL: Has well edited engineering videos
/ theprimeagen
Discord
/ discord
Have something for me to read or react to?: / theprimeagenreact
Hey, I am sponsored by Turso, an edge database. I think they are pretty neat. Give them a try for free, and if you want you can get a decent amount off (the free tier is the best, better than planetscale or any other)
turso.tech/dee...
We need to praise CodeAesthetic more: amazing production quality, deep insights, real-world advanced examples, 0 bs. All of it made single-handedly by an independent small youtuber. Such a gem
"Don't write comments". It's all hipster nonsense. That is unusable in the real world.
It’s all hype no substance. Just read one chapter of POODR on dependency injection and you are good to go
@@gagagero
1. if you don't share an opinion, asking the other to stop "writing comments" is quite ignorant.
2. Considering dependency injection unusable in the real world is even more ignorant. 🙂
Or did I miss your point?
@@josefpharma4714 He's talking about CodeAesthetic saying code shouldn't have comments, not telling people to stop commenting...
@@SlavaEremenko is it good if you don’t use ruby?
I have three rules for DRY
1: if the code has reason to change independently, just copy paste instead.
2: if I notice that a function has become aware of where it is being used, I know that I messed up. A sign of this is when parameters keep getting added. Usually those parameters contain words like skip or include.
3: don’t bother reusing trivial stuff
Usually rule 2 is a result of not following rule 1.
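Rule 2 can be made concrete with a small sketch (all names here are invented for illustration, not from the video):

```typescript
// Hypothetical illustration of rule 2 above: a shared helper that has
// grown "skip"/"include" flags because it knows too much about its callers.
function renderUserList(users: string[], skipSort: boolean, includeHeader: boolean): string {
  const rows = skipSort ? users : [...users].sort();
  return (includeHeader ? "USERS\n" : "") + rows.join("\n");
}

// Often cleaner (per rule 1): two small functions, each owned by its
// use case, even at the cost of a couple of duplicated lines.
function renderAdminReport(users: string[]): string {
  return "USERS\n" + [...users].sort().join("\n");
}

function renderDropdown(users: string[]): string {
  return users.join("\n");
}
```

Each flag added to the shared helper is a signal that its callers have diverging needs and the "reuse" is no longer buying anything.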
#2 is great. I call them slutty functions (they're too desperate to make everyone happy).
I'd also add that base functions should mostly be abstract. If you find yourself writing virtual functions to avoid typing more words, you should either copy/paste or make a component. When I have trouble figuring out what a class is even doing (without digging through its parents), I know I fucked up.
#2 is a symptom of mistaking syntactic repetition for semantic repetition. The former should not be refactored.
@@DryBones111 likely, with a strategy
@@PoorlyMadeSweater HAHAHAHAHAHAHAHAHAHAHAHHAAHHAA
Yep, some ppl think I am being lazy when I tell them to just copy the component and change it a bit. It is difficult to explain that those are different use cases and I am trading some duplication for less buggy code
Ohh fLiP dO yOur jOb
Scuffed audio
mean
He probably could've EQd it out, but I bet he would make some lame excuse like editing 20 videos. Pfffft
i hate you guys
@@flipmediaprodI for one, thank you, the videos wouldn't be as fun otherwise
So, for the testing: they are testing the actual uploading code with the fake encryption, so if that test fails, you know the uploading has an error, rather than just knowing that the error is in either uploading, encryption, or the key. Then for the encryption test, using a fixed test key makes total sense, and then integration testing by uploading an encrypted file, but still with a fixed test key. This makes total sense as a great way to test stuff.
And he might have not shown all the upload tests that use mock encryption and mock storage. If you have a good mockup of the storage service (maybe Amazon provides one?), you can test error handling etc in isolation. Showing like 15 test cases wouldn't have fit in the video tho.
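The shape of that test setup could be sketched like this (a minimal illustration with hypothetical names, not the code from the video):

```typescript
// A sketch of the testing idea above: the uploader takes its encryptor
// as a dependency, so the upload test uses a trivial fake and any
// failure points at the upload logic itself. All names are hypothetical.
interface Encryptor {
  encrypt(data: string): string;
}

class Uploader {
  constructor(
    private enc: Encryptor,
    private store: (blob: string) => void,
  ) {}

  upload(data: string): void {
    // Encrypt, then hand off to storage -- the only logic under test here.
    this.store(this.enc.encrypt(data));
  }
}

// Fake encryptor: deterministic and obviously not real crypto.
const fakeEnc: Encryptor = { encrypt: (d) => `enc(${d})` };

const stored: string[] = [];
new Uploader(fakeEnc, (b) => stored.push(b)).upload("hello");
```

If this fails, the bug is in `Uploader`, not in encryption or key handling, which is exactly the isolation the comment describes.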
In Haskell, dependency injection is the norm (as your Monads are implicitly given to monadic functions when called, and they can easily be replaced with compatible Monads). Since it's such an integral part of the language, having DI everywhere is no longer an issue, since the typing system is so beautiful.
I program in Haskell, by the way.
Like a vegan crossfitter... :p
I think it's worth pointing out that even though you say it's an integral part of the language, it's not like the designers of Haskell set out to design a language that has built-in support for dependency injection. It's more that the features you have available in Haskell lend themselves well to implementing dependency injection in a very idiomatic way.
Do you use Arch Linux too?
Then you've got monad transformers which let you dependency inject your dependency injections
Thank you for your service!
DRY is often misunderstood. The idea isn't to never repeat yourself, but to have all _information_ have a single source of truth. That is, you don't want to have to change some fact in several places since that will definitely come to haunt you.
In the Pragmatic Programmers book they actually explicitly give an example of deliberate code repetition in case it represents different concepts/information.
Just FYI :)
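The "single source of truth for information, not zero textual repetition" reading can be sketched like this (numbers and names are invented for illustration):

```typescript
// Sketch of the point above: one *fact* (the upload limit) has one
// source of truth, so the check and the error message can never drift apart.
const MAX_UPLOAD_MB = 25; // hypothetical limit

function isAllowed(sizeMb: number): boolean {
  return sizeMb <= MAX_UPLOAD_MB;
}

function rejectionMessage(): string {
  return `Files may be at most ${MAX_UPLOAD_MB} MB`;
}

// By contrast, this 25 is only *coincidentally* equal to the one above;
// it represents unrelated information, so deduplicating the two would
// couple things that should change independently.
const MAX_USERNAME_CHARS = 25;
```

Changing the upload limit touches one line; the coincidental 25 stays untouched, which is the deliberate-repetition case the Pragmatic Programmer example makes.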
So true! This is also why WET isn't really any better
I think that's conceptually true, but in practice the "haunting" is often just having to change the same line in 4 places when it breaks during a refactor, which is often faaaaaar better than the deep rooted architectural cancer that comes with the dependency and inheritance spaghetti people use to avoid writing some simple procedure a few times.
I agree that important dogmatic information is def something that should come from one source.
@@PoorlyMadeSweater Yes I agree people shouldn't go nuts that way. Apply pragmatism at all times I guess.
My personal very simple explanation is that I should not do the work for the computer, work that it is actually excellent at.
Repetition should be programmed where possible, not directly written. And even in cases where I am in a situation where I actually need tons of duplicate code due to technical limitations, then it's time to look for ways to generate that code (metaprogramming, macros).
@@jongeduard To emphasize the point I was trying to make: sometimes two segments of code might be identical, but they represent completely unrelated things, a consequence of which is that if one changes, the other shouldn't. Had you abstracted this repetition away, your code would've been harder to change for the wrong reasons.
Not disagreeing btw, just wasn't sure if I made my point clearly enough.
Having had a good few dozen juniors come and go, a good chunk of them try to think of the pattern before prototyping. Some of them have this mindset of "If I use a good pattern from the start, I won't need to rewrite anything". Then they come up with the most convoluted solutions to easy problems. That was before we had code review, so thankfully things are way better now.
I understand this line of thinking, and it makes sense on a logical level, but never on a practical level.
And yeah, I think 99% of the good readable and testable code are strategy patterns that spawned after a functional prototype.
I agree with this. Even seniors I encounter are often bent on using design patterns everywhere. I tend to favor using patterns when they become apparent in what I'm doing, as opposed to trying to shoehorn some desired behavior into a pattern. The only exception is when I'm thinking about architectural patterns while standing up or integrating with a service
Brother, you're like my go-to for learning the how-tos of programming. Because of your teaching and memes I got a new job with a higher salary. Thanks Prime.
Strategy Pattern is one of the things that always seemed so obvious to me that I question the utility of giving it a name.
Yes! I was just like: This shit has a name?? But as he said it just comes with experience
try other patterns, get out of your comfort zone!
@@pah967 That's a weird comment that assumes a lot I didn't say.
"Strategy pattern is an obvious pattern" does not equal "Strategy pattern is literally the only pattern I ever use."
Turning your comment back around: maybe don't give unsolicited advice especially when it is also not even relevant advice.
@@timseguine2 well, based on your reply to my half-assed attempt at promoting exploration, I can only say... well aren't you a belligerent self-important c*nt?
@@pah967 in addition to the above, if you're only hitting nails, then all you need is a hammer 😉 using some arbitrary design pattern for the sake of it is a quick way to ruin your code
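For readers who haven't seen it named: the strategy pattern the thread keeps calling "obvious" really is this small (a sketch with invented names):

```typescript
// A minimal strategy pattern: the behavior that varies is passed in
// as a value. Names are invented for illustration.
type PricingStrategy = (base: number) => number;

const regular: PricingStrategy = (base) => base;
const holidaySale: PricingStrategy = (base) => base * 0.8;

// checkout doesn't know or care which pricing rule it was given.
function checkout(base: number, price: PricingStrategy): number {
  return price(base);
}
```

In a language with first-class functions this really is "just passing a function", which is probably why the pattern feels obvious long before you learn its name.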
the only bad thing about dependency injection is that it breaks my "Go to Definition" as IDEs can't figure out which implementation you want to jump into so they take you to the interface .. pain
If you’re in Visual studio you can ctrl + F12 on the name and it will take you to the implementation.
On Jetbrains IDEs this is not an issue.
@@juxuanu it actually is, if you just use hotkeys and don't know there's a different one to get the implementation. if your current IDE can't figure out implementation from interface, it's time to upgrade to an IDE made in the last 2 decades. Or learn the hotkey.
@@tsalVlog I'm pretty sure most LSPs can list implementations if I'm not mistaken?
That's my biggest gripe with it. These graphs show you some data flow, which is nice and well, but actually seeing it is borderline impossible without a debugger. I don't mind that it can be complex (it has to be), but it's even worse to navigate than inheritance...
And the simple thing is, that you "shouldn't" care about how the storage works, but you absolutely have to care when you deal with some obscure issue, or must extend it, etc.
I agree: don't repeat yourself a lot (3-5 times is fine), and code first so that things work; then you can clean and optimize. Once you start cleaning and tuning, you can write tests to be sure and stable; before that you don't need the tests too much.
Great video and great comments ;)
Mocking allows you to test the happy and the bad paths... You can tell your mocked dependencies to return a bad result or simulate a service outage, and handle that in the tests; that way you know your code is going to handle everything properly...
sort of
this works with services you run and control; it gets a bit trickier when you are mocking a service you don't fully know or control
@@ThePrimeTimeagen yes, I think this works well for our services and well documented third-party services
@@ThePrimeTimeagen But if you don't understand the service you're using completely, then you probably also don't want to handle all of its errors. You probably just want to catch ThisService::SaveFailedException, and in that case, a mock impl that just returns a generic "shit went bad", would be equivalent.
It comes down to costs often. It is harder for some teams to get the budget, so they have to mock/short circuit/substitute the costly resources so they can validate the rest of the pipeline. We depend on SLAs to provide the certainty of those resources, which are then validated during integration testing.
Ideally you would never want to mock it and your team would have it budgeted.
@@ThePrimeTimeagen that's the thing. With unit testing you are testing a unit. You don't want a broken AWS or a broken image scaler to break your uploader test. These should be tested separately, so when something breaks it doesn't affect testing of your other components which are not broken.
The testing issue raised around 23:50 is something I run into a lot, especially with teams that are just starting out on their automated testing journeys. I have a simple rule: if that mock could conceivably be used in production (e.g. sqlite instead of postgres, a file directory instead of s3), then it's a useful mock. If it would be worthless in production, it's a worthless mock, and a worthless test; throw it away. For those, the only real way you can test them is to spin it up in docker compose or k3s and test it for real.
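That rule can be sketched in miniature (hypothetical names throughout): a fake that honors the real contract, rather than returning canned responses, is one you could conceivably ship.

```typescript
// An in-memory store implements the same contract as the real thing and
// could conceivably run in production (e.g. as a demo mode), so tests
// against it exercise real behavior rather than canned mock responses.
interface KeyValueStore {
  put(key: string, value: string): void;
  get(key: string): string | undefined;
}

class InMemoryStore implements KeyValueStore {
  private data = new Map<string, string>();
  put(key: string, value: string): void {
    this.data.set(key, value);
  }
  get(key: string): string | undefined {
    return this.data.get(key);
  }
}

// Code under test only sees the interface.
function saveGreeting(store: KeyValueStore, name: string): void {
  store.put(`greeting:${name}`, `hello ${name}`);
}

const store = new InMemoryStore();
saveGreeting(store, "ada");
```

A test against this fake can read back what was written and check real round-trip behavior, which a "was put called with X" mock cannot.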
Unit tests that ensure that some service buried deep within a method is called with the parameters you expect are useful and important, and they should use mocks because you are testing the service call itself, not what happens after.
I like to divide unit tests into 2 categories: testing pure functions or testing side effects. In the former you test expected outputs for given inputs. In the latter you test that a class/service/whatever (e.g. an api or storage) gets called like you expect it to. Both are important IMO. But of course to test the real deal you need integration tests, and there your statement is of course valid.
So perhaps I misunderstood something but I don't get the lashing on mocks/stubs
@@joaomacedo673 "Was this called or not" is not a terribly useful test. The common way to mock such a thing is to validate that the arguments passed match what is expected and to return a canned response. This seems useful when you first look at it, until you realize that you've encoded your assumption about how that remote service behaves into the test. It's one step above running the test, capturing the output, and then updating the test to assert the output is what you just copy-pasted. It is not testing anything except that no exceptions were thrown/errors returned. The only useful test in such a case is one in which a representative-enough replacement is used, such as sqlite for a real DB. If you cannot test the intermediate logic without having it try to call an external service, that usually indicates soon-to-be catastrophic architectural issues.
@@BloodEyePact I'm talking about testing that the service/database/whatever is called with the arguments it's supposed to be called with. What that call returns and what happens after doesn't matter in the case I was talking about, as one is just concerned that under specific circumstances your method performs the call with the correct parameters to produce the intended side effect.
In fact I don't see the point of replacing the actual implementation of the database or filesystem with some other one that could be viable in production. It's either the real deal (integration testing), and there, yes, you test the whole system/subsystem end to end, or a pure mock to assert the side-effect call is performed as it's supposed to be (unit testing).
@@joaomacedo673 Testing that a service was called or a database was queried is not a useful test, unless the software in question is specifically about making that call/query, because that is an implementation detail, and tests should never test implementation, only behavior observable to the user (whether that user is someone behind a browser, another program calling a library method, etc). You replace the implementation so as to not depend on external services, which is poison to test reproducibility and, therefore, result validity. Testing that a call was made with certain parameters assumes you know the correct parameters/query/etc to make, which you don't, and encoding what you /assume/ the correct parameters/query/etc to be into the test invalidates it, because it only tests that it matches your assumptions, not that your assumptions are correct.
You test a scenario, that dependency in that scenario is just behaving normally, in another scenario we can make it fail.
You know you've gone too far with Dependency Injection when you're struggling to find your concrete implementations.
😂😂😂
"...comes down to two patterns: [builder] and [strategy]" - parsing library maintainers catching strays
This is one of the reasons why I hate OOP (not necessarily objects): all of the stupid names for simple things. "Dependency Injection" is just passing objects to procedures as a value. WHY DOES IT NEED A SILLY NAME LIKE THAT?!
It's just a fancy name for composition where the object is determined at runtime, either by configuration or automatically. Hell, I used Spring since 1.x and never heard the term until 3.x
DI is not unique to OOP and has nothing to do with it. It's a general term that generally describes the concept.
@@dealloc The only people who use the term are OOP people. Everyone else doesn't even bother naming it because it's such a trivial concept, and it only seems "great" if you have been doing insane OOP things.
It's a pattern. It's not every time you pass something to something, it's a specific type of passing something to something. You're asking why we don't just call singletons static variables and the answer is that not all static variables are singletons.
@@thewhitefalcon8539 I am not asking about what to call singletons. I am literally saying that "dependency injection" is a silly name, not that it is a bad idea.
11:50 I believe he's saying that:
a) Amazon S3 is more likely to be the target than them, so if their files are stolen by somebody hitting amazon, they can say it was encrypted.
or
b) They have stronger firewalls around their auth server than the rest of their network, so if they get hacked, the hackers probably will have access to the database but won't have the ability to decrypt their database.
The downside is simply that you have no idea WTF is happening when you enter a new code base full of DI.
Yeah, just a bit of a downside.
That's not DI. That's just how reality works. When you enter a new code base by definition you don't know what it's doing. DI is not preventing you from finding out. It's like saying splitting code into multiple functions instead of having one giant block of code is preventing people from knowing what's going on. It's not.
Been there. It mega sucks.
I try to avoid it unless necessary as it can become one of those things where you “just have to know” what’s happening.
Yeah, not DI's fault tbh. New code bases will always be somewhat confusing. Skill issue tbh. DI is one of the easiest SOLID principles to understand and follow in code imho
@@brandon14125 It’s not necessarily that it’s not easy to understand. But you have to understand the timing happening in a program holistically to know when and what a certain function should be performing. If you’re reading a single file, it’s sometimes not obvious that the function will not be performing what is initially assigned (Though, this can be made obvious by saying “my_func = lambda: None” with a comment or something).
Further, it's something that can get out of control fast. A couple of DIs in a project? Fine. Dozens and dozens? Now you create a web that is likely very straightforward for the person who is writing it, but will take quite a while to sort through mentally.
I also disagree with "new code bases will always be somewhat confusing." That's reducing something that's very nuanced and can be judged on a curve to something that's black and white.
New to the channel. This guy is so way over my head, it depresses me. There is just so much to know and understand.
Without being a big fan of using mocks for integration tests, they make a lot of sense in the case where you do black-box integration testing and white-box unit testing.
That way you can make sure that the units conform to the interface, with the mock getting a lot of depth from those tests; then the integration tests can be very shallow and very wide with minimal effort.
This allows for great coverage at the cost of minimal brain power; you can then argue whether that is worth it or not.
This is one of my favorite patterns for plug-in type systems. Turning DI into abstraction hell is a code smell.
There cannot be too much if your language supports it well. That's why some languages even have special DI syntax sugar.
OOPERS when they discover function parameters, "THIS IS THE BEST PATTERN EVER!"
All of these patterns that force you to write tons of classes to do anything and leave you bloated with Managers, Generators, Runners, Loaders, etc., only make sense for the person who writes them. In reality, they are extremely hard to debug, and all of this encapsulation makes it impossible to know what will happen without debugging at runtime. I started my career with Java and used all of these patterns. But then I grew up and stopped using any of them. Today, I just write a simple function, pass the parameters, and reduce all of the decision logic for using one strategy or another to "if" or "switch" statements. I only use classes in very specific cases for self-contained stuff. I almost never write the words "extends" or "implements" anymore, and my life is much better for it. :)
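The switch-over-strategy approach described above can be sketched like this (all names invented for illustration):

```typescript
// A plain function and a switch instead of an interface plus N
// implementing classes. Hypothetical names throughout.
type StorageKind = "s3" | "ftp" | "local";

function storageUrl(kind: StorageKind, path: string): string {
  switch (kind) {
    case "s3":
      return `s3://bucket/${path}`;
    case "ftp":
      return `ftp://host/${path}`;
    case "local":
      return `file:///${path}`;
  }
}
```

The trade-off: adding a new kind means editing this switch, but every possibility is readable in one place, with no runtime dispatch to chase through a class hierarchy.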
💯 this.
Gigachad. Genuinely I don't think this CodeAesthetic guy has made a single good video, but he makes cool visuals so people assume he's right lol.
Well said!
What language are you using now?
@@minhduccao9955 ts/python
0:43 Just starting out, that’s not what “dependency injection” was. That’s just delegation, or what you could call parameterized composition, or even just configuration. When “dependency injection” began to be discussed at all, it was about configuring code dependencies from outside of the call path, say via XML configuration files. That’s why it was called “injection”. “Delegation” and “configuration” are much older concepts. For example, if you have configurable service providers that, in your call path, select a service provider based on a configuration property, that was not called “dependency injection”. For it to be called “injection”, the injector would replace runtime calls to a particular service, say described by an interface, with the desired implementation, with the original code being none the wiser.
If DI is defined this way today, it seems like its proponents trying to grab ground that doesn’t belong to them. If you want to call it “dependency management”, have at it. But by trying to redefine “dependency injection” in this way, you’re trying to wave away why so many saner engineers thought DI was unworkable in the first place.
In python, I usually have a default implementation of e.g. a Repo, that does exactly what it should do, and inject it. So the usual DI-based use cases for clean arch become self-sustaining, and only while testing do I replace the dependency. I am not sure why everyone shies away from happy-path defaults. It makes it a lot easier to deal with than having endless debates over whether we should dependency inject this or that here or there, which is usually won by the guy who wants everything explicit, with ten interfaces, and quits the job right after everything has been refactored to this overexplicit mess.
I love your honesty and you not being afraid to show a little vulnerability. I really love it. Hope you've found some time to relax :) although it's been a year so I assume you have... right? (O_O)
My last BIG project was one based upon a DI framework (in Java). It was great. However, there were times developers created some spaghetti apps where they created DI apps that data could get lost in. I think they didn’t design or think out their app well. While “not my code”, I would often get called at midnight to fix it. The code from our group based upon this framework had very few problems (most were traced to 3rd party data providers). We would work as a group to talk through data paths in our app. Worked well.
Us FPers staring at the OO crowd for giving some fancy name to passing data as a function argument.
It's not passing data, it's passing an implementation hidden behind a facade.
Us FP guys asking, why a lambda aka strategy pattern isn't sufficient.
@@McZsh It's still passing data because, in OOP, implementations are data structures.
"The secret to good writing is re-writing" - this is golden. I often tell juniors that one of the most important skill is the ability to discern when code needs to be refactored, and the stamina and rigor to do the refactoring.
The moment you defined `as const` as the "pointer to a constant", I immediately thought about trying the exact same experiment you did with `let`. Glad you covered that 10 seconds later.
I think the difference between the strategy and adapter patterns is that strategy uses different interchangeable components for different outputs / side effects, while adapter uses different interchangeable components but we expect the result to be the same whatever implementation you use.
I do think he showed us more adapter patterns than strategy by that definition, but it is a very fine line nonetheless.
It's interesting to see how Slack programmers think and make solutions and all the high brow talk (to make it sound cool). Good video. I wouldn't have ever found or watched that guy's video.
some luv to the editor, 20vids a day is insane!!
I am fully on board with Code Aesthetic on this one. I don't like it when people give simple things impressive names. It is indeed not "dependency injection", it is passing a stupid thing to a function, and it is not a "strategy pattern", it is a literal stupid if statement
In Elixir there is pattern matching on function args. It removes all of the if/else and makes it super easy to parametrize behavior... Uh, I mean, "inject dependencies" (did I use the fancy words correctly? I need to impress the Jr devs while making them feel dumb for not understanding my big thoughts right away)
It also makes it much easier to understand when you declare the struct type in the pattern, as opposed to declaring some generic interface and then trying to figure out which classes implement that interface.
I actually prefer a straightforward switch/match statement instead of hunting for another declaration of the same function buried somewhere in the file.
I learned a lot watching the video you're discussing, mostly things I didn't know how to do.
But "how" it's done is to me, nice and clean.
I believe I got to the video from a video talking about using composition rather than inheritance.
So, both the "favor composition" and "use DI" were to me 👍👍.
I appreciate your comments as well Prime.
Thanks to all.
Will investigate the strategy, builder, and factory patterns much more now.
22:00 that's literally the entire concept behind languages like Haskell: that's why you have pure functions that do your computation, and then everything that interacts with the outside world happens through the IO monad.
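Outside Haskell, the same "pure core, injected edges" idea can be sketched in a few lines (illustrative names only):

```typescript
// Keep the computation pure, and pass the outside-world interactions
// in at the edge. All names here are invented for illustration.
function wordCount(text: string): number {
  // Pure: same input, same output, no IO.
  return text.split(/\s+/).filter((w) => w.length > 0).length;
}

// The impure edge: "read" and "write" are injected, so a test can
// drive the pure core with plain values instead of real files or stdout.
function runReport(read: () => string, write: (line: string) => void): void {
  write(`words: ${wordCount(read())}`);
}

const out: string[] = [];
runReport(() => "hello  brave world", (line) => out.push(line));
```

The pure part needs no mocks at all; only the thin impure shell needs substitutable dependencies, which mirrors what the IO monad enforces in Haskell.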
I think the unit testing at 24:49 helps guard against regression. You assert behavior this way, and if tomorrow some dev comes in and decides to refactor because he doesn't like the way it was written, you'll catch any functional breaking changes introduced. I also had a hard time with this type of test initially and found them useless, but on large projects they are very useful. Granted, e2e tests can also cover this, but you have to write a lot more code to cover possible scenarios because of all the other branching that can occur.
For the person asking for a concrete example of DRY being bad. When you’re doing microservices, DRY leads to heaps of shared libraries and, holy cow, that was a hot steaming mess.
16:58 The proper phrase is actually "Avarice is the root of all evil". 🤓
I loved that comment. Most of programming is using builder pattern and strategy pattern. I agree whole-heartedly
builder pattern: what is my data
strategy pattern: what do I want to do with it
yup, checks out, this pretty much covers everything
I see some value in DI/mock/tests in that you can ask the simple question "if y is working, why is caller x (or forward call z) blowing up? If it's not working, how does x (or z) handle it?". That simple yes/no will suffice for most things. It can even help with transient conditions if (at the boundaries) your parameters are reduced to a "generic" message-passing object that both carries back the reply, if any, AND the state of the "operation". Your caller must now decide, when presented with "failed because of K", whether it wants to just fail, or handle the fail and retry, etc.
For me though, the true value is that it burns into you the idea of "interfaces at boundaries". Going forward, even when you're not using any of it, you start to instinctively "feel" when you crossed the line between (what should be) a and b components. Eventually you start NOT doing "monolithic" and also stay away from "abstraction i don't need". Hopefully...
p.s. loved the SQLite mention. Had to "hard sell" it to my colleagues the other day and boy, was it a tough battle... What brought it home was the logical conclusion that if you wrap all of it in a "container", you can actually commit the state to a file at any point AND restore from that checkpoint at any time. The value of that when testing for "elusive bugs" cannot be overstated.
p.p.s. I only use SQLite when things are stable enough. First port of call is a simple "memory mapped" container that can be checkpointed to/from a json file. It mimics "ideal conditions" and lets you test the code as if all was working properly. If so, and when the overall architecture is "stabilized", you move to the SQLite implementation so it's easier to do postmortem analysis if needed. Best part is that, using any of these (and client allowing), we can pull the live data from the SQLServer/MySQL/etc at any time with a tool, save it to a file, transfer it, and our support team can play around with it to see what is happening under the client's state, in an environment with full debug capabilities. Can save you a LOT of time over having someone checking it at the client's front end while someone else is sifting through backend logs, dead and live.
The problem with DI and OOP (and DI isn't the only thing used) is that it relies heavily on abstractions. You always end up cooking something on top of some gigantic pile of objects that are all abstractions over abstractions.
You can't EQ out clipping. The closest you're getting to removing clipping is using something like the declip module in izotope rx, which luckily is pretty painless to use
very good call, RX is really nice
A lot of this can be dumbed down and distilled to this simple thought: whenever you are implementing a functionality, always keep in mind "whose concern is this". E.g. where the data will be stored, how the data will be scanned, and even WHAT the data is, is NOT a concern of the UploadRequest, so it SHOULDN'T KNOW anything about those things
"What's WebDAV" made me cry so hard, because I didn't know it either until I had to re-do a whole "FileTransfer" lib in our Common Lib, where I had to reimplement all the connectivity of the possible endpoints for file transfer, including FTP, SFTP and "ofc" WebDAV. WebDAV nearly made me cry; maybe it was our infra or WebDAV just comes literally out of hell, idk haha.
As a typescript developer (frontend) you won my like during the "as const" thing, that shit was great btw
17:10 DRY is aspirational. Don't doesn't necessarily mean never. Nice to hear this!
Finally someone said it, DRY only if it actually repeats, not just randomly happens to be the same in two places. I only really extract things if they're identical in 3+ places, or quite similar in 5+ places.
Also, DI is very good if you know what you're doing. Especially with typescript or other fake-strict languages, you don't actually need an interface and a class, you just say
storage: { store: (item: something) => Promise } and provide an object that has a store method taking something and returning a promise, and what happens from there is irrelevant. You probably will have classes or at least some helpers for S3 storage and FTP storage, but you don't need a MockStorage class for testing, you just inline that shit. Have some weird edge case? Inline that shit. It's quite pleasant to work with if you don't have to explicitly state every behavior and can just wing it (again, for dev/testing only, wouldn't recommend in production, but then again it probably won't break, just not as easy to maintain). And you can mock-write a Harder Drive implementation.
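What that comment describes, sketched with hypothetical names: because TypeScript's typing is structural, a test can inline an object literal that happens to have the right shape, with no mock class at all.

```typescript
// The dependency is typed structurally, so a test can inline an
// object literal instead of writing a MockStorage class.
// All names here are hypothetical.
interface BlobStorage {
  store(item: string): Promise<string>;
}

async function upload(storage: BlobStorage, item: string): Promise<string> {
  return storage.store(item);
}

// Inline fake: records what it was asked to store and resolves a fake id.
const received: string[] = [];
const fake: BlobStorage = {
  store: async (item) => {
    received.push(item);
    return "id-1";
  },
};

upload(fake, "photo.png"); // the fake records the call
```

To probe an edge case, you inline a different literal on the spot, e.g. one whose `store` rejects, rather than maintaining a separate mock class per scenario.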
Hahahah fake-strict languages I love it.
100% DRY or nothing, if you are unable to reach 100% then there is something wrong with your design... and then there is code maintenance, tests etc.
@@depafrom5277 I think it's quite the opposite. In my experience, 100% DRY means you turn all your code into thousands of small generalized modules, which makes it much harder to maintain - because the uses of the modules/functions are often slightly different and then it becomes extremely complicated to be aware of the relevant context for specific real world scenarios. That wouldn't happen if you just have the relevant code and data right all in the same place that deals with the specific scenario - even if similar logic happens elsewhere. The parts of your application that are likely to change doesn't need tons of separation of concern and abstraction layers.
@@Muskar2 Yeah, I guess some languages and frameworks lend themselves better to DRY than others.
I saw the original video, and since then I've been refreshing your channel every few hours waiting for a reaction. ❤
DRY... man, I'm using Dapper on one of our projects... I don't even remember what the using statement looks like, because I wrote it just once and copy-pasted it a million times whenever database requests were needed.
I got a DoorDash ad right after the Wifeagen brought him food. “You know… you can have your food delivered too…”
One thing I remember from somewhere is that *locality* is a key principle for code readability (as with all things in software, it's a principle, not something to overdo) and it seems DI really helps keep locality and so that's part of what helps keep it readable when it's used right.
DI adds more indirection. How does that help with locality?
@@youtubeenjoyer1743
It helps preserve locality at a given level of indirection. If you're reading code, it can be helpful to read through it and treat it as a black box until you have time to go down that road. This is especially useful when you're working on someone else's codebase. You don't need to know how a FoobarGrabber works, only that you're given a FoobarGrabber and you can use it to grab a foobar. So long as you don't need to understand the implementation details of a FoobarGrabber, by treating it as a black box you can just keep reading the code.
This example uses dependency injection as a way to implement the strategy pattern, which does make sense to some degree (Was there a test verifying that the correct dependencies are injected in the production version? A pretty crucial part of the solution)
But for the vast majority of code that I've seen, there is NO strategy - no alternate implementations of behaviour (in production code). It's just, the DbContext is injected into the UserRepository. The UserRepository is injected into the UserManager. The UserManager is injected into the UserController. And then each of these are tested independently with a ludicrous amount of mocking (I've seen implementations where the very thing to test had been mocked out).
The hip bone's connected to the ...
My project at work, everything is DI. Everything! And adding anything is a pain
Debugging software that dynamically changes at runtime is a pain too, and even reminds me of self-modifying code - that's why DI just sucks and I personally never use it. He actually said it in the video too: "... we used it everywhere until it got out of control". If that is not a sign of a design pattern not working at scale and with increasing complexity, then I don't know what is...
love the smug ass incorrect pointer assumption that you then fail to prove. really nice dude *chefs kiss*. confidence inspiring
While doing Advent of Code (which I'll finish later...), I often deduplicated code that would only occur twice, but that's a bit contrived since you know that most of the code will overlap upfront.
You don't test a car by just going on the road, but you test the engine on a dyno, the tyres by checking the wear and slip in wet conditions, etc. You want to test thing individually to see where it goes wrong. Mocking and simulated environments can be convenient for this.
But in general, testing to me is just taking the code that I already wrote to see if everything worked and saving it.
What is great about these videos is that they cover the things new coders don't get enough of. They learn crappy quick solutions for easy-ass projects and have to learn through experience how, why, and when to use a strategy pattern or a factory pattern, and when to repeat or not to repeat. I've seen way too many express server examples expanded to become gigantic projects. Great job on CodeAesthetic's part. However, yeah, the issue is devs finding these hammers and then seeing everything as a nail. And yes, the tests aren't testing anything TypeScript isn't already checking; the trouble in these scenarios is always having the right testing environment for e2e - you need a testing S3, a testing DB, possibly a testing environment with the correct network connections between services... you need to reset that shit after the tests... it quickly becomes tedious.
No, the problem is people taking SOLID principles so seriously that they end up with mammoth, unfathomable 100M+ line behemoths that no one understands or wants to work with.
"There's only one implementation of..." I know where this is going.
@24:30 the sample test code can have multiple implementations - always happy path, always sad path, slow network, dropped network, etc
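The alternate test implementations this comment lists can all share one dependency shape; a hedged sketch (the `Network` type and implementation names are hypothetical):

```typescript
// One dependency shape, several test implementations for different scenarios.
type Network = { send: (payload: string) => Promise<string> };

// Always succeeds:
const happyPath: Network = {
  send: async () => "200 OK",
};

// Always fails:
const sadPath: Network = {
  send: async () => {
    throw new Error("500 Internal Server Error");
  },
};

// Succeeds, but only after a long delay:
const slowNetwork: Network = {
  send: (_payload) =>
    new Promise((resolve) => setTimeout(() => resolve("200 OK"), 2000)),
};

// Connection drops mid-request:
const droppedNetwork: Network = {
  send: async () => {
    throw new Error("ECONNRESET");
  },
};
```

Any code that accepts a `Network` can then be exercised against each scenario without touching a real server.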
You make it fun to learn the science behind code thanks man!
Dependency injection using the basic features of a programming language is great.
But Java EE style dependency injection, which uses a zoo of annotations, plugins, and config files, is a nightmare that can easily make code unreadable. It forces you to rely on documentation for things that should be obvious. It creates major problems for a minor boost in comfort.
Programmer: "So what is actually getting injected here? Where does this call actually go to?"
IDE: "IDK, it's just here. Check the annotation I guess"
Annotation: "I'm just a piece of text. Maybe copy-paste it and search THE ENTIRE CODEBASE"
Codebase: "Na sorry nothing found lol. The config files to resolve the name used in the annotation are not in the project folder and maybe not even on your machine, I'm pulling half my sht off some web service."
I can't imagine not using DI in C#. Factories like in Java aren't required at all in C#; we can use a nice DI container like Autofac or Ninject for the strategy pattern (multiple implementations of the same interface) and resolve them at runtime using a key. In .NET 8, keyed resolution is also supported by the default DI container.
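Keyed resolution isn't specific to C#; the idea can be sketched in a few lines of TypeScript (a toy registry with hypothetical `Notifier` implementations, not a real DI container):

```typescript
// Multiple implementations of one interface, selected at runtime by key --
// a minimal imitation of keyed services in a DI container.
interface Notifier {
  notify(msg: string): string;
}

class EmailNotifier implements Notifier {
  notify(msg: string) { return `email: ${msg}`; }
}

class SmsNotifier implements Notifier {
  notify(msg: string) { return `sms: ${msg}`; }
}

// The registry maps keys to factories, so each resolve gets a fresh instance.
const registry = new Map<string, () => Notifier>([
  ["email", () => new EmailNotifier()],
  ["sms", () => new SmsNotifier()],
]);

function resolveNotifier(key: string): Notifier {
  const factory = registry.get(key);
  if (!factory) throw new Error(`no implementation registered for "${key}"`);
  return factory();
}

resolveNotifier("sms").notify("hi"); // "sms: hi"
```

Real containers add lifetime management and constructor wiring on top, but the key-to-implementation lookup is the core of the pattern.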
If you use precondition checks, you can significantly narrow the possible error surface of your code, thus obviating the need for high levels of testing for errors -- you just have to document that supplying invalid arguments/inputs to your code will result in whatever errors your precondition checks return/raise. This also makes it really easy for users of your code to understand what they did wrong because you explicitly tell them what they did wrong.
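A small illustration of the precondition-check idea (the `chunk` helper is hypothetical):

```typescript
// Precondition check at the function boundary: invalid input fails fast
// with a precise message, so downstream code never sees a bad `size`.
function chunk<T>(items: T[], size: number): T[][] {
  if (!Number.isInteger(size) || size <= 0) {
    throw new RangeError(`size must be a positive integer, got ${size}`);
  }
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

chunk([1, 2, 3, 4, 5], 2); // [[1, 2], [3, 4], [5]]
```

The documented contract is then simply "pass a positive integer size, or get a `RangeError` telling you what you did wrong" -- no tests needed for the infinite variety of bad inputs beyond the check itself.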
I don't use TS, but one way I solved having multiple sets of parameters is with a lookup object table. The class must be constructed with, for example, the storage service to be used. Then the object map provides autocompletion for the rest of the parameters using JSDoc. I don't have types for the class, but it is well documented, and the autocompletion is chef's kiss.
7:55 Noooo! It should actually be possible if foo is defined as type {a: number}. However the "as const" with type inference sets the type of foo to its strictest interpretation:
let foo: {a: 5} = {a: 5} as const;
so
foo = {a: 6} is invalid, because 6 cannot be assigned to something of type 5, which is what the error message actually says.
Because TypeScript's "as const" tells TypeScript that the variable's value is known at build time, and I don't know a single language that allows you to modify a truly constant variable like that.
@@arjix8738 Maybe in a shallow sense. It depends, what you mean with "the variable's value", because:
let foo = {a: Math.random()} as const;
is valid typescript, even though the value of foo.a is not known at build time. Also I don't get how this is related to my original point.
"This needs an 'if', an 'else if', and an 'else', and that is too complicated! So, we will simplify it by passing the logic as a parameter, creating a factory using interfaces, and fill it with with corner-case checks. It now has 50 extra lines, but it's simple! Oh, we will still need the if-else chain on the start."
Guys, until you have 20+ different ways of doing something, just do the thing the simple way. And then watch it never be 20+ different cases.
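For comparison, the "simple way" the comment advocates is just a plain conditional (hypothetical storage kinds for illustration):

```typescript
// A plain conditional instead of a strategy factory. If the cases ever
// genuinely multiply, this is a small, well-contained thing to refactor.
function storageUrl(kind: "s3" | "ftp" | "local", path: string): string {
  if (kind === "s3") {
    return `s3://bucket/${path}`;
  } else if (kind === "ftp") {
    return `ftp://host/${path}`;
  } else {
    return `file:///${path}`;
  }
}

storageUrl("s3", "a.txt"); // "s3://bucket/a.txt"
```

The union type on `kind` also means the compiler rejects unknown kinds, which covers a lot of what the factory-plus-interface version would be defending against.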
It *really* depends, sometimes more lines of code really does describe the mental model of what you're doing better. In particular, extracting the interface in cases like this means you only have to make the decision as to what happens once when you create everything instead of each use, it can move code that's likely to change together to be close together, it can make it easier to add new types with an explicit description of the interface required, and it means you have nicely separated logical units to make testing easy.
It is all about coupling and cohesion. Reducing coupling and increasing cohesion usually makes code easier to understand and maintain.
I agree about mocking. I've always described mocking as a code smell. I find myself needing to do it a lot in all of my work, because I can't convince my coworkers that dependency injection has value and I never have enough clout to push these kinds of culture changes. Heck, when I worked at Boeing Satellite Systems, I couldn't even convince my coworkers that automated testing was worth doing. (I generally had enough control of my own code though, that I could do automated testing and nobody would stop me.)
My own personal code? I NEVER mock; not ever. I don't even have tools installed for mocking at home.
Fun fact - if you start your script with 'import this' you'll find python is fast enough in 90% of cases.
Polymorphism is an OO construct oft utilized in patterns. It is simply the ability for the caller to execute code without knowing the subclass implementation. That's it.
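That definition in a few lines of TypeScript (hypothetical `Shape` hierarchy):

```typescript
// The caller works against the base type only; it never knows
// which subclass implementation it is invoking.
abstract class Shape {
  abstract area(): number;
}

class Square extends Shape {
  constructor(private side: number) { super(); }
  area() { return this.side * this.side; }
}

class Circle extends Shape {
  constructor(private radius: number) { super(); }
  area() { return Math.PI * this.radius * this.radius; }
}

// Caller code: no subclass checks anywhere.
function totalArea(shapes: Shape[]): number {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}

totalArea([new Square(2), new Circle(1)]); // 4 + π
```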
"I am mustafa, and I am the code you be refactoring now........ damn, three times"
Prime is the only tech-youtuber whose wife serves him a plate of food during recordings. Not because she loves him, but because Prime, for some unexplained reason, despite being a programmer, actually has a wife...
Mocking is wonderful to simulate failures upstream or downstream of the thing you want to test. It's also useful if the actual task is way too expensive or slow to run. Otherwise you are probably better off not mocking.
That reminds me of the good old days with Spring: writing XML configs for the dependency injection and annotation-voodoo-magic.
Speaking as a 30-year C++ vet, you give great advice.
"This is stupid!"
* proceeds to smack his lap
...is such comedic gold
I mean, here's the thing: experts in anything know the exceptions to the rule. We teach DRY because it's a very valuable concept. It's more about not cutting and pasting the same code all over the place - abstract it out, make it easy. I've been out of things for a while, but I never heard the Extreme folks saying DRY in relation to patterns. DRY means abstracting out common code so you don't have to maintain and debug 10 separate instances of something that could be done once.
Unit testing I/O is usually just testing that your mock code works... but code coverage numbers impress managers. :)
My last job became a massive slog because someone had a massive boner for DI and forced it into almost every part of the codebase. We were using OSGi and bndtools and weird stuff would go wrong that only like 2 people in the company really knew how to fix. Anyone who asked questions on the dev channels like "is it just me or is DI really getting in everyone's way and making things way more complicated?" would be shot down by one of those two OSGi experts who would insist it was all very simple and you were perhaps too smooth-brained to understand.
By the time I quit, even writing a unit test took forever, because the constructor injection meant a ton of stuff was coupled together (by simply having to pass things down to dependencies, without even using them directly).
I wanted to comment "Ohh fLiP dO yOur jOb", but honestly, get some rest, Flip, you deserve it
appreciate you my man -flip
You have enough experience that when I think "you just aren't using unit tests right," I can temper it with "you know more than me and probably have extra experience that I can't understand yet."
21:57 Yes, this! I try to split components by those who work with data and those who work with timing and DI is nice for that.
In this video the business logic was trivial so unit testing isn’t particularly useful. But it might be excessive to create an example project that has business logic worth testing, for a video like this.
24:52 I'd argue that it's a way to test your critical paths, or what I like to call "best case scenarios" - at the end of which you'd have provided value to your users.
My advice is to always use DI even if you aren't using an IoC container; passing deps in through a constructor manually is still DI. Always pass your deps in through the constructors.
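What manual constructor injection looks like in practice (a hedged sketch; the `Clock`/`SessionTimer` names are hypothetical):

```typescript
// The dependency is created outside the class and passed in through the
// constructor -- no IoC container involved.
interface Clock {
  now(): number;
}

class SessionTimer {
  constructor(private clock: Clock) {}

  startedAgo(startedAt: number): number {
    return this.clock.now() - startedAt;
  }
}

// Production wiring -- the real system clock:
const realTimer = new SessionTimer({ now: () => Date.now() });

// Test wiring -- a fixed clock, no mocking library needed:
const testTimer = new SessionTimer({ now: () => 1000 });
testTimer.startedAgo(400); // 600
```

The class never reaches out to `Date.now()` itself, so tests are deterministic, and swapping the clock requires no framework at all.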
I agree - but technically, passing dependencies through a constructor is not DI - it is just composition if you think about it. The term "injection" implies you are not doing it manually, but that some framework is doing it for you - say, with reflection, or if at compile time, through some kind of pre-compilation, language support, etc.
So no... what is presented here is technically not DI - but a way to pretty much emulate it manually. The point of DI is to not do this, and thus remove boilerplate code and make what is being used configurable at the product level.
Of course I still think it is a good idea to add your dependencies via constructor - but I like the naming to be proper, because I can easily imagine a native language with either compile-time or even deploy-time real dependency injection of components - and not just single ones, but arrays of components to iterate over, etc. It could be made to work at compile or deploy time in such a way that no virtual functions would be needed for this to work!!! Which is awesome to think about. Currently the best way to emulate that in native code is via the linker, not the compiler... but I really mean it could be made so that even inlining is possible, etc.
I started with an IoC framework when I first started doing DI assuming it was saving me time and frustration. Fortunately I have learned that 99% of the time, it saves no dime and leads to frustration. Manual DI is just better, clearer, easier to maintain long term and often easier to test.
@@HalfMonty11 It is just not called DI then, but composition. And composition is KING, it is AWESOME - I agree on that. It's just not "injection" anymore, to be honest...
And again - there are upsides of DI other than less boilerplate from passing parameters around. As I said, it is easy to imagine programming-language-level support for DI in a native language where calls to a component would be totally inlined! Think of a game engine where you can choose what physics engine (or AI) you want! Calls to it would be as cheap as calling an inlined function - not even function calls are needed, because the code can be inlined. This must happen either at compile time OR deploy (install) time OR at least "before running the app" time! Then calls can be more optimized than vtable-like calls.
Currently you can do this at compile time with templates/generics and composition, but it would be easier to do later and more dynamically with a DI language feature.
So my point is that it's better to call things by their real names. I usually prefer composition over DI just like you do, but I do not call composition DI...
@@HalfMonty11 Adding a dependency and pushing your change, then having 500 tests fail only because the construction code failed to build it.
Then having to go through every one of those tests and fix the manual insertion of mocks/dependencies is super annoying.
A container should let you reorder, remove, or add dependencies without any tests failing.
But that’s only going to work in strongly typed, OOP languages. Not going to work so well in languages like JS.
If the process depends on an externally provided and allocated dependency, it is DI. Remember, composition means the containing object owns it and is thus responsible for lifetime management. If your class allocates or owns the memory, it isn't DI. So a factory function or constructor injection of an interface satisfies that, and that is what his code is doing.
@@u9vata
The point is you need to balance reuse against ease of use. Same as when someone says composition over inheritance. NOOOO. You need to understand when to use what. But one thing never fails: make it as simple as possible without jeopardizing functionality and testability.
For any C++ person, to whom all these buzzwords sound like childish gibberish, this is basically using interfaces and inheritance, where you pass the object pointer under the interface type. (And you allow others to also implement that interface any way they see fit.)
The content he provides is the same kind of thing we get from other social media platforms, but somehow, instead of decreasing our productivity, it increases it a little. Reaction videos are common, but these ones are productive.
S3 the largest graveyard? Confluence was right there, though.
This seems like a natural step towards microservice architecture design too, at least to me.
7:58 You can have a pointer to a constant, but you need to give the type explicitly, because "as const" forces TS to infer a narrow type. let foo = { a: 5 } as const; gives foo the type {a: 5}, to which { a: 4 } is not assignable.
TypeScript "as const" is *type narrowing*. I.e. narrow it to the most specific type possible. So `{a : 5}` is typed as `{a: number}`. but `{a: 5} as const` is typed as `{a: 5}`.
"It all comes down to 2 patterns: builder and strategy pattern."
I'll be here waiting until you get deeper in your ocaml arc and see that these are all just (higher order) functions in the end.
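The functional reading of the strategy pattern, as the comment suggests, is just passing a function (hypothetical `StoreFn` strategies for illustration):

```typescript
// "Strategy pattern" as a higher-order function: the behavior is a plain
// function argument -- no interface or class hierarchy required.
type StoreFn = (item: string) => string;

const s3Store: StoreFn = (item) => `s3:${item}`;
const ftpStore: StoreFn = (item) => `ftp:${item}`;

// The "context" that uses the strategy:
function upload(item: string, store: StoreFn): string {
  return store(item.toUpperCase());
}

upload("log", s3Store);  // "s3:LOG"
upload("log", ftpStore); // "ftp:LOG"
```

Swapping strategies is just passing a different function; closures cover the "constructor parameters" a strategy class would carry.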
It's fine Flip, we can lower the audio ourselves. Do not worry!
22:10, that's what Haskell claims to do through types (String vs IO(String))
In the land of C: a struct type containing an enum to switch on and a "union of struct types" for the parameters... Add in a test/mock type and it's all good for unit tests. "What if someone passes the wrong parameter structure?" Well, then you have personal and integration problems that compiler checks won't save you from.
Abstract the APIs behind open,read/write close file interface a la Unix...
What about the weird and wonderful things that can go wrong with each storage method? Generally "something went wrong" is important for control flow. "What went wrong?" that's a debug issue (and that is why Golang error handling works like it does). In my experience... code that properly handles something going wrong at any time, is rare enough... and using "what went wrong" in control flow is abused or only partially handled.
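The C enum-plus-union approach above has a close TypeScript analogue in discriminated unions, where (unlike in C) the compiler does catch the wrong-parameter-structure case. A hedged sketch with hypothetical storage kinds:

```typescript
// Discriminated union: the `kind` tag selects which parameter struct is
// valid, and the compiler rejects mismatched combinations.
type StorageConfig =
  | { kind: "s3"; bucket: string; region: string }
  | { kind: "ftp"; host: string; port: number }
  | { kind: "mock" }; // the test/mock variant from the comment above

function describeStorage(cfg: StorageConfig): string {
  switch (cfg.kind) {
    case "s3":
      return `s3://${cfg.bucket} (${cfg.region})`;
    case "ftp":
      return `ftp://${cfg.host}:${cfg.port}`;
    case "mock":
      return "in-memory mock";
  }
}

describeStorage({ kind: "ftp", host: "example.com", port: 21 }); // "ftp://example.com:21"
```

Passing `{ kind: "ftp", bucket: "x" }` is a compile error here, whereas the C union version would accept it and fail at runtime.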
23:14 Code Aesthetics literally explains what the point of all the mocking was, right before @ThePrimeTime pauses and says "I don't get the point of all this mocking". The point is to test the uploading service in isolation. Later on you test your encryption service in isolation. Later still you test your S3 storage service in isolation. The point is to make it easy to determine where any potential bug is. If you didn't isolate the services with mocks and your test failed, now you have to go in and hunt down exactly where the bug is occurring. By isolating all the services, you know that when a test fails, the bug is ONLY in that tested code, and not in an upstream service.
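The isolation argument in miniature (a hedged sketch; `UploadService`/`Store` are hypothetical stand-ins, not the video's code):

```typescript
// Testing the uploader in isolation: the storage dependency is replaced by
// a recording fake, so a failing test implicates the uploader only.
interface Store {
  put(key: string, data: string): void;
}

class UploadService {
  constructor(private store: Store) {}

  upload(name: string, data: string): string {
    const key = `uploads/${name}`;
    this.store.put(key, data);
    return key;
  }
}

// Recording fake standing in for S3 / encryption / anything downstream:
const calls: Array<[string, string]> = [];
const fake: Store = {
  put: (k, d) => { calls.push([k, d]); },
};

const key = new UploadService(fake).upload("a.txt", "hello"); // "uploads/a.txt"
```

If an assertion on `key` or `calls` fails, the bug is in `UploadService` itself, not in any real storage backend.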
I love when JBlowists come on and say things like "omg this is so complicated, just write ifs". And then you gotta wonder why some people (I wonder who I'm talking about) can only work on code on their own because nobody else is able to read through their horrible horrible spaghetti that was simple in the beginning. And then mysteriously it takes them a whole eternity to complete a project because they literally have to write every single line by themselves. One thing you learn as a programmer is that everything is simple in the beginning but you better know where to put all the complications that will come later. Because sure as hell nobody, not even you is going to refactor code under a deadline.