Read more on this in my latest blog post: www.builder.io/blog/good-vs-bad-refactoring
Finally someone talking... great, Steve. You explain it very well. 🙂
The fact that the first example made it through a code review in the first place is wild. Imagine reviewing a refactor that requires 2 new dependencies and going "yeah, let's get that merged in".
Good advice, good examples. One I would add to the list: if there's funky-looking code that's there for a specific business requirement, inline comments can do a lot to help both other devs and your future self be aware of it before trying to refactor it blindly.
I like to leave comments that explain "why" something was implemented. If I had to create some hack/workaround, or something is just business-specific, leaving a comment as to why it was done that way helps a ton. I always appreciate it when I run into these types of comments, and other devs have told me they appreciated it when they ran into comments that I left.
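A made-up example of the kind of "why" comment I mean (the payment-provider constraint here is hypothetical):
// Why: the payment provider rejects amounts with more than 2 decimal places,
// so we round here even though the rest of the app keeps full precision.
const toChargeableAmount = (amount: number) => Math.round(amount * 100) / 100;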
Shots fired at whoever he hired lmaoo
Love it 🚀 Another point could be:
Improving something that should not exist in the first place e.g. recreating libraries, refactoring unnecessary code
👉 Because the best code is no code.
The fault really lies on whoever approved of those pull requests
Before even watching this, all I'm thinking is: this is exactly me.
Steve dropping pure gold!
3:33 great point, man
Can't like this video enough, solid points and suggestions.
YES, 🎉! Perfect! 👌
Sharing this with my developers. So well communicated. Also ran into these issues, and it's time-consuming communicating this, but necessary and fun.
This helps communicate the importance faster. ❤
Over-abstracting tests can be painful when implementing patterns the team is unfamiliar with. Forcing other devs to pick up patterns that don't have buy-in leaves a desire to revert the PR.
Great advice, thanks!
I'm guilty of making my code overly verbose; this is a good reminder.
I swear, if I see one more pile of Ramda code
It'd be great if you made a video on how to write better alternatives. Still learning React here.
I think the first 2 points are debatable. The way you're using Ramda in your example is bad; pipe is amazing, composition is amazing, and people should use composition a lot more. Your example should be pipe(filterFn, mapFn)(data), which is just so much cleaner than chaining filter, map, sort, and such.
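A minimal sketch of that composition (assumes Ramda is installed; the User shape and predicate are made up):
import * as R from 'ramda';

type User = { name: string; isActive: boolean };

// Left-to-right pipeline: filter first, then map to names.
const activeNames = R.pipe(
  R.filter((u: User) => u.isActive),
  R.map((u: User) => u.name),
);

// activeNames(users) returns the names of the active users.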
This is gold 🙌
I think the 3:53 example was a bit contrived. I don't think even junior devs would remove caching logic for no reason. Unless they REALLY were a "special" type of person.
Nailed it 🎯🎯
how would you handle errors?
Isn't the goal of refactoring to change your earlier code for the better?
I think before you fault those refactors, there should be guidelines. Because if there aren't, it is not the person's fault.
Sin 8: testing everything?
Meh, I wouldn't shit on the hired guy. He refactored as best he could, and the refactor exposed a problem in the codebase. That allowed you to make a better refactor in the end.
Happens all the time.
Use Cody to generate unit tests.
super helpful
Why do you go through the loop twice instead of once in the first example? What if the array had 1 billion elements?
You only need to optimize things when it's necessary. In terms of asymptotic complexity they don't differ that much. And if there are a billion elements you're probably not using JS/TS anyway; if it's in the browser, you have far bigger problems than filtering and then mapping.
Then you would do it at the DB level, not in JS code. (Tbh I probably wouldn't be a fan of changing it to a chained map and filter either; that code just needs to live in a util function that makes it clear what it's doing.)
If you're rendering 1 billion users there are probably other questions you should ask yourself (assuming it's frontend, because having this in the backend is weird; as Yeezus said, filter at the DB level, and on top of that you'll probably have a limit=50 on that query anyway). And time-complexity-wise, 2n doesn't differ much from n.
What if the array had 10 items? The most important question when optimizing is the range of input sizes. If you're doing this in JS it's clear the sizes are reasonable.
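For context, a rough sketch of the two shapes being compared (the users data is made up); both are linear:
type User = { name: string; isActive: boolean };
const users: User[] = [
  { name: 'Ada', isActive: true },
  { name: 'Bob', isActive: false },
];

// Two passes over the array, but still O(n) and very readable.
const names = users.filter((u) => u.isActive).map((u) => u.name);

// One pass; only worth reaching for if profiling shows the extra pass matters.
const namesOnePass: string[] = [];
for (const u of users) {
  if (u.isActive) namesOnePass.push(u.name);
}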
bro is ai
I agree with every point, but I'd be more aggressive on the API wrapper refactor. I think it's completely unnecessary to create such a wrapper just to save a few keystrokes.
I've probably seen enough API wrapper failures in codebases in my life; none of them worked out really well. They all failed miserably as business requirements changed, and people kept gluing weird hacks on top of the wrapper.
I think it's not really an "API wrapper" thing and heavily depends on what exactly you're doing. For example, frappe (an ERP framework) uses a wrapper to abstract away ajax calls with common headers, error handling, response parsing, permission checking, etc. You don't want to do all of that by hand for every API call. On top of that, we're currently using a wrapper around that to add typing and schema validation support. That saves a TON of time knowing what you'll receive and having that intellisense, especially with long, complex prop names. And the wrapper is like 30 lines of code. So of course you shouldn't wrap everything in something, but do it where it makes sense.
I agree, though we did eventually switch to using Google Cloud gen2 functions, and the wrapper made it really easy to change them all at once vs file by file (we have like 50 APIs).
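For what it's worth, a generic sketch of the kind of thin wrapper being described (not frappe's actual API; the api helper, /api prefix, and headers are all made up):
type ApiOptions = { method?: string; body?: unknown };

async function api<T>(path: string, opts: ApiOptions = {}): Promise<T> {
  const res = await fetch(`/api${path}`, {
    method: opts.method ?? 'GET',
    // Common headers (content type, auth, etc.) live in one place.
    headers: { 'Content-Type': 'application/json' },
    body: opts.body !== undefined ? JSON.stringify(opts.body) : undefined,
  });
  // Centralized error handling: every call site fails the same way.
  if (!res.ok) throw new Error(`API ${path} failed with status ${res.status}`);
  return (await res.json()) as T;
}

// Usage: the call site only names the endpoint and the expected response type.
// const user = await api<{ id: string; name: string }>('/users/42');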
Wow, 512MB. I wonder what you're running to need that much.
puppeteer
@Steve8708 lol
That or people can try to learn something new
nit: in lots of the examples shown you don't need the curlies and return keyword
e.g.
Change this:
const foo = () => {
  return bar
}
To this:
const foo = () => bar
Much nicer!
This is just linting. Not an actual refactor.
Stay consistent 😂
Implicit returns make people lose the context, though I think they're the best choice for simple logic or in middleware functions.