Your metaphors are next level.
here's your big mac.
Aaaah I see what you did there...
🎈 🏠 🎈
"you are not a karen"
Ben's got 99 problems, but a girlfriend ain't one.
Bruh! He says it with a straight face
@@phantomKE I know right? He just leveled up with these jokes!
🤣🤣
jokes aside, he's not a bad looking dude at all, getting jacked wouldn't hurt tho, as long as he doesn't convert to a life coach and talk about it non-stop like that other coder youtuber John Sonmez.
He has a girlfriend in Canada.
I avoid client-side rendering in order to save CPU cycles for cryptocurrency mining.
hahaha
hilarious comment! but crypto mining is an inefficient way to generate revenue on a client's computer, see The Pirate Bay (TPB) experiment.
@@TechdubberStudios It may pay less than ads, but it's many times better. I support websites that responsibly use cryptomining, and I block ads. Please, don't say that ads are better. They have never been any good to anybody's web browsing experience.
Oh, and you can use cryptomining along with Arc, another earning method that does not involve ads.
I'm done with Google's creepy trackers. Cryptocurrency mining is the future.
@@ezshroom I am genuinely 100% with you on the crypto movement. I hate ads. Always have hated them. But there are at least 2...3 big corporations that come to mind that were built on the ads business model, while with crypto mining... I can't find one. And browser crypto-mining is not exactly a new technology. I really want it to replace ads. I really do. I hate the pop-ups, spying, and tracking that's going on. And the first corpo that comes to mind, when considering who should adopt the crypto model, would be Netflix. Because users stay on Netflix and binge-watch for hours and hours!
@@ezshroom also, do you happen to know any website/forum/subreddit focusing on browser-based mining? I would really like to join and dig deeper into this subject.
solutions:
0) pre-rendering with parcel or webpack
1) server side rendering
your solutions are not client side rendering. he mentioned it.
I avoided serverside rendering a meta tag by registering a sub-domain, doing the serverside-rendering there and making my app only compatible with a set number of user-agents. Brilliant!
Fantastic job explaining this! As always, the hilarious dry humor and "next level" metaphors help drive home points and keep things entertaining. Really helped clear up a bunch of stuff and get me pointed in the right direction. Many thanks!
I love the tint on your glasses, it's serial killer-ish, where can i get a pair like those?
a package arrives at your door after the 3rd kill
@@bawad respect
They are the left-behinds after each kill. That's the way you get it.
@@bawad quick scope no scopes?
Those tints are wiped off blood from killing
there is a workaround: just add a conditional tag in the small server that builds your page. you can still use client-side rendering for everything except the meta tags
I guess u never heard of prerender.io
been using it for years
Lol, that just killed the whole argument. Never heard of it before. Just goes to show that tech is exponential. Wonder if it will cause the cosmic crash eventually.
yep yep yep, you just commented before me
@@ayushkhanduri2384 same case, I just searched prerender before I add a comment about it to check if it's already mentioned and here it was.
🤯
A solution to your problem could be to build a single-page application, with each endpoint for the app being pre-rendered.
It's basically Jamstack. Once a user loads one page, the others do not need to be loaded.
Solution: NextJS, Angular Universal, Nuxt, etc.
Also check out the create-exact-app npm package (that's exact, not react). Like NextJS but with an Express-forward design: full control on the server side over what's going on.
@@jpsimons Just FYI, next.js also gives you full server side control. You can just run next as a library within an express server. In my experience, it's super ergonomic while preserving the state-of-the-art benefits of next (code splitting, automatic static optimization, incremental static generation, etc.). Having said that, I have not yet checked out create-exact-app, and am not sure how it differs from nextjs.
Why do I not like the sound of Angular Universal?
@@angshu7589 because you are not a carrot farmer. Although the color of your profile picture kinda resembles a carrot :D
@Adithya R Svelte Sapper is still in early development. I love Svelte, but Sapper is still far away from production-ready
Love the joke about girlfriend and client side rendering at the beginning
I love how Ben roasts Angular devs. I thought of that carrot farmer line off and on all day and cracked up every time.
I had to do that once, I used a Lambda function since it was hosted on AWS, and the function intercepts the CloudFront distribution request and updates the HTML if the request comes from a robot, adding the OpenGraph tags.
If you manage the web server, you could use the web server's router to do the same exact hack you described without the need for a different subdomain, just a route that checks the user-agent of the client and returns different HTML based on it.
my solution for this is:
1. set a lambda as an index
2. on the index lambda, it should run a headless tool (like a headless chrome) to render the page if the request is a bot, else just serve the raw js
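A rough sketch of that branch written as a plain Express handler (the comment uses a Lambda, but the idea is the same); the bot list, port, and example.com origin are placeholders, not anything from the comment:

```js
// Minimal sketch: render the page with headless Chrome only when the request
// looks like a bot; real users just get the CSR shell.
const express = require("express");
const path = require("path");
const puppeteer = require("puppeteer");

const app = express();
const BOTS = /facebookexternalhit|twitterbot|slackbot|linkedinbot|whatsapp|googlebot/i; // assumed list

app.get("*", async (req, res) => {
  const ua = req.headers["user-agent"] || "";
  if (!BOTS.test(ua)) {
    // real user: serve the raw JS / CSR shell
    return res.sendFile(path.join(__dirname, "build", "index.html"));
  }
  // bot: render the page in headless Chrome and return the resulting HTML
  const browser = await puppeteer.launch();
  try {
    const page = await browser.newPage();
    await page.goto(`https://example.com${req.originalUrl}`, { waitUntil: "networkidle0" });
    res.send(await page.content());
  } finally {
    await browser.close();
  }
});

app.listen(3000);
```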
Great video! I use Laravel on the server side to serve up everything: static HTML pages, React apps, or a combo of both. It's easy to embed a React app within a .blade template file. Meanwhile Laravel takes care of everything else, like API services, user registration and authentication, etc. Best of both worlds.
We had this issue while using a MeteorJS website running ReactJS on the client side. What we did was create a crawler (I think there is an npm project for this) that would go to every page, render it, and save it in our DB. When a non-human (e.g. Google) accessed the site, it would be served this rendered HTML, making it SEO friendly. Basically our server would use the User-Agent to decide what type of content got served. Hope this helps
can you please explain what you mean by 'render it and save it on db'? do you mean render the dom elements, attach them to the html, store that at some link and add the link in the db? or what exactly are you storing in the db? if that's the case, wouldn't it be too much for dynamic pages like /rob/photos, /sam/photos and so on to be stored in the db, or am i missing something
Sapper + svelte gives you the best of both worlds
2:05 i do it this way:
The server serves a response for parsers (meta, og, schema, jsonld and plain html content) and then the js comes along, structures it up and takes over routing from that point, so when you navigate you actually don't "refresh"
For your sake and ours, I hope you DON'T get a girlfriend too soon.
I just put my meta tags with variables like %PAGE_NAME% %PAGE_IMAGE% and replace these later while serving the page with express. It doesn't work with client-side routing, but it works for link previews.
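A minimal sketch of that placeholder trick, assuming an Express server in front of the built SPA; lookupPage and the /posts/:id route are made-up examples, not the commenter's actual code:

```js
// Sketch: read the built index.html once, swap the %PLACEHOLDERS% per request.
const express = require("express");
const fs = require("fs");
const path = require("path");

const app = express();
const template = fs.readFileSync(path.join(__dirname, "build", "index.html"), "utf8");

// hypothetical lookup; replace with your real data source
async function lookupPage(id) {
  return { title: `Post ${id}`, imageUrl: `https://example.com/img/${id}.png` };
}

app.get("/posts/:id", async (req, res) => {
  const page = await lookupPage(req.params.id);
  const html = template
    .replace(/%PAGE_NAME%/g, page.title)
    .replace(/%PAGE_IMAGE%/g, page.imageUrl);
  res.send(html);
});

// everything else: plain CSR assets
app.use(express.static(path.join(__dirname, "build")));

app.listen(3000);
```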
If the only thing that needs to change is the meta tags (not the rendered bits), you can also modify the html before returning it to the client, inserting the relevant meta tags.
It will probably add some performance overhead, but you could also check the referrer of the request to determine whether you should perform such modifications.
I use EJS and it allows for variables to be passed before sending the HTML to the client, so that can allow you to change the values in the meta tags.
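For reference, a minimal sketch of that with Express + EJS; the route and field names are invented for the example, and views/index.ejs would contain something like <%= title %> inside the meta tags:

```js
// Sketch of the EJS approach: res.render fills in the meta values before the
// HTML goes out, while the page body is still rendered client-side.
const express = require("express");
const app = express();

app.set("view engine", "ejs"); // expects views/index.ejs with <%= title %>, <%= image %>, etc.

app.get("/posts/:id", (req, res) => {
  res.render("index", {
    title: `Post ${req.params.id}`,
    description: "Example description for the link preview",
    image: `https://example.com/og/${req.params.id}.png`,
  });
});

app.listen(3000);
```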
This was the best explanation video I've seen on the matter... Kudos to you Mister...
Of course -- I followed and understood all the way to the end. This is because I'm an unemployed ex-Tech Lead [who has never worked at a FANG company], and a thousandaire.
Nice solution. The only downside I guess would be that Google wants you to show the same content to their bot as to the user. But it probably doesn't matter if you don't want to index those pages.
I did this in a similar way, except the share-link page didn't check any user-agent headers. It just featured a JavaScript redirect to the normal URL.
Bots don't run JavaScript, so they don't redirect and they get the meta tags they need. Users do run JS, so they get redirected to the main SPA. And, for good measure, a fallback clickable link shows on the share page in case the redirect is blocked.
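A sketch of what such a share page could look like, written here as an Express route returning an HTML string; the target URL and tag values are placeholders:

```js
// Sketch of a "share.*" page: bots don't run JS so they only read the meta
// tags; real users get redirected by the inline script, with a plain link as
// a fallback in case the redirect is blocked.
const express = require("express");
const app = express();

app.get("/share/:id", (req, res) => {
  const target = `https://app.example.com/posts/${req.params.id}`; // placeholder target URL
  res.send(`<!doctype html>
<html>
  <head>
    <meta property="og:title" content="Post ${req.params.id}" />
    <meta property="og:description" content="Example description" />
  </head>
  <body>
    <script>window.location.replace(${JSON.stringify(target)});</script>
    <p><a href="${target}">Continue to the post</a></p>
  </body>
</html>`);
});

app.listen(3000);
```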
The problem I see with the magic link is that when a user just copies the url instead of pressing a share button to get the special share.* link, there will not be any meta tags; only the special share.* link has them. And I think most users will probably just copy the url they are on instead of looking for a share button.
In your example with the subdomain, I think that's not needed / you don't need to redirect. If you are using a server-side program, it can decide to return the React app or the preview on the normal domain, using the same user-agent logic or an IP address list of known Facebook servers.
Why do you want another domain for that? The server that delivers the app's index.html could also deliver the meta response instead, based on the user agent. One problem you could get in both situations (separate service or combined) is that the bots might check whether the content they see is different from what a regular user sees. Whether any of them does that, I don't know, but I would check this possibility.
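A rough sketch of that same-domain variant with Express; the bot list and the metaFor lookup are assumptions for illustration, not something from the video:

```js
// Sketch: one server, one domain. Bots get index.html with OG tags injected
// into <head>; everyone else gets the untouched CSR shell.
const express = require("express");
const fs = require("fs");
const path = require("path");

const app = express();
const shell = fs.readFileSync(path.join(__dirname, "build", "index.html"), "utf8");
const BOTS = /facebookexternalhit|twitterbot|slackbot|linkedinbot/i; // assumed list

// hypothetical per-route metadata lookup
async function metaFor(url) {
  return `<meta property="og:title" content="Page ${url}" />`;
}

app.use(express.static(path.join(__dirname, "build"), { index: false }));

app.get("*", async (req, res) => {
  if (!BOTS.test(req.headers["user-agent"] || "")) {
    return res.send(shell); // regular user: plain CSR shell
  }
  const tags = await metaFor(req.originalUrl);
  res.send(shell.replace("<head>", `<head>${tags}`)); // bot: same shell + tags
});

app.listen(3000);
```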
You can use the header trick as you mentioned, and then simply use something like puppeteer to load the page on the server itself and then send the rendered HTML page to the client. If the header says it's from a normal user, then don't go to puppeteer, just return your usual index.html file.
I got this idea from here: ruclips.net/video/lhZOFUY1weo/видео.html
I had similar issue. good thing you found a better solution.
Why wouldn’t you just serve this from the server on a different port? As in, send the result as a response when someone requests that route?
I think you could easily do this in .NET Core. In the Startup class, in the routing configuration, you could filter each route with the correct meta tags. You could make this an extension and bing bang bosh, neat tidy job done.
I had the same issue last week and also was thinking about moving to nextjs, but having a separate domain and server makes a lot more sense.
well, Next.js or any other SSR solution doesn't mean you're gonna use one server for the backend and the front-end.
Your solution at the end is valid. Use reverse proxy to detect the request, and forward them appropriately.
However, it's best to use SSR from the beginning if that's your intention.
Gatsby also solves the React single-page problem, since we can generate all the individual HTML, CSS, and JS pages.
"It's like I spent a bunch of time building a house and now I want that house to fly." LMAO
I think he meant that we should use Next / Nuxt from the beginning. I used to face these problems and since then I use Nuxt for every project and never worry about these problems again
the girlfriend problem might be solved if you stop walking around wearing asexual flag shirts
hahaha
lmao, good catch, respect
He's just playing hard to get. Karen gets it.
But with this if he ever gets one, she will be the right one. Lol
He do check a lot of aesthetic boxes from the virgin meme... Though I probably do too 😆
I had this problem once but my focus was towards crawlers. I ended up using some php to "render" the important bits like title, descriptions and links. Then the javascript would remove those elements and do the single page app business. It was back in carrot farmer code days but I'm sure happy coders can accomplish this just as well.
This helped me a lot! I am working on a project and my backend was almost finished. I was using create-react-app with router but switched over to next.js! Thanks a lot
Netlify has a free experimental feature called prerendering; for me it works with Facebook, and it picks up the right meta tags automatically, pictures included. My content comes from a backend via GraphQL and Apollo, the meta is set with React Helmet, the page is handled by react-router, and it's a create-react-app project. Hope this helps. You can also do prerendering very easily with the react-snap package, but you need to rebuild when data changes. (PS. Thanks for your work, I really like your videos)
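For anyone curious, setting per-page tags with React Helmet looks roughly like this (component, prop names, and values are just example placeholders); a prerenderer such as Netlify prerendering or react-snap then captures them as static HTML for crawlers:

```jsx
// Rough sketch of per-page meta tags with react-helmet inside a CRA component.
import React from "react";
import { Helmet } from "react-helmet";

function PostPage({ post }) {
  return (
    <>
      <Helmet>
        <title>{post.title}</title>
        <meta name="description" content={post.summary} />
        <meta property="og:title" content={post.title} />
        <meta property="og:image" content={post.imageUrl} />
      </Helmet>
      <article>{post.body}</article>
    </>
  );
}

export default PostPage;
```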
I just used a Node.js Express server to host the compiled create-react-app; this way you can modify the page and add meta tags if needed before serving it. Sort of a mix of server and client side rendering, as he said.
Sorry I'm new here, and I've noticed that Ben clearly hates Angular.
Can someone give a quick background please???
Great video. Trying to wrap my head around server side rendering and this video definitely helped
Hey, I saw in one of Wes Bos's videos that he used cloud functions to generate the preview, and puppeteer I guess to take a screenshot of the url
Now you can make static pages for your dynamic, frequently updated pages with Next.js. How it works is that it looks at the requested page: if it was built at build time, it sends it back, and if it wasn't, it builds it on the go (at run time) and adds it to the built pages for the next request. Pretty amazing and game changing!
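That sounds like Next.js's fallback / incremental static regeneration; a minimal sketch of a dynamic page using it (fetchPost is a stand-in for a real data fetcher, and the exact options depend on your Next version):

```js
// pages/posts/[id].js — rough sketch of incremental static generation.
// Pages not built at build time are rendered on the first request
// ("fallback: 'blocking'") and then cached for subsequent requests.
import React from "react";

// hypothetical data fetcher; replace with your real API / database call
async function fetchPost(id) {
  return { title: `Post ${id}` };
}

export async function getStaticPaths() {
  return { paths: [], fallback: "blocking" }; // build nothing up front
}

export async function getStaticProps({ params }) {
  const post = await fetchPost(params.id);
  return { props: { post }, revalidate: 60 }; // re-generate at most once a minute
}

export default function Post({ post }) {
  return <h1>{post.title}</h1>;
}
```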
I was experimenting with client side rendering in 2012, it would generate the page based on json
Well, a simpler but similar way would be to always server-side generate the index.html, but only change the meta data, while letting the index.js generate the actual content.
This wouldn't hurt human users and is pretty light on the server (only the entry-point HTML meta tags need to be generated depending on the link; the rest of the rendering and navigation is done client-side).
Just make a simple API where the client can send urls and it will return a preview. Just make sure to cache the pages and add rate limits so they can’t spam your api.
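A rough sketch of such a preview endpoint; the cheerio parsing, the naive in-memory cache, and the express-rate-limit usage are assumptions for illustration:

```js
// Sketch of a link-preview endpoint: fetch the target page, pull out a few
// meta tags, cache the result, and rate-limit callers.
// Assumes Node 18+ (global fetch), cheerio, and express-rate-limit.
const express = require("express");
const cheerio = require("cheerio");
const rateLimit = require("express-rate-limit");

const app = express();
const cache = new Map(); // naive in-memory cache; use Redis or similar in practice

app.use("/preview", rateLimit({ windowMs: 60 * 1000, max: 30 })); // 30 requests/min per IP

app.get("/preview", async (req, res) => {
  const url = req.query.url;
  if (!url) return res.status(400).json({ error: "missing url" });
  if (cache.has(url)) return res.json(cache.get(url));

  const html = await (await fetch(url)).text();
  const $ = cheerio.load(html);
  const preview = {
    title: $('meta[property="og:title"]').attr("content") || $("title").text(),
    description: $('meta[property="og:description"]').attr("content") || "",
    image: $('meta[property="og:image"]').attr("content") || "",
  };
  cache.set(url, preview);
  res.json(preview);
});

app.listen(3000);
```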
Facebook's user agent also shows up for the Facebook in-app browser, by the way.
My company has a massive CRA project that could benefit from a conversion to NextJS, but we rely so much on browser globals outside of effects
I think one of the main point to understand can be found in the name 'create-react-app' which is NOT 'create-react-website'
Going through a similar issue myself. My static site is hosted on S3 / CloudFront, orchestrated by terraform. My plan is to use CloudFront Origin Response triggers to trigger a lambda function to add the correct open graph tags to the response. I think this is the lightest weight option.
Did you do it? I'm exploring some of these ideas :)
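A hedged sketch of the CloudFront idea above, written as a viewer-request Lambda@Edge handler that short-circuits with an OG-only page for crawler user agents (a variation on the origin-response trigger mentioned in the comment); the bot list and tag contents are placeholders:

```js
// Rough sketch of a CloudFront viewer-request Lambda@Edge handler: if the
// requester looks like a link-preview bot, return a tiny OG-tags-only page;
// otherwise let the request continue on to S3/CloudFront untouched.
const BOTS = /facebookexternalhit|twitterbot|slackbot|linkedinbot|discordbot/i;

exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  const uaHeader = request.headers["user-agent"];
  const ua = uaHeader && uaHeader[0] ? uaHeader[0].value : "";

  if (!BOTS.test(ua)) {
    return request; // normal users: pass the request through
  }

  const body = `<!doctype html><html><head>
    <meta property="og:title" content="My page (${request.uri})" />
    <meta property="og:description" content="Example description" />
  </head><body></body></html>`;

  return {
    status: "200",
    statusDescription: "OK",
    headers: { "content-type": [{ key: "Content-Type", value: "text/html" }] },
    body,
  };
};
```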
Another problem I frequently struggle with in client-side rendering is content indexing by search engines. There is a chance that Google will index js-generated content, but it depends. Other search engines will not even give it a try.
The preview still won't work when users copy paste the link directly from the browser url bar
It works! I do exactly that with my react web SPAs. I use firebase and cloud functions to detect user agents and serve SSR version on the fly to robots and CSR version to users. This is also important to SEO indexing, cause some robots won't run any JS and expect html-only responses. Really enjoy your videos.
What about some prerender.io ?
@@leisiyox I thought about using it, but never tried it. Don't know how well it works. It would also cost more than my current firebase cloud function solution.
@@cauebahia under what conditions would you recommend using firebase?
I thought about using it but I seek guidance
@@leisiyox I like that they integrate lots of services in a single solution. When you create a firebase project, you instantly have access to file storage, hosting, database, authentication, and some other stuff that makes it really easy. I also like that Firestore has real-time listeners for your data. Really good for a client side rendered app. Also really like their documentation and the fact that you can easily access other Google Cloud services and API. There are many videos online about it. Check it out.
I think your points on pre-rendering are slightly off. Not mentioning tools like Gatsby, Scully, or Gridsome is a miss as they can be used to render those dynamic routes you say cannot be pre-rendered. It's worth mentioning that JAMStack options are becoming really incredible for developers and give you the same benefits as server side rendering out of the box.
For some weird cases like mine, where only /particularRoute needs to work like SSR, what I actually tried was hosting a Gatsby project on a particular route of the CRA project, and it worked; I just needed to handle a few re-routing cases
The problem with client-side rendering is that most of the time it's used for something that doesn't need it. Most websites are mostly static.
Very useful, thank you for pointing to react-snap. Happy Hacking Ben 🙌🏻
I was just watching one of your videos on react native animation earlier xD
Keep up the good job 🔥
I think I might try react-snap. That sounds good: pre-rendering on every build. Because often the layout of a page is the same even if the content changes. What do I mean by that: every reddit post will have the logo, the sidebar, the footer and a div in the middle which contains the contents of the post. So you can prerender all that with an empty div, and then hydrate it. Even with user-generated content (as long as it is simple and consistent) you could prerender. Thanks for the video
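If you do go the react-snap route, the client side typically hydrates the pre-rendered markup; a sketch of that pattern (roughly what the react-snap docs suggest) in src/index.js:

```js
// Sketch of the react-snap hydration pattern: if the root already contains
// pre-rendered HTML (produced at build time), hydrate it instead of
// rendering from scratch.
import React from "react";
import ReactDOM from "react-dom";
import App from "./App";

const rootElement = document.getElementById("root");
if (rootElement.hasChildNodes()) {
  ReactDOM.hydrate(<App />, rootElement);
} else {
  ReactDOM.render(<App />, rootElement);
}
```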
Another option would be to have a CDN and when you get content updates just render that single page to the CDN. Then you aren't having to rerender all the time. It would get more complex than that of course as you would need to rerender everything if you make a style change to your website. And it would be public. Unless you pay for something to make it private and have the CDN only return a partial page for bots.
I've been watching your videos and yes, the quality of the content is always awesome, new subscriber
this is actually the first time I understood the difference between client side and server side rendering
I would use the same client-side bundle BUT add a little bit of logic on the static assets server to add the meta tags to the HTML shell that embeds the client-side bundle. That way, you won't need to implement HTTP redirects, AND it's probably better once you start working with deep links for a mobile app.
Nice GatsbyJS colorway on that shirt 🤙
i think we can use Helmet in CRA too, by generating the data for the meta tags on the server side
The pre-built html approach could work. If someone tries to visit a url while something is still building, you could serve up a special "page is still being built" page... one benefit here could be that, in 20 years when a website is taken down, you have all this pre-built static html, and it's very cheap (probably free) to host that.
Or when there's another sea-change with technology, MAYBE it's easier to migrate to something new, but I'm not sure, would depend on the specific situation.
Nice post. However I am curious as to why these FB bots can't read a client rendered app? I am sure that these bots are ultimately rendering the app on a web engine/browser, because they need to generate the preview of the rendered content. The static html will also most likely load some resources like CSS and JS over the network (unless the bot expects an inlined css/js), so it's not going to be an instant preview. So maybe the limitation is not technical but rather functional, in that the bot is probably not willing to wait too long for the page to load (which is the case with most client rendered apps)?
This channel is slowly becoming one of my favorites on RUclips! 😄
I like how you explain, well done. Thank you for the quality content.
Huh, that's an interesting problem. You could, of course, only SSR the meta tags. Would be a nice little project to work on an ssr-meta-tag-proxy. Wonder how that would work
7:20 would it be possible to use the same url but check the header value on, for example, the nginx server?
like: is this user agent a bot (twitter, fb, etc.)? => proxy to your slim API for the meta-data-only response
and if it's a real user (mac, windows, chrome, firefox user agent etc.) => proxy to your real page / default response / SSR page.
maybe I'm forgetting something. I don't know if this could work.
Sounds similar to how to generate PDF report (E.g. invoice) from rendered content. My solution was to open the link in a headless browser (like puppeteer) on server and save the rendered result. Then perhaps implement some kind of caching. It is simple but slow.
Rails is the best framework for this. Right now I build all my Rails views with React, but with Rails server-rendered meta tags.
instead of returning a plain .html site which then calls your js to do the client-side rendering, you could make the .html side dynamic with php or python or some other technology... it shouldn't mess with the client-side rendering since that is based on JS anyway... this would allow you to render the meta tags on the server side while rendering the actual page on the client side
my reaction to this video is LOL. as someone already mentioned, you could've done decent server-side routing
I made my "yarn deploy" script change react's build/index.html to build/index.php, then it throws in a little php script that grabs some info from the good ole' mongoDB and 'echo's the metatags into the , only when that url is directly accessed. (You can get the URL params in PHP) It's only a few lines of code. The hardest part was figuring out how to configure Nginx's reverse proxy to render PHP. O_O
The solution to the preview links would be to have a back-end endpoint that fetches that link's metadata, where we pass in the URL we want to preview. This isn't something that is unsolvable in CSR.
You don’t need META tags for link previews. I’ve seen sites just use some json manifest data that the webscraper will pick up and read the json instead
With your magic url, you already use some kind of server-side rendering. Why not use this always? Generate the index.html with the correct meta tags on the server, with only the script tag and root element in the body, and let the rest of the page be rendered on the client. It wouldn't be more complex, and you wouldn't need to check the user agent and serve different sites.
I use client side rendering, and am able to return different HTML per page from the server. This is only a problem if you have a single, static, index.html.
I'm sure someone has already pointed out, but if someone copies the url from the browser, they won't have the `share.` prefix. Probably need a global UA interceptor that results in pages being rendered server-side.
Year 2030: Browsers introduce a feature to request server-rendered pages by pretending to be a facebook/twitter bot.
I think one issue is that most of the link-preview code nowadays is kind of stuck in a pre-single-page-application world. Apart from making a curl request to the website and getting the HTML, the preview feature should also render it and stop assuming everybody's using server-side rendering. This does sound like it gets more into the "create a google scraper" type of situation, but if in turn we're talking about a client-side rendered website trying to create a preview for another client-side rendered website, all the rendering is happening on the client, so the server doesn't take much load at all.
Looking for exactly this solution. Was thinking of using aws api gateway to check the routes and then redirect based on headers. Social og links are super important for shareability..
In our case, we solved this by redirecting the bots from social media to a backend service that would return HTML with all the meta tags. So, when a link to mypage/article/23 was copied, the nginx server would redirect it to backend.mypage/article/23/metatags. Not the best solution, but it worked pretty well
I liked the idea; however, I think it is still good to have SSR for all users, or maybe for SEO as well.
Would you share your ideas on uFrontends too? Are you preparing some sort of tutorial on that, or what?
This doesn't apply to less frequently updated blogs and e-commerce websites. You should start with that!!!
kinda curious what the increased load would be if you'd do this on the normal domain already -> check the user agent... Would eliminate cdn usage tho... so maybe not foolproof.
You ever tried NextJS? It does SSR for the initial request (b/c it could be a bot), but CSR when you click links.
Dude, your video was referenced in a Codecademy course for full-stack JS development.
Woah! My self esteem skyrocketed because I managed to keep up with you until the end :D Aside from that, your content is top notch, keep it coming man.
You are hilarious and informative my dude haha, relatable. And damn dude, the length of your link
Nice video Ben. Try this out and make a video about the results please.
You make my day better
You can write normal HTML with some normal CSS just to make the preview template; imagine that this is the loading page, and only once the client-side rendering is ready, you show its content on top of that base temporary HTML template.
I've never done client-side rendering before, and I've never used React, but I think this should work. Why are you saying that it's not possible?
I'm about that carrot farmer lifestyle tbh
me too 😦, that's why in my free time I enjoy building something else
@@mohdhaziq9859 I like the carrot life tho, has worked well for me