Converting a WebRTC stream to WS is crazy inefficient. Shared memory is great, but I wonder why they didn't go for RTC communication between the Chromium and Python processes in the first place.
I might be wrong, but I don't think they are tapping directly into the WebRTC streams themselves. They're instead "screen recording", in a way
Exactly my concern: if Recoil is already a member of the meeting, RTC data transmission would have been a great choice for media, and any other separate data could go through WS
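For context on the shared-memory option praised above, here is a minimal sketch (all names and the frame size are hypothetical, not from the post) of handing raw frames from a capture process to a Python worker through a named shared-memory segment, instead of serializing every frame over a WebSocket:

```python
# Hypothetical sketch: producer writes raw frames into a shared segment,
# consumer attaches by name and reads them back -- no sockets, no serialization.
from multiprocessing import shared_memory

FRAME_BYTES = 1280 * 720 * 4  # one raw RGBA 720p frame (assumed size)

def create_frame_buffer(name: str = "frame_buf") -> shared_memory.SharedMemory:
    """Producer side: allocate the shared segment once, reuse it per frame."""
    return shared_memory.SharedMemory(create=True, size=FRAME_BYTES, name=name)

def write_frame(shm: shared_memory.SharedMemory, frame: bytes) -> None:
    # Copy the raw bytes in place -- no JSON, no base64, no network hop.
    shm.buf[:len(frame)] = frame

def read_frame(name: str = "frame_buf") -> bytes:
    """Consumer side: attach to the same segment by name and snapshot it."""
    shm = shared_memory.SharedMemory(name=name)
    data = bytes(shm.buf)
    shm.close()
    return data
```

A real pipeline would add a ring of such buffers plus a semaphore or similar so reader and writer don't race; this only shows the data path that replaces the WebSocket hop.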
Just a question.
where do you find these blog posts in the first place?
prolly from Reddit or Primeagen
HackerNews?
saw this shared by Gary Tan from YC on X, but mostly through social media/HackerNews, etc.
never ever bulk-transcode media on CPUs; it's dramatically more expensive than on GPUs, even if you're just transmuxing. GPU availability on cloud services is patchy, so if you think your company is going to grow a lot (they're already providing services to more than 400 companies), build your own GPU server. You don't even need expensive GPUs, and you can offload a huge amount of computation from the CPUs
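The CPU-vs-GPU encode choice above usually comes down to a single ffmpeg flag. A hedged sketch (file paths are made up; `h264_nvenc`, `libx264`, and `-c copy` are real ffmpeg options) of building the commands, assuming NVIDIA hardware for the GPU path:

```python
# Hypothetical helper: choose the GPU encoder (NVENC) or the CPU encoder
# (libx264) for a transcode, and show that a pure transmux re-encodes nothing.
def transcode_cmd(src: str, dst: str, use_gpu: bool) -> list[str]:
    codec = "h264_nvenc" if use_gpu else "libx264"  # NVENC runs on the GPU
    return ["ffmpeg", "-y", "-i", src, "-c:v", codec, dst]

def transmux_cmd(src: str, dst: str) -> list[str]:
    # Transmuxing only rewraps the container: "-c copy" copies the encoded
    # streams bit-for-bit, so neither CPU nor GPU does the heavy encode work.
    return ["ffmpeg", "-y", "-i", src, "-c", "copy", dst]
```

Worth noting as a caveat to the comment: a true transmux (`-c copy`) is cheap on any hardware; the GPU win shows up when you actually re-encode.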
Still wondering what it takes to understand all the things in this blog post
dedication of more than 10 years
Experience
thanks
👀 Tbh, that is something noobs would get wrong in a basic System Design interview (pooling state and job load). Aka, there is no reason why the headless Chrome that is signed into the web conference and the stream processing of the video feed shouldn't run on the same machine, communicating simply through a file stream, or processing from disk via a volume in e.g. a docker-compose file, keeping the machine alive when the web conference ends to finish the processing. 🤐 Weird, weird for a company with "hundreds" of customers to use client protocols for IPC. But good that they realized this eventually 15:10