The main idea of Inception:
If you run a VM inside a VM inside a VM inside a VM inside a VM, everything will be very slow.
"gnarly" - what a nice adjective to describe the x86 platform, I love that statement! :-)
You're talking about Hyperthreading, which is very different from VMs. The Hyperthreading feature lets each of your 4 physical processing units track two instruction streams at once; when one stream stalls (say, waiting on memory), the core executes instructions from the other, giving the effect of a fifth (sixth/seventh/eighth) "virtual" processor.
I think what you've described highlights my original point about the negative consequences of prop. software even more. None of the things you've mentioned are a problem on the free/libre software side, because the license gives rights and control to the users rather than taking them away with prohibitions and legal threats about who even gets to look at the source code.
One common misunderstanding about the "cloud", which I observed when it all started and sometimes still observe now, is that it is something new and incredible. Well, it's not. It's just a fancy word for the same old servers, maybe with a slightly different pricing model.
Stage 7: Improve ultra-high speed internet access to support higher demand. Stage 8: Figure out where to maintain the powerful hardware, since end users no longer need good hardware to run demanding software (so they move towards cheaper, lighter devices focusing on connectivity and displays).
Except that when a proprietary software company goes bust, you will have to start a hiring spree to secure old employees who know the codebase well to ensure your own livelihood, and there may not be enough for everybody, and other companies aren't required to share their changes. With free software, even if the main supporting company goes bust and all their employees scatter to different companies, they will still be improving the shared codebase, as copyleft ensures that fixes are shared.
It's quite feasible/doable, actually. What I have noticed (i.e. anecdote) is that sometimes people would rather pay someone else to solve their problems for them (e.g. another company's tech support) rather than solve them themselves... which is OK in some situations, but not when it's supposed to be that person's *job*. This is independent of the proprietary/free software topic, because companies that do free software (e.g. Canonical) provide these services w/o resorting to proprietary source.
If this channel had captions, it would be great. English is not my first language, and the British accent is really hard for me.
Captions would reach more people and boost awareness of computer science.
Cheers!
I love VMs; I've been using them for years now. As I do software dev, they have saved me having to rewrite/reinstall/just get pissed off every time I upgraded the HDD, or anything else. The best part is I can just make a copy of the machine and move it around online, or on USB (without having to drag around a computer).
Dusty decks are a consequence of proprietary/closed-source software. Good thing there's free/libre software today.
I MUST KNOW MORE ON THIS SUBJECT!
This is an excellent one; I'd like to see more on both software CPUs and OS architecture.
Another term is "Application Virtual Machines", to describe the JVM and friends; granted, yes, they are somewhat different pieces of technology.
There can be overlap though: for example, both can make use of a JIT compiler (often called "dynamic translation" with emulators) to allow relatively fast execution of code running inside the VM (if appropriate hardware support is not available).
Likewise, on some targets, hardware assistance for application VMs exists (Jazelle and ThumbEE on ARM), ...
Sure.
I'm not sure exactly where they'd fit them though, given their weird spot in the whole spectrum of things. Should they talk about interpreters, and then talk about runtimes as a special flavor of interpreter? Should they cover compilers first? Should they talk about where the line is between a program accepting instructions vs. a "real" runtime?
It seems questions like that are the reason the different topics are covered very narrowly, even when not difficult to grasp per se.
None of those are really virtual machines, even though some call themselves such (for marketing purposes). They are what's called a "runtime" or "interpreter" (well, a flavor of one) - a normal (ring 3 level) executable program whose only job is executing ITS OWN instruction set, dictated by the input, as opposed to a real VM that executes the instructions of a hardware CPU and can (in some of the cases described in the video) call on the real hardware to run those instructions.
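To make that distinction concrete, here's a minimal sketch of such a runtime: an ordinary user-level program that executes a made-up instruction set of its own (the opcodes here are invented purely for illustration), never touching real CPU instructions directly.

```python
# A toy "runtime": an ordinary program that executes its own
# made-up instruction set, not the instructions of a hardware CPU.
def run(program):
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op}")
    return stack.pop()

# (2 + 3) * 4 expressed in the toy instruction set
prog = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
print(run(prog))  # 20
```

The JVM, CLR, etc. are (much more sophisticated) versions of this loop, whereas a hardware VM passes the guest's instructions to the physical CPU.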
Polished is correct in some cases, but not supported enough? If the original dies (like OpenOffice) many others may be born from it (like LibreOffice). We should be embracing free software more as it can often do the same job, sometimes with fewer headaches.
I'm interested in embedded systems. Can you do an "embedded systems for dummies" type of thing? Some history, what it is now, languages and operating systems used, and that sort of thing. Embedded machines are everywhere now, so I think it's important that people know at least something about embedded systems :)
I was thinking about something: how can we get rid of this backward compatibility issue, which, I feel, is slowing down the development of new technologies? I mean, both software and hardware have to have backward and forward compatibility, and that means compromises. Every attempt to create an absolute standard has so far just managed to become one of the many standards on every level of computing.
Could one of the great minds of computerphile answer?
The future will be a truly fluent system where you have some sort of hypervisor or similar distributing tasks to multiple processors. After all, we are living in an era of ubiquitous computing. It is just a matter of unlocking the true potential of the world we live in. I just hope that I can keep control over my own cloud rather than give it over to some other party. Otherwise, we are walking down a dangerous path.
I once actually did something like this for development:
using VMware to run Linux on top of Windows;
using QEMU (in Linux) to run an ARM version of Linux;
then running a script VM inside that ARM Linux.
The script VM ran the code in a plain interpreter.
It got "impressive" benchmark results.
Maybe a computerphile video can talk about even more cutting edge stuff like Google's LMCTFY and Linux Docker? I know these are not (yet) topics of general interest but a video about them would be generally useful anyway.
Actually, it isn't really all that bad if compared with Thumb2 or similar.
There is a little hair up-front with the opcodes and dealing with ModRM+SIB+...
But, in all, it is fairly consistent regarding general instruction encoding rules (most instructions follow similar encoding rules, ...), and is relatively friendly to a table-driven instruction decoder, ...
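As a rough illustration of the table-driven approach: the sketch below uses three real one-byte x86 opcodes, but the table format is invented for illustration, and it ignores prefixes, ModRM/SIB, and everything else that makes a real decoder hairy.

```python
# Sketch of a table-driven instruction decoder. Only three real
# one-byte x86 opcodes are included; a real decoder's table is far
# larger and must also handle prefixes, ModRM/SIB bytes, etc.
OPCODE_TABLE = {
    0x90: ("NOP", 0),           # (mnemonic, number of operand bytes)
    0xB0: ("MOV_AL_IMM8", 1),   # mov al, imm8
    0x04: ("ADD_AL_IMM8", 1),   # add al, imm8
}

def decode(code):
    i, out = 0, []
    while i < len(code):
        mnemonic, nbytes = OPCODE_TABLE[code[i]]
        operands = list(code[i + 1 : i + 1 + nbytes])
        out.append((mnemonic, operands))
        i += 1 + nbytes
    return out

print(decode(bytes([0xB0, 0x2A, 0x04, 0x01, 0x90])))
# [('MOV_AL_IMM8', [42]), ('ADD_AL_IMM8', [1]), ('NOP', [])]
```

The "table-driven" point is that adding an instruction is just another table row; the decode loop itself never changes.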
If you read carefully, my point has nothing to do with a company's or an application's performance, good or bad. It's about the fact that if the *owner* of the proprietary software disappears, you're screwed and have no choice other than to run old/unsupported/unpatched code for as long as you have a critical business process that depends on it. BTW, if you don't like Red Hat, then don't buy their support and use CentOS instead. Easy, no?
The only thing I miss is the ability to run old hardware with virtual NT4. I've got this STAudio audio interface that hasn't got updated drivers (only for XP and NT4); it's a great piece of kit that's just lying around.
Could you do videos about different CPU technologies like Hyper-Threading, MMX, SSE, EM64T, x86 etc...?
Could you talk about the other kind of Virtual Machines? like JVM the Java Virtual Machine, LLVM, the Low-Level Virtual Machine, or the CLR, the Common Language Runtime, which is the VM that .NET runs on...
And i feel like a computer genius when I open the back of my laptop and upgrade the RAM....
A video about Hyper-threading would be really useful!
Question, if I build a computer in Minecraft, is it a virtual computer/machine?
What is a virtual machine? (asked in the description) A virtual machine is just software. If, for example, you have a word-processing program on your computer, it is a virtual machine that makes your computer act like a word processor. (There used to be dedicated word processors.)
I'm not sure what he means by the x86 instruction set being "ambiguous." I see no ambiguity. Every instruction is well defined.
Could you do one on RISC vs CISC? You could even talk about OISC.
Maybe it would be good to get a discussion on how the cloud is meant to be used for remote computing? Because even with a huge broadband connection, I just can't see it ever being better to ask a remote server to process my data for me and send it all back over the internet, rather than just processing it myself on a 4 GHz quad-core with more RAM than it will ever utilize. Yet we're seeing lots of people tout it as the wave of the future, and it just sounds unreasonably slow to me.
Let's say I have a laptop that has a 2nd-gen i7 quad core, 8 GB of RAM, and a solid-state drive, and I have a program to run that will take a long time on this machine. However, the program is very easily made to run in parallel. It can also be run as an executable file (it is an F90 console application). Now, would cloud computing significantly reduce the computing time of this program? I admit I know little about cloud computing.
Yeah, do a series of them on Bitcoins and how they work. Perhaps you could do one with Tor as well.
Isn't this less safe than individual machines? Because if someone gains access to the main OS, can't they see all of the processes that the virtual machines are running?
Yeah, JIT is one of those other murky subjects too, which makes these all the more difficult to talk about in a single (or even just two or three) video(s).
I mean, strictly speaking, you COULD say that the JVM's bytecode is in essence code that's "JIT-ed", if you will, to native code; and yet in an interpreter's context, it's more of a "turned from abstract syntax tree to bytecode when needed, but not otherwise" kind of thing. And as you point out, some "AVMs" have hardware help, like a "real" VM.
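CPython itself is a handy everyday example of the "compile to bytecode, then interpret the bytecode" model. A quick sketch (the exact `dis` output varies between CPython versions, so none is shown here):

```python
# CPython works the JVM way at heart: source is compiled to bytecode,
# and the interpreter's main loop then executes that bytecode.
import dis

def add(x, y):
    return x + y

# Show the bytecode the interpreter actually runs for add().
dis.dis(add)

print(add(3, 4))  # 7
```

Modern CPython (3.13+) and the JVM go one step further and JIT-compile hot bytecode to native code, which is where the line between "interpreter" and "VM" gets blurry.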
If that happens, whoever has a business that depends on it has the source code available and has the right to modify it in whatever way they see fit (i.e. a fork). In short, they can provide their own support. Contrast that with proprietary software, where if the entity that originated the code "dies", you're really screwed. Whether free/libre projects die or not is irrelevant.
I'm a software engineer. I don't know what your team's standards are (some reject free software by 'definition' & FUD, not merit). It's odd that it won't pass eval when you have no src available to evaluate, no test cases/code, and can't verify security (you can take their word for it, especially after the NSA leak ;). "Does it meet our reqs?" is as deep as you can go. Again, polish/support is not a problem when you have the source available. If dear MS dies, you have 0 support, 0 polish, AND 0 code to DIY.
WAY over my head. Can you do more introductory videos please!?
So wait, why don't we do a reverse virtual machine, where we use software to emulate just one core on a multicore processor? Couldn't we take advantage of ring 1 or 2 in the personal computer to let a sufficiently strong program sit between the operating system and the kernel, translating the multiple cores the kernel sees into one core the OS sees?
Has anyone actually written code that uses multiple cores?
That is a big "if". Off the top of my head, I'm not familiar with any such examples (e.g. Microsoft? Apple? Google? Adobe? Any names that are common knowledge?). Not saying there aren't any, but even if there were, I think they'd be the exception, not the rule. Companies tend to sell off their "intellectual property" before they go completely bankrupt to minimize losses, etc., not give away their stuff. That's why it's often necessary to reverse engineer (or rewrite) proprietary software.
I agree with this statement
thanks for all the great vids
I quite like the term "cloud" because clouds can burst and rain begins to fall.
We couldn't do large amounts of our processing at work if not for virtual machines. We use them a lot, and there's nothing that comes close to the flexibility they offer.
I'd love to see a video on pathfinding.
beautiful way of saying
Khan Academy has a selection of videos on that very topic that may interest you.
You don't have to wait for them to do that. I'm sure Wikipedia has all the info, from the 16-bit 8086 to the 64-bit octa-core CPUs we have now. :-D
Oh, and another question: how is it that, on paper, my crappy Android phone is better than my first PC, which ran Win 98 flawlessly, but it is still slow to respond running Android? Is the software optimisation that bad?
The quality of their products can be questioned, but what cannot is the fact that those companies are free not to extend support contracts whenever they feel like it, and then users are left with an unsupported binary blob. Moreover, if you need a particular feature and you're not a huge corporation, good luck getting it in.
Can I operate my machine while it operates its machine within the machine?
still worth talking about them though...
If you run a vm inside a vm inside a vm ... eventually you can kill the hardware and the deepest vm will continue to exist. It's called the virtual infinity paradox.
Wait a second @7:45, Mainframes are not only for old code, come on. It just about runs everything, incl. Java, WAS, et cetera, at a better cost per operation, so there isn't a reason to move it to a new infrastructure architecture. Just because companies (like mine) use mainframes doesn't mean the code isn't updated anymore, or they can't move it to a different architecture.
Hey, could you please make a video on how CPUs work?
Once the contract reaches its end, either side can choose not to extend it. So if I'm a company that critically depends on Microsoft Foo and buys a year's support contract, and after a year Microsoft says “we won't support Foo any more”, I'm screwed. Were Foo free software, I could then hire my own engineers or form a coalition with other companies that depend on Foo to keep supporting it.
I do sysadmin work in enterprise environments (among other things) and it's far from "terrible". I'd point to poor sysadmin skills or something else first. Freeware and open source are not the same as free/libre software (which is what I'm actually talking about), so you should avoid mixing the terms. Also, how did you even get into "problems with source control" and "legal prudence"?
I'm starting to get the impression you may not quite know the topic...
Well, you also have the Red Hat solutions, which are far more powerful than the common ones, but as with any other Red Hat product, it will cost you an arm and a leg.
I'm just curious: does Linux preserve old code? I imagine that not preserving compatibility between old code and new will break systems.
Stage 1. Virtualize the software.
Stage 2. We also virtualize the virtual software; we need neither code nor machines to do what we want. Physical server rooms are a thing of the past. Nothing, nowhere!
Stage 3. Electricity, cooling, perimeter security will also be virtual.
Stage 4. Fire all guards, technicians and specialists.
Stage 5. We begin to virtualize end users.
Stage 6. EVERYTHING IS VIRTUALIZED. EVERYTHING EXISTS IN THE MAGIC CLOUD.
Stage 7. ???
A 64-core processor? I want to see that.
Talk about dual-booting/multi-booting.
Do one about P vs NP!!!!!!
So, how does this factor into my computer? AFAIK, I have 4 cores and 4 virtual cores, but I don't really know what that means.
It actually has nothing to do with your computer and virtual cores. That is a whole different thing ;)
4 cores means your one processor contains 4 separate parts that can calculate, so it is able to calculate 4 things in parallel at the same time.
4 virtual cores relates to the hyper-threading technology. It means that every real core can (in some cases) do two things at once and thus pretends to be two cores ...
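For what it's worth, ordinary software can happily use those cores. A toy Python sketch (worker count and any speedup depend on your machine; the workload here is invented just to keep the CPU busy):

```python
# A CPU-bound toy task farmed out to a pool of worker processes,
# one per logical core, so the work runs on multiple cores at once.
from multiprocessing import Pool
import os

def busy(n):
    # deliberately CPU-bound toy work
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [200_000] * 8
    with Pool(os.cpu_count()) as pool:   # one worker per logical core
        results = pool.map(busy, jobs)   # chunks run in parallel
    print(len(results))  # 8
```

With hyper-threading, `os.cpu_count()` reports logical cores (often 2x the physical count), which is exactly the "4 cores + 4 virtual cores" split described above.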
A video on emulation maybe?
When I hear "Virtual Machines" I was expecting talk of Java, .NET, LLVM, V8, SpiderMonkey, and so forth.
When I hear "Virtualization" I think of VMWare, VirtualBox, Xen, KVM, Qemu, etc.
Too right; I've got a CNC machine with NT4 being run on it. Just a nice, strong, robust system :)
Which is not what I'm disputing.
And as I'm pointing out, I may be guaranteed to get what I'm paying for for the duration of the contract (say a year), but I have no guarantee about what will happen after that.
Besides, that's just one of many reasons why free software is better for users.
Financial Services ARE behind... My company uses a program, whose last major update was in 1994, to perform 90% of the business data requirements....
I was always one of those people that "Knows about computers". After watching these videos... I realise I know NOTHING about computers.
I don't like the term 'cloud'; it sounds like such a buzzword.
ewwwwww... and while we're at it, "HARD drives" "FLOPPY disks" etc. etc.
at my old job we would refer to ssh by "shhhh"-ing each other with pointer finger on lips. One day my supervisor walked up to me and shhh-ed me in a sexy way, and a member of HR was walking by... most awkward moment ever.
And this is exactly why proprietary software is bad for business; yes, most businesses are terrified of it.
Sean, can you increase the volume on the videos pls?
Can you do a video on bitcoins please? What it is, how it works,...
Not entirely true; it depends on how you allocate resources and how those resources are being used. When you throw in FPGAs or GPGPUs it becomes even less of an issue.
Great that you can run the OS you need to, but running NT4 is like saying "hack me please" I hope those NT4 VMs are on private IP ranges only.
NT 4 was the last decent Windows OS; that's why people are still using it.
VMware vs. VirtualBox. 3...2...1... go!
Not at all, I've not even played it, only seen videos. I'm annoyed by people who comment on any subject they don't know about.
Good luck with getting that in your contract with Microsoft, Oracle or IBM.
My nickname is virtually powering the internet ... you're welcome :D
Anybody else wondering about those writings on the whiteboard behind him?
A machine within a machine - machine-ception!
I suspect it's because a lot of people think ssh and ftp are icky.
What do you mean by that?
Call John Conner, I think this guy builds Skynet.
Yeah, and the VM is lost in cyberspace, still running.
I still use Windows 95 to play Sim Farm, I'm 21 years old don't judge me.
I could make a Linux machine with the same hardware for half that price.
But it might crash once or twice a day in normal operation, and I might burn out a hard drive reinstalling the OS until I find the most stable one.
So: modern computing is just a lot of "he said, she said" shi*. That's why I keep two machines: an older one (without internet) and a newer one.
Stage 7 is obviously the Matrix
So? We'll simulate the simulation from within the simulation, virtually.
So you can like them, while you like them!
Yeah and on the Darknet in general
And now I know how to lag a modern computer with 8-bit graphics XD