Client-side Virtualization - CompTIA A+ 220-1001 Prof Messer
-
Overall, this seems like a good video. I haven't quite finished it yet, but the info seems clear and accurate, covering stuff that so many people get wrong. Good to see good info when the A+ has such a reputation for getting the basics screwed up.
-
I thought an OS's memory usage was not that high... is this not the case?
-
@mary said in Client-side Virtualization - CompTIA A+ 220-1001 Prof Messer:
I thought an OS's memory usage was not that high... is this not the case?
Define "not that high?" Let's take Windows for example, a Windows system doing any normal amount of tasks will use at least 2GB while basically idle and easily 4GB if doing anything, and easily 6-8GB if you are just surfing the web a lot. A typical computer today comes with either 8GB or 16GB of RAM.
So if you are virtualizing a system, it needs exactly the same amount of RAM as a physical system. So if your virtual system needs a minimum of 4GB and practically needs more like 8GB, and you run two of those on top of your existing system which itself needs close to 8GB, suddenly you need 24GB total, so the numbers get really big, really fast.
Now if you are using Ubuntu Linux and don't run a graphical desktop, you could get a virtual system up and running with maybe 400MB of RAM, a very, very far cry from 4GB. So it all depends what you are doing.
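If you want to see how those numbers add up on your own machine before sizing any VMs, the standard built-in tools will show you (nothing here is specific to any particular setup):
free -h (Linux: "total" is what the host physically has, "available" is roughly what is left over to hand to VMs)
systeminfo | find "Memory" (Windows: prints the Total and Available Physical Memory lines)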
-
A few of the practice questions on the A+ deal with MAC filtering. Is that what he is talking about when configuring each instance of the OS with its own address, or is this another layer you can add on?
-
@mary said in Client-side Virtualization - CompTIA A+ 220-1001 Prof Messer:
A few of the practice questions on the A+ deal with MAC filtering. Is that what he is talking about when configuring each instance of the OS with its own address, or is this another layer you can add on?
MAC filtering is uncommon (and generally impractical) in the real world. It's one of those "mostly fake" "security by obscurity" things that newbies get taught to do because it sounds good to management, but techs know it is not going to stop anyone. It's more of a pain than anything, and mostly for cases where someone is trying on their tinfoil hat for size.
But basically every device has a unique MAC address; this is how Ethernet or WiFi tells one device from another. It's the device's hardware address (MAC stands for Media Access Control). MAC filtering is when you make a list of the MAC addresses allowed on your network (there is zero value in making a list of ones that are banned, because anyone can just change their MAC address to anything that they want).
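Just to make the "allowed list" idea concrete, here is a rough sketch of MAC filtering on a Linux box with iptables (the MAC address is made up and the LAN interface is assumed to be eth0, this is purely illustrative):
iptables -A INPUT -i eth0 -m mac --mac-source 00:11:22:33:44:55 -j ACCEPT (allow this one known MAC)
iptables -A INPUT -i eth0 -j DROP (drop traffic from everything else)
Anyone who sets their MAC to one on the allowed list walks right through it, which is why it's security by obscurity.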
For normal people, 99.9999% of the world, you don't change your MAC address from the default, which is always set by the manufacturer following simple global guidelines, so every NIC ships with an effectively globally unique identifier. So MACs always "just work" and you ignore them. They are handled at a layer below IP addresses and you don't need to think about them.
It's not the OS that has a MAC address, it's the NIC that the OS is using. It's the address of the "Ethernet device."
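If you want to see the MAC on your own NIC, the normal built-in tools will show it (no third-party software needed):
ip link show (Linux: the "link/ether" value on each interface is its MAC)
getmac /v (Windows: lists each adapter with its "Physical Address")
ipconfig /all (Windows: also shows the "Physical Address" for each adapter)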
-
IP, the Internet Protocol, doesn't work on its own; it's a very "high level" protocol. It has to ride on top of other things, like Ethernet. In today's world, it's always Ethernet or Ethernet over wireless (aka WiFi). Thirty years ago, Ethernet had lots of competitors like Token Ring; today it really doesn't have any competitors except the super, duper rare InfiniBand. Anything other than Ethernet or InfiniBand as an underlying protocol for IP is just a "lab experiment" and not useful in the real world. Technically things like USB and FibreChannel can do the job, but to no real purpose.
Part of what makes Ethernet so good is that all of the work is "automatic." There is zero configuration needed or warranted from the end users, even at initial setup. If your goal is just to build an Ethernet network and nothing more, all you need is a switch of any size and to plug computers (or devices) into it; that's it. Ethernet is just sort of magic. It's also so basic that it is almost unusable on its own; it needs something like IP on top of it to do anything that you'd actually want to do. It's not enough by itself, it's really just a building block for IP.
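If you want to actually watch IP riding on top of Ethernet, a packet capture makes it obvious. Something like this on a Linux box (the interface name is just an example, yours might be enp0s3 or similar):
sudo tcpdump -e -c 5 -i eth0
The -e flag prints the link-level (Ethernet) header, so each line shows the source and destination MAC addresses wrapped around the IP traffic.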
But you CAN mess with the Ethernet layer and one of the easiest things that you can do is to modify your MAC address.
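For example, on Linux it's just a couple of commands (assuming an interface named eth0; the address here is made up and uses a "locally administered" leading octet like 02):
sudo ip link set dev eth0 down
sudo ip link set dev eth0 address 02:11:22:33:44:55
sudo ip link set dev eth0 up
On Windows, most NIC drivers expose the same thing as a "Network Address" or "Locally Administered Address" setting in the adapter's Advanced properties.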
-
@mary if you want to see some MAC addresses on your network, try the arp command on Fedora Linux. ARP is the "Address Resolution Protocol," and it is the protocol that maps the IP addresses on your network to the MAC addresses of the Ethernet devices that hold them. By running arp you will see a list of the IP addresses that your desktop knows about, along with the MAC (called HWaddress for Hardware Address) to which each one is mapped on your home network.
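Something like this (on current Fedora the old arp tool lives in the net-tools package, and ip neigh is the newer built-in replacement):
sudo dnf install net-tools
arp -n (shows each IP, its HWaddress, and the interface it was learned on)
ip neigh show (the modern equivalent; the lladdr field is the MAC)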
-
On Windows, if you install the Advanced IP Scanner like this: choco install advanced-ip-scanner -y, and run it, it will scan your network and show you a list of all the MAC addresses that are active on it, and it will use each MAC address to list the manufacturer of the device in question (since a native MAC address has to contain the manufacturer's registered ID as part of the address).
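If you just want a quick look without installing anything, Windows also has a built-in way to see the MACs your machine already knows about:
arp -a (lists recently seen IPs and their "Physical Address," i.e. the MAC)
The advantage of a scanner like Advanced IP Scanner is that it actively sweeps the whole subnet and does the manufacturer lookup for you; arp -a only shows what is already in your local ARP cache.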
-
@scottalanmiller thanks for clearing that up!
-
@scottalanmiller As one of the tinfoil hat wearers around here, even I don't think MAC filtering is anything other than a headache and a time sink.
-
So essentially you could have one beefy computer in a business or something similar that distributes virtual machines to other monitors? I can't really grasp how VMs work.
-
@connorsoliver said in Client-side Virtualization - CompTIA A+ 220-1001 Prof Messer:
So essentially you could have one beefy computer in a business or something similar that distributes virtual machines to other monitors? I can't really grasp how VMs work.
This diagram may help. A single hardware host is able to create multiple Virtual Machines. These virtual machines split the resources of the host; however, these virtual machines behave like a hardware host in every way. Most are not aware in any way that they are virtualized, as they function exactly like physical hardware hosts.
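If a command-line view helps alongside the diagram, here is a rough sketch of carving two VMs out of one Linux host with KVM/libvirt (the names, sizes, and ISO path are all made up for the example):
virt-install --name vm1 --memory 4096 --vcpus 2 --disk size=40 --cdrom /isos/install.iso --os-variant generic
virt-install --name vm2 --memory 4096 --vcpus 2 --disk size=40 --cdrom /isos/install.iso --os-variant generic
virsh list --all (shows both guests running side by side on the one physical host)
Each VM boots its installer and behaves like its own physical machine; it just happens to be sharing the host's CPU, RAM, and disk.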
-
@connorsoliver said in Client-side Virtualization - CompTIA A+ 220-1001 Prof Messer:
So essentially you could have one beefy computer in a business or something similar that distributes virtual machines to other monitors? I can't really grasp how VMs work.
The general idea isn't about providing a desktop environment to end users; it's about dividing up the hardware between different workloads. Today's servers are so powerful that if you ran, let's say, just an email server on a modern Dell server, you'd likely be wasting 95% of its processing power. Virtualization allows you to split the physical hardware into many workloads that all think they are on their own hardware. This is good because it keeps different software from interfering with each other, etc.
-
@scottalanmiller said in Client-side Virtualization - CompTIA A+ 220-1001 Prof Messer:
On Windows, if you install the Advanced IP Scanner
I used this today trying to find a phone. No DHCP lease. I never configure anything static. Phone is offline in Asterisk. Yet the user can make calls.
MAC not found and no unknown IPs found with the scan. WTF.....
-
@JaredBusch Rogue DHCP server?
-
@travisdh1 said in Client-side Virtualization - CompTIA A+ 220-1001 Prof Messer:
@JaredBusch Rogue DHCP server?
Nope, better detail from the end user resulted in learning that the phone was still on the prior extension.
That led to the discovery that I have no idea where the MAC that was configured for the extension came from.
Deleted old ext, used MAC for that phone and all better..
-
@scottalanmiller said in Client-side Virtualization - CompTIA A+ 220-1001 Prof Messer:
@mary said in Client-side Virtualization - CompTIA A+ 220-1001 Prof Messer:
I thought an OS's memory usage was not that high... is this not the case?
Define "not that high?" Let's take Windows for example, a Windows system doing any normal amount of tasks will use at least 2GB while basically idle and easily 4GB if doing anything, and easily 6-8GB if you are just surfing the web a lot. A typical computer today comes with either 8GB or 16GB of RAM.
So if you are virtualizing a system, it needs exactly the same amount of RAM as a physical system. So if your virtual system needs a minimum of 4GB and practically needs more like 8GB, and you run two of those on top of your existing system which itself needs close to 8GB, suddenly you need 24GB total, so the numbers get really big, really fast.
Now if you are using Ubuntu Linux and don't run a graphical desktop, you could get a virtual system up and running with maybe 400MB of RAM, a very, very far cry from 4GB. So it all depends what you are doing.
You aren't comparing correctly... you mention high RAM usage in Windows when using a web browser and GUI, and compare it to a GUI-less Linux server doing nothing, running at 400MB of RAM... That isn't a fair comparison. You also need to compare it to an Ubuntu Linux workstation with the default GUI (GNOME) running a web browser. They will both require a ton of RAM there...
No system is very useful with the minimum amount of memory to be "up and running" as you say.
I have seen a number of Windows servers with GUIs running idle at the minimum of 512MB of RAM in Hyper-V, and they don't periodically go up much higher, even when they are DCs.
As soon as you have the Linux/Windows box or VM doing anything useful, the "up and running on 400MB RAM" is no longer so... It's nice and all that a GUI-less Linux server can be up and running at 400MB of RAM usage, which is much better than Windows, but if we're talking GUI-less, then I'm assuming servers. If that's the case, it's safe to assume the Windows server is GUI-less as well and is also running with low RAM usage when "just up and running".
-
@Obsolesce said in Client-side Virtualization - CompTIA A+ 220-1001 Prof Messer:
As soon as you have the Linux/Windows box or VM doing anything useful, the "up and running on 400MB RAM" is no longer so...
That's totally not true. You can do things like email servers, web servers, DNS servers, even PBX with 400MB of RAM. It's tight and not ideal, but totally doable in the real world.
That's my point: from the Windows world, that's an "idle" number. But on Linux, idle is more likely around 80MB, and 400MB is more of a "with usage" number.
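Easy to check on any Linux VM:
free -m (the "used" column on an idle GUI-less server install is typically well under a few hundred MB)
ps aux --sort=-rss | head (shows which processes are actually holding that memory)
The exact numbers depend on the distro and what services are installed, but that's the general shape of it.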
-
@scottalanmiller said in Client-side Virtualization - CompTIA A+ 220-1001 Prof Messer:
You can do things like email servers, web servers, DNS servers, even PBX with 400MB of RAM. It's tight and not ideal, but totally doable in the real world.
Doable? Sure. But definitely not practical at any kind of real scale. Could some SMB run their DNS server or a little web server on a GUI-less Linux server with 400MB of RAM? Definitely! And it's very likely done in most of them! That's likely the way to go vs Windows... wtf would anyone use Windows to do that?
HOWEVER, that was not my main point... I suppose I should have made each paragraph its own reply.
-
@Obsolesce Windows Server Core is not really able to be utilized in the same ways as your typical Linux server. Many of Microsoft's own features are not supported on Core (at least that used to be the case). I am sure Microsoft continues to add features for it to support additional MS products and services, but what is the real use case?
Who would license Windows Server to run an inefficient web server? Even Microsoft is pushing SQL Server on Linux vs their own server platform. I have gotten so many emails from Microsoft almost begging people to run SQL Server on Linux. The only real-world cases to run Server Core are domain controllers, DNS servers, DHCP, etc. Those were the real use cases behind Core. 95% of your Windows admins are terrified of Core servers, though. It is just a fact. I have seen it everywhere I have worked.