One of the things that a lot of administrators seem to really like about Hyper-V is its relative simplicity, especially when compared to some of the competing hypervisors. Perhaps nowhere is this simplicity more evident than in Hyper-V’s support for virtual networking. Natively, Hyper-V’s virtual networking capabilities center on one or more virtual switches. Hyper-V supports three different types of virtual switches. A private virtual switch provides communications between a collection of virtual machines, and nothing else. An internal virtual switch is like a private virtual switch, except that it also allows the Hyper-V host to communicate with the virtual machines.
The most widely used virtual switch type is an external virtual switch. An external virtual switch provides communications between virtual machines, the Hyper-V host, and the outside world. Hyper-V provides this connectivity by binding the virtual switch to a physical network adapter. Virtual machines and the Hyper-V host connect to the virtual switch by way of a virtual network adapter. So simply put, VMs (and the host) all have a virtual network adapter that connects to a virtual switch, and the virtual switch connects to a physical network adapter.
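On a standalone host, all three switch types can be created with the built-in Hyper-V PowerShell module. The switch and adapter names below are placeholders; the physical adapter name must match one reported by Get-NetAdapter on your host.

```powershell
# Run elevated on a host with the Hyper-V PowerShell module installed.

# Private switch: VM-to-VM traffic only
New-VMSwitch -Name "Private-Lab" -SwitchType Private

# Internal switch: VMs plus the Hyper-V host itself
New-VMSwitch -Name "Internal-Lab" -SwitchType Internal

# External switch: bound to a physical network adapter.
# -AllowManagementOS $true also gives the host its own virtual
# network adapter on this switch.
New-VMSwitch -Name "External-Corp" -NetAdapterName "Ethernet" -AllowManagementOS $true
```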
Although this architecture works really well in a native Hyper-V environment, it doesn’t scale very well. If you wanted to live-migrate a virtual machine from one host to another, for example, you would have to ensure that the destination host contains a virtual switch with a name that is identical to that of the virtual switch that the VM is currently using.
Organizations that operate larger scale Hyper-V deployments almost always manage their Hyper-V hosts (and virtual machines) through System Center Virtual Machine Manager (VMM). For those who might not be familiar with VMM, it is a separate product that is included in the Microsoft System Center suite. It is designed to act as a more powerful alternative to the Hyper-V Manager.
Because VMM is designed to support larger Hyper-V deployments, it provides a more complex virtual networking architecture that is better suited to serving those environments. There are six main VMM networking components that you need to be aware of.
Logical networks

Logical networks are designed to act as a virtual representation of your physical networks. Generally speaking, you will want to create a logical network for every physical network that VMM will interact with.
While it is possible that you may end up with multiple logical networks (such as one representing the corporate network, and another representing the management network), it is also possible that you may only have a single logical network. VMM makes it possible to associate multiple sites (and subnets) with a single logical network. You can even create an IP address pool to be used by a logical network, although using a DHCP server is usually the preferred mechanism for assigning IP addresses.
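As a rough sketch of how this looks in VMM’s PowerShell module, a logical network with one associated site (a "logical network definition" in VMM terminology) might be created as follows. The network name, host group, subnet, and VLAN ID are all illustrative, and parameter details can vary between VMM versions.

```powershell
# Run from a PowerShell session connected to the VMM management server.
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

# A logical network representing the corporate network
$corpLN = New-SCLogicalNetwork -Name "Corporate"

# A network site associating a subnet/VLAN pair with the logical
# network for a particular host group
$subnet = New-SCSubnetVLan -Subnet "192.168.10.0/24" -VLanID 0
New-SCLogicalNetworkDefinition -Name "Corporate - Datacenter" `
    -LogicalNetwork $corpLN -SubnetVLan $subnet -VMHostGroup $hostGroup
```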
VM networks

A VM network is a virtual machine network. It’s the network that virtual machines use to communicate with one another. Of course, virtual machines also need to communicate with the outside world and with VMs on other Hyper-V hosts. To enable this communication, a VM network is tied to a logical network. In other words, VM networks act as a layer of abstraction that allows virtual machines to access logical networks.
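In VMM’s PowerShell module, a VM network is created against an existing logical network. The sketch below assumes a logical network named "Corporate" already exists; the names and the no-isolation choice are illustrative.

```powershell
# A VM network layered on top of the "Corporate" logical network,
# with no isolation (one VM network per logical network).
$corpLN = Get-SCLogicalNetwork -Name "Corporate"
New-SCVMNetwork -Name "Corporate VM Network" `
    -LogicalNetwork $corpLN -IsolationType "NoIsolation"
```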
IP address pools
Just as a DHCP server manages pools of IP addresses, VMM lets you create IP address pools and assign them to logical networks and to VM networks. The rules for when you create an IP address pool on a logical network versus when you create one on a VM network are a bit complex, but ultimately boil down to logical network isolation.
When you create a logical network, you must choose whether that network will use isolation. Isolation makes it possible to host multiple, separate VM networks on a single logical network. If you opt not to use isolation, there will be a one-to-one relationship between VM networks and logical networks, with each logical network supporting a single VM network.
With that said, if you are not using isolation (or if you are using VLANs), you can either use a DHCP server to provide IP addresses or create an IP address pool at the logical network level. Those addresses will automatically be made available across the VM network. In most other situations, you will have to create IP address pools on both the logical network and the VM network.
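An IP address pool at the logical network level is created against a network site. The sketch below uses VMM’s New-SCStaticIPAddressPool cmdlet; it assumes a site named "Corporate - Datacenter" already exists, and the pool name and address range are illustrative.

```powershell
# A static IP address pool carved out of the network site's subnet
$site = Get-SCLogicalNetworkDefinition -Name "Corporate - Datacenter"
New-SCStaticIPAddressPool -Name "Corporate Pool" `
    -LogicalNetworkDefinition $site -Subnet "192.168.10.0/24" `
    -IPAddressRangeStart "192.168.10.50" -IPAddressRangeEnd "192.168.10.99"
```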
Gateways

Gateways also tie back to the concept of isolation. When isolation is used, the VMs on a VM network are only able to communicate with one another. If you need a VM on one VM network to be able to communicate with a VM on a different VM network, then you will have to create a gateway. The gateway provides connectivity between two VM networks that share a single logical network.
Port profiles and classifications

There are two types of port profiles available in VMM. Uplink port profiles apply to physical network adapters, and virtual network adapter port profiles apply to virtual network adapters. Port profiles are used to define port capabilities, such as bandwidth limitations. You can also use port classifications as a tool for identifying the characteristics of the various ports. Some organizations, for example, classify ports as being either fast or slow.
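The sketch below shows both flavors in VMM PowerShell, along with a port classification. The profile names are illustrative, it assumes the "Corporate - Datacenter" network site exists, and the exact bandwidth-related parameters can differ between VMM versions, so treat this as a starting point rather than a definitive recipe.

```powershell
# An uplink port profile tied to a network site (applies to
# physical network adapters on the hosts)
$site = Get-SCLogicalNetworkDefinition -Name "Corporate - Datacenter"
New-SCNativeUplinkPortProfile -Name "Datacenter Uplink" `
    -LogicalNetworkDefinition $site

# A virtual network adapter port profile with a bandwidth cap
New-SCVirtualNetworkAdapterNativePortProfile -Name "High Bandwidth" `
    -MaximumBandwidth 10000

# A classification that labels ports using this profile as "fast"
New-SCPortClassification -Name "Fast" -Description "High-bandwidth ports"
```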
Logical switches

As previously mentioned, a standalone Hyper-V host provides network connectivity through a virtual switch. The problem with virtual switches is that they are unique to each host. In VMM, logical switches take the place of virtual switches. VMM allows settings to be applied to logical switches in a consistent manner so that each Hyper-V host is equipped with a similarly configured switch.
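A logical switch packages uplink port profiles (via an uplink port profile set) so the same configuration can be deployed to every host. The sketch below assumes an uplink port profile named "Datacenter Uplink" already exists; the switch and set names are illustrative, and a real deployment would typically also attach port classifications and virtual adapter profiles.

```powershell
# A logical switch that carries a consistent uplink configuration
$uplink = Get-SCNativeUplinkPortProfile -Name "Datacenter Uplink"
$ls = New-SCLogicalSwitch -Name "Datacenter Switch"

# Uplink port profiles are attached to the switch through an
# uplink port profile set
New-SCUplinkPortProfileSet -Name "Datacenter Uplinks" `
    -LogicalSwitch $ls -NativeUplinkPortProfile $uplink
```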
VMM and virtual networking: Yes, it’s a bit more complex
Admittedly, VMM’s approach to virtual networking is more complex than that of native Hyper-V, and can, therefore, take a bit of getting used to. Even so, the various networking components fit together in a very logical way, and can ultimately make Hyper-V connectivity much easier to manage.