NSX-V Network Virtualization Design Guide 2.0

VMware has published a second, completely reworked edition of the VMware® NSX for vSphere Network Virtualization Design Guide.

NSX-V Design Guide 2.0

This design document is targeted at virtualization and network architects interested in deploying the VMware® NSX network virtualization solution in a vSphere environment. It covers:

  1. NSX-v functional components
  2. Functional services
  3. Design considerations.

The design content includes new features from the NSX 6.1 release, such as active/active Equal-Cost Multi-Path (ECMP) routing, micro-segmentation, and the Distributed Firewall.

Thanks to the authors, Max Ardica and Nimish Desai, for creating the best and most complete NSX-V design guide yet:

Link to paper:

https://communities.vmware.com/docs/DOC-27683

Posted in Design

Deploying an NSX-V controller fails and the controller disappears from the vSphere client

A few days ago I was trying to deploy an NSX-V controller and found that the deployment failed and the controller disappeared after a few minutes.

The first place to look was the vCenter tasks.

Trying to see what was going on, it is clear that right after the controller virtual machine is powered on, the VM is powered off and then deleted. But why?

View vCenter Tasks

 

Troubleshooting step 1 is to download the NSX Manager logs.

In the upper right corner of the NSX Manager GUI, choose "Download Tech Support Log".

Download NSX Manager Logs

 

The tech support file can be a very large text file, so finding the problem is like looking for a needle in a haystack. What should we look for?

My best advice is to start with something we know: the controller name.
This name was created when we completed the NSX controller deployment wizard.

In my example it was "controller-2".

Open the text file and search for this name:

Search in Tech Support File
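If you prefer the command line, you can also extract the tech support bundle and search it with grep. A minimal sketch, assuming the bundle has been extracted locally and that the NSX Manager system log inside it is named vsm.log (the file name can differ between NSX versions):

grep -n -i "controller-2" vsm.log | less
# or, to search every file in the extracted bundle:
grep -rn -i "controller-2" .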

 

When you find the name, use the arrow-down key and start reading:

NSX Tech Support file

 

From this error we can learn that we have a connectivity issue; it appears that if the controller can't reach the NSX Manager during the deployment process, it is automatically deleted.

The next question is why I have a routing problem.

In my case the NSX Controller and NSX Manager run in the same IP subnet.

The answer was found in the static IP pool object I had created manually.

In this lab I work with a class B subnet mask of 255.255.0.0 (a /16 prefix), but in the IP pool object I created I chose a prefix of 24. With a /24 prefix the controller can consider the NSX Manager to be on a different subnet and try to route to it, so the connection never comes up.

Wrong IP Pool

 

This was just one example of how to troubleshoot NSX-V controller deployment, but there are other reasons that can cause this problem, for example (see the REST API check after this list):

  • A firewall blocking the controller from talking to the NSX Manager.
  • Network connectivity problems between the NSX Manager and the controllers.
  • Missing DNS/NTP configuration: make sure the NSX Manager, vCenter, and ESXi hosts have DNS and NTP configured.
  • Insufficient resources: make sure you have enough free disk space in the datastore where you deploy the controllers.
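You can also query the controller deployment status directly from the NSX Manager REST API instead of digging through the logs. A hedged sketch: the manager address and credentials are placeholders, -k only skips certificate validation in a lab, and I am assuming the NSX-V controller endpoint:

curl -k -u admin:'VMware1!' https://nsxmgr.lab.local/api/2.0/vdn/controller
# the XML response lists every controller with its status, IP address and the IP pool it was deployed from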
Posted in Troubleshooting

Improving NSX GUI user experience

Working with NSX in different environments, I found that the vSphere Web Client can be slow.

After hearing other users complain about the same problem, I decided to write down some tips for improving the user experience.

First, try to log in with the local default user administrator@vsphere.local. If this is fast while an LDAP user is slow, try switching from the user@domain format to domain\user.

1. Increase the Java memory limit for vCenter. VMware published a KB article:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2021302

2. Install the Client Integration Plug-in. You should see a message on the vCenter login page:

client integration plugin

3. Increase the local storage setting of the Flash Player; this will speed up the Web Client.

Adobe has an online tool to view and change the local storage setting:
http://www.macromedia.com/support/documentation/en/flashplayer/help/settings_manager07.html

Flash Storage Settings panel

4. Remove idle sessions from your vCenter.

vCenter idle session

Posted in Troubleshooting

NSX – Distributed Logical Router

Overview

In the modern data center, the physical router interconnecting subnets is an essential building block of any working design.

We need to provide similar functionality in virtual networks.

Routing between IP subnets can be done in a logical space without traffic going out to the physical router. This routing is performed in the hypervisor kernel with a minimal CPU / memory overhead.

This functionality provides an optimal data-path for routing traffic within the virtual infrastructure.

The distributed routing capability in the NSX platform provides an optimized and scalable way of handling East-West traffic within a data center. East-West traffic is communication between virtual machines or other resources within the data center. The amount of East-West traffic in the data center is growing. The new collaborative, distributed, and service-oriented application architectures demand higher bandwidth for server-to-server communication.

If these servers are virtual machines running on a hypervisor and they are connected to different subnets, the communication between them has to go through a router. Moreover, if a physical router is used to provide routing services, the virtual machine traffic has to go out to the physical router and come back to the server after the routing decision. This suboptimal traffic flow is sometimes called "hairpinning".

The distributed routing on the NSX platform prevents the “hair-pinning” by providing hypervisor level routing functionality. Each hypervisor has a routing kernel module that performs routing between the logical interfaces (LIFs) defined on that distributed router instance.

 

 

What is DLR

DLR – Distributed Logical Router.

The best way to explain what the DLR is, is to look at how a data center backbone chassis works today.

Inside this big box we have one or two supervisor cards, which are the brain of the chassis (control plane), and many line cards doing the forwarding (data plane).

The DLR is built from two pieces. The first is the DLR Control VM (control plane), which runs as a virtual machine.

A dynamic routing protocol runs between the DLR Control VM and the upstream router (the NSX Edge).

The second piece is the distributed kernel module, which acts like the line cards (forwarding plane).

The distributed kernel module runs on each ESXi host.

When the DLR Control VM learns a new network, the routing table update is pushed to the DLR kernel module on every ESXi host.

Just as a physical router has physical interfaces with IP addresses, the Distributed Logical Router has logical interfaces with IP addresses, called LIFs.

 

What is DLR

 

What is Lif?

A LIF is a Logical Interface attached to the DLR. Each interface is assigned an IP address, and the interface is distributed to all ESXi hosts with the same IP address.

From the VM's perspective, the default gateway is the DLR's IP address configured on the LIF.

Lif

My Current Logical Switch configuration is:

Logical Switch configuration

Creating the DLR

The first step is to create the DLR Control VM.

We need to go to Networking & Security -> NSX Edges and click on the green + button.

Here we need to specify "Logical (Distributed) Router":

 

Creating DLR

Specify the username and password; we can also enable SSH access:

DLR CLI Credentials

We need to specify where we want to place the DLR Control VM:

place the DLR Control VM

We need to specify the management interface and the logical interfaces (LIFs).

The management interface is used for SSH access to the Control VM.

The LIFs are configured in the second table, below "Configure Interfaces of this NSX Edge".

Configure Interfaces of this DLR

Each LIF is configured by connecting the interface to a logical switch:

Connected Lif to DLR

Configure the Up-Link Transit Lif:

Configure Up-Link Lif

Configure the Web Lif:

Configure the web Lif

Configure the App Lif:

Configure the App Lif

Configure the DB Lif:

Configure the DB Lif

Summary of all DLR Lif’s:

Summary of all DLR Lif's

The DLR Control VM can work in high availability mode; in our lab we will not enable HA:

DLR High Availability

Summary of DLR configuration:

Summary of DLR configuration

 

DLR Intermediate step

After completing the DLR deployment, we have created four different LIFs:

Transit-Network-01, Web-Tier-01, App-Tier-01, DB-Tier01

All of these LIFs span all of our ESXi clusters.

So, for example, a virtual machine connected to the logical switch "App-Tier-01" will have a default gateway of 172.16.20.1, regardless of where the VM is located in the data center.

DLR Intermediate step

 

DLR Routing verification

We can verify that the NSX Controller has received the DLR LIF IP addresses for each VXLAN logical switch.

From the NSX Controller, run this command: show control-cluster logical-routers instance all

DLR Routing verification

The LR-Id "1460487505" is the internal ID of the DLR instance.

To verify all the DLR LIF interfaces, run this command: show control-cluster logical-routers interface-summary <LR-Id>

In our lab:

show control-cluster logical-routers interface-summary 1460487505

DLR Routing verification
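The same information can be checked on the ESXi hosts themselves, where the distributed kernel module runs, using the net-vdr utility. A sketch, assuming an NSX-prepared host; the exact option syntax and the DLR instance name (printed by the first command) vary slightly between NSX versions:

net-vdr --instance -l              # list the DLR instances present on this host
net-vdr --lif -l <dlr-instance>    # list the LIFs pushed to this host for that instance
net-vdr --route -l <dlr-instance>  # show the DLR routing table programmed in the kernel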

 

Configure OSPF on DLR

On the NSX Edges page, click on the DLR (the edge of type Logical Router).

Configure OSPF on DLR

Go to Manage -> Routing -> OSPF and click "Edit".

Configure OSPF on DLR

Type in the protocol address and forwarding address.

Do not check the "Enable OSPF" checkbox yet!

Protocol Address and Forwarding Address

The protocol address is the IP address of the DLR Control VM; this control plane component actually establishes the OSPF peering with the NSX Edge.

The forwarding address is the IP address that the NSX Edge uses as the next hop to forward packets to the DLR:

DLR Forwarding Address

Click on “Publish Changes”:

Publish Changes

The results will look like this:

DLR

Go to “Global Configuration”:

Global Configuration

Type the default gateway for the DLR (the next hop is the NSX Edge):

Default Gateway

Enable the OSPF:

Enable the OSPF

Then click "Publish Changes".

Go back to "OSPF", to "Area to Interface Mapping", and add the Transit-Uplink to Area 51:

Area to Interface Mapping

Click "Publish Changes".

Go to Route Redistribution and make sure OSPF is enabled:

Route Redistribution

Deploy NSX Edge

In our lab we will use an NSX Edge as the next hop for the DLR, but it could also be a physical router.

The NSX Edge is a virtual appliance that offers L2, L3, perimeter firewall, load balancing, and other services such as SSL VPN, DHCP, etc.

We will use this Edge for Dynamic Routing.

 

Go to "NSX Edges" and click on the green plus button.

Select "Edge Services Gateway" and fill in the name and hostname for this Edge.

If we would like to use a redundant Edge, we need to check "Enable High Availability".

NSX Edge

Enter your username and password:

username and password

Select the Size of the NSX Edge:

NSX Edge size

Select where to install the Edge:

Configure the Network Interfaces:

Configure the Network Interfaces

Configure the Mgmt interface:

Configure the Mgmt interface

Configure the Transit interface:

Configure the Transit interface (toward the DLR)

Configure Default Gateway:

Edge Default Gateway

 

Set the firewall default policy to permit all traffic:

Firewall Default policy to permit all traffic

Summary of Edge Configuration:

Summary of Edge Configuration

Configure OSPF at NSX Edge:

Configure OSPF at NSX Edge

Enable OSPF at “Global Configuration”:

Enable OSPF at "Global Configuration"

In "Dynamic Routing Configuration", click "Edit".

For the "Router ID", select the interface whose address will be used as the OSPF router ID.

Check “Enable OSPF”:

 

Enable OSPF

Publish, then go to "OSPF" and add the transit network to Area 51 in the interface mapping section:

Map Interface to OSPF Area

 

Click “Publish”

Make sure the OSPF status shows "Enabled" and the red button on the right reads "Disable".

Getting the full picture

 

Getting the full picture

 

Dynamic OSPF Routing Verification

Open the Edge CLI
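From the Edge console (or an SSH session to it), a couple of show commands confirm the adjacency and the learned routes. This is a sketch, and the exact command set depends on the NSX Edge version:

show ip ospf neighbor   # the DLR protocol address should appear as a neighbor in Full state
show ip route           # routes learned via OSPF are flagged with O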

The Edge has an OSPF neighbor adjacency with 192.168.10.3, which is the Control VM IP address (its protocol address).

Edge OSPF verification

The NSX Edge received OSPF routes from the DLR.

From the Edge's perspective, the next hop toward the DLR is the forwarding address, 192.168.10.2.

Edge OSPF Routing Verification

 

Related Posts:

NSX Manager

NSX Controller

Host Preparation

Logical Switch

Distributed Logical Router

Thanks to Offer Nissim for reviewing this post.

 

To find out more about distributed dynamic routing, I recommend reading two blogs from colleagues of mine:

Brad Hedlund

http://bradhedlund.com/2013/11/20/distributed-virtual-and-physical-routing-in-vmware-nsx-for-vsphere/

Antony Burke

http://networkinferno.net/nsx-compendium

Posted in Install

VMware starts beta NSX training

VMware NSX: Install, Configure, Manage [V6.0]

VMware NSX

Overview:
This comprehensive, fast-paced training course focuses on installing, configuring, and managing VMware NSX™. NSX is a software networking and security virtualization platform that delivers the operational model of a virtual machine for the network. Virtual networks reproduce the layer 2–layer 7 network model in software, enabling complex multitier network topologies to be created and provisioned programmatically in seconds. NSX also provides a new model for network security where security profiles are distributed to and enforced by virtual ports and move with virtual machines.
For advanced course options, go to http://www.vmware.com/education.
Objectives: •  Describe the evolution of the Software-Defined Data Center
•  Describe how NSX is the next step in the evolution of the Software-Defined Data Center
•  Describe data center prerequisites for NSX deployment
•  Configure and deploy NSX components for management and control
•  Describe basic NSX layer 2 networking
•  Configure, deploy, and use logical switch networks
•  Configure and deploy NSX distributed router appliances to establish East-West connectivity
•  Configure and deploy VMware® NSX Edge™ services gateway appliances to establish North-South connectivity
•  Configure and use all main features of the NSX Edge services gateway
•  Configure NSX Edge firewall rules to restrict network traffic
•  Configure NSX distributed firewall rules to restrict network traffic
•  Use role-based access to control user account privileges
•  Use activity monitoring to determine whether a security policy is effective
•  Configure service composer policies
Intended Audience: Experienced system administrators that specialize in networking
Prerequisites: •  System administration experience on Microsoft Windows or Linux operating systems
•  Understanding of concepts presented in the VMware Data Center Virtualization Fundamentals course for VCA-DCV certification
Outline: 1  Course Introduction
•  Introductions and course logistics
•  Course objectives
2  VMware NSX Components for Management and Control
•  Evolution of the Software-Defined Data Center
•  Introduction to NSX
•  VMware® NSX Manager™
•  NSX Controller cluster
3  Logical Switch Networks
•  Ethernet fundamentals and basic NSX layer 2 networking
•  VMware vSphere® Distributed Switch™ overview
•  Switch link aggregation
•  Logical switch networks
•  VMware® NSX Controller® replication
4  Routing with VMware NSX Edge Appliances
•  Routing protocols primer
•  NSX logical router
•  NSX Edge services gateway
5  Features of the VMware NSX Edge Services Gateway
•  Network address translation
•  Load balancing
•  High availability
•  Virtual private networking
–  Layer 2 VPN
–  IPsec VPN
–  SSL VPN-Plus
•  VLAN-to-VXLAN bridging
6  VMware NSX Security
•  NSX Edge firewall
•  NSX distributed firewall
•  Role-based access control
•  NSX data endpoint
•  Flow Monitoring
•  Service Composer

More details can be found at:

http://mylearn.vmware.com/mgrreg/courses.cfm?ui=www_edu&a=one&id_subject=54990

 

 

Posted in Certification

Creating firewall rules that block your own vCenter

Working on daily tasks with firewalls can sometimes end in a situation where you end up blocking access to the management of your firewall.

This situation is very challenging, regardless of the vendor you are working with.

The end result of this scenario is that you are unable to access the firewall management to remove the rules that are blocking you from reaching the firewall management!

 

How is this related to NSX?

Think of a situation where you deploy a distributed firewall into each of your ESX hosts in a cluster, including the management cluster where you have your virtual center located.

And then you deploy a firewall rule like the one below.

Deny any Any Rule

Let me show you an example of what you’ve done by implementing this rule:

cut tree you sit on

Like the poor guy above blocking himself from his tree, by implementing this rule, you have blocked yourself from managing your vCenter.

 

How can we protect ourselves from this situation?

Put your vCenter (and other critical virtual machines) in an exclusion list.

Any VM on that list will not receive any distributed firewall rules.

Go to the Networking & Security tab and click on NSX Managers.

Exclusion VM list 1

 

Double click on the IP address object. In my example it is 192.168.110.42

Exclusion VM list 2

Click on Manage:

Exclusion VM list 3

Click on the green plus button.

Exclusion VM list 4

Choose your virtual machine.

Exclusion VM list 5

That’s it!  Now your VC is excluded from any enforced firewall rules.

Exclusion VM list 6

 

What if we have already made the mistake and no longer have access to the VC?

We can use the NSX Manager REST API to revert to the default firewall ruleset.

By default the NSX Manager is automatically excluded from DFW.

Using a REST Client or cURL:

https://addons.mozilla.org/en-US/firefox/addon/restclient

Submit a DELETE request to:

https://$nsxmgr/api/4.0/firewall/globalroot-0/config
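With cURL, the same call might look like the sketch below; the NSX Manager address and credentials are placeholders, and -k only skips certificate validation in a lab:

curl -k -u admin:'VMware1!' -X DELETE https://nsxmgr.lab.local/api/4.0/firewall/globalroot-0/config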

Exclusion VM list 7

After receiving status code 204, the firewall reverts to the default DFW policy, with the default rule set to allow.

Exclusion VM list 8

Now we can access our VC. As we can see, we have reverted to the default policy, but don't panic 🙂 , we have a saved policy.

Exclusion VM list 9

Click on the “Load Saved Configuration” button.

Exclusion VM list 10

Load the configuration that was saved just before the last one.

Exclusion VM list 11

Accept the warning by clicking Yes.

Exclusion VM list 12
Now we have the last policy from before we blocked our VC.

Exclusion VM list 13

We will need to change the last Rule from Block to Allow to fix the problem.

Exclusion VM list 14

And Click “Publish the Changes”.

Exclusion VM list 15

 

Thanks to Michael Moor for reviewing this post.


Posted in Troubleshooting

NSX and Teaming Policy

Teaming policies allow the NSX vSwitch to load-balance traffic across different physical NICs (pNICs).

In this blog we will explain the different teaming options with a single VTEP or multiple VTEPs.

The official NSX Design Guide 2.0 contains a table with the different teaming policy configuration options.

Teaming Table

 

VTEP – a special VMkernel interface created to encapsulate/decapsulate VXLAN traffic.

VXLAN traffic uses a separate IP stack from the other VMkernel interfaces (management, vMotion, FT, iSCSI).

At first glance at the table, we can see that only some of the options are supported with multiple VTEPs.

 

 

What is Multiple VTEP

Multiple VTEPs – two or more VTEP VMkernel interfaces created on an NSX vSwitch.

With multiple VTEPs we have a 1:1 mapping; each VTEP maps to a specific pNIC.

In our example VTEP1 will map to pNIC1 and VTEP2 will map to pNIC2.

This is the point to note: all traffic exiting VTEP1 goes out pNIC1, and all traffic arriving on pNIC1 is forwarded to VTEP1.

In other words, all outbound and inbound VXLAN traffic on pNIC1 is handled by VTEP1.

Multiple VTEP

Why do we need multiple VTEPs?

We need them if we have more than one physical link that we would like to use for VXLAN traffic and the upstream switches do not support LACP (or it is not configured).

In that case, using multiple VTEPs lets us balance the traffic between the physical links.

Where do we configure Multiple VTEPs?

Configuration of multiple VTEPs is done on the Network & Security > Installation > Configure VXLAN tab.

VXLAN VTEP Teaming mode

In this example we can see 4 VTEPs; this number comes from the number of uplinks configured on the vDS.

Number of VTEP's
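The VTEP VMkernel interfaces can also be listed from the host CLI. A sketch, assuming an NSX-prepared ESXi host; if your build does not accept the --netstack filter, list all vmk interfaces and check the Netstack Instance column instead:

esxcli network ip interface list --netstack=vxlan   # the VTEP vmk interfaces on this host
esxcli network ip interface ipv4 get -i vmk3        # IPv4 details of one VTEP (the vmk number is only an example)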

Source Port Teaming mode (SRCID)

The NSX vSwitch selects an uplink based on the virtual machine portID.

In our example we have two VTEPs and two physical uplinks.

When VM1 connects to the NSX vSwitch and sends red traffic, the NSX vSwitch will pick one of the VTEPs (VTEP1), based on the VM's port ID (portID1), to handle this traffic.

VTEP1 will then send this traffic to pNIC1.

 

Source Port Teaming mode

 

When VM2, with portID2, connects and generates green traffic, the NSX vSwitch will pick a different VTEP to send out this traffic.

A different VTEP is used because the NSX vSwitch sees a different port ID as the source and VTEP1 already has traffic.

VTEP2 will now forward this traffic to pNIC2.

At this point we are using both of the physical links.

Source Port Teaming mode

 

Now VM3, from portID3, connects and sends yellow traffic. The NSX vSwitch will randomly pick one of the VTEPs to handle this traffic.

Both VTEP1 and VTEP2 already carry one VM each, so in terms of port-ID balancing it makes no difference which one is selected.

In this example, VTEP1 was chosen and forwards the traffic to pNIC1.

Source Port Teaming mode

Positive aspects: very simple, and there is no need to configure LACP on the upstream switch.
Negative aspects: if VM1 generates little traffic while VM2 generates heavy traffic, the physical links will not be evenly balanced.

 

Source MAC Teaming Policy (SRCMAC)

This method is almost identical to the previous one; the NSX vSwitch selects an uplink based on the virtual machine's MAC address.

In our example we have two VTEPs and two physical uplinks.

When VM1 with MAC1 connects to the NSX vSwitch and sends red traffic, the NSX vSwitch will pick one of the VTEPs (VTEP1), based on the MAC address, to handle this traffic.

VTEP1 will send this traffic to pNIC1.

MAC Address Teaming mode

 

When VM2 with MAC2 connects and generates green traffic, the NSX vSwitch will pick a different VTEP to send this traffic out.

A different VTEP is used because the NSX vSwitch sees a different source MAC address and VTEP1 already has traffic.

VTEP2 will forward this traffic to pNIC2.

At this point we are using both of the physical links.

MAC Address Teaming mode

 

Now VM3 with MAC3 connects and sends yellow traffic; the NSX vSwitch will randomly pick one of the VTEPs to handle this traffic.

Both VTEP1 and VTEP2 already carry one VM each, so in terms of MAC address balancing it makes no difference which one is selected.

In our example VTEP1 was chosen and forwards the traffic to pNIC1.

MAC Address Teaming mode

Positive points: very simple, no need to configure LACP on the upstream switch.
Negative points: if VM1 generates little traffic while VM2 generates heavy traffic, the physical links will not be evenly balanced.

 

 

 

LACPv2 (Enhanced LACP)

Starting with ESXi 5.5, VMware improved the hashing options for LACP, with up to 20 different hash algorithms.

vSphere 5.5 supports these load balancing types:
  1. Destination IP address
  2. Destination IP address and TCP/UDP port
  3. Destination IP address and VLAN
  4. Destination IP address, TCP/UDP port and VLAN
  5. Destination MAC address
  6. Destination TCP/UDP port
  7. Source IP address
  8. Source IP address and TCP/UDP port
  9. Source IP address and VLAN
  10. Source IP address, TCP/UDP port and VLAN
  11. Source MAC address
  12. Source TCP/UDP port
  13. Source and destination IP address
  14. Source and destination IP address and TCP/UDP port
  15. Source and destination IP address and VLAN
  16. Source and destination IP address, TCP/UDP port and VLAN
  17. Source and destination MAC address
  18. Source and destination TCP/UDP port
  19. Source port ID
  20. VLAN

 

A source or destination IP hash is derived from the VTEP IP addresses located in the outer IP header of the VXLAN frame.

VXLAN frame

Every time the hash is calculated for a source or destination IP method (options 1 or 7), the VTEP IP address is used.

 

With LACPv2 connectivity from the ESXi host to the switch, we can configure only one VTEP.

LACPv2

In this example we have 2 physical uplinks connected to one physical upstream switch.

LACP
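On ESXi 5.5 and later, the LACP state of the vDS uplinks can be checked from the host CLI. This is only a sketch: I am assuming the esxcli lacp namespace that ships with Enhanced LACP, so treat the exact sub-commands as version-dependent:

esxcli network vswitch dvs vmware lacp config get   # LAG configuration, including the load-balancing (hash) mode
esxcli network vswitch dvs vmware lacp status get   # negotiation state of each uplink in the LAG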

 

LACPv2 Source or Destination IP Hash (bad for NSX)

In this scenario we are using IP hash as the LACPv2 method. We have two ESXi hosts, esx1 and esx2.

When VM1, connected to the NSX vSwitch on esx1, generates red traffic toward VM2, the traffic is sent to VTEP1 (the only VTEP we have on the host).

The NSX vSwitch then calculates the hash value based on the source VTEP IP1 or the destination VTEP IP2; for this hash, pNIC1 is picked.

When the physical switch forwards this traffic toward esx2, it performs the same calculation and picks one physical link, in this example pNIC1.

 

LACPv2 IP Hash

 

Now VM3, connected to the NSX vSwitch on esx1, tries to send green traffic; VTEP1 will handle this traffic.

The NSX vSwitch calculates the hash based on the source or destination IP of VTEP1.

The result is that the same pNIC1 is elected, since this is the same hash as when VM1 sends traffic!

From this scenario we can see that the connections from both VM1 and VM3 use pNIC1.

Even though we are using IP hash as the LACPv2 hash method, we will always use the same pNIC.

LACPv2 IP Hash

 

 

LACPv2 Layer 4

When using L4 hashing, the hash is calculated based on the source port or destination port (options 2, 4, 6, 8).

With VXLAN, that means the hash is done on the outer UDP header.

The VXLAN destination port is always udp/8472.

VMware generates a pseudo-random UDP source port based on a hash of the inner Ethernet header of the VXLAN frame.

VXLAN Random UDP Port

The result of this method is that every time a different VM MAC address sends traffic, a different random UDP source port is used,

which means different hash results = better load balancing.

 

Now when VM1 and VM3 send traffic, it is balanced across different pNICs.

LACPv2 IP Hash

 

 

 Conclusion

Whenever possible, use LACPv2 with an L4 hash algorithm.

Source MAC is more CPU-intensive than source port ID; source port ID is recommended when LACP is not possible.

 

 

 

Posted in Design

NSX Minimum MTU


What is the minimum MTU for VMware NSX?


The VXLAN RFC can be found at:

https://www.rfc-editor.org/rfc/rfc7348.txt

Since we are professionals, let's demonstrate it with Wireshark.

From my ESXi host we can run the command:

pktcap-uw --capture UplinkSnd --uplink vmnic1 -o /tmp/cap2.pcap

This command captures all traffic sent from the VTEP toward the physical switch via vmnic1 and saves it in a file named cap2 in pcap format.

While running this command, I ping from one guest (192.168.1.1) to another guest (192.168.1.2) to generate some traffic.

With WinSCP we can copy the pcap file from the ESXi host to my Windows PC and open it with Wireshark.

Opening the file shows us something like this:

Wireshark 1

We can see UDP traffic from VTEP 192.168.64.130 to VTEP 192.168.64.131, destined to port 8472 (VXLAN), but where is the VXLAN header?

For Wireshark to display VXLAN traffic, we need to change the decode to VXLAN.

Right-click the frame and choose "Decode As…".

wireshark decode as vxlan

 

Change the transport decode to VXLAN.

Transport decode as VXLAN

 

wireshark display VXLAN

Now we can see the VXLAN header

Capture4

 

MTU Math Time

MTU Math

 

Required outer MTU for IPv4 without guest OS dot1q tagging inside = 20 (outer IPv4) + 8 (UDP) + 8 (VXLAN) + 14 (inner Ethernet) + 1500 (guest payload) = 1550 bytes

Required outer MTU for IPv4 with guest OS dot1q tagging inside = 20 + 8 + 8 + 14 + 4 + 1500 = 1554 bytes

If the outer header is IPv6, we need to add 20 more bytes (a 40-byte IPv6 header instead of the 20-byte IPv4 header), so the total maximum is 1574 bytes

 

IPv4 with VXLAN
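To check that the physical underlay really carries frames of this size, ping between VTEPs with the DF bit set, the same test the Logical Switch post uses. A sketch, reusing the VTEP 192.168.64.131 from the capture above; -s sets the ICMP payload, so roughly 28 bytes of IP and ICMP headers are added on top:

ping ++netstack=vxlan -d -s 1522 192.168.64.131   # ~1550-byte packets, the minimum that must pass
ping ++netstack=vxlan -d -s 1572 192.168.64.131   # ~1600-byte packets, matching the default MTU of 1600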

Conclusion

When we configure VXLAN on the distributed switch, keeping the default MTU of 1600 will keep you on the safe side.

NSX MTU 1600

Posted in Design, NSX-V

NSX Home LAB Part 4 – Logical Switch

Logical Switch Overview

This overview of the logical switch is taken from the great work of Max Ardica and Nimish Desai in the official NSX Design Guide:

The Logical Switching capability in the NSX-v platform provides customers the ability to spin up isolated logical L2 networks with the same flexibility and agility, as it is to spin up virtual machines. Endpoints, both virtual and physical, can then connect to those logical segments and establish connectivity independently from the specific location where they are deployed in the data center network. This is possible because of the decoupling between network infrastructure and logical networks (i.e. underlay and overlay networks) provided by NSX network virtualization.

Logical Switch Overview

The Figure above displays the logical and physical network views when logical switching is deployed leveraging the VXLAN overlay technology that allows stretching a L2 domain (logical switch) across multiple server racks, independently from the underlay inter-rack connectivity (L2 or L3).

With reference to the example of the deployment of the multi-tier application previously discussed, the logical switching function allows to create the different L2 segments mapped to the different tiers where the various workloads (virtual machines or physical hosts) are connected.

Creating a Logical Switch

It is worth noticing that the logical switching functionality must enable both virtual-to-virtual and virtual-to-physical communication in each segment, and that NSX VXLAN-to-VLAN bridging is also required to allow connectivity from the logical space to physical nodes, as is often the case for the DB tier.

LAB Topology Current State

Before starting Lab 4, I deleted one ESXi host to save memory and storage space on my laptop.

So the compute cluster is built from 2 ESXi hosts, without VSAN.

My shared storage is OpenFiler.

Lab4 Topology Starting Point

Creating the VTEP kernel Interface

In order for an ESXi host to be able to send VXLAN traffic, we need to create a special VMkernel interface called a VTEP (VXLAN Tunnel End Point).

We have two options for assigning the IP address of this VTEP:

DHCP or an IP pool; my preference is the IP pool method.

Go to Host Preparation:

Create VTEP VMkernel Interface

Click "Configure" in the VXLAN column; as a result, a new form pops up.

The minimum MTU is 1600; do not lower this value.

https://roie9876.wordpress.com/2014/04/29/nsx-minimum-mtu/

Select "Use IP Pool" and choose a new pool.

Create VTEP VMkernel Interface

A new form will show; type your range of IP addresses for the VMkernel IP pool.

Create VTEP IP Pool

Click OK. For the teaming policy, choose Fail Over (a must for nested ESXi).

After a few seconds, a vmk1 interface is created on each host, with 3 different IP addresses.

Create VTEP VMkernel Interface

The topology with the VMkernel interfaces, shown in black:

Lab4 Topology With VTEP IP address

Create the Segment ID

Each VXLAN has a unique ID represented as a segment ID; this number is called the VNI (VXLAN Network Identifier).

Instead of creating a new VNI each time we need a new logical switch, we create a pool of VNIs.

VNI numbers start from 5000.

Click on Segment ID, then Edit, and choose your range:

Create Pools of VNI
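The configured segment ID range can also be read back from the NSX Manager REST API. A hedged sketch: the manager address and credentials are placeholders, and I am assuming the NSX-V segment ID pool endpoint under /api/2.0/vdn/config:

curl -k -u admin:'VMware1!' https://nsxmgr.lab.local/api/2.0/vdn/config/segments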

Transport Zone

In the simplest sense, a Transport Zone defines a collection of ESXi hosts that can communicate with each other across a physical network infrastructure. As previously mentioned, this communication happens leveraging one (or more) specific interface defined on each ESXi host and named VXLAN Tunnel EndPoints (VTEPs).

A Transport Zone extends across one or more ESXi clusters and in a loose sense defines the span of logical switches. To understand this better, it is important to clarify the relationship existing between Logical Switch, VDS and Transport Zone. A VDS can span across a certain number of ESXi hosts, since it is possible to add/remove single ESXi hosts from a specific VDS. In a real life NSX deployment, it is very likely that multiple VDS are defined in a given NSX Domain. Figure 14 shows a scenario where a “Compute-VDS” spans across all the ESXi hosts part of compute clusters, and a separate “Edge-VDS” extends across ESXi hosts in the edge clusters.

Think of a transport zone as a large tube that carries all the VNIs inside it.

The zone can work in 3 different modes: unicast, multicast, and hybrid (a dedicated blog post would be needed to explain these three modes).

We will choose unicast, because this mode works without multicast support on the physical switches.

We can decide which clusters join the transport zone.

In our lab, both the management and compute clusters will join the same transport zone, called "Lab Zone".

Note: an NSX domain can have more than one transport zone.

Create Transport Zone

Create the Logical Switch

At this point we can create the logical switch. The function of the logical switch is to connect virtual machines from different ESXi hosts (or the same one).

The magic of NSX is that each ESXi host can be in a different IP subnet.

For this lab, the logical switch will be named "VNI-5000".

A logical switch is tied to a transport zone.

Create Logical Switch

Result of creating the logical switch:

Logical Switch
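Once the logical switch exists, it can be verified from any prepared ESXi host. A sketch, assuming an NSX-prepared host and a vDS named Compute_VDS (a placeholder for your own vDS name):

esxcli network vswitch dvs vmware vxlan network list --vds-name Compute_VDS
# lists each VNI realized on this host together with its control plane mode and controller connection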

Connect virtual machines to/from a Logical Switch

To connect a VM to the logical switch, we need to click the +VM icon:

Connect Virtual Machines to logical switch

Select VM

Connect Virtual Machines to logical switch2

Pick the specific vNIC to add to the logical switch:

Connect Virtual Machines to logical switch3

Click Finish

Connect Virtual Machines to logical switch4

Test Logical Switch connectivity

We have two different ways to test logical switch connectivity:

Option 1 GUI:

Double-click the logical switch icon, for example VNI-5000, and select the Monitor tab:

Test Logical Switch connectivity1

In the size of the test packet we have two different options:

"VXLAN standard" or "Minimum". The difference is the MTU size.

Test Logical Switch connectivity2

VXLAN standard size is 1550 bytes (should match the physical infrastructure MTU) without fragmentation. This allows NSX to check connectivity and verify that the infrastructure is prepared for VXLAN traffic.

Minimum packet size allows fragmentation. Hence, NSX can check only connectivity but not whether the infrastructure is ready for the larger frame size.

Use the browse button to select the source ESXi host and the destination ESXi host:

Test Logical Switch connectivity3

Click “Start Test”:

Test Logical Switch connectivity4

Option 2 CLI:

Use the command:

ping ++netstack=vxlan <IP_address>

for example:

ping ++netstack=vxlan 192.168.150.52 -d -s 1550

The IP address is the destination VTEP IP address.

The -d option sets the DF (don't fragment) bit.

The -s option sets the packet size.

Lab 4 Summary Logical Switch:

After creating the logical switch VNI-5000 (marked in yellow), VM1 is able to talk to VM2.

Note the magic: these two virtual machines sit on hosts that do not have L2 connectivity between them!

LAB 4 Final with Logical Switch Topology
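As a final check, the NSX Controller should now hold the VTEP, MAC and ARP tables for this VNI. A sketch of controller CLI commands, assuming the VNI 5000 created above; command names may differ slightly between controller versions:

show control-cluster logical-switches vni 5000          # which controller owns the VNI and its replication mode
show control-cluster logical-switches vtep-table 5000   # the VTEPs that have joined this logical switch
show control-cluster logical-switches mac-table 5000    # VM MAC addresses learned on this VNI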

Related Posts:

NSX Manager

NSX Controller

Host Preparation

Logical Switch

Distributed Logical Router

Posted in Install

VMware is a new entrant to the Data Center Networking Magic Quadrant

Summary
Data center networking requirements have evolved rapidly, with emerging technologies increasingly focused on supporting more automation and simplified operations in virtualized data centers. We focus on how vendors are meeting the emerging requirements of data center architects.

 

Gartner Magic Quadrant for Data Center Networking


VMware

VMware is a new entrant to the Data Center Networking Magic Quadrant, due to the introduction of its NSX overlay solution and its large installed base of distributed virtual switches in virtual server deployments. The NSX solution is an innovative approach to solving long-standing network provisioning bottlenecks within the data center, and it allows for the integration of switching, routing and upper-layer services into an integrated application and network orchestration platform. With an overlay solution that may not require hardware upgrades, VMware offers enterprises a potentially quicker way of taking advantage of SDN capabilities. The NSX solution should be considered by existing VMware customers as a way of providing network agility and reducing network operational challenges within the data center.

Strengths

VMware NSX provides a way to bring network agility to existing network deployments with limited impact to existing network hardware. This approach simplifies the day-to-day network operations required to deal with application changes.

VMware NSX may offer a cost-effective solution, assuming that the existing infrastructure has appropriate scale, capacity and performance to meet application requirements, and that enterprises have negotiated appropriate discounts from the vendor.

VMware NSX works across many IP-based network installations and in virtual environments running mainstream hypervisors.

VMware has established relationships with a broad set of IT vendor partners to provide integration of security and optimization solutions, as well as key network hardware players, such as Arista Networks, Brocade, Dell, HP and Juniper Networks.

Cautions

VMware NSX is a new product and there are a very limited number of production deployments in mainstream enterprise.

VMware does not offer network hardware, and enterprises must still acquire, provision and manage the foundational aspects of the physical network.

The NSX solution has limited control of the underlay network. It is imperative for customers to ensure that VMware’s assumptions of adequate performance, scale and resiliency are available in the physical network to meet current and future application deployments. Visibility is improving through the integration into traditional network management tools.

VMware also offers rudimentary upper-layer services within a distributed and scalable framework. Enterprises should look to VMware partnerships to integrate with existing security, application delivery controller ADC and WAN optimization capabilities, while continuing to monitor VMware’s upper-layer offerings.

Enterprises must evaluate the total cost of NSX deployments. Although we often find that NSX is a lower-cost solution when the existing network can meet existing and planned capacity requirements, there is a very large range of VMware pricing and discounting in the market that customers need to consider and evaluate. In situations when network upgrades are required, the comparison will not be as favorable.

Gartner Magic Quadrant for Data Center Networking

Posted in NSX-V