CSR1000v as VMs on ESXi


I’ve run into a strange issue and wanted to share it here for thoughts on a probable solution.

Attached is my vLAB topology in case anyone wants to adopt it; the physical/virtual connections are as specified. The logical connections are the same as in the INE CCIEv5 workbook.

In summary:

  • all routers are CSR1000v configured as VMs on ESXi
  • I’ve got end hosts (Windows 10, Windows 7, and server OSes) as VMs connected to vSwitch2 port group portGroup100-vS2 (i.e. VLAN ID = 100).
  • I’ve got another port group on the same vSwitch2, portGroup-vS2, with VLAN ID = 4095, which ESXi treats as a trunk (all VLANs pass).
  • R7 is logically connected to R9 through vSwitch1, via a port group with VLAN ID 4095 (trunk).
  • Basic EIGRP routing is working and advertising the connected routes, etc.
  • I’ve configured all vSwitches and port groups to accept promiscuous mode, MAC address changes, and forged transmits, except for the default VM Network vSwitch and port group that connect to the outside network (which is fine and has no bearing on this case).
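For reference, the vSwitch and port-group settings described above can be applied from the ESXi shell roughly like this (a sketch using the port-group names from my lab; the flags come from the `esxcli network vswitch standard` namespace):

```shell
# Allow promiscuous mode, MAC address changes, and forged transmits on vSwitch2
esxcli network vswitch standard policy security set \
    --vswitch-name=vSwitch2 \
    --allow-promiscuous=true --allow-mac-change=true --allow-forged-transmits=true

# Override the same security policy at the port-group level as well
esxcli network vswitch standard portgroup policy security set \
    --portgroup-name=portGroup100-vS2 \
    --allow-promiscuous=true --allow-mac-change=true --allow-forged-transmits=true

# VLAN ID 4095 = pass all VLANs (trunk) on a standard vSwitch port group
esxcli network vswitch standard portgroup set \
    --portgroup-name=portGroup-vS2 --vlan-id=4095
```

These are ESXi-host-only configuration commands, so treat them as a reference for checking the settings rather than a fix.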

The strange part: I can ping any configured router IP from the end hosts, but not vice versa. R9, which is directly connected to the end hosts, can ping them and they can ping it; however, from one hop away or more, the routers cannot ping the end hosts, while the end hosts can ping all routers fine. Routing-wise everything looks correct; every router knows how to reach the end-host subnet.

For simplicity and ease of explanation, I implemented the same portion of the network in GNS3 (topology attached), and there it works perfectly fine.

I know the issue is somewhere in the virtualization layer, but I don’t know exactly what it is.
Traceroute from a remote router reaches R9 (the router directly connected to the end hosts) and then drops.
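To narrow down where the echo requests die, it may help to capture on the ESXi side while pinging from a distant router; `net-stats` and `pktcap-uw` are the standard ESXi shell tools for this. A sketch, where the end-host IP `10.0.100.10`, the port ID, and the uplink name are placeholders for the actual values in your lab:

```shell
# List vSwitch port IDs so we can find the end host's VM port
net-stats -l

# Capture on the end host's switch port (port ID is an example value)
# while pinging its IP from a distant router; if the echo request never
# shows up here, it is being dropped before the vSwitch delivers it.
pktcap-uw --switchport 50331665 --ip 10.0.100.10 -o /tmp/endhost.pcap

# Alternatively, capture on an uplink to see what leaves/enters the host
pktcap-uw --uplink vmnic1 --ip 10.0.100.10 -o /tmp/uplink.pcap
```

Comparing where the echo request last appears (R9's port, the trunk port group, or the end host's port) should show whether the vSwitch or the guest is dropping it. These commands only run on the ESXi host itself.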

I’m open to ideas; I’ve tried multiple topology layouts with the vSwitches, but the result is the same.
