VMware and iptables

Is there a problem using VMware on Windows to host a Linux virtual machine that uses iptables? I have a configuration that seems to work on physical hardware but breaks under VMware.

I am using VMware to run a virtual Linux 2.6.24 machine on a Windows 2003 Server host. The Linux application is essentially a NAT router that sets up iptables at startup. The rules in the nat table include:

Chain foo_pre
 target     prot opt in  out  source      destination
 LOG        all  --  *   *    0.0.0.0/0   0.0.0.0/0     [options here]
 LOG        all  --  *   *    0.0.0.0/0   10.10.1.33    [options here]
 DNAT       tcp  --  *   *    0.0.0.0/0   10.10.1.33    tcp dpt:80 to:192.168.0.33:8080

Chain PREROUTING
 target     prot opt in  out  source      destination
 foo_pre    all  --  *   *    0.0.0.0/0   0.0.0.0/0
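
For reference, a minimal sketch of the commands that would produce rules of this shape (the LOG options shown as "[options here]" above are omitted; everything else is taken from the listing):

  iptables -t nat -N foo_pre
  iptables -t nat -A foo_pre -j LOG
  iptables -t nat -A foo_pre -d 10.10.1.33 -j LOG
  iptables -t nat -A foo_pre -d 10.10.1.33 -p tcp --dport 80 \
           -j DNAT --to-destination 192.168.0.33:8080
  iptables -t nat -A PREROUTING -j foo_pre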

      

I can see the incoming packets to 10.10.1.33:80 with tcpdump, and the first LOG generates messages, but neither the DNAT rule nor the second LOG registers any packets on its packet counters, the second LOG generates no messages, and tcpdump shows no packets to 192.168.0.33.
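
Roughly, these are the checks involved (a sketch; the assumption here is that the translated traffic would also leave on eth0):

  # traffic arriving for the public address
  tcpdump -ni eth0 host 10.10.1.33 and tcp port 80
  # per-rule packet counters in the nat table
  iptables -t nat -L foo_pre -v -n
  iptables -t nat -L PREROUTING -v -n
  # traffic heading for the internal address after translation
  tcpdump -ni eth0 host 192.168.0.33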

The eth0 adapter is on the 10.10.0.0/16 network with default gateway 10.10.1.1; it has a secondary address of 10.10.1.33/32, and /proc/sys/net/ipv4/conf/eth0/forwarding is set to 1.
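
Concretely, the interface setup looks something like this (a sketch rather than the actual startup script; the command used to add the secondary address is assumed):

  ip addr add 10.10.1.33/32 dev eth0              # secondary address (assumed form)
  cat /proc/sys/net/ipv4/conf/eth0/forwarding     # prints 1
  sysctl net.ipv4.ip_forward                      # global forwarding flag, also worth checking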

Is VMware the culprit, or am I missing something? Thanks!


Update: We have simplified the test environment. No NAT rules at all, just a virtual machine running under a Win2k3 Server host. Testing stages:

  • The VM is connected to a network adapter. The VM and host are on the same subnet, with the same default gateway as above.

  • The VM communicates with devices both inside and outside its subnet: ICMP, TCP, UDP. Communication is bidirectional; it does not matter which device initiates it.

  • While working on the system, an engineer manually forced the default gateway setting.

  • Now the VM only communicates with devices on its subnet. Any attempt to communicate through the gateway with the same equipment as in step 2 fails to put packets on the wire: tcpdump on eth0 in the VM shows the outgoing packets with no responses, and Wireshark on the host shows nothing on the physical NIC (see the sketch after this list).

  • Stopping and restarting the VM does not change its behavior. Stopping the VM and replacing it with a different VM configured with the same IP address, etc., does not change the behavior either.

  • The Win2k3 host continues to communicate normally, both on its subnet and through the gateway.
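
The sketch referred to in step 4 (the off-subnet address is a placeholder; this is roughly how the failure was observed, plus an ARP sanity check against the gateway):

  # inside the VM: watch eth0 while trying an off-subnet host
  tcpdump -eni eth0 host 10.20.0.1 &
  ping -c 3 10.20.0.1              # packets show up in tcpdump, nothing reaches the wire
  # does the gateway's MAC resolve at all?
  ip neigh show 10.10.1.1
  arping -I eth0 -c 3 10.10.1.1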

I can only conclude from this that "something is happening" between the VM and the host, either in the VMware drivers or in the host's networking stack. I'm back to scouring the web ... it's hard to imagine we're the first to see this.

I will post updates as they become available. Thanks for your thoughts and discussion.





2 answers


The second LOG rule tries to match packets sent to 10.10.1.33, but you changed the destination address to 192.168.0.33 in the line above it.
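
If the goal is to log the traffic after translation, one place that sees the rewritten destination is the filter table's FORWARD chain; a minimal sketch:

  # after the PREROUTING DNAT, forwarded packets carry the translated address and port
  iptables -A FORWARD -d 192.168.0.33 -p tcp --dport 8080 -j LOG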

I'm not sure yet why you can't see the outgoing packets in tcpdump. I am assuming you are running tcpdump on the Linux VM itself. Does the VM actually send the packets out on the same interface it receives them on, or is there a second virtual ethernet adapter? And which machines have the other IP addresses (besides 10.10.1.33) assigned to them?



Regarding the update: I assume you are not using DHCP (people usually don't bother with it when using static IP addresses). Also, it sounds like the gateway sees one network adapter using two IP addresses. That should normally be fine, but it's always the details that get you.

Is it possible that the gateway only associates one IP address with the NIC and is ignoring traffic from the VM?
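
One way to test that idea from the VM (a sketch; the primary address of eth0 is a placeholder, since it isn't given in the question):

  # probe the gateway from each of the VM's addresses and compare the results
  ping -c 3 -I 10.10.1.40 10.10.1.1     # placeholder for eth0's primary address
  ping -c 3 -I 10.10.1.33 10.10.1.1     # the secondary address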





Following your edit, I suggest an experiment: on your physical machine, configure the network adapter to disable all hardware acceleration.
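
For the Linux guest side of the same experiment, something like this turns the offloads off (a sketch, assuming ethtool is installed and the virtual NIC driver honors the settings; on the Win2k3 host the equivalent lives in the adapter's advanced properties):

  ethtool -k eth0                                      # show current offload settings
  ethtool -K eth0 tso off gso off sg off tx off rx off # disable offloads on the guest NIC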











