Networking Lab
Life of a packet
Nicolas Prost
September 2015
Networking Lab – Goals
From theory … to experimentation
• Network switching (level 2) in an OpenStack environment
• External-world communication with DVR (network routing / NAT, level 3)
• Network virtualization (underlay with VXLAN)
Several use cases to follow a ping packet:
• Use case 1 – East-West, VM to VM in a single network on a single compute node
• Use case 2 – East-West, VM to VM in a single network on two compute nodes
• Use case 3 – North-South with Floating IP, VM to Internet (DVR / static NAT)
• Use case 4 – East-West routing, VM to VM in two sub-networks on two compute nodes
• Use case 5 – North-South routing with SNAT, VM to Internet (dynamic NAT)
Main CLI on Compute Node
Libvirt virtualization
• virsh
Linux bridge
• brctl show
• iptables --list-rules
• tcpdump
Network namespace
• ip netns – process network namespace management (ip, tcpdump, iptables)
Open vSwitch
• ovs-vsctl show – utility for querying and configuring OVS
• ovs-ofctl show – administer / configure OpenFlow switches
• ovs-appctl – utility for configuring running OVS daemons
http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html
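The tap/qbr/qvb/qvo names that these tools will show all derive from the same Neutron port ID. A minimal sketch of that naming convention, assuming the usual prefix-plus-11-characters scheme (the UUID tail below is invented for illustration; only the prefix matters):

```shell
# Each per-VM device name is a role prefix plus the first 11 characters
# of the Neutron port UUID (illustrative UUID):
port_id="3f3ebb06-dd2d-4a7c-a3f1-0123456789ab"
prefix=$(echo "$port_id" | cut -c1-11)
echo "tap${prefix}"   # VM vNIC
echo "qbr${prefix}"   # per-VM Linux bridge
echo "qvb${prefix}"   # veth end on the Linux bridge
echo "qvo${prefix}"   # veth end on the OVS integration bridge
```

Knowing this mapping lets you jump from any one device name to its siblings on the other bridges while chasing a packet.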
Use Case 1: VM to VM in single network on single compute node
Use Case 2: VM to VM in single network on two compute nodes
Use Case 3: North-South with Floating IP
Use Case 4: East-West routing – VM on different computes / networks
Use Case 5: North-South routing with SNAT
Network Lab – Prerequisites
• Having followed the theory
• Having done the previous lab
• Get the Lab Guide PDF from the HTTP site
Dashboard: https://192.168.24.31/ (admin / c7d9b0fe57df051ec6b76c2bb741ab0dfa81720d)
• A tenant ID and a user ID
• A private network and a subnet
• 3 VMs (that you know how to access), 2 on the same compute node and the 3rd on a different one, with a security group (ping and SSH authorized!), a keypair, and a floating IP
• A router connected to the external network
Lab Environment (reminder)
Jump Host
• RDP to 16.16.11.96 as userXYZ / *ETSSjun2015!*
Seed Host
• SSH to 10.2.1.230 as demopaq / P@ssw0rd (from Jump Host)
• Run sudo -i to switch to the root user
Seed VM
• ssh 192.168.24.2 (from Seed Host)
• # source stackrc
• # nova list
Please do not stop the Seed VM! This would break the entire lab!
Undercloud
• ssh [email protected] (from Seed VM)
• # sudo -i
• # source stackrc
• # nova list
Overcloud
• ssh [email protected] (from Seed VM)
• # sudo -i
• # source stackrc
• # nova list
Compute Node
• ssh [email protected] (from Seed VM)
• # sudo -i
Collecting Information
Prepared environment
Tenant: networklab
Networks:
• ext-net – subnet: 192.168.25.0/24 (FIPs)
• nwlabprivate – subnet: internal – 192.168.200.0/24, with nwlabrouter (ID = c3be0f2e-88c7-445e-89aa-9c17b8d3761b)
Security group: nwlabsecgroup
KeyPair: nwlabkeypair
VMs
VM / compute       | Instance Id       | Compute IP    | Bridge Id      | vNIC Id        | IP + associated FIPs               | MAC
nwlab1 on Compute9 | instance-0000005a | 192.168.24.44 | qbr3f3ebb06-dd | tap3f3ebb06-dd | 192.168.200.9 (FIP: 192.168.25.87) | fa:16:3e:ee:5c:7f
nwlab2 on Compute9 | instance-0000005d | 192.168.24.44 | qbrfed20562-44 | tapfed20562-44 | 192.168.200.10                     | fa:16:3e:82:49:d1
Collecting Information on VMs
Get your project tenant ID (from Overcloud)
# keystone tenant-get <your tenantName>
e.g. 1598e8d4a5e64bed9880514a39a2e940
Find on which physical compute node your instances are running, and what their local VM names are (from Overcloud)
# nova list --all-tenants 1 --tenant <tenantId> --fields name,OS-EXT-SRV-ATTR:host,OS-EXT-SRV-ATTR:instance_name
e.g. NetworkLabVM1 | overcloud-ce-novacompute1-novacompute1-qr52vumlc4in | instance-000001b6
Get the compute node IPs (from Overcloud)
# nova hypervisor-list
# nova hypervisor-show <computeNodeHostname> | grep host_ip
e.g. 192.168.24.35 (compute 0) and 192.168.24.36 (compute 1)
Log into the compute node and get the virtual NIC + bridge (from Seed VM)
# ssh heat-admin@<ComputeNode IP>
$ sudo -i
[# virsh list]
[# virsh dumpxml <Instance ID> | grep "<nova:name" to check it is your VM]
# virsh dumpxml <Instance ID> | grep -A 7 "<interface"
e.g. tap551d286a-e4 / qbr551d286a-e4
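The virsh lookups can also be scripted; a sketch that extracts the bridge and tap device from a sample interface stanza (the XML below is a hand-copied fragment in the shape virsh dumpxml returns, not live output):

```shell
# Pull the Linux bridge and tap device out of a virsh interface stanza.
# On a compute node you would pipe `virsh dumpxml <Instance ID>` instead.
xml="<interface type='bridge'>
  <source bridge='qbr551d286a-e4'/>
  <target dev='tap551d286a-e4'/>
</interface>"
echo "$xml" | sed -n "s/.*<source bridge='\([^']*\)'.*/\1/p"   # the qbr bridge
echo "$xml" | sed -n "s/.*<target dev='\([^']*\)'.*/\1/p"      # the tap vNIC
```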
Overcloud Compute IP
+--------------------------------------+------------------------------------------------------+--------+------------+-------------+------------------------+
| ID                                   | Name                                                 | Status | Task State | Power State | Networks               |
+--------------------------------------+------------------------------------------------------+--------+------------+-------------+------------------------+
| 914b9e90-af7e-48a1-8f2a-a9fdc607743c | overcloud-ce-controller-SwiftStorage0-xupnrgqv6byz   | ACTIVE | -          | Running     | ctlplane=192.168.24.34 |
| d13ded44-7f6a-47e5-a7d2-5ade062208a8 | overcloud-ce-controller-SwiftStorage1-3qxf35lkkagj   | ACTIVE | -          | Running     | ctlplane=192.168.24.33 |
| 6bc6e42a-ef3b-45ae-b445-662e3525914d | overcloud-ce-controller-controller0-6udsmj2xdjbi     | ACTIVE | -          | Running     | ctlplane=192.168.24.30 |
| 39266048-b727-4254-8b35-5f3dd2f4cd2f | overcloud-ce-controller-controller1-k3iiokbfjvey     | ACTIVE | -          | Running     | ctlplane=192.168.24.31 |
| e9e89f62-762b-496f-9318-01292d7a0c10 | overcloud-ce-controller-controller2-ssbsl5uulnmn     | ACTIVE | -          | Running     | ctlplane=192.168.24.32 |
| 189f1f0b-17ef-4526-824b-0cb66f2745f5 | overcloud-ce-novacompute0-NovaCompute0-mxdy3klm45np  | ACTIVE | -          | Running     | ctlplane=192.168.24.35 |
| 7933d944-9914-4146-91ae-15541a3c9df7 | overcloud-ce-novacompute1-NovaCompute1-dcemqprercrx  | ACTIVE | -          | Running     | ctlplane=192.168.24.36 |
| 5d71a273-9f42-432b-8838-473d9b6e75ac | overcloud-ce-novacompute2-NovaCompute2-6gzjf42rxtvf  | ACTIVE | -          | Running     | ctlplane=192.168.24.37 |
| 34ae25e9-87cb-4fcd-9ef9-00f86fe88e25 | overcloud-ce-novacompute3-NovaCompute3-3yek7if6k3pm  | ACTIVE | -          | Running     | ctlplane=192.168.24.38 |
| c7920407-b93c-410c-aa66-a2734f697dea | overcloud-ce-novacompute4-NovaCompute4-oc6xz72joshk  | ACTIVE | -          | Running     | ctlplane=192.168.24.39 |
| 13463fb4-68f8-451f-8762-baac928763a1 | overcloud-ce-novacompute5-NovaCompute5-42mkfaniod5e  | ACTIVE | -          | Running     | ctlplane=192.168.24.40 |
| a654ec46-2284-4c8e-8e57-9d6fe74b1517 | overcloud-ce-novacompute6-NovaCompute6-nknrdp3bxirp  | ACTIVE | -          | Running     | ctlplane=192.168.24.41 |
| d89666e7-da13-4c0a-9321-d74ab3d3c692 | overcloud-ce-novacompute7-NovaCompute7-th2gxbphpvyj  | ACTIVE | -          | Running     | ctlplane=192.168.24.42 |
| 9a91f928-164b-41b6-867e-b711643f6ae8 | overcloud-ce-novacompute8-NovaCompute8-hxkfrs7fmum5  | ACTIVE | -          | Running     | ctlplane=192.168.24.43 |
| b64e0d6d-9226-43d7-b793-5a76f15aa505 | overcloud-ce-novacompute9-NovaCompute9-2fcag4clpflk  | ACTIVE | -          | Running     | ctlplane=192.168.24.44 |
+--------------------------------------+------------------------------------------------------+--------+------------+-------------+------------------------+
Use Case 1
VM to VM in single network on single compute node
Use Case 1: VM to VM in single network on single compute node
Use Case 1: VM to VM in single network on single compute node
What you need (Refer to the Cloud Lab for How To)
• 2 VMs, on the same network and on the same compute node, with Security Group allowing
Ping / SSH
Tip: to ensure you are on the same compute node, create your first VM and check which compute node it is hosted on. Then create your second VM using the relevant Availability Zone.
Scenario
Connect to first instance and initiate ping to second instance
Use Case 1: VM to VM in single network on single compute node
Path: VM0 eth0 → tap → per-VM Linux bridge (qbr, with iptables) → qvb → qvo
ping <VM2 IP>
tcpdump icmp -e -i <tap> (the VM vNIC)
check Dst MAC: fa:16:3e:d5:14:0c
iptables --list-rules | grep <tap>
neutron-openvswi-i551d286a-e => input chain
neutron-openvswi-o551d286a-e => output chain
iptables --list <neutron-openvswi-i> -v -n
0 0    RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0              => ICMP security rule (ingress)
7 1056 RETURN tcp  -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22   => SSH security rule (ingress)
brctl show <qbr>
tcpdump icmp -e -i <qvb>
ovs-vsctl show | grep -A3 qvo
tag: 47   (tenants are locally isolated on L2 by assigning VLAN tags)
Compute1 vSwitch Integration Bridge (br-int): Table 0 – forward NORMAL
ovs-ofctl show br-int | grep qvo
140   (qvo port Id used for the OpenFlow rules)
ovs-ofctl dump-flows br-int table=0
match of the Dst MAC is with rule forward NORMAL (we will do L2 forwarding)
ovs-appctl fdb/show br-int | grep <Dest MAC>
packet switched to port 141 (dst MAC known)
ovs-ofctl show br-int | grep <port>
141 qvo8f0d43bf-95   (not leaving br-int, going to the local bridge)
Then down the destination side: qvo → qvb → per-VM Linux bridge (qbr, with iptables) → tap → eth0 of VM2
tcpdump icmp -e -i qvb<ID>
tcpdump icmp -e -i tap<VM2>
==> Test with a security group rule set without ICMP
Use Case 2
VM to VM in single network on two compute nodes
Use Case 2: VM to VM in single network on two compute nodes
Use Case 2: VM to VM in single network on two compute nodes
http://docs.openstack.org/networking-guide/scenario_legacy_ovs.html
Use Case 2: VM to VM in single network on two compute nodes
What you need (Refer to the Cloud Lab for How To)
• 2 VMs, on the same network BUT on different compute nodes, with Security Group
allowing Ping / SSH
Tip: to ensure you are on different compute nodes, create your first VM and check which compute node it is hosted on. Then create your second VM using a different Availability Zone.
Scenario
Connect to first instance and initiate ping to second instance
Use Case 2: VM to VM in single network on two compute nodes
Path on the source side: VM0 eth0 → tap → per-VM Linux bridge (qbr, with iptables) → qvb → qvo
ping <VM3 IP>
tcpdump icmp -e -i <tap> (the VM vNIC)
check Dst MAC: fa:16:3e:dd:ff:cf
iptables --list-rules | grep <tap>
neutron-openvswi-i3f3ebb06-d => input chain
neutron-openvswi-o3f3ebb06-d => output chain
iptables --list <neutron-openvswi-i> -v -n
0 0    RETURN icmp -- * * 0.0.0.0/0 0.0.0.0/0              => ICMP security rule (ingress)
7 1056 RETURN tcp  -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22   => SSH security rule (ingress)
brctl show <qbr>
tcpdump icmp -e -i <qvb>
Compute1 vSwitch Integration Bridge (br-int): Table 0 – forward NORMAL
ovs-vsctl show | grep -A3 qvo
tag: 2   (tenants are locally isolated on L2 by assigning VLAN tags)
ovs-ofctl show br-int | grep qvo
13   (port Id used for the OpenFlow rules)
ovs-ofctl dump-flows br-int table=0
match is with rule forward NORMAL (we will do L2 forwarding)
ovs-appctl fdb/show br-int | grep <Dest MAC>
packet switched to port 6 (dst MAC known)
Use Case 2: VM to VM in single network on two compute nodes
ovs-ofctl show br-int | grep <port>
Port 6 is patch-tun: the MAC is not reachable on br-int, so the packet has to leave the compute node through the tunnel bridge (br-tun).
ovs-ofctl show br-tun | grep '('
1(patch-int): addr:f2:a9:2e:fd:d9:22   (patch-int port Id)
ovs-ofctl dump-flows br-tun table=0   (Table 0: where is the packet from?)
cookie=0x0, duration=173548.496s, table=0, n_packets=37963, n_bytes=13248284, idle_age=0, hard_age=65534, priority=1,in_port actions=resubmit(,1)   => from a VM
ovs-ofctl dump-flows br-tun table=1   (Table 1: routed?)
cookie=0x0, duration=173603.994s, table=1, n_packets=38004, n_bytes=13252670, idle_age=0, hard_age=65534, priority=0 actions=resubmit(,2)
ovs-ofctl dump-flows br-tun table=2   (Table 2: unicast?)
cookie=0x0, duration=173834.782s, table=2, n_packets=528, n_bytes=49526, idle_age=0, hard_age=65534, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
ovs-ofctl dump-flows br-tun table=20 | grep <Dest MAC>   (Table 20: tunnel)
cookie=0x0, duration=8076.520s, table=20, n_packets=509, n_bytes=49098, idle_age=0, priority=2,dl_vlan=2,dl_dst=fa:16:3e:dd:ff:cf actions=strip_vlan,set_tunnel:0x3ed,output:7
strip the VLAN tag, set VXLAN VNI 0x3ed (hex; 1005 in decimal) and send to port 7
Compute1 Tunnel Bridge (br-tun): VNI
ovs-ofctl show br-tun | grep '('
7(vxlan-c0a8182b): addr:8e:39:ac:11:c0:ea
ovs-vsctl show | grep -A2 <vxlan ID>
options: {df_default="false", in_key=flow, local_ip="192.168.24.44", out_key=flow, remote_ip="192.168.24.43"}
This is compute8's IP
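Two hex conversions help when reading these dumps: the flow's set_tunnel value is the VNI in hex, and the VXLAN port name embeds the remote IP in hex. A quick sketch using the values above:

```shell
# The VNI 0x3ed set in the flow is what tcpdump later prints in decimal:
printf '%d\n' 0x3ed            # -> 1005
# The port name vxlan-c0a8182b encodes the remote IP byte by byte
# (c0.a8.18.2b); decode it back to a dotted quad:
hex=c0a8182b
printf '%d.%d.%d.%d\n' \
  "0x$(echo $hex | cut -c1-2)" "0x$(echo $hex | cut -c3-4)" \
  "0x$(echo $hex | cut -c5-6)" "0x$(echo $hex | cut -c7-8)"   # -> 192.168.24.43
```

Decoding the port name is a quick way to confirm which remote compute node a tunnel points at without grepping ovs-vsctl output.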
Use Case 2: VM to VM in single network on two compute nodes
tcpdump -e -i eth0 -c 100 | grep -B1 <VM Destination IP>   (on the underlay)
09:16:56.583110 c4:34:6b:ae:d7:b8 (oui Unknown) > c4:34:6b:ae:28:50 (oui Unknown), ethertype IPv4 (0x0800), length 148: overcloud-ce-novacompute9-NovaCompute9-2fcag4clpflk.42717 > overcloud-ce-novacompute8-NovaCompute8-hxkfrs7fmum5.4789: VXLAN, flags [I] (0x08), vni 1005
The internal MAC and IP are not visible to the underlay
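The frame grew from the 98 bytes seen on the tap to 148 bytes on the underlay; the 50-byte difference is exactly the VXLAN encapsulation. A quick check of that arithmetic:

```shell
# Encapsulation added per packet:
# outer Ethernet 14 + outer IP 20 + UDP 8 + VXLAN header 8 = 50 bytes
overhead=$((14 + 20 + 8 + 8))
echo "$overhead"            # -> 50
echo $((98 + overhead))     # -> 148, the length tcpdump reports on eth0
```

This 50-byte overhead is also why underlay MTUs are usually raised (or tenant MTUs lowered) on VXLAN deployments.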
Underlay, then Compute2 Tunnel Bridge (br-tun)
tcpdump -e -i eth0 -c 100 | grep -B1 <Destination IP>
09:28:03.584266 IP overcloud-ce-novacompute9-NovaCompute9-2fcag4clpflk.42717 > overcloud-ce-novacompute8-NovaCompute8-hxkfrs7fmum5.4789: VXLAN, flags [I] (0x08), vni 1005
IP 192.168.200.9 > 192.168.200.11: ICMP echo request, id 6486, seq 1615, length 64
ovs-vsctl show
Port "vxlan-c0a8182c"
    Interface "vxlan-c0a8182c"
        type: vxlan
        options: {df_default="false", in_key=flow, local_ip="192.168.24.43", out_key=flow, remote_ip="192.168.24.44"}
ovs-ofctl show br-tun | grep '('
12(vxlan-c0a8182c): addr:e6:c3:36:83:61:a6   (the VXLAN packet is coming in from port 12)
1(patch-int): addr:7a:45:57:ab:04:f4   (connects br-tun with br-int, where our VM is)
Use Case 2: VM to VM in single network on two compute nodes
Compute2 Tunnel Bridge (br-tun) → patch-int
ovs-ofctl dump-flows br-tun table=0   (Table 0: where is the packet from?)
cookie=0x0, duration=9960.459s, table=0, n_packets=2465, n_bytes=240439, idle_age=0, priority=1,in_port=12 actions=resubmit(,4)   => from a tunnel
ovs-ofctl dump-flows br-tun table=4   (Table 4: add a VLAN tag based on the VNI)
cookie=0x0, duration=10215.592s, table=4, n_packets=2753, n_bytes=269001, idle_age=0, priority=1,tun_id=0x3ed actions=mod_vlan_vid:5,resubmit(,9)   => set the VLAN tag
ovs-ofctl dump-flows br-tun table=9   (Table 9: routed?)
cookie=0x0, duration=176122.550s, table=9, n_packets=3149, n_bytes=301923, idle_age=0, hard_age=65534, priority=0 actions=resubmit(,10)
ovs-ofctl dump-flows br-tun table=10   (Table 10: learn, then send to br-int)
cookie=0x0, duration=178689.832s, table=10, n_packets=3191, n_bytes=305983, idle_age=0, hard_age=65534, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
learn into table 20, then send to port 1 (patch-int)
Compute2 vSwitch Integration Bridge (br-int): Table 0 – forward NORMAL
ovs-vsctl show | grep -A1 'tag: <vlanId>'
tag: 5
    Interface "qvod2bca12f-74"
ovs-ofctl show br-int | grep '('
8(patch-tun): addr:26:dc:b4:4f:df:91
19(qvod2bca12f-74): addr:ba:9b:58:5e:0f:7d   (the port Id is 19)
ovs-ofctl dump-flows br-int table=0
cookie=0x0, duration=178960.748s, table=0, n_packets=50913, n_bytes=15060268, idle_age=0, hard_age=65534, priority=1 actions=NORMAL
match is with rule forward NORMAL
ovs-appctl fdb/show br-int | grep <Dest MAC>
19   5   fa:16:3e:dd:ff:cf   0
packet switched to port 19, which is the qvo
Then down to the VM: qvo → qvb → per-VM Linux bridge (qbr, with iptables) → tap → eth0 of the destination VM
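The fdb/show columns (port, VLAN, MAC, age) can be parsed to find the output port for a destination MAC; a sketch against output copied from the lab (a live run would pipe the ovs-appctl command instead of the sample variable):

```shell
# Find which br-int port a destination MAC was learned on.
# Sample `ovs-appctl fdb/show br-int` output; columns: port, VLAN, MAC, age.
fdb=' port  VLAN  MAC                Age
   19     5  fa:16:3e:dd:ff:cf    0'
echo "$fdb" | awk '$3 == "fa:16:3e:dd:ff:cf" { print $1 }'   # -> 19
```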
brctl show <qbr>
qbr0d4c2f0e-8b   8000.ba89713f6904   no   qvb0d4c2f0e-8b
                                          tap0d4c2f0e-8b
tcpdump icmp -e -i <tap> (the VM vNIC)
virsh list
virsh dumpxml <Instance ID> | grep "<nova:name"   (to check it is your VM)
virsh dumpxml <Instance ID> | grep -A 7 "<interface"
<source bridge='qbr0d4c2f0e-8b'/>
Use Case 3
North-South with Floating IP
Use Case 3: North-South with Floating IP
Use Case 3: North-South with Floating IP
http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html
Use Case 3: North-South with Floating IP
What you need (Refer to the Cloud Lab for How To)
• 1 VM, with a Floating IP attached to it and a Security Group allowing Ping / SSH
Scenario
Start ping from VM to outside world (www.hp.com = 15.201.49.153 ) and start
chasing packet
Note: in this case Helion OpenStack will use distributed routing and static NAT
capability
Use Case 3: North-South with Floating IP
ping 15.201.49.155 (www.hp.com) – don't worry that it does not answer
Path: VM eth0 → tap → per-VM Linux bridge (qbr, with iptables) → qvb → qvo → Compute1 vSwitch Integration Bridge (br-int, Table 0 – forward NORMAL) → qr
virsh list
virsh dumpxml <Instance ID> | grep "<nova:name"   (to check it is your VM)
virsh dumpxml <Instance ID> | grep -A 7 "<interface"
<source bridge='qbr551d286a-e4'/>
<target dev='tap551d286a-e4'/>
tcpdump icmp -e -i <tap>
10:58:40.252780 fa:16:3e:ee:5c:7f (oui Unknown) > fa:16:3e:10:8a:e6 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.200.9 > 15.201.49.155: ICMP echo request, id 6517, seq 71, length 64
(sending the packet to the MAC of the default gateway, which is the DVR MAC)
ovs-vsctl show | grep -A3 <qvo ID>
tag: 2   (tenants are locally isolated on L2 by assigning VLAN tags)
ovs-ofctl show br-int
12(qr-e6f4ab72-5b): addr:00:00:00:00:00:00
13(qvo3f3ebb06-dd): addr:ca:70:14:31:ba:c3
port Id 12 is used for the OpenFlow rules
ovs-ofctl dump-flows br-int table=0
cookie=0x0, duration=180787.809s, table=0, n_packets=67245, n_bytes=16690680, idle_age=0, hard_age=65534, priority=1 actions=NORMAL
match is with rule forward NORMAL
ovs-appctl fdb/show br-int | grep <Dest MAC>
12   2   fa:16:3e:10:8a:e6   33
packet switched to router port 12 (= qr-e6f4ab72-5b)
Use Case 3: North-South with Floating IP
Compute 1 router namespace (qrouter): routing + static NAT, out via rfp
Get the router ID from the GUI: c3be0f2e-88c7-445e-89aa-9c17b8d3761b
ip netns | grep c3be0f2e-88c7-445e-89aa-9c17b8d3761b
qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b
ip netns exec qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b ip a
2: rfp-c3be0f2e-8    inet 192.168.25.87/32 and 169.254.31.238/31
38: qr-e6f4ab72-5b   inet 192.168.200.1/24
ip netns exec qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b ip rule list
32769: from 192.168.200.9 lookup 16
ip netns exec qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b ip route show table 16
default via 169.254.31.239 dev rfp-c3be0f2e-8
ip netns exec qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b iptables --table nat --list
target  prot  opt  source         destination
SNAT    all   --   192.168.200.9  anywhere       to:192.168.25.87
DNAT    all   --   anywhere       192.168.25.87  to:192.168.200.9
ip netns exec qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b tcpdump icmp -e -l -i rfp-c3be0f2e-8
11:26:33.261025 b2:eb:f8:8c:0d:02 (oui Unknown) > c2:3b:9c:8f:b6:66 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.25.87 > 15.201.49.155: ICMP echo request, id 6517, seq 1744, length 64
SNATing done: the source IP has been translated (compare with a tcpdump on the qr port)
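The SNAT/DNAT pair listed above can be written out as plain iptables rules. A simplified, illustrative sketch only: the real L3-agent rules also match on specific interfaces and live in dedicated neutron chains, which are omitted here.

```shell
# Floating-IP NAT inside the qrouter namespace (illustrative sketch,
# using this lab's fixed IP 192.168.200.9 and floating IP 192.168.25.87):
# outbound: rewrite the VM's fixed IP to the floating IP
iptables -t nat -A POSTROUTING -s 192.168.200.9 -j SNAT --to-source 192.168.25.87
# inbound: rewrite the floating IP back to the fixed IP
iptables -t nat -A PREROUTING -d 192.168.25.87 -j DNAT --to-destination 192.168.200.9
```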
Use Case 3: North-South with Floating IP
Compute 1 Floating IP namespace (fip): in via fpr, out via fg
ip netns
fip-46059b8d-52a0-4934-86f2-e0364f119797
ip netns exec fip-46059b8d-52a0-4934-86f2-e0364f119797 ip a
2: fpr-c3be0f2e-8    inet 169.254.31.239/31
43: fg-86f4105d-89   inet 192.168.25.91/24
ip netns exec fip-46059b8d-52a0-4934-86f2-e0364f119797 ip route | grep fpr-c3be0f2e-8
169.254.31.238/31 dev fpr-c3be0f2e-8 proto kernel scope link src 169.254.31.239
192.168.25.87 via 169.254.31.238 dev fpr-c3be0f2e-8
ip netns exec fip-46059b8d-52a0-4934-86f2-e0364f119797 tcpdump icmp -e -l -i fg-86f4105d-89
11:38:37.267321 fa:16:3e:4f:af:aa (oui Unknown) > 78:48:59:38:41:e3 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.25.87 > 15.201.49.155: ICMP echo request, id 6517, seq 2468, length 64
versus, on the rfp port:
11:37:22.265723 b2:eb:f8:8c:0d:02 (oui Unknown) > c2:3b:9c:8f:b6:66 (oui Unknown), ethertype IPv4 (0x0800), length 98: 192.168.25.87 > 15.201.49.155: ICMP echo request, id 6517, seq 2393, length 64
(same IPs, but the MACs have changed)
Compute 1 External Bridge (br-ex): switching out on VLAN 25
ovs-vsctl show | grep -A4 br-ex
Port "fg-86f4105d-89"
Port "vlan25"
ovs-ofctl show br-ex | grep '('
1(vlan25): addr:c4:34:6b:ae:d7:b8
ovs-ofctl dump-flows br-ex
cookie=0x0, duration=183526.882s, table=0, n_packets=20685, n_bytes=2211058, idle_age=1, hard_age=65534, priority=0 actions=NORMAL
ovs-appctl fdb/show br-ex
1   0   78:48:59:38:41:e3   4
Use Case 4
East-West routing – VM on different computes / networks
Use Case 4: East-West routing – VM on different computes / networks
http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html
Use Case 5
North-South routing with SNAT
Use Case 5: North-South routing with SNAT
Conclusion
Reference
http://docs.openstack.org/openstack-ops/content/network_troubleshooting.html
http://docs.openstack.org/networking-guide/
incl. http://docs.openstack.org/networking-guide/deploy_scenario3a.html
http://docs.openstack.org/networking-guide/scenario_dvr_ovs.html
Legacy routing in Neutron
IP forwarding
• Inter-subnet (east-west): traffic between VMs
• Floating IP (north-south): traffic between the external network and a VM
• Default SNAT (north-south): traffic from a VM to the external network
Network, subnet and port are the 3 core resources of Neutron
DVR – Neutron plug-in and agents
On the Compute Node / Hypervisor
• L2 agent (OVS or Linux bridge) – configures the software bridges and applies Security Group rules
• L3 agent (Linux network namespaces)
• Metadata agent
• nova
On the Network Node
• L3 agent (Linux network namespaces) – the centralized part
• DHCP agent
Services: LBaaS, FWaaS (north → south, in qr), VPNaaS
DVR – Distributed Routing
Avoids inter-subnet traffic having to reach the network node
Basically, it duplicates the router on the compute node, and the same for Floating IPs; SNAT stays centralized
Run ip netns to see the existing namespaces:
qrouter – one per tenant router
  rfp = router-to-floating-IP port
fip – one per compute node
  fpr = floating-to-router port, internal address 169.254.31.x
  fg = FIP gateway port, with a public IP address
snat – on the network node
  sg = SNAT gateway port
qdhcp – on the network node
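The namespace kinds listed above can be told apart by their name prefix; a small sketch classifying sample `ip netns` output (the sample names reuse this lab's UUIDs; a live run would pipe `ip netns` instead):

```shell
# Classify DVR-related namespaces by name prefix:
printf '%s\n' \
  'qrouter-c3be0f2e-88c7-445e-89aa-9c17b8d3761b' \
  'fip-46059b8d-52a0-4934-86f2-e0364f119797' |
while read -r ns; do
  case $ns in
    qrouter-*) echo "$ns => router namespace (qr/rfp ports)" ;;
    fip-*)     echo "$ns => floating-IP namespace (fpr/fg ports)" ;;
    snat-*)    echo "$ns => centralized SNAT namespace (sg port)" ;;
    qdhcp-*)   echo "$ns => DHCP namespace" ;;
  esac
done
```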
From the OpenStack Summit Vancouver DVR namespaces presentation: diagrams of the network node and of a compute node hosting 2 tenants.