Ubuntu and bonding - Mode 4, 802.3ad - not working as it should?
I'm trying to get Ubuntu Server 16.04 LTS to bond two NICs as 802.3ad, and it's not quite working out. Does anyone have hands-on experience with this?
My interfaces config has this:
bond-slaves eth1, eth2
bond-mode 4
bond-miimon 100
bond-lacp-rate 1
plus the usual essentials (a fuller sketch of the whole stanza is at the end of this post); the bond itself and the NICs are "up" and it does more or less run:
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 9c:69:b4:61:ae:ae
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 2
Actor Key: 13
Partner Key: 20030
Partner Mac Address: 60:9c:9f:22:a3:00
Slave Interface: eth1
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 9c:69:b4:61:ae:ae
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 1
Partner Churned Count: 1
details actor lacp pdu:
system priority: 65535
system mac address: 9c:69:b4:61:ae:ae
port key: 13
port priority: 255
port number: 1
port state: 63
details partner lacp pdu:
system priority: 1
system mac address: 60:9c:9f:22:a3:00
oper key: 20030
port priority: 1
port number: 15
port state: 61
Slave Interface: eth2
MII Status: up
Speed: 10000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 9c:69:b4:61:ae:af
Slave queue ID: 0
Aggregator ID: 2
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 1
details actor lacp pdu:
system priority: 65535
system mac address: 9c:69:b4:61:ae:ae
port key: 13
port priority: 255
port number: 2
port state: 63
details partner lacp pdu:
system priority: 1
system mac address: 60:9c:9f:22:a3:00
oper key: 20030
port priority: 1
port number: 16
port state: 61
- except that outgoing traffic is not split across both slaves; it all flows through one. Only incoming traffic, by all appearances, gets split between eth1 and eth2.
Does anyone have an idea what I'm doing wrong?
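For completeness, the whole /etc/network/interfaces part looks roughly like this (a sketch only - the bond name bond0 and the address lines are placeholders, the bond-* options are the ones quoted above):

auto eth1
iface eth1 inet manual
    bond-master bond0

auto eth2
iface eth2 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    # placeholder addressing - the real values differ
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    bond-slaves eth1 eth2
    bond-mode 4
    bond-miimon 100
    bond-lacp-rate 1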
You're using LACP, and the other side has to support it too... how is that end configured?
The switch should be configured correctly - but that part is out of my hands. I don't have xmit-hash-policy set - it's on some default, I don't know whether that's where the problem is, and the available options don't mean much to me (layer2+3 or layer3+4...?).
3+4 is the recommended one.
Check the output of cat /proc/net/bonding/xxxx
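The default layer2 policy hashes only on the source/destination MAC pair, so when most outgoing traffic heads toward a single peer or gateway MAC it all lands on one slave; layer2+3 or layer3+4 also mixes in IP addresses (and ports) and spreads individual flows. The policy currently in use shows up in that same file - for example (bond0 is just an assumed name here):

grep "Transmit Hash Policy" /proc/net/bonding/bond0
# with the default it prints: Transmit Hash Policy: layer2 (0)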
Now it spreads the load across both slaves as it should. I ended up going with 2+3, which also works fine; 3+4 would probably be OK too.
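For anyone who lands here later, the change boils down to one extra line in the bond stanza, and the active policy can be checked at runtime via sysfs (bond0 again being an assumed name):

# in /etc/network/interfaces, inside the iface bond0 stanza:
bond-xmit-hash-policy layer2+3

# check what the driver is actually using:
cat /sys/class/net/bond0/bonding/xmit_hash_policy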
Beware of early successes... I deployed LACP on UBNT switches and under lab conditions it worked even with 4 switches stacked on top of each other.
Then I put it into production and within 20 minutes the whole network went haywire.
Since then I use active-backup mode at most, because on a 10Gb network I can't fully use the 10Gb potential anyway - the disks (even regular SSDs) hold me back, and apart from one NAS I don't have NVMe in the servers.
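In /etc/network/interfaces the difference against the LACP setup above is basically just the mode - a sketch, with eth1 as the primary slave only as an example:

# active-backup: one slave carries the traffic, the other takes over on link failure
bond-mode active-backup
bond-miimon 100
bond-primary eth1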
Well, I will actually use that speed - it's for TV streams and the server has a dual-port 10Gb NIC. 50-60% of the traffic is "live", served from a RAM disk; the rest is archive/timeshift, which comes off SSDs. And since there's plenty of RAM and people tend to watch "the same thing", a lot of it is served straight from the disk cache. Even ordinary SATA SSDs keep up with that.
- for now it's carrying around 400Mb, so we'll see once it's several times that...