
Opened 7 years ago

Closed 4 years ago

Last modified 4 years ago

#9620 closed defect (fixed)

qos-scripts: broken in trunk after switch to ifb

Reported by: anonymous Owned by: developers
Priority: normal Milestone: Barrier Breaker 14.07
Component: packages Version: Trunk
Keywords: Cc:

Description

After r25640, QoS is no longer effective, even with the shaper set to half the real throughput.

You can test this very easily: run torrents with many peers and watch mtr/ping, which should be going into the Priority target (0x1).

Backfire with IMQ: torrents do not affect ICMP.
Trunk with ifb: pings rise significantly and drops occur.

Attachments (2)

qos.patch (779 bytes) - added by igor 6 years ago.
qos2.patch (778 bytes) - added by igor 5 years ago.
QOS rules fix for ifb


Change History (21)

comment:1 Changed 7 years ago by nbd

Did you check whether it's really ifb? It could also be the removal of layer7 (which was a source of nasty memory leaks).
layer7 will be replaced by a new solution soon.

comment:2 Changed 7 years ago by anonymous

Yeah, this definitely started happening after r25640. I don't use layer7 at all, and the configs for backfire and trunk are the same.

comment:3 Changed 7 years ago by anonymous

It looks like ingress traffic is not being classified at all; every packet simply goes to the default 3rd class.
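One way to see this (a sketch; it assumes ifb0 carries the ingress qdisc, as in the stock qos-scripts setup):

```shell
# Per-class counters on the ifb device: if ingress classification is
# broken, almost all bytes land in the default class (1:30 here)
tc -s class show dev ifb0
```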

comment:4 Changed 7 years ago by anonymous

# tc -s filter show parent ffff: dev pppoe-wan
filter protocol ip pref 1 u32 
filter protocol ip pref 1 u32 fh 800: ht divisor 1 
filter protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:1  (rule hit 450 success 450)
  match 00000000/00000000 at 0 (success 450 ) 
	action order 1:  connmark	Action statistics:
	Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
	backlog 0b 0p requeues 0 

	action order 2: mirred (Egress Redirect to device ifb0) stolen
 	index 5 ref 1 bind 1 installed 45 sec used 1 sec
 	Action statistics:
	Sent 31780 bytes 450 pkt (dropped 0, overlimits 0 requeues 0) 
	backlog 0b 0p requeues 0

Maybe it's a bug in act_connmark?
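For reference, commands of roughly this shape produce the filter shown in the dump above (a sketch, not the exact qos-scripts code; requires root and the ifb module):

```shell
# Attach an ingress qdisc to the WAN device
tc qdisc add dev pppoe-wan handle ffff: ingress

# Match everything, restore the conntrack mark, then redirect to ifb0
tc filter add dev pppoe-wan parent ffff: protocol ip prio 1 u32 \
    match u32 0 0 flowid 1:1 \
    action connmark \
    action mirred egress redirect dev ifb0
```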

comment:5 Changed 7 years ago by kozax3@…

Hello,
I can confirm that the qos scripts are not working at all on the r26529 build.
Any news on this issue?

comment:6 Changed 6 years ago by igor

Problem confirmed with trunk r30708. Ingress traffic is marked correctly by iptables, but placed in the default 3rd class, as stated above:

# tc -s filter show parent ffff: dev eth1
filter protocol ip pref 1 u32 
filter protocol ip pref 1 u32 fh 800: ht divisor 1 
filter protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:1  (rule hit 157506 success 157506)
  match 00000000/00000000 at 0 (success 157506 ) 
	action order 1:  connmark	Action statistics:
	Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
	backlog 0b 0p requeues 0 

	action order 2: mirred (Egress Redirect to device ifb0) stolen
 	index 1 ref 1 bind 1 installed 1074 sec used 0 sec
 	Action statistics:
	Sent 167021748 bytes 157506 pkt (dropped 0, overlimits 11068 requeues 0) 
	backlog 0b 0p requeues 0 
# iptables -vnL -t mangle
Chain PREROUTING (policy ACCEPT 229K packets, 157M bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 3420 packets, 272K bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 226K packets, 157M bytes)
 pkts bytes target     prot opt in     out     source               destination         
 109K 6873K qos_Default  all  --  *      eth1    0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy ACCEPT 3376 packets, 489K bytes)
 pkts bytes target     prot opt in     out     source               destination         
 1649  158K qos_Default  all  --  *      eth1    0.0.0.0/0            0.0.0.0/0           

Chain POSTROUTING (policy ACCEPT 229K packets, 158M bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain qos_Default (2 references)
 pkts bytes target     prot opt in     out     source               destination         
 111K 7031K CONNMARK   all  --  *      *       0.0.0.0/0            0.0.0.0/0           CONNMARK restore mask 0xff 
 110K 6832K qos_Default_ct  all  --  *      *       0.0.0.0/0            0.0.0.0/0           mark match 0x0/0xff 
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0           mark match 0x1/0xff length 400:65535 MARK and 0xffffff00 
    0     0 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0           mark match 0x2/0xff length 800:65535 MARK and 0xffffff00 
  492 47895 MARK       udp  --  *      *       0.0.0.0/0            0.0.0.0/0           mark match 0x0/0xff length 0:500 MARK xset 0x2/0xff 
 2596  220K MARK       icmp --  *      *       0.0.0.0/0            0.0.0.0/0           MARK xset 0x1/0xff 
 106K 6549K MARK       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           mark match 0x0/0xff tcp spts:1024:65535 dpts:1024:65535 MARK xset 0x4/0xff 
    0     0 MARK       udp  --  *      *       0.0.0.0/0            0.0.0.0/0           mark match 0x0/0xff udp spts:1024:65535 dpts:1024:65535 MARK xset 0x4/0xff 
   45  2700 MARK       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           length 0:128 mark match !0x4/0xff tcp flags:0x3F/0x02 MARK xset 0x1/0xff 
  613 31912 MARK       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           length 0:128 mark match !0x4/0xff tcp flags:0x3F/0x10 MARK xset 0x1/0xff 

Chain qos_Default_ct (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MARK       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           mark match 0x0/0xff tcp multiport ports 22,53 MARK xset 0x1/0xff 
  173 11188 MARK       udp  --  *      *       0.0.0.0/0            0.0.0.0/0           mark match 0x0/0xff udp multiport ports 22,53 MARK xset 0x1/0xff 
   63  3556 MARK       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           mark match 0x0/0xff tcp multiport ports 20,21,25,80,110,443,993,995 MARK xset 0x3/0xff 
    1    40 MARK       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           mark match 0x0/0xff tcp multiport ports 5222 MARK xset 0x2/0xff 
    0     0 MARK       udp  --  *      *       0.0.0.0/0            0.0.0.0/0           mark match 0x0/0xff udp multiport ports 5222 MARK xset 0x2/0xff 
 110K 6832K CONNMARK   all  --  *      *       0.0.0.0/0            0.0.0.0/0           CONNMARK save mask 0xff 
# qos-stat

# Interface: wan
# Direction: Egress
# Stats:     Start

class hfsc 1: root 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 period 0 level 2 

class hfsc 1:1 parent 1: sc m1 0bit d 0us m2 9000Kbit ul m1 0bit d 0us m2 9000Kbit 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 period 107511 work 8580993 bytes level 1 

class hfsc 1:10 parent 1:1 leaf 100: rt m1 5250Kbit d 86us m2 900000bit ls m1 5250Kbit d 86us m2 5000Kbit ul m1 0bit d 0us m2 9000Kbit 
 Sent 314554 bytes 3440 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 period 3390 work 314554 bytes rtwork 286822 bytes level 0 

class hfsc 1:20 parent 1:1 leaf 200: rt m1 4790Kbit d 217us m2 4500Kbit ls m1 4790Kbit d 217us m2 2500Kbit ul m1 0bit d 0us m2 9000Kbit 
 Sent 54837 bytes 493 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 period 467 work 54837 bytes rtwork 48841 bytes level 0 

class hfsc 1:30 parent 1:1 leaf 300: ls m1 0bit d 100.0ms m2 1250Kbit ul m1 0bit d 0us m2 9000Kbit 
 Sent 175500 bytes 420 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 period 393 work 175500 bytes level 0 

class hfsc 1:40 parent 1:1 leaf 400: ls m1 0bit d 200.0ms m2 250000bit ul m1 0bit d 0us m2 9000Kbit 
 Sent 8036156 bytes 106282 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 period 103298 work 8036102 bytes level 0 

class red 300:1 parent 300: 

class red 400:1 parent 400: 


# Interface: wan
# Direction: Egress
# Stats:     End


# Interface: wan
# Direction: Ingress
# Stats:     Start

class hfsc 1: root 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 period 0 level 2 

class hfsc 1:1 parent 1: sc m1 0bit d 0us m2 9000Kbit ul m1 0bit d 0us m2 9000Kbit 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 period 30969 work 155410538 bytes level 1 

class hfsc 1:10 parent 1:1 leaf 100: rt m1 2040Kbit d 217us m2 900000bit ls m1 2040Kbit d 217us m2 5000Kbit ul m1 0bit d 0us m2 9000Kbit 
 Sent 38567 bytes 167 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 period 165 work 38567 bytes rtwork 25116 bytes level 0 

class hfsc 1:20 parent 1:1 leaf 200: rt m1 4690Kbit d 217us m2 4500Kbit ls m1 4690Kbit d 217us m2 2500Kbit ul m1 0bit d 0us m2 9000Kbit 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 period 0 level 0 

class hfsc 1:30 parent 1:1 leaf 300: ls m1 0bit d 100.0ms m2 1250Kbit ul m1 0bit d 0us m2 9000Kbit 
 Sent 155371971 bytes 148625 pkt (dropped 11068, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 period 30813 work 155371971 bytes level 0 

class hfsc 1:40 parent 1:1 leaf 400: ls m1 0bit d 200.0ms m2 250000bit ul m1 0bit d 0us m2 9000Kbit 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0 
 period 0 level 0 

class red 300:1 parent 300: 

class red 400:1 parent 400: 


# Interface: wan
# Direction: Ingress
# Stats:     End

comment:7 Changed 6 years ago by igor

trunk r32658 - still the same problem.

comment:8 Changed 6 years ago by anonymous

I can confirm that this is still affecting trunk. The issue is occurring because act_mirred does not preserve the packet mark and ifb does not support netfilter, so the only way that tc can filter on mark is if the mark is restored by act_connmark. However, this only works when the QoS decision is actually stored in the conntrack mark field, so ingress traffic will only be classified on "classify" rules; "reclassify" and "default" will not work. (Whoever did the ifb switch should know this, but doesn't seem to have documented it anywhere for some reason.)

Perhaps a more correct way to work around ifb's lack of features could be to patch act_mirred to preserve the packet mark and get rid of act_connmark?

comment:9 Changed 6 years ago by anonymous

Never mind, I'm an idiot. What I wrote above is wrong! :)

comment:10 Changed 6 years ago by anonymous

As a workaround for BitTorrent ingress classification you can change the default "config default" rule for ports >= 1024 to "config classify".
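For illustration, the workaround amounts to a change of this shape in /etc/config/qos (a sketch; option names follow the stock qos-scripts config, and the exact stanza on your system may differ):

```
# Before: high ports handled by the catch-all default rule
#config default
#	option target    "Bulk"
#	option portrange "1024-65535"

# After: an explicit classify rule, so the decision is stored in the
# conntrack mark and survives the trip through ifb
config classify
	option target    "Bulk"
	option portrange "1024-65535"
```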

Changed 6 years ago by igor

comment:11 Changed 6 years ago by anonymous

Seems that you're right. And if this is only about the conntrack mark field, it can be easily fixed with the simple patch I attached.

Changed 5 years ago by igor

QOS rules fix for ifb

comment:12 Changed 4 years ago by Weedy <weedy2887@…>

So is this believed to be fixed in HEAD or still broken?

comment:13 Changed 4 years ago by igor

Still broken. You need to apply qos2.patch to every new build.

comment:14 Changed 4 years ago by anonymous

Igor, shouldn't there be a --restore-mark command in it?

comment:15 Changed 4 years ago by igor

qos2.patch deliberately removes restore-mark, so each new packet starts fully clean and passes through the entire classification chain, with a final CONNMARK save-mark at the end. Yes, this costs slightly more CPU, but with this method and act_connmark, QoS works exactly as it should.

Before the switch to ifb (when IMQ was used), the classification logic was different: we didn't save the mark for reclassify actions, and by restoring the mark at the beginning of the chain we re-ran the reclassify actions every time. After the switch to ifb, we need to save the mark at the end of the chain (I tried this with the first version of the patch) so that every packet's mark is restored by act_connmark. And without removing the restore-mark at the beginning, the classify and reclassify logic is broken.
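In iptables terms, the change described here looks roughly like this (a sketch based on the qos_Default chains from the dump above; the rule details are illustrative, not the literal patch):

```shell
# Old (IMQ-era) logic: restore the saved mark at the start of the chain,
# so already-marked connections skip reclassification
# iptables -t mangle -A qos_Default -j CONNMARK --restore-mark --mask 0xff

# qos2.patch logic: drop the restore-mark, classify every packet from
# scratch, and save the final decision at the end of the chain so that
# act_connmark can restore it on the ifb ingress path
iptables -t mangle -A qos_Default_ct -j CONNMARK --save-mark --mask 0xff
```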

qos2.patch truly fixes QoS after the ifb switch, and my testing over all this time confirms it. It is a simple fix for the current state of the QoS chain and logic.

Maybe the developers could fully rewrite the chain and logic with ifb/act_connmark in mind and save some CPU time. But it seems to me they have lost interest, or don't use QoS at all, since QoS in OpenWrt has been broken for 3 years already.

comment:16 Changed 4 years ago by Peter

Igor, thanks so much for this. You've just saved me a lot of headache, as I've been trying to sort this out for days.

comment:17 Changed 4 years ago by anonymous

@igor, thanks for your patch!
Just a side note: stop blaming the "developers" and blame yourself. You have a working patch, so submit it the right way: https://dev.openwrt.org/wiki/SubmittingPatches

comment:18 Changed 4 years ago by nbd

  • Resolution set to fixed
  • Status changed from new to closed

fixed in r41682

comment:19 Changed 4 years ago by jow

  • Milestone changed from Attitude Adjustment 12.09 to Barrier Breaker 14.07

Milestone Attitude Adjustment 12.09 deleted
