
Opened 7 years ago

Closed 7 years ago

#8590 closed defect (wontfix)

l7 filter makes the nf_conntrack_count inconsistent with /proc/net/nf_conntrack table

Reported by: anonymous
Owned by: developers
Priority: high
Milestone: Backfire 10.03.1
Component: kernel
Version: Trunk
Keywords: netfilter nf_conntrack nf_conntrack_count l7filter
Cc:

Description

The detail is in this thread:

https://forum.openwrt.org/viewtopic.php?pid=124645

In short, with an l7 filter (say, ssh) activated in QoS, together with nf_conntrack_acct=1 (without which the filter cannot really work), nf_conntrack_count keeps increasing until it reaches nf_conntrack_max, even though no more than about 1000 connections are listed in /proc/net/nf_conntrack.
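For illustration, the mismatch can be observed by comparing the counter with the number of entries actually listed (the same commands are used in the comments below):

cat /proc/sys/net/netfilter/nf_conntrack_count
wc -l /proc/net/nf_conntrack

With the leak present, the first number keeps climbing while the second stays in the hundreds.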

I'm running the svn version on a WNDR3700. The versions I tried include r24196, r24886, r24895 and r24908. Except for r24196, which was compiled with kernel 2.6.32, all of these were compiled with kernel 2.6.36.2.

If you need any other information, let me know and please explain how to get it, since I'm really not good at this.

hato


Change History (7)

comment:1 Changed 7 years ago by Denis Gryzlov <gryzlov@…>

Finally, someone has found the cause of this issue. This memory leak has been driving me crazy for almost a week now :-)

I'm currently using a WNDR3700 with the latest r24915 build.
I've configured it to use SLAB with the optional CONFIG_SLABINFO (the default for ar71xx is the SLUB allocator), and installed only a few basic packages: qos-scripts, UPnP port mapping, DynDNS, and procps plus conntrack-tools for monitoring.
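For reference, a sketch of the allocator options in the kernel .config (symbol names as of 2.6.3x kernels; treat as illustrative):

# use the SLAB allocator and expose /proc/slabinfo
CONFIG_SLAB=y
CONFIG_SLABINFO=y
# CONFIG_SLUB is not set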

The connection tracking table size (/proc/sys/net/netfilter/nf_conntrack_count) was constantly growing until it reached nf_conntrack_max (16384), after which the network became very unresponsive and practically unusable.

According to /proc/slabinfo, the most "leaky" parts of SLAB memory were nf_conntrack_802e9ea0 and ip_dst_cache.
The table flush command (conntrack -F conntrack) didn't clear all caches; only a few rows could be deleted this way.
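To reproduce that observation (the hex suffix on the conntrack cache name differs per boot, so match on the pattern rather than the exact name):

egrep 'nf_conntrack|ip_dst_cache' /proc/slabinfo
conntrack -F conntrack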

I switched off l7-protocol filtering for bittorrent traffic in the QoS settings, and now my SUnreclaim/Slab memory is stable and does not leak. If you need any more info from me, I'll gladly help or try any new build or settings.
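For anyone who wants to do the same: in the default qos-scripts setup the layer7 matches live in /etc/config/qos as classify rules, roughly of this shape (option names per the qos-scripts defaults, quoted from memory, so treat as illustrative):

config classify
    option target  'Bulk'
    option layer7  'bittorrent'

Deleting or commenting out the 'option layer7' lines and restarting qos disables the kernel-side matching.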

Thanks!

comment:2 Changed 7 years ago by Stijn Tintel <stijn@…>

It looks like I have been experiencing the same problem for some time. I am currently running trunk r24878 with 2.6.32.27 on an ar71xx (RSPro), but I first noticed it on December 6 with trunk r24196 and 2.6.32.26.

I am also using layer7 matching for bittorrent and edonkey traffic (default QoS rules). With this enabled, after 22h uptime:

root@wrt0:~# uptime; wc -l /proc/net/nf_conntrack; cat /proc/sys/net/netfilter/nf_conntrack_count
 18:49:53 up 21:59, load average: 0.00, 0.01, 0.00
137 /proc/net/nf_conntrack
5319

After disabling the layer7 rules in QoS, rebooting, and running for a while with an active torrent client behind the RSPro, the two values stay in sync:

root@wrt0:~# uptime; wc -l /proc/net/nf_conntrack; cat /proc/sys/net/netfilter/nf_conntrack_count
 19:25:45 up 31 min, load average: 0.01, 0.02, 0.00
358 /proc/net/nf_conntrack
358

I am also seeing this on x86, trunk r24196, kernel 2.6.32.26:

root@vr0:~# uptime; wc -l /proc/net/nf_conntrack; cat /proc/sys/net/netfilter/nf_conntrack_count
 01:53:37 up 39 days,  4:04, load average: 0.00, 0.00, 0.00
5 /proc/net/nf_conntrack
18

The difference here is rather small, probably because there are no torrent clients behind this machine, so the layer7 rules almost never match.

When I try kernel 2.6.36.3 on trunk r24935, the iptables layer7 rules no longer seem to match at all, even though I do have an active torrent client, so I can't tell whether the leak is fixed in the layer7_2.22 patches.
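For reference, the kernel-side match these rules rely on looks like this (chain and mark value illustrative):

iptables -t mangle -A PREROUTING -m layer7 --l7proto bittorrent -j MARK --set-mark 0x4

If that rule's packet counters stay at zero in iptables -t mangle -L -v while a torrent is running, the match is indeed not firing.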

If you look at the new homepage for l7-filter, it seems that they are now focusing on the l7-filter-userspace version. Maybe it would be better to replace the layer7 kernel/netfilter patches with the userspace version?

comment:3 Changed 7 years ago by anonymous

On kernel 2.6.36, nf_conntrack_acct now defaults to 0, which is why l7-filter no longer works with the default settings. If you re-enable flow accounting, you should see this problem again. You can do that by adding a line to /etc/sysctl.conf:
net.netfilter.nf_conntrack_acct=1
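To apply it immediately without a reboot:

sysctl -w net.netfilter.nf_conntrack_acct=1

(or equivalently: echo 1 > /proc/sys/net/netfilter/nf_conntrack_acct)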

hato

comment:4 Changed 7 years ago by Stijn Tintel <stijn@…>

Thanks! With nf_conntrack_acct=1 the rules match again, and the problem starts occurring again:

root@wrt0:~# uptime; wc -l /proc/net/nf_conntrack; cat /proc/sys/net/netfilter/nf_conntrack_count; egrep '^#|conntrack' /proc/slabinfo 
 15:11:30 up 19:03, load average: 0.02, 0.08, 0.08
190 /proc/net/nf_conntrack
315
# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
nf_conntrack_expect     21     21    192   21    1 : tunables    0    0    0 : slabdata      1      1      0
nf_conntrack_80338680    319    645    272   15    1 : tunables    0    0    0 : slabdata     43     43      0

I enabled CONFIG_DEBUG_KMEMLEAK and CONFIG_KMEMCHECK in the kernel of my x86 image, but for some reason neither works: there is no /proc/sys/kernel/kmemcheck and no /sys/kernel/debug/kmemleak. Other suggestions for getting more debug info on this problem are welcome.
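In case it helps: kmemleak also needs debugfs mounted, and with some configs it must additionally be enabled via kmemleak=on on the kernel command line. The stock interface looks like this:

mount -t debugfs none /sys/kernel/debug
echo scan > /sys/kernel/debug/kmemleak
cat /sys/kernel/debug/kmemleak

If /sys/kernel/debug/kmemleak is still missing with debugfs mounted, the option probably didn't make it into the built kernel.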

comment:5 Changed 7 years ago by pyther@…

Is there any solution to this?

comment:6 Changed 7 years ago by nbd

I just took a quick look at the layer7 code. It looks pretty nasty, and in some places there are lines that look like obvious memory leaks.

I'm not sure how to fix this easily; I'll see if there's a newer version of layer7 out there. If there isn't, we should probably abandon layer7 entirely.

comment:7 Changed 7 years ago by jow

  • Resolution set to wontfix
  • Status changed from new to closed

Layer7 has been dropped in trunk; there is no more maintenance going into this component.
