How to tell how much memory TCP buffers are actually using?
I've got a front-end machine with about 1k persistent, very low-bandwidth TCP connections. It's a bit memory-constrained, so I'm trying to figure out where a few hundred MB are going. TCP buffers are one possible culprit, but I can't make a dent in these questions:
- Where is the memory reported? Is it part of the buff/cache item in top, or is it part of the process's RES metric?
- If I want to reduce it on a per-process level, how do I ensure that my reductions are having the desired effect?
- Do the buffers continue to take up some memory even when there's minimal traffic flowing, or do they grow dynamically, with the configured buffer sizes merely being the maximum allowable size?
I realize one possible answer is "trust the kernel to do this for you," but I want to rule out TCP buffers as a source of memory pressure.
Investigation: Question 1
This page writes, "the 'buffers' memory is memory used by Linux to buffer network and disk connections." This implies that they're not part of the RES metric in top.

To find the actual memory usage, /proc/net/sockstat is the most promising:
    sockets: used 3640
    TCP: inuse 48 orphan 49 tw 63 alloc 2620 mem 248
    UDP: inuse 6 mem 10
    UDPLITE: inuse 0
    RAW: inuse 0
    FRAG: inuse 0 memory 0
This is the best explanation I could find, but mem isn't addressed there. It is addressed here, but 248 * 4k ~= 1 MB, or about 1/1000 of the system-wide max, which seems like an absurdly low number for a server with hundreds of persistent connections and sustained 0.2-0.3 Mbit/s of network traffic.
Of course, the system memory limits themselves are:
    $ grep . /proc/sys/net/ipv4/tcp*mem
    /proc/sys/net/ipv4/tcp_mem:140631   187510  281262
    /proc/sys/net/ipv4/tcp_rmem:4096    87380   6291456
    /proc/sys/net/ipv4/tcp_wmem:4096    16384   4194304
tcp_mem's third parameter is the system-wide maximum number of 4k pages dedicated to TCP buffers; if the total buffer size ever surpasses this value, the kernel will start dropping packets. For non-exotic workloads there's no need to tune this value.
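To make the arithmetic above concrete, here's a small Python sketch (my own, not from any of the linked pages) that converts the sockstat mem count into bytes and compares it against the tcp_mem maximum. Both numbers are in pages, so it reads the page size from the OS rather than assuming 4k:

    #!/usr/bin/env python3
    # Rough sketch: convert the sockstat "mem" page count into bytes and
    # compare it to the system-wide tcp_mem maximum.
    import os

    page = os.sysconf("SC_PAGE_SIZE")

    # e.g. "TCP: inuse 48 orphan 49 tw 63 alloc 2620 mem 248"
    mem_pages = 0
    with open("/proc/net/sockstat") as f:
        for line in f:
            if line.startswith("TCP:"):
                fields = line.split()
                mem_pages = int(fields[fields.index("mem") + 1])

    # e.g. "140631 187510 281262" -> low / pressure / max, also in pages
    with open("/proc/sys/net/ipv4/tcp_mem") as f:
        low, pressure, high = (int(x) for x in f.read().split())

    print(f"TCP buffers: {mem_pages * page / 2**20:.1f} MiB "
          f"({mem_pages} of {high} pages, {100 * mem_pages / high:.2f}% of max)")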
Next up is /proc/meminfo, and its mysterious Buffers and Cached items. I looked at several sources, but couldn't find any that claimed either of those accounts for TCP buffers.
    ...
    MemAvailable:   8298852 kB
    Buffers:         192440 kB
    Cached:         2094680 kB
    SwapCached:       34560 kB
    ...
Investigation: Questions 2-3
To inspect TCP buffer sizes at the process level, we've got quite a few options, but none of them seem to report the actual allocated memory, as opposed to the current queue size or the configured maximum.
There's ss -m --info:

    State       Recv-Q       Send-Q
    ESTAB       0            0
    ... <snip> ...
    skmem:(r0,rb1062000,t0,tb2626560,f0,w0,o0,bl0) ...<snip> rcv_space:43690
So we have:

- Recv-Q and Send-Q, the current buffer usage
- r and t, which are explained in this excellent post, but it's unclear how they're different from Recv-Q and Send-Q
- Something called rb, which looks suspiciously like some sort of max buffer size, but for which I couldn't find any documentation
- rcv_space, which this page claims isn't the actual buffer size; for that you need to call getsockopt (see the sketch after this list)
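On that last point, here's a minimal getsockopt sketch in Python. The obvious catch is that it only works from inside the process that owns the socket, so it doesn't help for inspecting another process's connections. Note also that on Linux, getsockopt returns double the value set with setsockopt, since the kernel reserves the extra for bookkeeping overhead (see socket(7)):

    import socket

    # Minimal sketch: ask the kernel for a socket's configured buffer
    # sizes. Only works for sockets this process owns.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    print("SO_RCVBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
    print("SO_SNDBUF:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
    s.close()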
This answer suggests lsof, but its SIZE/OFF column seems to be reporting the same buffer usage as ss:

    COMMAND PID  TID USER    FD  TYPE DEVICE  SIZE/OFF NODE NAME
    sslocal 4032     michael 82u IPv4 1733921 0t0      TCP  localhost:socks->localhost:59594 (ESTABLISHED)
And then these answers suggest that lsof can't return the actual buffer size. One of them does provide a kernel module that should do the trick, but it only seems to work on sockets whose buffer sizes have been fixed with setsockopt; otherwise, SO_SNDBUF and SO_RCVBUF aren't included.
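For now, the closest thing I've found to a per-socket running total is summing the skmem values that ss reports. Here's a rough Python sketch of what I mean; it parses the skmem format shown above, and it sums the currently-allocated r/t values rather than the rb/tb maximums, so it answers "how much is in use right now" rather than "how much could be used":

    import re
    import subprocess

    # Rough sketch: sum the per-socket skmem values across all TCP
    # sockets. r/t are the receive/transmit memory currently allocated;
    # rb/tb would be the configured maximums instead.
    out = subprocess.run(["ss", "-t", "-m"], capture_output=True,
                         text=True, check=True).stdout
    pairs = re.findall(r"skmem:\(r(\d+),rb\d+,t(\d+),tb\d+", out)
    r_total = sum(int(r) for r, _ in pairs)
    t_total = sum(int(t) for _, t in pairs)
    print(f"{len(pairs)} sockets: recv={r_total} B, send={t_total} B")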
asked Jan 25 '18 at 4:05 by Mike Fischer
2 Answers
/proc/net/sockstat, specifically the mem field, is where to look. This value is reported in kernel pages and corresponds directly to /proc/sys/net/ipv4/tcp_mem.
At the individual socket level, memory is allocated in kernel space only until the user-space code reads it, at which time the kernel memory is freed (see here). sk_buff->truesize is the sum of the amount of data buffered and the size of the socket structure itself (see here; the patch that corrected for memory alignment is discussed here).
I suspect that the mem field of /proc/net/sockstat is calculated simply by summing sk_buff->truesize for all sockets, but I'm not familiar enough with the kernel source to know where to look for that.
By way of confirmation, this feature request from the netdata monitoring system includes a lot of good discussion and relevant links as well, and it backs up this interpretation of /proc/net/sockstat.
This post on the "out of socket memory" error contains some more general discussion of different memory issues.
answered Jan 26 '18 at 9:22 by Mike Fischer
This is a very complex question that may require delving into the kernel source to find an answer.
It does not seem as though the buffer is included in the process's RES statistic. See this article (if you haven't already). According to the author:
device drivers allocate a region of memory for the device to perform DMA to incoming packets
Further down, in the section "Tuning: Socket receive queue memory", it seems like net.core.wmem_max and net.core.rmem_max are the maximum buffer sizes. Again, I'm not sure how to see how much memory is actually being used.
Apparently, within the networking stack there is a problem of poor documentation and, obviously, a large amount of complexity.
Further, the more I read about the way buffering is handled, the less it seems that the vanilla kernel supports viewing anything other than how much memory is allocated as a buffer.
This bit of documentation on DMA within the kernel may also be of use to you, or at least give you a sense of where you can go from here, but for now I think the kernel module provided is the closest you may be able to get.
answered Jan 25 '18 at 5:32 by Tyler Chambers

Comment – Mike Fischer (Jan 26 '18 at 8:04): I'd upvote, but I don't have enough rep yet :/ The packagecloud.io article you linked to, especially the sk_rcvqueues_full section, is the best explanation I've seen about how these things work. That led me over here, and putting the two together seems to indicate that they're max buffer sizes, and the memory is allocated only as long as the received (or to-be-sent) data lives in kernel space.