A 20,000-Word Survey: Linux Network Performance Ultimate Guide

Original: Linux Network Performance Ultimate Guide [1]

Author: Kien Nguyen-Tuan


1. Linux network stack

Sources:

1.1 Linux network packet reception

Note: some NICs are "multi-queue" NICs. The diagram above shows only a single ring buffer for simplicity, but depending on the NIC and the hardware configuration you are using, there may be multiple queues in the system. See the section on sharing packet processing load between CPUs for details.

  1. The packet arrives at the NIC

  2. The NIC verifies the MAC address (if not in promiscuous mode [21]) and the FCS, and decides to drop the packet or continue

  3. The NIC DMAs (Direct Memory Access) [22] the packet into RAM, into a kernel data structure called an sk_buff or skb (socket kernel buffer, SKB [23])

  4. The NIC enqueues a _reference_ to the packet in the receive (rx) ring buffer queue, until the rx-usecs timeout expires or rx-frames is reached. Let's talk about the RX ring buffer:

  1. The NIC raises a HardIRQ ("hard interrupt").

    egrep "CPU0|eth3" /proc/interrupts
        CPU0 CPU1 CPU2 CPU3 CPU4 CPU5
    110:    0    0    0    0    0    0   IR-PCI-MSI-edge   eth3-rx-0
    111:    0    0    0    0    0    0   IR-PCI-MSI-edge   eth3-rx-1
    112:    0    0    0    0    0    0   IR-PCI-MSI-edge   eth3-rx-2
    113:    2    0    0    0    0    0   IR-PCI-MSI-edge   eth3-rx-3
    114:    0    0    0    0    0    0   IR-PCI-MSI-edge   eth3-tx
  1. The CPU runs the IRQ handler, which runs the driver code.

  2. The driver schedules a NAPI [25] poll and clears the HardIRQ on the NIC, so that it can generate IRQs for newly arriving packets.

  3. The driver triggers a SoftIRQ (NET_RX_SOFTIRQ)

```
ps aux | grep ksoftirq
                                                                  # ksoftirqd/<cpu-number>
root          13  0.0  0.0      0     0 ?        S    Dec13   0:00 [ksoftirqd/0]
root          22  0.0  0.0      0     0 ?        S    Dec13   0:00 [ksoftirqd/1]
root          28  0.0  0.0      0     0 ?        S    Dec13   0:00 [ksoftirqd/2]
root          34  0.0  0.0      0     0 ?        S    Dec13   0:00 [ksoftirqd/3]
root          40  0.0  0.0      0     0 ?        S    Dec13   0:00 [ksoftirqd/4]
root          46  0.0  0.0      0     0 ?        S    Dec13   0:00 [ksoftirqd/5]


```

```
watch -n1 grep RX /proc/softirqs
watch -n1 grep TX /proc/softirqs


```
  1. NAPI polls data from the rx ring buffer.
  1. Linux also allocates memory for the sk_buff

  2. Linux fills in the metadata: protocol, interface, sets the MAC header, removes the Ethernet header

  3. Linux passes the skb to the kernel stack (netif_receive_skb)

  4. It sets the network header, clones the skb to taps (i.e. tcpdump) and passes it to tc ingress

  5. Packets are handed to a qdisc sized by netdev_max_backlog, whose algorithm is defined by default_qdisc

```
sysctl net.core.netdev_max_backlog

net.core.netdev_max_backlog = 1000


```
  1. It calls ip_rcv and processes the IP packet

  2. It calls netfilter (PREROUTING)

  3. It looks at the routing table to decide whether to forward the packet or deliver it locally

  4. If it is local, it calls netfilter (LOCAL_IN)

  5. It calls the L4 protocol handler (for example tcp_v4_rcv)

  6. It finds the right socket

  7. It enters the TCP finite state machine

  8. It enqueues the packet into the receive buffer, sized according to the tcp_rmem rules

  1. The kernel signals that data is available to applications (epoll or any polling system)

  2. The application wakes up and reads the data

1.2 Linux kernel network transmission

Although simpler than the receive logic, the transmit logic is still worth studying.

  1. The application sends a message (sendmsg or other)

  2. The TCP send path allocates an sk_buff

  3. It enqueues the skb into the socket write buffer, sized by tcp_wmem

    sysctl net.ipv4.tcp_wmem
    net.ipv4.tcp_wmem = 4096        16384   262144
  1. It builds the TCP header (source and destination port, checksum)

  2. It calls the L3 handler (in this case ipv4, via tcp_write_xmit and tcp_transmit_skb)

  3. L3 (ip_queue_xmit) does its work: builds the IP header and calls netfilter (LOCAL_OUT)

  4. It calls the output route action

  5. It calls netfilter (POST_ROUTING)

  6. It fragments the packet (ip_output)

  7. It calls the L2 send function (dev_queue_xmit)

  8. It feeds the output queue (QDisc) of length txqueuelen, whose algorithm is defined by default_qdisc

  1. The driver code enqueues the packets into the ring buffer tx

  2. The driver will raise a soft IRQ (NET_TX_SOFTIRQ) after the tx-usecs timeout or when tx-frames is reached

  3. It re-enables hard IRQs on the NIC

  4. The driver maps all the packets to be sent into a DMA region

  5. The NIC fetches the packets from RAM via DMA and transmits them

  6. When the transmission is done, the NIC raises a hard IRQ to signal its completion

  7. The driver handles this IRQ (turns it off)

  8. And schedules the NAPI poll system (soft IRQ)

  9. NAPI handles the receive-packets signal and frees the RAM

2. Network performance tuning

Tuning a NIC for optimal throughput and latency is a complex process with many factors to consider. There is no generic configuration that can be broadly applied to every system.

There are several factors to consider for network performance tuning. Note that the interface card names on your machine may differ, so change the corresponding values accordingly.

Next, let's follow a packet's reception (and transmission) and do some tuning.

2.1 Quick guide

2.1.1 /proc/net/softnet_stat & /proc/net/sockstat

Before we go any further, let's discuss /proc/net/softnet_stat & /proc/net/sockstat, as these files will be used heavily.

cat /proc/net/softnet_stat

0000272d 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
000034d9 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000001
00002c83 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000002
0000313d 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000003
00003015 00000000 00000001 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000004
000362d2 00000000 000000d2 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000005
cat /proc/net/sockstat

sockets: used 937
TCP: inuse 21 orphan 0 tw 0 alloc 22 mem 5
UDP: inuse 9 mem 5
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
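
In /proc/net/softnet_stat, each row corresponds to one CPU and the values are hexadecimal: the first column is the number of packets processed, the second is the number of packets dropped because the input backlog was full, and the third is the time_squeeze count (the softirq poll loop ran out of budget). A minimal sketch to make this readable, assuming GNU awk (for strtonum):

```shell
# Print per-CPU counters from /proc/net/softnet_stat (hex -> decimal).
# Column 1: packets processed, column 2: drops (backlog full), column 3: time_squeeze.
awk '{ printf "cpu=%d processed=%d dropped=%d time_squeeze=%d\n",
       NR-1, strtonum("0x"$1), strtonum("0x"$2), strtonum("0x"$3) }' /proc/net/softnet_stat
```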

2.1.2 ss

ss -tm

#  -m, --memory
#         Show socket memory usage. The output format is:

#         skmem:(r<rmem_alloc>,rb<rcv_buf>,t<wmem_alloc>,tb<snd_buf>,
#                       f<fwd_alloc>,w<wmem_queued>,o<opt_mem>,
#                       bl<back_log>,d<sock_drop>)

#         <rmem_alloc>
#                the memory allocated for receiving packet

#         <rcv_buf>
#                the total memory can be allocated for receiving packet

#         <wmem_alloc>
#                the memory used for sending packet (which has been sent to layer 3)

#         <snd_buf>
#                the total memory can be allocated for sending packet

#         <fwd_alloc>
#                the  memory  allocated  by  the  socket as cache, but not used for receiving/sending packet yet. If need memory to send/receive packet, the memory in this
#                cache will be used before allocate additional memory.

#         <wmem_queued>
#                The memory allocated for sending packet (which has not been sent to layer 3)

#         <ropt_mem>
#                The memory used for storing socket option, e.g., the key for TCP MD5 signature

#         <back_log>
#                The memory used for the sk backlog queue. On a process context, if the process is receiving packet, and a new packet is received, it will be put into  the
#                sk backlog queue, so it can be received by the process immediately

#         <sock_drop>
#                the number of packets dropped before they are de-multiplexed into the socket

#  -t, --tcp
#         Display TCP sockets.

State       Recv-Q Send-Q        Local Address:Port        Peer Address:Port
ESTAB       0      0             192.168.56.102:ssh        192.168.56.1:56328
skmem:(r0,rb369280,t0,tb87040,f0,w0,o0,bl0,d0)

# rcv_buf: 369280 bytes
# snd_buf: 87040 bytes

2.1.3 netstat
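
As a hedged illustration for this tool, the protocol-level counters from `netstat -s` (or `nstat` from iproute2) are often the quickest way to spot retransmissions and listen-queue overflows:

```shell
# Protocol-wide counters: retransmits, SYN drops, listen queue overflows, etc.
netstat -s | grep -iE 'retrans|listen|overflow'
# The same kind of counters via iproute2; -a shows all, -s leaves the history file untouched
nstat -as | grep -i retrans
```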

2.1.4 sysctl

echo "value" > /proc/sys/location/variable
# To display a list of available sysctl variables
sysctl -a | less
# To only list specific variables use
sysctl variable1 [variable2] [...]
# To change a value temporarily use the sysctl command with the -w option:
sysctl -w variable=value
# To override the value persistently, the /etc/sysctl.conf file must be changed. This is the recommended method. Edit the /etc/sysctl.conf file.
vi /etc/sysctl.conf
# Then add/change the value of the variable
variable = value
# Save the changes and close the file. Then use the -p option of the sysctl command to load the updated sysctl.conf settings:
sysctl -p or sysctl -p /etc/sysctl.conf
# The updated sysctl.conf values will now be applied when the system restarts.

2.2 NIC ring buffer
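
A minimal sketch of the usual `ethtool` workflow for ring buffers, assuming an interface named eth0:

```shell
# Show the current and hardware-maximum RX/TX ring sizes
ethtool -g eth0
# Grow the rings toward the "Pre-set maximums" reported above
# (larger rings absorb bursts at the cost of slightly higher latency)
ethtool -G eth0 rx 4096 tx 4096
```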

2.3 Interrupt coalescing (IC) - rx-usecs, tx-usecs, rx-frames, tx-frames (hardware IRQ)
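
A hedged sketch of reading and setting the rx-usecs/rx-frames knobs named in the title with `ethtool` (eth0 and the values are examples; support depends on the driver):

```shell
# Show the current coalescing settings
ethtool -c eth0
# Let the driver adapt coalescing to the traffic, if supported
ethtool -C eth0 adaptive-rx on
# Or pin fixed values: raise an IRQ after 100 us or 64 frames, whichever comes first
ethtool -C eth0 adaptive-rx off rx-usecs 100 rx-frames 64
```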

2.4 IRQ affinity

systemctl status irqbalance.service
systemctl stop irqbalance.service
# Determine the IRQ number associated with the Ethernet driver
grep eth0 /proc/interrupts

32:   0     140      45       850264      PCI-MSI-edge      eth0

# IRQ 32
# Check the current value
# The default value is 'f', meaning that the IRQ can be serviced
# on any of the CPUs
cat /proc/irq/32/smp_affinity

f

# CPU0 is the only CPU used
echo 1 > /proc/irq/32/smp_affinity
cat /proc/irq/32/smp_affinity

1

# Commas can be used to delimit smp_affinity values for discrete 32-bit groups
# This is required on systems with more than 32 cores
# For example, IRQ  40 is serviced on all cores of a 64-core system
cat /proc/irq/40/smp_affinity

ffffffff,ffffffff

# To service IRQ 40 on only the upper 32 cores
echo 0xffffffff,00000000 > /proc/irq/40/smp_affinity
cat /proc/irq/40/smp_affinity

ffffffff,00000000

2.5 Sharing packet processing load between CPUs

Sources:

In the past, everything was simple. NICs were slow and had a single queue. When packets arrived, the NIC copied them via DMA and raised an interrupt, and the Linux kernel harvested those packets and completed the interrupt handling. As NICs became faster, the interrupt-driven model could cause IRQ storms under a flood of incoming packets, consuming most of the CPU power and freezing the system. To solve this problem, NAPI [36] (interrupts plus polling) was proposed: when the kernel receives an interrupt from the NIC, it starts polling the device and harvests packets from the queue as quickly as possible. NAPI works well with today's common 1 Gbps NICs. However, for 10 Gbps, 20 Gbps or even 40 Gbps NICs, NAPI may not be enough: if we still use one CPU and one queue to receive packets, these cards would require an even faster CPU. Fortunately, multi-core CPUs are common now, so why not process packets in parallel?

2.5.1 Receive Side Scaling (RSS)
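
A minimal sketch for inspecting and resizing the hardware queues that RSS spreads packets over (eth0 and the queue count are examples; support depends on the NIC and driver):

```shell
# How many hardware channels (queues) the NIC supports and currently uses
ethtool -l eth0
# Use more combined queues, e.g. one per physical core
ethtool -L eth0 combined 8
# Inspect the RSS indirection table and hash key
ethtool -x eth0
```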

2.5.2 Receive Packet Steering (RPS)

/sys/class/net/<dev>/queues/rx-<n>/rps_cpus

# This file implements a bitmap of CPUs
# 0 (default): disabled
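
For example, to let CPUs 0-3 handle packets from queue rx-0 of eth0 (a hedged example; device and queue names depend on your system), write the CPU bitmap 0xf:

```shell
# Bitmap 0xf = CPUs 0-3; write 0 to disable RPS on this queue again
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
```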

2.5.3 Receive Flow Steering (RFS)

sysctl -w net.core.rps_sock_flow_entries=32768
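
RFS also needs a per-queue flow count; the kernel scaling documentation suggests rps_sock_flow_entries divided by the number of RX queues. A sketch, assuming eth0 with 16 queues:

```shell
# 32768 global flow entries / 16 RX queues = 2048 per queue
for q in /sys/class/net/eth0/queues/rx-*; do
    echo 2048 > "$q/rps_flow_cnt"
done
```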

2.5.4 Accelerated Receive Flow Steering (aRFS)
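
aRFS moves the flow steering into the NIC itself and needs ntuple filtering support in the driver; a hedged sketch for checking and enabling it:

```shell
# aRFS requires the ntuple-filters feature (driver dependent)
ethtool -K eth0 ntuple on
ethtool -k eth0 | grep ntuple   # verify it is now "on"
```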

2.6 Interrupt coalescing (soft IRQ)
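
On the soft IRQ side, the relevant knobs are `net.core.netdev_budget` and `net.core.netdev_budget_usecs` (the latter on recent kernels), which cap how many packets and how much time one NET_RX softirq poll cycle may consume. A hedged sketch; judge the effect by watching the time_squeeze column of /proc/net/softnet_stat:

```shell
# Current limits for one softirq poll cycle (packets / microseconds)
sysctl net.core.netdev_budget net.core.netdev_budget_usecs
# Example: allow more packets per cycle if time_squeeze keeps increasing
sysctl -w net.core.netdev_budget=600
```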

2.7 Receive QDisc
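
The ingress queue discussed here is sized by `net.core.netdev_max_backlog` (seen in section 1.1); if the second column of `/proc/net/softnet_stat` keeps growing, the backlog is overflowing. A hedged example:

```shell
# Per-CPU input backlog; increase only if softnet_stat shows drops
sysctl net.core.netdev_max_backlog
sysctl -w net.core.netdev_max_backlog=2000
```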

2.8 Transmit QDisc - txqueuelen and default_qdisc
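
A hedged sketch of how txqueuelen and default_qdisc are usually inspected and changed (eth0 and the values are examples; default_qdisc only affects newly created queues):

```shell
# Current transmit queue length (qlen) and attached qdisc
ip link show dev eth0
sysctl net.core.default_qdisc
# Example changes
ip link set dev eth0 txqueuelen 10000
sysctl -w net.core.default_qdisc=fq
```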

### 2.9 TCP read/write buffers/queues

- Defines what [memory pressure](https://wwwx.cs.unc.edu/~sparkst/howto/network_tuning.php "memory pressure") is, which is specified by `tcp_mem` and `tcp_moderate_rcvbuf`.

- We can adjust the min-max sizes of the buffers to improve performance:
    - Command to change them:
    ```shell
    sysctl -w net.ipv4.tcp_rmem="min default max"
    sysctl -w net.ipv4.tcp_wmem="min default max"
    ```
    - To persist the values, check [this](https://access.redhat.com/discussions/2944681 "this")
    - How to monitor: check `/proc/net/sockstat`

### 2.10 TCP FSM and congestion algorithm

> The accept and SYN queues are governed by net.core.somaxconn and net.ipv4.tcp_max_syn_backlog. [Nowadays net.core.somaxconn caps the size of both queues](https://blog.cloudflare.com/syn-packet-handling-in-the-wild/#queuesizelimits "Nowadays net.core.somaxconn caps the size of both queues").

- `net.core.somaxconn`: provides an upper limit on the value of the backlog parameter passed to the `listen()` function, known in user space as `SOMAXCONN`. If you change this value, you should also change your application to a compatible value. You can check [Envoy's performance tuning notes](https://ntk148v.github.io/posts/til/envoy/performance.md "Envoy's performance tuning notes").
- `net.ipv4.tcp_fin_timeout`: specifies the number of seconds to wait for a final FIN packet before the socket is forcibly closed.
- `net.ipv4.tcp_available_congestion_control`: shows the available congestion control choices that are registered.
- `net.ipv4.tcp_congestion_control`: sets the congestion control algorithm used for new connections (see the sketch after this list).
- `net.ipv4.tcp_max_syn_backlog`: sets the maximum number of queued connection requests that have not yet received an acknowledgement from the connecting client; if this number is exceeded, the kernel will begin dropping requests.
- `net.ipv4.tcp_syncookies`: enables/disables SYN cookies, useful for protecting against SYN flood attacks.
- `net.ipv4.tcp_slow_start_after_idle`: enables/disables TCP slow start after an idle period.
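
A minimal sketch of switching the congestion control algorithm (BBR is just an example and its module must be available on your kernel):

```shell
sysctl net.ipv4.tcp_available_congestion_control
sysctl net.ipv4.tcp_congestion_control
# Example: use BBR for new connections
modprobe tcp_bbr
sysctl -w net.ipv4.tcp_congestion_control=bbr
```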

![](https://upload.wikimedia.org/wikipedia/commons/thumb/f/f6/Tcp_state_diagram_fixed_new.svg/796px-Tcp_state_diagram_fixed_new.svg.png?20140126065545)
- You may want to check the [broadband tweaks notes](https://ntk148v.github.io/posts/linux-network-performance-ultimate-guide/broadband-tweaks.md "broadband tweaks notes").

### 2.11 NUMA

- This topic goes beyond the scope of network performance alone.
- Non-Uniform Memory Access (NUMA) is a memory architecture that allows a processor to access memory contents faster than some other traditional techniques. In other words, a processor can access its local memory much faster than non-local memory. This is because in a NUMA setup, each processor is assigned a specific local memory exclusively for its own use. This eliminates sharing of non-local memory, reducing latency (fewer memory locks) when multiple requests hit the same memory location -> better network performance (since the CPU has to access the ring buffer, i.e. memory, to process packets)
![](15-Linux網絡性能終極指南.png)

- The NUMA architecture splits subsets of CPUs, memory, and devices into different "nodes", in effect creating multiple small computers with a fast interconnect and a common operating system. NUMA systems need to be tuned differently from non-NUMA systems. For NUMA, the goal is to group all interrupts from the devices of a single node onto the CPU cores that belong to that node.
- Although this looks like it should help reduce latency, NUMA systems are known to interact poorly with real-time applications, because they can cause unexpected event latencies.
- Determine the NUMA nodes:
```shell
ls -ld /sys/devices/system/node/node*

drwxr-xr-x. 3 root root 0 Aug 15 19:44 /sys/devices/system/node/node0
drwxr-xr-x. 3 root root 0 Aug 15 19:44 /sys/devices/system/node/node1
cat /sys/devices/system/node/node0/cpulist

0-5

cat /sys/devices/system/node/node1/cpulist
# empty
systemctl stop irqbalance
```
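
To tie this back to NIC tuning, sysfs also tells you which node a PCI NIC is attached to, which is useful when choosing CPUs for its IRQ affinity (eth0 is an example; -1 means the platform reports no NUMA locality):

```shell
# NUMA node of the NIC and the CPUs local to it
cat /sys/class/net/eth0/device/numa_node
cat /sys/class/net/eth0/device/local_cpulist
```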

2.12 More: packet processing

This is an advanced section; it introduces some advanced modules and frameworks for achieving high performance.

2.12.1 AF_PACKET v4

Sources:

2.12.2 PACKET_MMAP

Sources:

2.12.3 Kernel bypass: Data Plane Development Kit (DPDK)

Sources:

2.12.4 PF_RING

Sources:

2.12.5 Programmable packet processing: eXpress Data Path (XDP)

Sources:
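
As a hedged sketch, an already-compiled XDP object (the file name xdp_prog.o here is hypothetical) is typically attached and detached with iproute2:

```shell
# Attach the program in section "xdp" of xdp_prog.o to eth0
# (the plain "xdp" keyword lets the kernel pick driver or generic mode)
ip link set dev eth0 xdp obj xdp_prog.o sec xdp
# Detach it again
ip link set dev eth0 xdp off
```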


References

[1] Linux Network Performance Ultimate Guide: https://ntk148v.github.io/posts/linux-network-performance-ultimate-guide/
[2] https://github.com/leandromoreira/linux-network-performance-parameters/
[3] https://access.redhat.com/sites/default/files/attachments/20150325_network_performance_tuning.pdf
[4] https://www.coverfire.com/articles/queueing-in-the-linux-network-stack/
[5] https://blog.cloudflare.com/how-to-achieve-low-latency/
[6] https://blog.cloudflare.com/how-to-receive-a-million-packets/
[7] https://beej.us/guide/bgnet/html/
[8] https://blog.csdn.net/armlinuxww/article/details/111930788
[9] https://www.ibm.com/docs/en/linux-on-systems?topic=recommendations-network-performance-tuning
[10] https://blog.packagecloud.io/illustrated-guide-monitoring-tuning-linux-networking-stack-receiving-data/
[11] https://blog.packagecloud.io/monitoring-tuning-linux-networking-stack-receiving-data/
[12] https://blog.packagecloud.io/monitoring-tuning-linux-networking-stack-sending-data/
[13] https://www.sobyte.net/post/2022-10/linux-net-snd-rcv/
[14] https://juejin.cn/post/7106345054368694280
[15] https://openwrt.org/docs/guide-developer/networking/praxis
[16] https://blog.51cto.com/u_15169172/2710604
[17] https://sn0rt.github.io/media/paper/TCPlinux.pdf
[18] https://medium.com/coccoc-engineering-blog/linux-network-ring-buffers-cea7ead0b8e8
[19] HOWTO, sysctl section: https://ntk148v.github.io/posts/linux-network-performance-ultimate-guide/#204-sysctl
[20] PackageCloud article: https://blog.packagecloud.io/illustrated-guide-monitoring-tuning-linux-networking-stack-receiving-data
[21] Promiscuous mode: https://unix.stackexchange.com/questions/14056/what-is-kernel-ip-forwarding
[22] DMA (Direct Memory Access): https://en.wikipedia.org/wiki/Direct_memory_access
[23] SKB: http://vger.kernel.org/~davem/skb.html
[24] Circular buffer: https://en.wikipedia.org/wiki/Circular_buffer
[25] NAPI: https://en.wikipedia.org/wiki/New_API
[26] Priority: https://github.com/torvalds/linux/blob/master/net/ipv4/tcp_output.c#L241
[27] See here: https://unix.stackexchange.com/questions/419518/how-to-tell-how-much-memory-tcp-buffers-are-actually-using
[28] Here: https://access.redhat.com/solutions/8694
[29] Here: https://unix.stackexchange.com/questions/542546/what-is-the-systemd-native-way-to-manage-nic-ring-buffer-sizes-before-bonded-int
[30] irqbalance: https://github.com/Irqbalance/irqbalance
[31] Script for setting IRQ affinity on Intel NICs: https://gist.github.com/xdel/9c50ccedea9e0c9d0000d550b07ee242
[32] A double-edged sword: https://stackoverflow.com/questions/48659720/is-it-a-good-practice-to-set-interrupt-affinity-and-io-handling-thread-affinity
[33] http://balodeamit.blogspot.com/2013/10/receive-side-scaling-and-receive-packet.html
[34] https://garycplin.blogspot.com/2017/06/linux-network-scaling-receives-packets.html
[35] https://github.com/torvalds/linux/blob/master/Documentation/networking/scaling.rst
[36] NAPI: https://wiki.linuxfoundation.org/networking/napi
[37] balodeamit blog: http://balodeamit.blogspot.com/2013/10/receive-side-scaling-and-receive-packet.html
[38] Linux kernel documentation: https://github.com/torvalds/linux/blob/v4.11/Documentation/networking/scaling.txt#L80
[39] softnet_data: https://github.com/torvalds/linux/blob/v4.11/include/linux/netdevice.h#L2788
[40] This: https://garycplin.blogspot.com/2017/06/linux-network-scaling-receives-packets.html
[41] This: https://access.redhat.com/discussions/2944681
[42] This: https://access.redhat.com/discussions/2944681
[43] This: https://access.redhat.com/discussions/2944681
[44] This: https://access.redhat.com/discussions/2944681
[45] Traffic control (tc): http://tldp.org/HOWTO/Traffic-Control-HOWTO/intro.html
[46] Bufferbloat: https://www.bufferbloat.net/projects/codel/wiki/
[47] This article, queueing disciplines section: https://www.coverfire.com/articles/queueing-in-the-linux-network-stack/
[48] https://developer.ibm.com/articles/j-zerocopy/
[49] https://lwn.net/Articles/737947/
[50] Zero-copy: https://en.wikipedia.org/wiki/Zero-copy
[51] IBM article: https://developer.ibm.com/articles/j-zerocopy/
[52] https://docs.kernel.org/networking/packet_mmap.html
[53] https://blog.cloudflare.com/kernel-bypass/
[54] https://www.cse.iitb.ac.in/~mythili/os/anno_slides/network_stack_kernel_bypass_slides.pdf
[55] https://selectel.ru/blog/en/2016/11/24/introduction-dpdk-architecture-principles/
[56] https://www.slideshare.net/garyachy/dpdk-44585840
[57] Slides: https://www.cse.iitb.ac.in/~mythili/os/anno_slides/network_stack_kernel_bypass_slides.pdf
[58] Here: https://lwn.net/Articles/143397/
[59] Tells us the ports: http://dpdk.org/doc/guides-16.04/linux_gsg/build_dpdk.html#loading-modules-to-enable-userspace-io-for-dpdk
[60] https://www.ntop.org/products/packet-capture/pf_ring/
[61] https://repository.ellak.gr/ellak/bitstream/11087/1537/1/5-deri-high-speed-network-analysis.pdf
[62] https://www.synacktiv.com/en/publications/breaking-namespace-isolation-with-pf_ring-before-700.html
[63] PF_RING: https://github.com/ntop/PF_RING
[64] Includes support since version 7.5: https://www.ntop.org/guides/pf_ring/modules/af_xdp.html
[65] https://www.iovisor.org/technology/xdp
[66] https://blogs.igalia.com/dpino/2019/01/10/the-express-data-path/
[67] https://pantheon.tech/what-is-af_xdp/
[68] https://github.com/iovisor/bpf-docs/blob/master/Express_Data_Path.pdf
[69] https://github.com/xdp-project/xdp-paper/blob/master/xdp-the-express-data-path.pdf
[70] http://vger.kernel.org/lpc_net2018_talks/lpc18_paper_af_xdp_perf-v2.pdf
[71] https://arthurchiao.art/blog/firewalling-with-bpf-xdp/
[72] https://archive.fosdem.org/2018/schedule/event/xdp/attachments/slides/2220/export/events/attachments/xdp/slides/2220/fosdem18_SdN_NFV_qmonnet_XDPoffload.pdf
[73] https://people.netfilter.org/hawk/presentations/KernelRecipes2018/XDP_Kernel_Recipes_2018.pdf
[74] https://legacy.netdevconf.info/2.1/session.html?gospodarek
[75] See: https://ntk148v.github.io/posts/linux-network-performance-ultimate-guide/ebpf/README.md
[76] Linux 4.18: https://www.kernel.org/doc/html/v4.18/networking/af_xdp.html
[77] DPDK: https://doc.dpdk.org/guides/nics/af_xdp.html