Introduction
The Rinetd approach has actually been around for two or three months; it is the work of v2ex user @linhua, who built BBR directly into Rinetd so that it can be set up quite easily. Because the configuration is so simple, I originally did not plan to write a one-click setup script for it (@linhua already provides one: https://github.com/linhua55/lkl_study). But since many people have failed with the haproxy approach, and the scripts available online only support Ubuntu 16 and CentOS 7 or later, I decided to write a general-purpose rinetd-bbr one-click script after all.
PS: the script is still being written and will be published later. For now, here is the manual setup method.
Manual Setup
Only 64-bit systems are supported.
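You can confirm this first with a quick check (a minimal sketch; on a supported 64-bit x86 system uname reports x86_64):
uname -m    # should print x86_64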
1. Download the binary to /usr/bin/rinetd-bbr
wget -O /usr/bin/rinetd-bbr https://github.com/linhua55/lkl_study/releases/download/v1.2/rinetd_bbr_powered
2. Make it executable
chmod a+x /usr/bin/rinetd-bbr
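Optionally, verify the download is really a 64-bit executable (a sketch; the exact wording of file's output varies by distribution):
file /usr/bin/rinetd-bbr    # expect something like "ELF 64-bit LSB executable, x86-64"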
3. Create the configuration file
vi /etc/rinetd-bbr.conf
Enter the following:
# bindaddress bindport connectaddress connectport
0.0.0.0 443 0.0.0.0 443
Change 443 to your own port.
Leave both IP addresses as 0.0.0.0.
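If you want to accelerate several ports, my understanding (based on the standard rinetd configuration format) is that each forwarding rule goes on its own line, for example (the second port below is just a placeholder):
0.0.0.0 443 0.0.0.0 443
0.0.0.0 8388 0.0.0.0 8388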
4. Find the network interface name
# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: venet0: <BROADCAST,POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/void
inet 127.0.0.2/32 scope host venet0
inet 10.10.10.10/32 brd 10.10.10.10 scope global venet0:0
Look for the interface that holds your public IP (in this example, the public IP is 10.10.10.10); with output like the above, the interface name is venet0:0, not venet0.
On OpenVZ, it is almost always venet0:0.
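If the output is long, a one-line filter helps (a sketch using the example IP 10.10.10.10 from above; substitute your own public IP). The label near the end of the matching line, e.g. venet0:0, is the interface name to use:
ip -o -4 addr show | grep '10.10.10.10'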
5. Start it
/usr/bin/rinetd-bbr -f -c /etc/rinetd-bbr.conf raw venet0:0 &
To start it on boot:
sudo vi /etc/rc.local
Add the following line before exit 0:
/usr/bin/rinetd-bbr -f -c /etc/rinetd-bbr.conf raw venet0:0 &
Note: change the interface at the end of the command to the one you found above, and keep the trailing & so it runs in the background.
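On distributions where /etc/rc.local is not enabled by default, a systemd unit is an alternative. This is only a sketch (the unit name rinetd-bbr.service is my own choice, not from the original instructions); adjust venet0:0 to your interface. Under systemd the trailing & is not needed, since -f keeps rinetd-bbr in the foreground:
cat > /etc/systemd/system/rinetd-bbr.service <<'EOF'
[Unit]
Description=rinetd-bbr forwarder (BBR via LKL)
After=network.target

[Service]
ExecStart=/usr/bin/rinetd-bbr -f -c /etc/rinetd-bbr.conf raw venet0:0
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now rinetd-bbr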
Verification
Normal output looks like this:
[ 0.000000] Linux version 4.10.0+ (root@gcc) (gcc version 4.9.4 (Ubuntu 4.9.4-2ubuntu1~14.04.1) ) #1 Mon Jul 31 04:50:50 UTC 2017
[ 0.000000] bootmem address range: 0x7f2acc000000 - 0x7f2acffff000
[ 0.000000] Built 1 zonelists in Zone order, mobility grouping on. Total pages: 16159
[ 0.000000] Kernel command line: virtio_mmio.device=268@0x1000000:1
[ 0.000000] PID hash table entries: 256 (order: -1, 2048 bytes)
[ 0.000000] Dentry cache hash table entries: 8192 (order: 4, 65536 bytes)
[ 0.000000] Inode-cache hash table entries: 4096 (order: 3, 32768 bytes)
[ 0.000000] Memory available: 64492k/0k RAM
[ 0.000000] SLUB: HWalign=32, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[ 0.000000] NR_IRQS:4096
[ 0.000000] lkl: irqs initialized
[ 0.000000] clocksource: lkl: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[ 0.000001] lkl: time and timers initialized (irq2)
[ 0.000003] pid_max: default: 4096 minimum: 301
[ 0.000021] Mount-cache hash table entries: 512 (order: 0, 4096 bytes)
[ 0.000023] Mountpoint-cache hash table entries: 512 (order: 0, 4096 bytes)
[ 0.009053] console [lkl_console0] enabled
[ 0.009056] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
[ 0.009128] NET: Registered protocol family 16
[ 0.009265] clocksource: Switched to clocksource lkl
[ 0.009324] NET: Registered protocol family 2
[ 0.009418] TCP established hash table entries: 512 (order: 0, 4096 bytes)
[ 0.009421] TCP bind hash table entries: 512 (order: 0, 4096 bytes)
[ 0.009503] TCP: Hash tables configured (established 512 bind 512)
[ 0.009971] UDP hash table entries: 128 (order: 0, 4096 bytes)
[ 0.009976] UDP-Lite hash table entries: 128 (order: 0, 4096 bytes)
[ 0.010060] virtio-mmio: Registering device virtio-mmio.0 at 0x1000000-0x100010b, IRQ 1.
[ 0.010186] workingset: timestamp_bits=62 max_order=14 bucket_order=0
[ 0.010203] virtio-mmio virtio-mmio.0: Failed to enable 64-bit or 32-bit DMA. Trying to continue, but this might not work.
[ 0.010350] NET: Registered protocol family 10
[ 0.010849] Segment Routing with IPv6
[ 0.010859] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[ 0.010993] Warning: unable to open an initial console.
[ 0.011006] This architecture does not have kernel memory protection.
[ 2.169284] random: fast init done
Check the iptables rules:
# iptables -t raw -nL
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 /* LKL_RAW */
DROP tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:443 /* LKL_RAW */
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
If both rules are present, it is working.
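Two quick checks (a sketch) to confirm both the process and the raw-table rules are in place:
pgrep -a rinetd-bbr                              # the forwarder process should be listed
iptables -t raw -nL PREROUTING | grep LKL_RAW    # should show the ACCEPT and DROP rules for your port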