| tag | f3c1b2acc125fa69f5f4890e22156d06a3b6fd43 | |
|---|---|---|
| tagger | Yuansheng Wang &lt;membphis@gmail.com&gt; | Tue Jun 04 08:12:03 2019 +0800 |
| object | a7697c2d4bf90a62209673437d6d742dae55f422 | |

| commit | a7697c2d4bf90a62209673437d6d742dae55f422 | |
|---|---|---|
| author | Yuansheng Wang &lt;membphis@gmail.com&gt; | Tue Jun 04 08:11:39 2019 +0800 |
| committer | Yuansheng Wang &lt;membphis@gmail.com&gt; | Tue Jun 04 08:11:39 2019 +0800 |
| tree | 118525a5b7ffd954c079dd0b0fc693506b4f83bf | |
| parent | 7090e02d4b7a3fb77ac017d285d83902b43a42b8 | |
CLI: Initialized etcd and `nginx.conf` at `apisix` server startup.
APISIX is a cloud-native microservices API gateway, delivering ultimate performance and security in an open-source, scalable platform for all your APIs and microservices.
Install OpenResty:

```shell
sudo yum install yum-utils
sudo yum-config-manager --add-repo https://openresty.org/package/centos/openresty.repo
sudo yum install openresty
```

Install etcd:

```shell
sudo yum install etcd
```

Install the APISIX RPM:

```shell
wget http://39.97.63.215/download/apisix-0.1-2.noarch.rpm
sudo rpm -ivh apisix-0.1-2.noarch.rpm
```
If no error occurred, APISIX is now installed in `/usr/share/lua/5.1/apisix`.
Now you can try APISIX; go to the Quickstart.
Now that OpenResty and etcd are installed, we can use LuaRocks to install APISIX's Lua sources:

```shell
luarocks install apisix
```
If you want to know more details: LuaRocks will clone and compile APISIX's dependencies, one of which falls back to reading `ngx.var.*` if its C module is not found.

Start etcd:

```shell
systemctl start etcd
```
Initialize the APISIX directories in etcd:

```shell
curl http://127.0.0.1:2379/v2/keys/apisix/routes -X PUT -d dir=true
curl http://127.0.0.1:2379/v2/keys/apisix/upstreams -X PUT -d dir=true
curl http://127.0.0.1:2379/v2/keys/apisix/services -X PUT -d dir=true
```
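Each `-d dir=true` PUT creates one configuration directory under the `/apisix` prefix of etcd's v2 keys API, so the three URLs follow a single pattern. A trivial Python helper, purely to make that layout explicit (hypothetical, not part of APISIX):

```python
ETCD = "http://127.0.0.1:2379"

def etcd_dir_url(category):
    """Build the etcd v2 keys URL for one APISIX config directory."""
    return f"{ETCD}/v2/keys/apisix/{category}"

# The three curl commands above target exactly these URLs:
for category in ("routes", "upstreams", "services"):
    print("PUT", etcd_dir_url(category), "-d dir=true")
```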
Start APISIX with OpenResty:

```shell
sudo openresty -p /usr/share/lua/5.1/apisix -c /usr/share/lua/5.1/apisix/conf/nginx.conf
```
For the convenience of testing, we set a limit of at most 2 requests in 60 seconds, returning 503 once the threshold is exceeded:
```shell
curl http://127.0.0.1:2379/v2/keys/apisix/routes/1 -X PUT -d value='
{
    "methods": ["GET"],
    "uri": "/index.html",
    "id": 1,
    "plugin_config": {
        "limit-count": {
            "count": 2,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        }
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "39.97.63.215:80": 1
        }
    }
}'
```
```shell
$ curl -i http://127.0.0.1:9080/index.html
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 13175
Connection: keep-alive
X-RateLimit-Limit: 2
X-RateLimit-Remaining: 1
Server: APISIX web server
Date: Mon, 03 Jun 2019 09:38:32 GMT
Last-Modified: Wed, 24 Apr 2019 00:14:17 GMT
ETag: "5cbfaa59-3377"
Accept-Ranges: bytes
...
```
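The `limit-count` configuration above behaves like a fixed-window counter keyed, here, by the client address: within each 60-second window the first 2 requests pass, and further requests get the `rejected_code`. A minimal Python sketch of that logic (an illustration, not APISIX's actual implementation):

```python
import time

class LimitCount:
    """Fixed-window limiter: at most `count` requests per `time_window` seconds per key."""

    def __init__(self, count, time_window, rejected_code=503):
        self.count = count
        self.time_window = time_window
        self.rejected_code = rejected_code
        self.windows = {}  # key -> (window_start, used)

    def check(self, key, now=None):
        """Return (status_code, remaining), mirroring the X-RateLimit-Remaining header."""
        now = time.time() if now is None else now
        start, used = self.windows.get(key, (now, 0))
        if now - start >= self.time_window:   # window expired: start a fresh one
            start, used = now, 0
        if used >= self.count:                # over the limit in this window
            return self.rejected_code, 0
        self.windows[key] = (start, used + 1)
        return 200, self.count - (used + 1)

limiter = LimitCount(count=2, time_window=60)
print(limiter.check("client", now=0))   # (200, 1)
print(limiter.check("client", now=1))   # (200, 0)
print(limiter.check("client", now=2))   # (503, 0)  third request in the window
print(limiter.check("client", now=61))  # (200, 1)  new window, counter reset
```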
n1-highcpu-8 (8 vCPUs, 7.2 GB memory) on Google Cloud
We used only 4 cores to run APISIX, leaving the other 4 cores for the system and for wrk, the HTTP benchmarking tool.

APISIX was used only as a reverse proxy server, with no logging, rate limiting, or other plugins enabled, and the response size was 1 KB.
The x-axis is the number of CPU cores, and the y-axis is QPS.

Note that the latency on the y-axis is in microseconds (μs), not milliseconds.

The flame graph result:
If you want to run the benchmark on your own machine, you should run another NGINX instance listening on port 80 to serve as the upstream.
```shell
curl http://127.0.0.1:2379/v2/keys/apisix/routes/1 -X PUT -d value='
{
    "methods": ["GET"],
    "uri": "/hello",
    "id": 1,
    "plugin_config": {},
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1,
            "127.0.0.2:80": 1
        }
    }
}'
```
Then run wrk:

```shell
wrk -d 60 --latency http://127.0.0.1:9080/hello
```
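The `roundrobin` upstream type distributes requests across the nodes in proportion to their weights (both nodes above carry weight 1, so traffic simply alternates). A small Python sketch of weighted round-robin selection, purely for illustration:

```python
class RoundRobin:
    """Pick upstream nodes in turn, each appearing `weight` times per cycle."""

    def __init__(self, nodes):
        # nodes: {"host:port": weight}, e.g. {"127.0.0.1:80": 1, "127.0.0.2:80": 1}
        self.cycle = [addr for addr, w in sorted(nodes.items()) for _ in range(w)]
        self.i = 0

    def pick(self):
        addr = self.cycle[self.i % len(self.cycle)]
        self.i += 1
        return addr

rr = RoundRobin({"127.0.0.1:80": 1, "127.0.0.2:80": 1})
print([rr.pick() for _ in range(4)])
# → ['127.0.0.1:80', '127.0.0.2:80', '127.0.0.1:80', '127.0.0.2:80']
```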
APISIX was used only as a reverse proxy server, with the rate-limiting (limit-count) and prometheus plugins enabled, and the response size was 1 KB.
The x-axis is the number of CPU cores, and the y-axis is QPS.

Note that the latency on the y-axis is in microseconds (μs), not milliseconds.

The flame graph result:
If you want to run the benchmark on your own machine, you should run another NGINX instance listening on port 80 to serve as the upstream.
```shell
curl http://127.0.0.1:2379/v2/keys/apisix/routes/1 -X PUT -d value='
{
    "methods": ["GET"],
    "uri": "/hello",
    "id": 1,
    "plugin_config": {
        "limit-count": {
            "count": 999999999,
            "time_window": 60,
            "rejected_code": 503,
            "key": "remote_addr"
        },
        "prometheus": {}
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": {
            "127.0.0.1:80": 1,
            "127.0.0.2:80": 1
        }
    }
}'
```
Then run wrk:

```shell
wrk -d 60 --latency http://127.0.0.1:9080/hello
```
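The `prometheus` plugin exposes request counters in the Prometheus text exposition format, which is what the scraper reads. A sketch of rendering one such counter (the metric name and labels are illustrative, not APISIX's actual metric names):

```python
def render_counter(name, help_text, samples):
    """Render a counter metric in the Prometheus text exposition format.

    samples: list of (label_dict, value) pairs.
    """
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

# Hypothetical per-route request totals, as a scraper might see them:
text = render_counter(
    "http_requests_total",
    "Total HTTP requests per route.",
    [({"route": "1", "code": "200"}, 120),
     ({"route": "1", "code": "503"}, 3)],
)
print(text)
```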