1.Introduction
The API gateway is an architectural pattern that emerged alongside microservices. When a large monolithic ("all in one") application is split into many microservices that are maintained and deployed independently, the number of APIs grows rapidly and becomes increasingly hard to manage, so publishing and managing APIs through an API gateway has become a common practice. In general, an API gateway is the traffic entry point sitting between external requests and internal services, providing common capabilities such as protocol translation, authentication, rate limiting, parameter validation, and monitoring.
This article introduces Kong, an open-source API gateway and microservice management tool. Built on Nginx and lua-nginx-module (specifically OpenResty), Kong has a pluggable architecture that makes it both powerful and flexible.
2.Key Concepts
- Service: a Kong entity that represents an external upstream API or microservice.
- Route: a Kong entity that defines how downstream requests are mapped to an upstream service.
- Consumer: a Kong entity that represents a developer or machine consuming the API. When using Kong, a Consumer only communicates with Kong.
- Plugin: plugins are actions that Kong executes before and after forwarding a request to the upstream API. Kong ships with a very powerful set of plugins in its plugin library.
- Certificate: a certificate object represents a public certificate/private key pair for an SSL certificate.
- SNI: an SNI object represents a many-to-one mapping of hostnames to a certificate; that is, a certificate object can have many hostnames associated with it.
- Upstream: the upstream service, i.e. the API or service behind Kong that client requests are forwarded to. The upstream object represents a virtual hostname and can be used to load balance incoming requests over multiple services (targets).
- Target: a target is an IP address/hostname with a port that identifies an instance of a backend service. Every upstream can have many targets, and targets can be added dynamically; changes take effect on the fly.
- Admin API: a RESTful API for managing Kong configuration, endpoints, consumers, plugins, and so on (see the sketch after this list for how these objects fit together).
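To make the relationships concrete, here is a minimal sketch of how these objects chain together through the Admin API. The names demo-service, demo-route, demo-user, demo-key and the upstream URL are placeholders for illustration only; they are not entities created later in this article.
# Hypothetical example: create a Service that points at an upstream API
curl -X POST http://localhost:8001/services \
--data 'name=demo-service' \
--data 'url=http://upstream.example.com/api'
# Attach a Route so Kong knows which incoming requests map to that Service
curl -X POST http://localhost:8001/services/demo-service/routes \
--data 'name=demo-route' \
--data 'paths[]=/api'
# Enable a Plugin (here key-auth) on the Service
curl -X POST http://localhost:8001/services/demo-service/plugins \
--data 'name=key-auth'
# Create a Consumer and give it a credential that the plugin will check
curl -X POST http://localhost:8001/consumers --data 'username=demo-user'
curl -X POST http://localhost:8001/consumers/demo-user/key-auth --data 'key=demo-key'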
The diagram below shows how Kong differs from a traditional architecture and helps explain why Kong exists: instead of every service re-implementing common concerns, Kong centralizes capabilities such as authentication, monitoring, logging, security auditing, ACL, caching, rate limiting, serverless functions, and so on.
3.Setup
The official documentation provides detailed installation instructions for many environments. Here we install with Docker (installing Docker itself is out of scope):
#1.create docker network
$ docker network create kong-net
#2.run PostgreSQL database
$ docker run -d --name kong-database \
--network=kong-net \
-p 5432:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_DB=kong" \
postgres:9.6
#3.prepare database
$ docker run --rm \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
kong:latest kong migrations bootstrap
Unable to find image 'kong:latest' locally
latest: Pulling from library/kong
59265c40e257: Pull complete
6389eff8e6ff: Pull complete
f58488256be6: Pull complete
Digest: sha256:f7ed033bb9955da0fcefa034d07fee324cad6d01c12ebf54268dfe825ba2e92c
Status: Downloaded newer image for kong:latest
bootstrapping database...
migrating core on database 'kong'...
core migrated up to: 000_base (executed)
core migrated up to: 001_14_to_15 (executed)
core migrated up to: 002_15_to_1 (executed)
migrating oauth2 on database 'kong'...
oauth2 migrated up to: 000_base_oauth2 (executed)
oauth2 migrated up to: 001_14_to_15 (executed)
oauth2 migrated up to: 002_15_to_10 (executed)
migrating acl on database 'kong'...
acl migrated up to: 000_base_acl (executed)
acl migrated up to: 001_14_to_15 (executed)
migrating jwt on database 'kong'...
jwt migrated up to: 000_base_jwt (executed)
jwt migrated up to: 001_14_to_15 (executed)
migrating basic-auth on database 'kong'...
basic-auth migrated up to: 000_base_basic_auth (executed)
basic-auth migrated up to: 001_14_to_15 (executed)
migrating key-auth on database 'kong'...
key-auth migrated up to: 000_base_key_auth (executed)
key-auth migrated up to: 001_14_to_15 (executed)
migrating rate-limiting on database 'kong'...
rate-limiting migrated up to: 000_base_rate_limiting (executed)
rate-limiting migrated up to: 001_14_to_15 (executed)
rate-limiting migrated up to: 002_15_to_10 (executed)
migrating hmac-auth on database 'kong'...
hmac-auth migrated up to: 000_base_hmac_auth (executed)
hmac-auth migrated up to: 001_14_to_15 (executed)
migrating response-ratelimiting on database 'kong'...
response-ratelimiting migrated up to: 000_base_response_rate_limiting (executed)
response-ratelimiting migrated up to: 001_14_to_15 (executed)
response-ratelimiting migrated up to: 002_15_to_10 (executed)
22 migrations processed
22 executed
database is up-to-date
#4.start Kong
$ docker run -d --name kong \
--network=kong-net \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong:latest
999a5cf1db1a8c23ca870933b73407d7ae5f0fd2d9a895a78627a9c27e08045c
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
999a5cf1db1a kong:latest "/docker-entrypoint.…" 8 seconds ago Up 7 seconds 0.0.0.0:8000-8001->8000-8001/tcp, 0.0.0.0:8443-8444->8443-8444/tcp kong
ecb50c2f7307 postgres:9.6 "docker-entrypoint.s…" About an hour ago Up About an hour 0.0.0.0:5432->5432/tcp kong-database
Once the containers are up, try:
curl -i http://localhost:8001/
which returns the following:
{
"plugins": {
"enabled_in_cluster": [ ],
"available_on_server": {
"response-transformer": true,
"oauth2": true,
"acl": true,
"correlation-id": true,
"pre-function": true,
"jwt": true,
"cors": true,
"ip-restriction": true,
"basic-auth": true,
"key-auth": true,
"rate-limiting": true,
"request-transformer": true,
"http-log": true,
"file-log": true,
"hmac-auth": true,
"ldap-auth": true,
"datadog": true,
"tcp-log": true,
"zipkin": true,
"post-function": true,
"request-size-limiting": true,
"bot-detection": true,
"syslog": true,
"loggly": true,
"azure-functions": true,
"udp-log": true,
"response-ratelimiting": true,
"aws-lambda": true,
"statsd": true,
"prometheus": true,
"request-termination": true
}
},
"tagline": "Welcome to kong",
"configuration": {
"plugins": [
"bundled"
],
"admin_ssl_enabled": true,
"lua_ssl_verify_depth": 1,
"trusted_ips": { },
"prefix": "/usr/local/kong",
"loaded_plugins": {
"response-transformer": true,
"request-termination": true,
"prometheus": true,
"ip-restriction": true,
"pre-function": true,
"jwt": true,
"cors": true,
"statsd": true,
"basic-auth": true,
"key-auth": true,
"ldap-auth": true,
"aws-lambda": true,
"http-log": true,
"response-ratelimiting": true,
"hmac-auth": true,
"request-size-limiting": true,
"datadog": true,
"tcp-log": true,
"zipkin": true,
"post-function": true,
"bot-detection": true,
"acl": true,
"loggly": true,
"syslog": true,
"azure-functions": true,
"udp-log": true,
"file-log": true,
"request-transformer": true,
"correlation-id": true,
"rate-limiting": true,
"oauth2": true
},
"cassandra_username": "kong",
"ssl_cert_key": "/usr/local/kong/ssl/kong-default.key",
"admin_ssl_cert_key": "/usr/local/kong/ssl/admin-kong-default.key",
"dns_resolver": { },
"pg_user": "kong",
"mem_cache_size": "128m",
"ssl_ciphers": "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256",
"nginx_admin_directives": { },
"nginx_http_directives": [
{
"value": "prometheus_metrics 5m",
"name": "lua_shared_dict"
}
],
"pg_host": "kong-database",
"nginx_acc_logs": "/usr/local/kong/logs/access.log",
"proxy_listen": [
"0.0.0.0:8000",
"0.0.0.0:8443 ssl"
],
"client_ssl_cert_default": "/usr/local/kong/ssl/kong-default.crt",
"ssl_cert_key_default": "/usr/local/kong/ssl/kong-default.key",
"db_update_frequency": 5,
"db_update_propagation": 0,
"stream_listen": [
"off"
],
"nginx_err_logs": "/usr/local/kong/logs/error.log",
"cassandra_port": 9042,
"dns_order": [
"LAST",
"SRV",
"A",
"CNAME"
],
"dns_error_ttl": 1,
"headers": [
"server_tokens",
"latency_tokens"
],
"cassandra_lb_policy": "RequestRoundRobin",
"nginx_optimizations": true,
"pg_timeout": 5000,
"database": "postgres",
"pg_database": "kong",
"nginx_worker_processes": "auto",
"lua_package_cpath": "",
"admin_acc_logs": "/usr/local/kong/logs/admin_access.log",
"lua_package_path": "./?.lua;./?/init.lua;",
"nginx_pid": "/usr/local/kong/pids/nginx.pid",
"upstream_keepalive": 60,
"client_ssl": false,
"admin_access_log": "/dev/stdout",
"cassandra_data_centers": [
"dc1:2",
"dc2:3"
],
"cassandra_ssl": false,
"proxy_listeners": [
{
"transparent": false,
"ssl": false,
"ip": "0.0.0.0",
"proxy_protocol": false,
"port": 8000,
"http2": false,
"listener": "0.0.0.0:8000"
},
{
"transparent": false,
"ssl": true,
"ip": "0.0.0.0",
"proxy_protocol": false,
"port": 8443,
"http2": false,
"listener": "0.0.0.0:8443 ssl"
}
],
"proxy_ssl_enabled": true,
"client_max_body_size": "0",
"proxy_error_log": "/dev/stderr",
"enabled_headers": {
"latency_tokens": true,
"X-Kong-Proxy-Latency": true,
"Via": true,
"server_tokens": true,
"Server": true,
"X-Kong-Upstream-Latency": true,
"X-Kong-Upstream-Status": false
},
"dns_stale_ttl": 4,
"lua_socket_pool_size": 30,
"db_resurrect_ttl": 30,
"origins": { },
"cassandra_consistency": "ONE",
"db_cache_ttl": 0,
"admin_error_log": "/dev/stderr",
"pg_ssl_verify": false,
"dns_not_found_ttl": 30,
"pg_ssl": false,
"nginx_daemon": "off",
"nginx_kong_stream_conf": "/usr/local/kong/nginx-kong-stream.conf",
"cassandra_repl_strategy": "SimpleStrategy",
"error_default_type": "text/plain",
"dns_no_sync": false,
"nginx_proxy_directives": { },
"proxy_access_log": "/dev/stdout",
"nginx_kong_conf": "/usr/local/kong/nginx-kong.conf",
"cassandra_schema_consensus_timeout": 10000,
"dns_hostsfile": "/etc/hosts",
"admin_listeners": [
{
"transparent": false,
"ssl": false,
"ip": "0.0.0.0",
"proxy_protocol": false,
"port": 8001,
"http2": false,
"listener": "0.0.0.0:8001"
},
{
"transparent": false,
"ssl": true,
"ip": "0.0.0.0",
"proxy_protocol": false,
"port": 8444,
"http2": false,
"listener": "0.0.0.0:8444 ssl"
}
],
"ssl_cipher_suite": "modern",
"ssl_cert": "/usr/local/kong/ssl/kong-default.crt",
"cassandra_timeout": 5000,
"admin_ssl_cert_key_default": "/usr/local/kong/ssl/admin-kong-default.key",
"cassandra_ssl_verify": false,
"cassandra_contact_points": [
"kong-database"
],
"real_ip_header": "X-Real-IP",
"real_ip_recursive": "off",
"cassandra_repl_factor": 1,
"client_ssl_cert_key_default": "/usr/local/kong/ssl/kong-default.key",
"admin_ssl_cert": "/usr/local/kong/ssl/admin-kong-default.crt",
"anonymous_reports": true,
"log_level": "notice",
"kong_env": "/usr/local/kong/.kong_env",
"pg_port": 5432,
"admin_ssl_cert_default": "/usr/local/kong/ssl/admin-kong-default.crt",
"client_body_buffer_size": "8k",
"ssl_preread_enabled": true,
"ssl_cert_csr_default": "/usr/local/kong/ssl/kong-default.csr",
"stream_listeners": { },
"cassandra_keyspace": "kong",
"ssl_cert_default": "/usr/local/kong/ssl/kong-default.crt",
"nginx_conf": "/usr/local/kong/nginx.conf",
"admin_listen": [
"0.0.0.0:8001",
"0.0.0.0:8444 ssl"
]
},
"version": "1.0.3",
"node_id": "3ccef799-3037-4a8f-8ccd-2e60326b4444",
"lua_version": "LuaJIT 2.1.0-beta3",
"prng_seeds": {
"pid: 36": 229762112224,
"pid: 37": 131951181922,
"pid: 1": 136391662351
},
"timers": {
"pending": 5,
"running": 0
},
"hostname": "999a5cf1db1a"
}
The ports exposed above are listed below, followed by a quick sanity check:
- :8000 on which Kong listens for incoming HTTP traffic from your clients, and forwards it to your upstream services.
- :8443 on which Kong listens for incoming HTTPS traffic. This port has a similar behavior as the :8000 port, except that it expects HTTPS traffic only. This port can be disabled via the configuration file.
- :8001 on which the Admin API used to configure Kong listens.
- :8444 on which the Admin API listens for HTTPS traffic.
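As that sanity check (a sketch that assumes the two containers above are running), you can probe the proxy and Admin ports directly; with no Routes configured yet, the proxy port is expected to answer with a 404 from Kong itself, and the Admin API exposes a /status endpoint for node and database health:
# Proxy port: no routes configured yet, so Kong itself should answer with a 404
curl -i http://localhost:8000/
# Admin API: basic health information about this Kong node and its database
curl -i http://localhost:8001/status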
4.API Management
With Kong configured locally, let's get a feel for its features. We start from a simple API service written earlier, a CRUD service for flavors. Taking the flavor query as an example, we will add GET /flavors/detail to Kong.
Our API server address is http://127.0.0.1:8080/flavors/detail, so we have:
- route path: /flavors/detail
- service host: http://127.0.0.1:8080
4.1 Add a service
curl -i -X POST \
--url http://localhost:8001/services/ \
--data 'name=example-flavors' \
--data 'url=http://127.0.0.1:8080/flavors/detail'
The response:
HTTP/1.1 201 Created
Date: Wed, 27 Feb 2019 06:08:25 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Access-Control-Allow-Origin: *
Server: kong/1.0.3
Content-Length: 273
{
"host": "127.0.0.1",
"created_at": 1551247705,
"connect_timeout": 60000,
"id": "abba6d52-b239-4b8f-ad11-1e7389d4cf71",
"protocol": "http",
"name": "example-flavors",
"read_timeout": 60000,
"port": 8080,
"path": "/flavors/detail",
"updated_at": 1551247705,
"retries": 5,
"write_timeout": 60000
}
4.2 List current services
curl -i -X GET \
--url http://localhost:8001/services/
The response:
HTTP/1.1 200 OK
Date: Wed, 27 Feb 2019 06:11:07 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Access-Control-Allow-Origin: *
Server: kong/1.0.3
Content-Length: 296
{
"next": null,
"data": [
{
"host": "127.0.0.1",
"created_at": 1551247705,
"connect_timeout": 60000,
"id": "abba6d52-b239-4b8f-ad11-1e7389d4cf71",
"protocol": "http",
"name": "example-flavors",
"read_timeout": 60000,
"port": 8080,
"path": "/flavors/detail",
"updated_at": 1551247705,
"retries": 5,
"write_timeout": 60000
}
]
}
As expected, the only service listed is the one we just added.
4.3 Add a route to service
Now that we have a service, let's add a route that forwards requests to it:
curl -i -X POST \
--url http://localhost:8001/services/example-flavors/routes \
--data 'hosts[]=hb.ctyun.com' \
--data 'paths[]=/flavors/detail' \
--data 'name=flavor-detail'
The response:
HTTP/1.1 201 Created
Date: Wed, 27 Feb 2019 06:24:00 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Access-Control-Allow-Origin: *
Server: kong/1.0.3
Content-Length: 377
{
"created_at": 1551248640,
"methods": null,
"id": "11dbb4a1-7452-4d40-a45a-de3f3cad5275",
"service": {
"id": "abba6d52-b239-4b8f-ad11-1e7389d4cf71"
},
"name": "flavor-detail",
"hosts": [
"hb.ctyun.com"
],
"updated_at": 1551248640,
"preserve_host": false,
"regex_priority": 0,
"paths": [
"/flavors/detail"
],
"sources": null,
"destinations": null,
"snis": null,
"protocols": [
"http",
"https"
],
"strip_path": true
}
Previously we fetched the flavors list with:
curl -X GET http://localhost:8080/flavors/detail
Now we can access it through Kong instead. Note that we must set the Host header to the value configured on the route:
curl -X GET http://localhost:8000/flavors/detail -H 'Host:hb.ctyun.com'
This time we get an error; the Kong log shows:
172.18.0.1 - - [27/Feb/2019:06:43:17 +0000] "GET /flavors/detail HTTP/1.1" 502 69 "-" "curl/7.54.0"
2019/02/27 06:43:17 [error] 36#0: *35879 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: kong, request: "GET /flavors/detail HTTP/1.1", upstream: "http://127.0.0.1:8080/flavors/detail", host: "hb.ctyun.com"
Kong matched the route and tried to forward the request, but because Kong runs inside a Docker container, 127.0.0.1 refers to the Kong container itself rather than the host machine, so the upstream connection was refused.
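One possible workaround, assuming Docker Desktop where the special hostname host.docker.internal resolves to the host machine, would be to repoint the service at that hostname rather than 127.0.0.1; this is only a sketch under that assumption:
# Sketch (assumes Docker Desktop): make the example-flavors service target the Docker host
curl -i -X PATCH \
--url http://localhost:8001/services/example-flavors \
--data 'host=host.docker.internal'
In this walkthrough we take a simpler route and run the upstream on the same Docker network as Kong, as described next.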
We re-create the service and route, and use kennethreitz/httpbin as the upstream to verify the setup:
# Run an httpbin container, mapping local port 8080 to the container's port 80
docker run -d --name simple-web-server \
--network kong-net \
-p 8080:80 kennethreitz/httpbin
# Create a service named demo
curl -i -X POST \
--url http://localhost:8001/services/ \
--data 'name=demo'\
--data 'url=http://simple-web-server/get'
HTTP/1.1 201 Created
Date: Wed, 27 Feb 2019 07:51:45 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Access-Control-Allow-Origin: *
Server: kong/1.0.3
Content-Length: 256
{
"host": "simple-web-server",
"created_at": 1551253905,
"connect_timeout": 60000,
"id": "978de8a6-6767-4741-baca-a25c9a131f9d",
"protocol": "http",
"name": "demo",
"read_timeout": 60000,
"port": 80,
"path": "/get",
"updated_at": 1551253905,
"retries": 5,
"write_timeout": 60000
}
# Configure a route for the demo service
curl -i -X POST \
--url http://localhost:8001/services/demo/routes \
--data 'hosts[]=api.ctyun.com' \
--data 'paths[]=/get' \
--data 'name=demo-get'
HTTP/1.1 201 Created
Date: Wed, 27 Feb 2019 07:52:40 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Access-Control-Allow-Origin: *
Server: kong/1.0.3
Content-Length: 361
{
"created_at": 1551253960,
"methods": null,
"id": "06d6754e-a4ae-4be6-9b87-b64ccfe6c920",
"service": {
"id": "978de8a6-6767-4741-baca-a25c9a131f9d"
},
"name": "demo-get",
"hosts": [
"api.ctyun.com"
],
"updated_at": 1551253960,
"preserve_host": false,
"regex_priority": 0,
"paths": [
"/get"
],
"sources": null,
"destinations": null,
"snis": null,
"protocols": [
"http",
"https"
],
"strip_path": true
}
Then we try accessing Kong and having it forward the request to httpbin:
curl -i -X GET http://localhost:8000/get -H 'Host:api.ctyun.com'
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 266
Connection: keep-alive
Server: gunicorn/19.9.0
Date: Wed, 27 Feb 2019 07:56:50 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
X-Kong-Upstream-Latency: 9
X-Kong-Proxy-Latency: 84
Via: kong/1.0.3
{
"args": {},
"headers": {
"Accept": "*/*",
"Connection": "keep-alive",
"Host": "simple-web-server",
"User-Agent": "curl/7.54.0",
"X-Forwarded-Host": "api.ctyun.com"
},
"origin": "172.18.0.1",
"url": "http://api.ctyun.com/get"
}
At this point, requests are matched by source host and route, forwarded to the designated target host, and the response comes back: the basic API proxying flow is complete.
4.4 Plugins
Kong offers a very rich set of plugins, all of which can be found on the Kong Hub. Here we simply configure a Key Authentication plugin for our service.
Enable the key-auth plugin on the demo service:
curl -X POST http://localhost:8001/services/demo/plugins \
--data "name=key-auth"
{
"created_at": 1551256029,
"config": {
"key_names": [
"apikey"
],
"run_on_preflight": true,
"anonymous": null,
"hide_credentials": false,
"key_in_body": false
},
"id": "4eaa000f-0fa2-4b3e-8c13-2db4c6b7ce49",
"service": {
"id": "978de8a6-6767-4741-baca-a25c9a131f9d"
},
"enabled": true,
"run_on": "first",
"consumer": null,
"route": null,
"name": "key-auth"
}
Plugins can also be enabled on a specific route instead, for example:
curl -X POST http://<host>:8001/routes/{route_id}/plugins \
--data "name=key-auth"
We won't go into that here.
With the plugin enabled, accessing the simple-web-server as before now gives:
curl -i -X GET http://localhost:8000/get -H 'Host:api.ctyun.com'
HTTP/1.1 401 Unauthorized
Date: Wed, 27 Feb 2019 08:27:13 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
WWW-Authenticate: Key realm="kong"
Content-Length: 41
Server: kong/1.0.3
{"message":"No API key found in request"}
The key-auth plugin is now enabled, but how do we call the API once it is on? Authentication plugins cannot be used without a Consumer. Creating a Consumer and using it with the plugin is covered in more detail in 4.5 Add Consumers.
4.5 Add Consumers
Add a consumer; specifying either username or custom_id is enough:
curl -i -X POST \
--url http://localhost:8001/consumers/ \
--data "username=<USERNAME>" \
--data "custom_id=<CUSTOM_ID>"
For example:
curl -i -X POST \
--url http://localhost:8001/consumers/ \
--data "username=elbarco"
HTTP/1.1 201 Created
Date: Wed, 27 Feb 2019 08:47:50 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Access-Control-Allow-Origin: *
Server: kong/1.0.3
Content-Length: 107
{
"custom_id": null,
"created_at": 1551257270,
"username": "elbarco",
"id": "738627ae-57e9-4b20-9d1d-fb12998d5296"
}
Provision a key for this consumer:
curl -i -X POST \
--url http://localhost:8001/consumers/elbarco/key-auth/ \
--data 'key=hola-elbarco'
HTTP/1.1 201 Created
Date: Wed, 27 Feb 2019 09:12:12 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
Access-Control-Allow-Origin: *
Server: kong/1.0.3
Content-Length: 147
{
"key": "hola-elbarco",
"created_at": 1551258732,
"consumer": {
"id": "738627ae-57e9-4b20-9d1d-fb12998d5296"
},
"id": "b9cb021d-cb37-4841-b172-40ff2dcacb5e"
}
Now we can access the simple-web-server with credentials; there are two ways to pass the key:
curl http://kong:8000/{proxy path}?apikey=<some_key>
curl http://kong:8000/{proxy path} \
-H 'apikey: <some_key>'
Either one works; here we pick one:
curl -i -X GET http://localhost:8000/get -H 'Host:api.ctyun.com' -H 'apikey:hola-elbarco'
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 398
Connection: keep-alive
Server: gunicorn/19.9.0
Date: Wed, 27 Feb 2019 09:19:31 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
X-Kong-Upstream-Latency: 68
X-Kong-Proxy-Latency: 26
Via: kong/1.0.3
{
"args": {},
"headers": {
"Accept": "*/*",
"Apikey": "hola-elbarco",
"Connection": "keep-alive",
"Host": "simple-web-server",
"User-Agent": "curl/7.54.0",
"X-Consumer-Id": "738627ae-57e9-4b20-9d1d-fb12998d5296",
"X-Consumer-Username": "elbarco",
"X-Forwarded-Host": "api.ctyun.com"
},
"origin": "172.18.0.1",
"url": "http://api.ctyun.com/get"
}
4.6 Rate limiting
In addition, let's take a quick look at the rate-limiting plugin: Rate Limiting.
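A minimal sketch of enabling it on the demo service, with a limit of 5 requests per minute and the local counting policy chosen purely for illustration:
# Allow at most 5 requests per minute, counted locally on this Kong node
curl -i -X POST http://localhost:8001/services/demo/plugins \
--data "name=rate-limiting" \
--data "config.minute=5" \
--data "config.policy=local"
Proxied responses should then carry X-RateLimit-Limit-Minute and X-RateLimit-Remaining-Minute headers, and once the limit is exhausted Kong answers with HTTP 429 instead of forwarding the request.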
5.Advanced Features
5.1 Load balancing
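Kong balances load by pointing a service's host at an upstream object and registering targets on it. A minimal sketch, assuming a second httpbin container named simple-web-server-2 on the kong-net network (the upstream name demo-upstream and the weights are illustrative):
# Create an upstream: a virtual hostname that load balances across its targets
curl -i -X POST http://localhost:8001/upstreams \
--data "name=demo-upstream"
# Register two targets; traffic is split according to their weights
curl -i -X POST http://localhost:8001/upstreams/demo-upstream/targets \
--data "target=simple-web-server:80" \
--data "weight=100"
curl -i -X POST http://localhost:8001/upstreams/demo-upstream/targets \
--data "target=simple-web-server-2:80" \
--data "weight=100"
# Point the demo service at the upstream by its virtual hostname
curl -i -X PATCH http://localhost:8001/services/demo \
--data "host=demo-upstream"
Requests that match the demo route should then be spread across both targets (weighted round-robin by default).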
6.Kong Dashboard (From community)
Kong's commercial edition provides a visual management UI called Kong Manager, which is very powerful. However, trying it requires an application, so we turned to the community for an alternative and found Kong Dashboard on GitHub. It can be installed via npm or Docker; here we use Docker and see how it works:
# Start Kong Dashboard
docker run --rm -p 9090:8080 pgbi/kong-dashboard start --kong-url http://localhost:8001
# Start Kong Dashboard on a custom port
docker run --rm -p [port]:8080 pgbi/kong-dashboard start --kong-url http://kong:8001
# Start Kong Dashboard with basic auth
docker run --rm -p 8080:8080 pgbi/kong-dashboard start \
--kong-url http://kong:8001 --basic-auth user1=password1 user2=password2
# See full list of start options
docker run --rm -p 8080:8080 pgbi/kong-dashboard start --help
# First attempt: the dashboard container cannot reach the host's localhost, so it never connects
docker run --rm --name kong-dashboard -p 9090:8080 pgbi/kong-dashboard start --kong-url http://localhost:8001
# Attach the dashboard to kong-net and point it at the kong container instead
docker run --rm --network kong-net --name kong-dashboard -p 9090:8080 pgbi/kong-dashboard start --kong-url http://kong:8001
Connecting to Kong on http://kong:8001 ...
This version of Kong dashboard doesn't support Kong v0.15 and higher.
We are limited by the Kong version:
docker container exec 999a5cf1db1a kong version
1.0.3
so we cannot hook kong-dashboard up to our instance; we will look into alternatives later.
7.Summary
Kong's model is quite clear: from service, route, and plugin to upstream and consumer, it is very general, and thanks to plugins it is also highly extensible. Starting from our actual business needs, this model is well worth borrowing from, beginning with the core features.