Spelling
commit a8728f3d2c (parent fe65e9d22f)
38 changed files with 1120 additions and 1120 deletions
@@ -53,7 +53,7 @@ This guide refers to chapters in the CIS Benchmark. For full explanation you sho
 | ||| |
 | __2.4 Network Configuration__ ||| |
 | 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored)| OK | Ensured by automatic nginx.conf configuration| |
-| 2.4.2 Ensure requests for unknown host names are rejected (Not Scored)| OK | They are not rejected but send to the "default backend" delivering approriate errors (mostly 404)| |
+| 2.4.2 Ensure requests for unknown host names are rejected (Not Scored)| OK | They are not rejected but send to the "default backend" delivering appropriate errors (mostly 404)| |
 | 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored)| ACTION NEEDED| Default is 75s | configure keep-alive to 10 seconds [according to this documentation](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#keep-alive) |
 | 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored)| RISK TO BE ACCEPTED| Not configured, however the nginx default is 60s| Not configurable|
 | ||| |
@@ -68,7 +68,7 @@ This guide refers to chapters in the CIS Benchmark. For full explanation you sho
 | 3.1 Ensure detailed logging is enabled (Not Scored) | OK | nginx ingress has a very detailed log format by default | |
 | 3.2 Ensure access logging is enabled (Scored) | OK | Access log is enabled by default | |
 | 3.3 Ensure error logging is enabled and set to the info logging level (Scored)| OK | Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway | |
-| 3.4 Ensure log files are rotated (Scored) | OBSOLETE | Log file handling is not part of the nginx ingress and should be handled separatly | |
+| 3.4 Ensure log files are rotated (Scored) | OBSOLETE | Log file handling is not part of the nginx ingress and should be handled separately | |
 | 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) | OBSOLETE | See previous answer| |
 | 3.6 Ensure access logs are sent to a remote syslog server (Not Scored)| OBSOLETE | See previous answer| |
 | 3.7 Ensure proxies pass source IP information (Scored)| OK | Headers are set by default | |
@@ -85,8 +85,8 @@ This guide refers to chapters in the CIS Benchmark. For full explanation you sho
 | 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) | ACTION NEEDED | Not enabled | set via [this configuration parameter](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#enable-ocsp) |
 | 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored)| OK | HSTS is enabled by default | |
 | 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored)| ACTION NEEDED / RISK TO BE ACCEPTED | HKPK not enabled by default | If lets encrypt is not used, set correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. If lets encrypt is used, this is complicated, a solution here is yet unknown |
-| 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) | DEPENDS ON BACKEND | Highly dependend on backends, not every backend allows configuring this, can also be mitigated via a service mesh| If backend allows it, [manual is here](https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/)|
-| 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) | DEPENDS ON BACKEND | Highly dependend on backends, not every backend allows configuring this, can also be mitigated via a service mesh| If backend allows it, [see configuration here](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#backend-certificate-authentication) |
+| 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) | DEPENDS ON BACKEND | Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh| If backend allows it, [manual is here](https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/)|
+| 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) | DEPENDS ON BACKEND | Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh| If backend allows it, [see configuration here](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#backend-certificate-authentication) |
 | 4.1.12 Ensure your domain is preloaded (Not Scored) | ACTION NEEDED| Preload is not active by default | Set controller.config.hsts-preload to true|
 | 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored)| OK | Session tickets are disabled by default | |
 | 4.1.14 Ensure HTTP/2.0 is used (Not Scored) | OK | http2 is set by default| |
@@ -98,7 +98,7 @@ This guide refers to chapters in the CIS Benchmark. For full explanation you sho
 | 5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) | OK/ACTION NEEDED | Depends on use case| If required it can be set via config snippet|
 | ||| |
 | __5.2 Request Limits__||| |
-| 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) | ACTION NEEDED| Default timeout is 60s | Set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#client-header-timeout) and respective body aequivalent|
+| 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) | ACTION NEEDED| Default timeout is 60s | Set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#client-header-timeout) and respective body equivalent|
 | 5.2.2 Ensure the maximum request body size is set correctly (Scored)| ACTION NEEDED| Default is 1m| set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#proxy-body-size)|
 | 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) | ACTION NEEDED| Default is 4 8k| Set via [this configuration parameter](https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#large-client-header-buffers)|
 | 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) | OK/ACTION NEEDED| No limit set| Depends on use case, limit can be set via [these annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting)|
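The "ACTION NEEDED" items in the hardening table above all map onto keys in the controller's ConfigMap. As a minimal, hedged sketch of what those remediations could look like (the ConfigMap name and namespace are assumptions and must match the controller's `--configmap` flag):

```yaml
# Hypothetical ConfigMap; name/namespace are assumptions, not from the diff.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumption: must match --configmap
  namespace: ingress-nginx    # assumption
data:
  keep-alive: "10"                     # 2.4.3: keepalive_timeout of 10s or less
  client-header-timeout: "10"          # 5.2.1: client header read timeout
  client-body-timeout: "10"            # 5.2.1: the respective body equivalent
  proxy-body-size: "1m"                # 5.2.2: maximum request body size
  large-client-header-buffers: "4 8k"  # 5.2.3: URI buffer size
```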
@@ -540,7 +540,7 @@
 ### [[Service] Type ExternalName](https://github.com/kubernetes/ingress-nginx/tree/master/test/e2e/servicebackend/service_externalname.go#L37)
 
-- [works with external name set to incomplete fdqn](https://github.com/kubernetes/ingress-nginx/tree/master/test/e2e/servicebackend/service_externalname.go#L40)
+- [works with external name set to incomplete fqdn](https://github.com/kubernetes/ingress-nginx/tree/master/test/e2e/servicebackend/service_externalname.go#L40)
 - [should return 200 for service type=ExternalName without a port defined](https://github.com/kubernetes/ingress-nginx/tree/master/test/e2e/servicebackend/service_externalname.go#L73)
 - [should return 200 for service type=ExternalName with a port defined](https://github.com/kubernetes/ingress-nginx/tree/master/test/e2e/servicebackend/service_externalname.go#L107)
 - [should return status 502 for service type=ExternalName with an invalid host](https://github.com/kubernetes/ingress-nginx/tree/master/test/e2e/servicebackend/service_externalname.go#L148)
@@ -18,7 +18,7 @@ see-also:
 replaces:
   - "/docs/enhancements/20181231-replaced-kep.md"
 superseded-by:
-  - "/docs/enhancements/20190104-superceding-kep.md"
+  - "/docs/enhancements/20190104-superseding-kep.md"
 ---
 
 # Title
@@ -81,7 +81,7 @@ Ensure the TOC is wrapped with <code><!-- toc --&rt;<!-- /toc --&rt;</code
 ## Summary
 
 The `Summary` section is incredibly important for producing high quality user-focused documentation such as release notes or a development roadmap.
-It should be possible to collect this information before implementation begins in order to avoid requiring implementors to split their attention between writing release notes and implementing the feature itself.
+It should be possible to collect this information before implementation begins in order to avoid requiring implementers to split their attention between writing release notes and implementing the feature itself.
 
 A good summary is probably at least a paragraph in length.
 
@@ -122,7 +122,7 @@ The goal here is to make this feel real for users without getting bogged down.
 What are the caveats to the implementation?
 What are some important details that didn't come across above.
 Go in to as much detail as necessary here.
-This might be a good place to talk about core concepts and how they releate.
+This might be a good place to talk about core concepts and how they relate.
 
 ### Risks and Mitigations
 
@@ -9,7 +9,7 @@ Session affinity can be configured using the following annotations:
 |Name|Description|Value|
 | --- | --- | --- |
 |nginx.ingress.kubernetes.io/affinity|Type of the affinity, set this to `cookie` to enable session affinity|string (NGINX only supports `cookie`)|
-|nginx.ingress.kubernetes.io/affinity-mode|The affinity mode defines how sticky a session is. Use `balanced` to redistribute some sessions when scaling pods or `persistent` for maximum stickyness.|`balanced` (default) or `persistent`|
+|nginx.ingress.kubernetes.io/affinity-mode|The affinity mode defines how sticky a session is. Use `balanced` to redistribute some sessions when scaling pods or `persistent` for maximum stickiness.|`balanced` (default) or `persistent`|
 |nginx.ingress.kubernetes.io/session-cookie-name|Name of the cookie that will be created|string (defaults to `INGRESSCOOKIE`)|
 |nginx.ingress.kubernetes.io/session-cookie-path|Path that will be set on the cookie (required if your [Ingress paths][ingress-paths] use regular expressions)|string (defaults to the currently [matched path][ingress-paths])|
 |nginx.ingress.kubernetes.io/session-cookie-samesite|SameSite attribute to apply to the cookie|Browser accepted values are `None`, `Lax`, and `Strict`|
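For illustration, here is a minimal sketch of how the annotations in the table above attach to an Ingress (the host, object name, and backend Service are placeholder assumptions, not taken from the diff):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sticky-ingress   # assumption
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"   # maximum stickiness
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
spec:
  rules:
    - host: echo.example.com   # assumption
      http:
        paths:
          - path: /
            backend:
              serviceName: echo   # assumption
              servicePort: 80
```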
@@ -37,7 +37,7 @@ They are set in the container spec of the `nginx-ingress-controller` Deployment
 | `--metrics-per-host` | Export metrics per-host (default true) |
 | `--profiler-port` | Port to use for expose the ingress controller Go profiler when it is enabled. (default 10245) |
 | `--profiling` | Enable profiling via web interface host:port/debug/pprof/ (default true) |
-| `--publish-service` | Service fronting the Ingress controller. Takes the form "namespace/name". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it atisfies. |
+| `--publish-service` | Service fronting the Ingress controller. Takes the form "namespace/name". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. |
 | `--publish-status-address` | Customized address to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. |
 | `--report-node-internal-ip-address`| Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. |
 | `--skip_headers` | If true, avoid header prefixes in the log messages |
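These flags go into the controller container's `args`. A hedged sketch of the relevant Deployment excerpt (the namespace and Service name are assumptions):

```yaml
# Hypothetical excerpt of the nginx-ingress-controller Deployment's container spec.
containers:
  - name: nginx-ingress-controller
    args:
      - /nginx-ingress-controller
      - --publish-service=ingress-nginx/ingress-nginx   # assumption: "namespace/name" of the fronting Service
      - --metrics-per-host=true
      - --profiling=false   # disable the pprof endpoint if it is not needed
```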
@@ -157,7 +157,7 @@ If the Application Root is exposed in a different path and needs to be redirecte
 The annotation `nginx.ingress.kubernetes.io/affinity` enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server.
 The only affinity type available for NGINX is `cookie`.
 
-The annotation `nginx.ingress.kubernetes.io/affinity-mode` defines the stickyness of a session. Setting this to `balanced` (default) will redistribute some sessions if a deployment gets scaled up, therefore rebalancing the load on the servers. Setting this to `persistent` will not rebalance sessions to new servers, therefore providing maximum stickyness.
+The annotation `nginx.ingress.kubernetes.io/affinity-mode` defines the stickiness of a session. Setting this to `balanced` (default) will redistribute some sessions if a deployment gets scaled up, therefore rebalancing the load on the servers. Setting this to `persistent` will not rebalance sessions to new servers, therefore providing maximum stickiness.
 
 !!! attention
    If more than one Ingress is defined for a host and at least one Ingress uses `nginx.ingress.kubernetes.io/affinity: cookie`, then only paths on the Ingress using `nginx.ingress.kubernetes.io/affinity` will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server.
@@ -248,7 +248,7 @@ The annotations are:
 * `off`: Don't request client certificates and don't do client certificate verification. (default)
 * `on`: Request a client certificate that must be signed by a certificate that is included in the secret key `ca.crt` of the secret specified by `nginx.ingress.kubernetes.io/auth-tls-secret: secretName`. Failed certificate verification will result in a status code 400 (Bad Request).
 * `optional`: Do optional client certificate validation against the CAs from `auth-tls-secret`. The request fails with status code 400 (Bad Request) when a certificate is provided that is not signed by the CA. When no or an otherwise invalid certificate is provided, the request does not fail, but instead the verification result is sent to the upstream service.
-* `optional_no_ca`: Do optional client certificate validation, but do not fail the request when the client certificate is not signed by the CAs from `auth-tls-secret`. Certificate verification result is sent to the usptream service.
+* `optional_no_ca`: Do optional client certificate validation, but do not fail the request when the client certificate is not signed by the CAs from `auth-tls-secret`. Certificate verification result is sent to the upstream service.
 * `nginx.ingress.kubernetes.io/auth-tls-error-page`:
   The URL/Page that user should be redirected in case of a Certificate Authentication Error
 * `nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream`:
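As a hedged illustration of how these annotations combine on a single Ingress (the secret name and error page URL are placeholder assumptions; the referenced secret must carry the CA bundle under `ca.crt`):

```yaml
# Hypothetical Ingress metadata for client certificate authentication.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"   # assumption
    nginx.ingress.kubernetes.io/auth-tls-error-page: "http://www.example.com/error-cert.html"   # assumption
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
```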
@@ -259,7 +259,7 @@ Enables the OWASP ModSecurity Core Rule Set (CRS). _**default:**_ is disabled
 
 ## modsecurity-snippet
 
-Adds custom rules to modsecurity section of nginx configration
+Adds custom rules to modsecurity section of nginx configuration
 
 ## client-header-buffer-size
 
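A minimal sketch of how such a snippet could be supplied through the controller ConfigMap (the directives shown are illustrative ModSecurity rules, not taken from the source):

```yaml
# Hypothetical ConfigMap data; the snippet only takes effect with ModSecurity enabled.
data:
  enable-modsecurity: "true"
  modsecurity-snippet: |
    SecRuleEngine On
    SecAuditLog /dev/stdout
```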
@@ -147,7 +147,7 @@ func TestIngressAuth(t *testing.T) {
 
 	i, err := NewParser(dir, &mockSecret{}).Parse(ing)
 	if err != nil {
-		t.Errorf("Uxpected error with ingress: %v", err)
+		t.Errorf("Unexpected error with ingress: %v", err)
 	}
 	auth, ok := i.(*Config)
 	if !ok {
@@ -97,7 +97,7 @@ func TestAnnotations(t *testing.T) {
 	fakeSecret := &mockSecret{}
 	i, err := NewParser(fakeSecret).Parse(ing)
 	if err != nil {
-		t.Errorf("Uxpected error with ingress: %v", err)
+		t.Errorf("Unexpected error with ingress: %v", err)
 	}
 
 	u, ok := i.(*Config)
@@ -163,7 +163,7 @@ func TestInvalidAnnotations(t *testing.T) {
 
 	i, err := NewParser(fakeSecret).Parse(ing)
 	if err != nil {
-		t.Errorf("Uxpected error with ingress: %v", err)
+		t.Errorf("Unexpected error with ingress: %v", err)
 	}
 	u, ok := i.(*Config)
 	if !ok {
@@ -100,7 +100,7 @@ func TestAnnotations(t *testing.T) {
 	fakeSecret := &mockSecret{}
 	i, err := NewParser(fakeSecret).Parse(ing)
 	if err != nil {
-		t.Errorf("Uxpected error with ingress: %v", err)
+		t.Errorf("Unexpected error with ingress: %v", err)
 	}
 
 	u, ok := i.(*Config)
@@ -175,7 +175,7 @@ func TestInvalidAnnotations(t *testing.T) {
 
 	i, err := NewParser(fakeSecret).Parse(ing)
 	if err != nil {
-		t.Errorf("Uxpected error with ingress: %v", err)
+		t.Errorf("Unexpected error with ingress: %v", err)
 	}
 	u, ok := i.(*Config)
 	if !ok {
@@ -134,28 +134,28 @@ func TestRateLimiting(t *testing.T) {
 		t.Errorf("expected a RateLimit type")
 	}
 	if rateLimit.Connections.Limit != 5 {
-		t.Errorf("expected 5 in limit by ip but %v was returend", rateLimit.Connections)
+		t.Errorf("expected 5 in limit by ip but %v was returned", rateLimit.Connections)
 	}
 	if rateLimit.Connections.Burst != 5*5 {
-		t.Errorf("expected %d in burst limit by ip but %v was returend", 5*3, rateLimit.Connections)
+		t.Errorf("expected %d in burst limit by ip but %v was returned", 5*3, rateLimit.Connections)
 	}
 	if rateLimit.RPS.Limit != 100 {
-		t.Errorf("expected 100 in limit by rps but %v was returend", rateLimit.RPS)
+		t.Errorf("expected 100 in limit by rps but %v was returned", rateLimit.RPS)
 	}
 	if rateLimit.RPS.Burst != 100*5 {
-		t.Errorf("expected %d in burst limit by rps but %v was returend", 100*3, rateLimit.RPS)
+		t.Errorf("expected %d in burst limit by rps but %v was returned", 100*3, rateLimit.RPS)
 	}
 	if rateLimit.RPM.Limit != 10 {
-		t.Errorf("expected 10 in limit by rpm but %v was returend", rateLimit.RPM)
+		t.Errorf("expected 10 in limit by rpm but %v was returned", rateLimit.RPM)
 	}
 	if rateLimit.RPM.Burst != 10*5 {
-		t.Errorf("expected %d in burst limit by rpm but %v was returend", 10*3, rateLimit.RPM)
+		t.Errorf("expected %d in burst limit by rpm but %v was returned", 10*3, rateLimit.RPM)
 	}
 	if rateLimit.LimitRateAfter != 100 {
-		t.Errorf("expected 100 in limit by limitrateafter but %v was returend", rateLimit.LimitRateAfter)
+		t.Errorf("expected 100 in limit by limitrateafter but %v was returned", rateLimit.LimitRateAfter)
 	}
 	if rateLimit.LimitRate != 10 {
-		t.Errorf("expected 10 in limit by limitrate but %v was returend", rateLimit.LimitRate)
+		t.Errorf("expected 10 in limit by limitrate but %v was returned", rateLimit.LimitRate)
 	}
 
 	data = map[string]string{}
@@ -177,27 +177,27 @@ func TestRateLimiting(t *testing.T) {
 		t.Errorf("expected a RateLimit type")
 	}
 	if rateLimit.Connections.Limit != 5 {
-		t.Errorf("expected 5 in limit by ip but %v was returend", rateLimit.Connections)
+		t.Errorf("expected 5 in limit by ip but %v was returned", rateLimit.Connections)
 	}
 	if rateLimit.Connections.Burst != 5*3 {
-		t.Errorf("expected %d in burst limit by ip but %v was returend", 5*3, rateLimit.Connections)
+		t.Errorf("expected %d in burst limit by ip but %v was returned", 5*3, rateLimit.Connections)
 	}
 	if rateLimit.RPS.Limit != 100 {
-		t.Errorf("expected 100 in limit by rps but %v was returend", rateLimit.RPS)
+		t.Errorf("expected 100 in limit by rps but %v was returned", rateLimit.RPS)
 	}
 	if rateLimit.RPS.Burst != 100*3 {
-		t.Errorf("expected %d in burst limit by rps but %v was returend", 100*3, rateLimit.RPS)
+		t.Errorf("expected %d in burst limit by rps but %v was returned", 100*3, rateLimit.RPS)
 	}
 	if rateLimit.RPM.Limit != 10 {
-		t.Errorf("expected 10 in limit by rpm but %v was returend", rateLimit.RPM)
+		t.Errorf("expected 10 in limit by rpm but %v was returned", rateLimit.RPM)
 	}
 	if rateLimit.RPM.Burst != 10*3 {
-		t.Errorf("expected %d in burst limit by rpm but %v was returend", 10*3, rateLimit.RPM)
+		t.Errorf("expected %d in burst limit by rpm but %v was returned", 10*3, rateLimit.RPM)
 	}
 	if rateLimit.LimitRateAfter != 100 {
-		t.Errorf("expected 100 in limit by limitrateafter but %v was returend", rateLimit.LimitRateAfter)
+		t.Errorf("expected 100 in limit by limitrateafter but %v was returned", rateLimit.LimitRateAfter)
 	}
 	if rateLimit.LimitRate != 10 {
-		t.Errorf("expected 10 in limit by limitrate but %v was returend", rateLimit.LimitRate)
+		t.Errorf("expected 10 in limit by limitrate but %v was returned", rateLimit.LimitRate)
 	}
 }
@@ -42,7 +42,7 @@ func NewParser(r resolver.Resolver) parser.IngressAnnotation {
 // rule used to indicate if the upstream servers should use SSL
 func (a su) Parse(ing *networking.Ingress) (secure interface{}, err error) {
 	if ca, _ := parser.GetStringAnnotation("secure-verify-ca-secret", ing); ca != "" {
-		klog.Warningf("NOTE! secure-verify-ca-secret is not suppored anymore. Please use proxy-ssl-secret instead")
+		klog.Warningf("NOTE! secure-verify-ca-secret is not supported anymore. Please use proxy-ssl-secret instead")
 	}
 	return
 }
@@ -154,7 +154,7 @@ func (a affinity) Parse(ing *networking.Ingress) (interface{}, error) {
 		at = ""
 	}
 
-	// Check the afinity mode that will be used
+	// Check the affinity mode that will be used
 	am, err := parser.GetStringAnnotation(annotationAffinityMode, ing)
 	if err != nil {
 		am = ""
@@ -301,7 +301,7 @@ func (n *NGINXController) getStreamServices(configmapName string, proto apiv1.Pr
 		nginx.StreamPort,
 	}
 
-	reserverdPorts := sets.NewInt(rp...)
+	reservedPorts := sets.NewInt(rp...)
 	// svcRef format: <(str)namespace>/<(str)service>:<(intstr)port>[:<("PROXY")decode>:<("PROXY")encode>]
 	for port, svcRef := range configmap.Data {
 		externalPort, err := strconv.Atoi(port) // #nosec
@@ -309,7 +309,7 @@ func (n *NGINXController) getStreamServices(configmapName string, proto apiv1.Pr
 			klog.Warningf("%q is not a valid %v port number", port, proto)
 			continue
 		}
-		if reserverdPorts.Has(externalPort) {
+		if reservedPorts.Has(externalPort) {
 			klog.Warningf("Port %d cannot be used for %v stream services. It is reserved for the Ingress controller.", externalPort, proto)
 			continue
 		}
@@ -1451,7 +1451,7 @@ func extractTLSSecretName(host string, ing *ingress.Ingress,
 	return ""
 }
 
-// getRemovedHosts returns a list of the hostsnames
+// getRemovedHosts returns a list of the hostnames
 // that are not associated anymore to the NGINX configuration.
 func getRemovedHosts(rucfg, newcfg *ingress.Configuration) []string {
 	old := sets.NewString()
@@ -1404,7 +1404,7 @@ func TestGetBackendServers(t *testing.T) {
 	}
 
 	if upstreams[0].Name != "example-http-svc-1-80" {
-		t.Errorf("example-http-svc-1-80 should be frist upstream, got %s", upstreams[0].Name)
+		t.Errorf("example-http-svc-1-80 should be first upstream, got %s", upstreams[0].Name)
 		return
 	}
 	if upstreams[0].NoServer {
@@ -86,7 +86,7 @@ var (
 			true,
 			false,
 		},
-		"when secure backend, stickeness and dynamic config enabled": {
+		"when secure backend, stickiness and dynamic config enabled": {
 			"/",
 			"/",
 			"/",
@@ -874,10 +874,10 @@ func TestEscapeLiteralDollar(t *testing.T) {
 		t.Errorf("Expected %v but returned %v", expected, escapedPath)
 	}
 
-	leaveUnchagned := "/leave-me/unchagned"
-	escapedPath = escapeLiteralDollar(leaveUnchagned)
-	if escapedPath != leaveUnchagned {
-		t.Errorf("Expected %v but returned %v", leaveUnchagned, escapedPath)
+	leaveUnchanged := "/leave-me/unchanged"
+	escapedPath = escapeLiteralDollar(leaveUnchanged)
+	if escapedPath != leaveUnchanged {
+		t.Errorf("Expected %v but returned %v", leaveUnchanged, escapedPath)
 	}
 
 	escapedPath = escapeLiteralDollar(false)
@@ -32,8 +32,8 @@ type scrapeRequest struct {
 	done chan struct{}
 }
 
-// Stopable defines a prometheus collector that can be stopped
-type Stopable interface {
+// Stoppable defines a prometheus collector that can be stopped
+type Stoppable interface {
 	prometheus.Collector
 	Stop()
 }
@@ -63,7 +63,7 @@ func buildSimpleClientSet() *testclient.Clientset {
 					Name:      "foo1",
 					Namespace: apiv1.NamespaceDefault,
 					Labels: map[string]string{
-						"lable_sig": "foo_pod",
+						"label_sig": "foo_pod",
 					},
 				},
 				Spec: apiv1.PodSpec{
@@ -96,7 +96,7 @@ func buildSimpleClientSet() *testclient.Clientset {
 					Name:      "foo2",
 					Namespace: apiv1.NamespaceDefault,
 					Labels: map[string]string{
-						"lable_sig": "foo_no",
+						"label_sig": "foo_no",
 					},
 				},
 			},
@@ -105,7 +105,7 @@ func buildSimpleClientSet() *testclient.Clientset {
 					Name:      "foo3",
 					Namespace: metav1.NamespaceSystem,
 					Labels: map[string]string{
-						"lable_sig": "foo_pod",
+						"label_sig": "foo_pod",
 					},
 				},
 				Spec: apiv1.PodSpec{
@@ -301,7 +301,7 @@ func TestStatusActions(t *testing.T) {
 			Name:      "foo_base_pod",
 			Namespace: apiv1.NamespaceDefault,
 			Labels: map[string]string{
-				"lable_sig": "foo_pod",
+				"label_sig": "foo_pod",
 			},
 		},
 	}
@@ -379,7 +379,7 @@ func TestKeyfunc(t *testing.T) {
 	}
 }
 
-func TestRunningAddresessWithPublishService(t *testing.T) {
+func TestRunningAddressesWithPublishService(t *testing.T) {
 	testCases := map[string]struct {
 		fakeClient *testclient.Clientset
 		expected   []string
@@ -559,7 +559,7 @@ func TestRunningAddresessWithPublishService(t *testing.T) {
 	}
 }
 
-func TestRunningAddresessWithPods(t *testing.T) {
+func TestRunningAddressesWithPods(t *testing.T) {
 	fk := buildStatusSync()
 	fk.PublishService = ""
 
@@ -577,7 +577,7 @@ func TestRunningAddresessWithPods(t *testing.T) {
 	}
 }
 
-func TestRunningAddresessWithPublishStatusAddress(t *testing.T) {
+func TestRunningAddressesWithPublishStatusAddress(t *testing.T) {
 	fk := buildStatusSync()
 	fk.PublishStatusAddress = "127.0.0.1"
 
@@ -86,7 +86,7 @@ type Backend struct {
 	SSLPassthrough bool `json:"sslPassthrough"`
 	// Endpoints contains the list of endpoints currently running
 	Endpoints []Endpoint `json:"endpoints,omitempty"`
-	// StickySessionAffinitySession contains the StickyConfig object with stickyness configuration
+	// StickySessionAffinitySession contains the StickyConfig object with stickiness configuration
 	SessionAffinity SessionAffinityConfig `json:"sessionAffinityConfig"`
 	// Consistent hashing by NGINX variable
 	UpstreamHashBy UpstreamHashByConfig `json:"upstreamHashByConfig,omitempty"`
@@ -381,7 +381,7 @@ func encodePrivateKeyPEM(key *rsa.PrivateKey) []byte {
 	return pem.EncodeToMemory(&block)
 }
 
-// encodeCertPEM returns PEM-endcoded certificate data
+// encodeCertPEM returns PEM-encoded certificate data
 func encodeCertPEM(cert *x509.Certificate) []byte {
 	block := pem.Block{
 		Type: certutil.CertificateBlockType,
@@ -66,7 +66,7 @@ func TestEnqueueSuccess(t *testing.T) {
 	stopCh := make(chan struct{})
 	// run queue
 	go q.Run(5*time.Second, stopCh)
-	// mock object whichi will be enqueue
+	// mock object which will be enqueue
 	mo := mockEnqueueObj{
 		k: "testKey",
 		v: "testValue",
@@ -89,7 +89,7 @@ func TestEnqueueFailed(t *testing.T) {
 	stopCh := make(chan struct{})
 	// run queue
 	go q.Run(5*time.Second, stopCh)
-	// mock object whichi will be enqueue
+	// mock object which will be enqueue
 	mo := mockEnqueueObj{
 		k: "testKey",
 		v: "testValue",
@@ -115,7 +115,7 @@ func TestEnqueueKeyError(t *testing.T) {
 	stopCh := make(chan struct{})
 	// run queue
 	go q.Run(5*time.Second, stopCh)
-	// mock object whichi will be enqueue
+	// mock object which will be enqueue
 	mo := mockEnqueueObj{
 		k: "testKey",
 		v: "testValue",
@@ -137,7 +137,7 @@ func TestSkipEnqueue(t *testing.T) {
 	atomic.StoreUint32(&sr, 0)
 	q := NewCustomTaskQueue(mockSynFn, mockKeyFn)
 	stopCh := make(chan struct{})
-	// mock object whichi will be enqueue
+	// mock object which will be enqueue
 	mo := mockEnqueueObj{
 		k: "testKey",
 		v: "testValue",
@@ -199,7 +199,7 @@ local function route_to_alternative_balancer(balancer)
 
   local traffic_shaping_policy = alternative_balancer.traffic_shaping_policy
   if not traffic_shaping_policy then
-    ngx.log(ngx.ERR, "traffic shaping policy is not set for balanacer ",
+    ngx.log(ngx.ERR, "traffic shaping policy is not set for balancer ",
       "of backend: ", tostring(backend_name))
     return false
   end
@@ -192,7 +192,7 @@ end
 -- This design tradeoffs lack of OCSP response in the first request with better latency.
 --
 -- Serving stale response ensures that we don't serve another request without OCSP response
--- when the cache entry expires. Instead we serve the signle request with stale response
+-- when the cache entry expires. Instead we serve the single request with stale response
 -- and enqueue fetch_and_cache_ocsp_response for refetch.
 local function ocsp_staple(uid, der_cert)
   local response, _, is_stale = ocsp_response_cache:get_stale(uid)
@@ -83,7 +83,7 @@ local function handle_servers()
   for server, uid in pairs(configuration.servers) do
     if uid == EMPTY_UID then
       -- notice that we do not delete certificate corresponding to this server
-      -- this is becase a certificate can be used by multiple servers/hostnames
+      -- this is because a certificate can be used by multiple servers/hostnames
       certificate_servers:delete(server)
     else
       local success, set_err, forcible = certificate_servers:set(server, uid)
@@ -310,7 +310,7 @@ describe("Balancer", function()
 
   it("resolves external name to endpoints when service is of type External name", function()
     backend = {
-      name = "exmaple-com", service = { spec = { ["type"] = "ExternalName" } },
+      name = "example-com", service = { spec = { ["type"] = "ExternalName" } },
      endpoints = {
        { address = "example.com", port = "80", maxFails = 0, failTimeout = 0 }
      }
@@ -329,7 +329,7 @@ describe("Balancer", function()
      }
    })
    expected_backend = {
-      name = "exmaple-com", service = { spec = { ["type"] = "ExternalName" } },
+      name = "example-com", service = { spec = { ["type"] = "ExternalName" } },
      endpoints = {
        { address = "192.168.1.1", port = "80" },
        { address = "1.2.3.4", port = "80" },
@@ -347,14 +347,14 @@ describe("Balancer", function()
 
   it("wraps IPv6 addresses into square brackets", function()
     local backend = {
-      name = "exmaple-com",
+      name = "example-com",
      endpoints = {
        { address = "::1", port = "8080", maxFails = 0, failTimeout = 0 },
        { address = "192.168.1.1", port = "8080", maxFails = 0, failTimeout = 0 },
      }
    }
    local expected_backend = {
-      name = "exmaple-com",
+      name = "example-com",
      endpoints = {
        { address = "[::1]", port = "8080", maxFails = 0, failTimeout = 0 },
        { address = "192.168.1.1", port = "8080", maxFails = 0, failTimeout = 0 },
@@ -54,7 +54,7 @@ describe("Monitor", function()
   end)
 
   describe("flush", function()
-    it("short circuits when premmature is true (when worker is shutting down)", function()
+    it("short circuits when premature is true (when worker is shutting down)", function()
       local tcp_mock = mock_ngx_socket_tcp()
       mock_ngx({ var = {} })
       local monitor = require("monitor")
File diff suppressed because it is too large
@@ -145,7 +145,7 @@ var _ = framework.DescribeAnnotation("affinity session-cookie-name", func() {
 				},
 			},
 			{
-				Path: "/somewhereelese",
+				Path: "/somewhereelse",
 				Backend: networking.IngressBackend{
 					ServiceName: framework.EchoService,
 					ServicePort: intstr.FromInt(80),
@@ -172,11 +172,11 @@ var _ = framework.DescribeAnnotation("affinity session-cookie-name", func() {
 			Header("Set-Cookie").Contains("Path=/something")
 
 		f.HTTPTestClient().
-			GET("/somewhereelese").
+			GET("/somewhereelse").
 			WithHeader("Host", host).
 			Expect().
 			Status(http.StatusOK).
-			Header("Set-Cookie").Contains("Path=/somewhereelese")
+			Header("Set-Cookie").Contains("Path=/somewhereelse")
 	})
 
 	ginkgo.It("should set cookie with expires", func() {
@@ -100,7 +100,7 @@ var _ = framework.DescribeAnnotation("influxdb-*", func() {
 func createInfluxDBService(f *framework.Framework) *corev1.Service {
 	service := &corev1.Service{
 		ObjectMeta: metav1.ObjectMeta{
-			Name:      "inflxudb",
+			Name:      "influxdb",
 			Namespace: f.Namespace,
 		},
 		Spec: corev1.ServiceSpec{Ports: []corev1.ServicePort{
@@ -156,12 +156,12 @@ var _ = framework.DescribeAnnotation("proxy-*", func() {
 
 	ginkgo.It("should turn on proxy-buffering", func() {
 		proxyBuffering := "on"
-		proxyBufersNumber := "8"
+		proxyBuffersNumber := "8"
 		proxyBufferSize := "8k"
 
 		annotations := make(map[string]string)
 		annotations["nginx.ingress.kubernetes.io/proxy-buffering"] = proxyBuffering
-		annotations["nginx.ingress.kubernetes.io/proxy-buffers-number"] = proxyBufersNumber
+		annotations["nginx.ingress.kubernetes.io/proxy-buffers-number"] = proxyBuffersNumber
 		annotations["nginx.ingress.kubernetes.io/proxy-buffer-size"] = proxyBufferSize
 
 		ing := framework.NewSingleIngress(host, "/", host, f.Namespace, framework.EchoService, 80, annotations)
@@ -171,7 +171,7 @@ var _ = framework.DescribeAnnotation("proxy-*", func() {
 			func(server string) bool {
 				return strings.Contains(server, fmt.Sprintf("proxy_buffering %s;", proxyBuffering)) &&
 					strings.Contains(server, fmt.Sprintf("proxy_buffer_size %s;", proxyBufferSize)) &&
-					strings.Contains(server, fmt.Sprintf("proxy_buffers %s %s;", proxyBufersNumber, proxyBufferSize)) &&
+					strings.Contains(server, fmt.Sprintf("proxy_buffers %s %s;", proxyBuffersNumber, proxyBufferSize)) &&
 					strings.Contains(server, fmt.Sprintf("proxy_request_buffering %s;", proxyBuffering))
 			})
 	})
@@ -385,11 +385,11 @@ func (f *Framework) waitForReload(fn func()) {
 }
 
 func getReloadCount(pod *corev1.Pod, namespace string, client kubernetes.Interface) int {
-	evnts, err := client.CoreV1().Events(namespace).Search(scheme.Scheme, pod)
+	events, err := client.CoreV1().Events(namespace).Search(scheme.Scheme, pod)
 	assert.Nil(ginkgo.GinkgoT(), err, "obtaining NGINX Pod")
 
 	reloadCount := 0
-	for _, e := range evnts.Items {
+	for _, e := range events.Items {
 		if e.Reason == "RELOAD" && e.Type == corev1.EventTypeNormal {
 			reloadCount++
 		}
@@ -61,7 +61,7 @@ func (f *Framework) EnsureConfigMap(configMap *api.ConfigMap) (*api.ConfigMap, e
 	return cm, nil
 }
 
-// GetIngress gets an Ingress object from the given namespace, name and retunrs it, throws error if it does not exists.
+// GetIngress gets an Ingress object from the given namespace, name and returns it, throws error if it does not exists.
 func (f *Framework) GetIngress(namespace string, name string) *networking.Ingress {
 	ing, err := f.KubeClientSet.NetworkingV1beta1().Ingresses(namespace).Get(context.TODO(), name, metav1.GetOptions{})
 	assert.Nil(ginkgo.GinkgoT(), err, "getting ingress")
@@ -69,7 +69,7 @@ func (f *Framework) GetIngress(namespace string, name string) *networking.Ingres
 	return ing
 }
 
-// EnsureIngress creates an Ingress object and retunrs it, throws error if it already exists.
+// EnsureIngress creates an Ingress object and returns it, throws error if it already exists.
 func (f *Framework) EnsureIngress(ingress *networking.Ingress) *networking.Ingress {
 	fn := func() {
 		err := createIngressWithRetries(f.KubeClientSet, f.Namespace, ingress)
@@ -102,7 +102,7 @@ func (f *Framework) UpdateIngress(ingress *networking.Ingress) *networking.Ingre
 	return ing
 }
 
-// EnsureService creates a Service object and retunrs it, throws error if it already exists.
+// EnsureService creates a Service object and returns it, throws error if it already exists.
 func (f *Framework) EnsureService(service *core.Service) *core.Service {
 	err := createServiceWithRetries(f.KubeClientSet, f.Namespace, service)
 	assert.Nil(ginkgo.GinkgoT(), err, "creating service")
@@ -114,7 +114,7 @@ func (f *Framework) EnsureService(service *core.Service) *core.Service {
 	return s
 }
 
-// EnsureDeployment creates a Deployment object and retunrs it, throws error if it already exists.
+// EnsureDeployment creates a Deployment object and returns it, throws error if it already exists.
 func (f *Framework) EnsureDeployment(deployment *appsv1.Deployment) *appsv1.Deployment {
 	err := createDeploymentWithRetries(f.KubeClientSet, f.Namespace, deployment)
 	assert.Nil(ginkgo.GinkgoT(), err, "creating deployment")
@@ -54,7 +54,7 @@ var _ = framework.IngressNginxDescribe("[Ingress] [PathType] exact", func() {
 			"nginx.ingress.kubernetes.io/configuration-snippet": `more_set_input_headers "pathType: prefix";`,
 		}
 
-		ing = framework.NewSingleIngress("exact-sufix", "/exact", host, f.Namespace, framework.EchoService, 80, annotations)
+		ing = framework.NewSingleIngress("exact-suffix", "/exact", host, f.Namespace, framework.EchoService, 80, annotations)
 		f.EnsureIngress(ing)
 
 		f.WaitForNginxServer(host,
@@ -76,7 +76,7 @@ var _ = framework.IngressNginxDescribe("[Ingress] [PathType] exact", func() {
 		assert.Contains(ginkgo.GinkgoT(), body, "pathtype=exact")
 
 		body = f.HTTPTestClient().
-			GET("/exact/sufix").
+			GET("/exact/suffix").
 			WithHeader("Host", host).
 			Expect().
 			Status(http.StatusOK).
@@ -103,7 +103,7 @@ var _ = framework.IngressNginxDescribe("[Ingress] [PathType] exact", func() {
 		})
 
 		body = f.HTTPTestClient().
-			GET("/exact/sufix").
+			GET("/exact/suffix").
 			WithHeader("Host", host).
 			Expect().
 			Status(http.StatusOK).
@@ -69,7 +69,7 @@ var _ = framework.IngressNginxDescribe("[Memory Leak] Dynamic Certificates", fun
 	})
 })
 
-func privisionIngress(hostname string, f *framework.Framework) {
+func provisionIngress(hostname string, f *framework.Framework) {
 	ing := f.EnsureIngress(framework.NewSingleIngressWithTLS(hostname, "/", hostname, []string{hostname}, f.Namespace, framework.EchoService, 80, nil))
 	_, err := framework.CreateIngressTLSSecret(f.KubeClientSet,
 		ing.Spec.TLS[0].Hosts,
@@ -114,7 +114,7 @@ func run(host string, f *framework.Framework) pool.WorkFunc {
 		}
 
 		ginkgo.By(fmt.Sprintf("\tcreating ingress for host %v", host))
-		privisionIngress(host, f)
+		provisionIngress(host, f)
 
 		framework.Sleep(100 * time.Millisecond)
 
@@ -37,7 +37,7 @@ import (
 var _ = framework.IngressNginxDescribe("[Service] Type ExternalName", func() {
 	f := framework.NewDefaultFramework("type-externalname")
 
-	ginkgo.It("works with external name set to incomplete fdqn", func() {
+	ginkgo.It("works with external name set to incomplete fqdn", func() {
 		f.NewEchoDeployment()
 
 		host := "echo"
@@ -285,7 +285,7 @@
 	},
 	"proxy": {
 		"bodySize": "1m",
-		"conectTimeout": 5,
+		"connectTimeout": 5,
 		"sendTimeout": 60,
 		"readTimeout": 60,
 		"bufferSize": "4k",
@@ -392,7 +392,7 @@
 	},
 	"proxy": {
 		"bodySize": "1m",
-		"conectTimeout": 5,
+		"connectTimeout": 5,
 		"sendTimeout": 60,
 		"readTimeout": 60,
 		"bufferSize": "4k",
@@ -470,7 +470,7 @@
 	},
 	"proxy": {
 		"bodySize": "1m",
-		"conectTimeout": 5,
+		"connectTimeout": 5,
 		"sendTimeout": 60,
 		"readTimeout": 60,
 		"bufferSize": "4k",
@@ -582,7 +582,7 @@
 	},
 	"proxy": {
 		"bodySize": "1m",
-		"conectTimeout": 5,
+		"connectTimeout": 5,
 		"sendTimeout": 60,
 		"readTimeout": 60,
 		"bufferSize": "4k",
@@ -660,7 +660,7 @@
 	},
 	"proxy": {
 		"bodySize": "1m",
-		"conectTimeout": 5,
+		"connectTimeout": 5,
 		"sendTimeout": 60,
 		"readTimeout": 60,
 		"bufferSize": "4k",
@@ -285,7 +285,7 @@
 	},
 	"proxy": {
 		"bodySize": "1m",
-		"conectTimeout": 5,
+		"connectTimeout": 5,
 		"sendTimeout": 60,
 		"readTimeout": 60,
 		"bufferSize": "4k",
@@ -392,7 +392,7 @@
 	},
 	"proxy": {
 		"bodySize": "1m",
-		"conectTimeout": 5,
+		"connectTimeout": 5,
 		"sendTimeout": 60,
 		"readTimeout": 60,
 		"bufferSize": "4k",
@@ -470,7 +470,7 @@
 	},
 	"proxy": {
 		"bodySize": "1m",
-		"conectTimeout": 5,
+		"connectTimeout": 5,
 		"sendTimeout": 60,
 		"readTimeout": 60,
 		"bufferSize": "4k",
@@ -582,7 +582,7 @@
 	},
 	"proxy": {
 		"bodySize": "1m",
-		"conectTimeout": 5,
+		"connectTimeout": 5,
 		"sendTimeout": 60,
 		"readTimeout": 60,
 		"bufferSize": "4k",
@@ -631,7 +631,7 @@
 	},
 	"proxy": {
 		"bodySize": "1m",
-		"conectTimeout": 5,
+		"connectTimeout": 5,
 		"sendTimeout": 60,
 		"readTimeout": 60,
 		"bufferSize": "4k",
@@ -135,7 +135,7 @@
 	},
 	"proxy": {
 		"bodySize": "1g",
-		"conectTimeout": 5,
+		"connectTimeout": 5,
 		"sendTimeout": 60,
 		"readTimeout": 60,
 		"bufferSize": "4k",
@@ -235,7 +235,7 @@
 	},
 	"proxy": {
 		"bodySize": "1g",
-		"conectTimeout": 5,
+		"connectTimeout": 5,
 		"sendTimeout": 60,
 		"readTimeout": 60,
 		"bufferSize": "4k",