client closed prematurely connection while sending to client, in nginx

I have this error in nginx's error.log: 2010/12/05 17:11:49 [info] 7736#0: *1108 client closed prematurely connection while sending to client, client: 188.72.80.201, server: ***.biz, request: "GET /forum/ HTTP/1.1", upstream: "http://***:3000/forum/", host: "***.biz" I also get a 500 response code on the site every time. How can I fix this? Thank you.

Nginx fastcgi multiplexing?

I'm in the process of implementing a FastCGI application. After reading the FastCGI spec I found a feature called "request multiplexing". It reminded me of Adobe's RTMP multiplexing, back in the days when that protocol was proprietary and closed. As far as I understand, multiplexing makes it possible to reduce the overhead of creating new connections to FastCGI clients by effectively interleaving request chunks, while at the same time enabling a "keep-alive" connection model. The latter allows sending several requests over a sin

Nginx serving a directory as an alias

I am trying to use ResourceSpace as an alias on Nginx. The page scripts load well but static files do not. Access to subdirectories gives an "undefined index" error. eximmanger loads with all scripts plus static files, while resourcespace fails to load static files, loading only the scripts. This is my config: server { listen 80; server_name myserver.com www.myserver.com; server_name_in_redirect off; access_log /var/log/nginx/myserver.access_log main; error_log /var/lo

VHosts Nginx Config for Playframework Websockets

The config below seemed to have worked, but now it's failing. I followed this article to download and install the tcp_proxy_module. #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request"

Pinnect's htaccess rules to nginx

So I've been playing around with the rules, trying to transform them. The Pinnect script's rules are: Options +FollowSymLinks IndexIgnore */* RewriteEngine on #RewriteBase / # if a directory or a file exists, use it directly RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d # otherwise forward it to index.php RewriteRule . index.php # fix for uploadify 302, 406 errors SetEnvIfNoCase Content-Type "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"

Nginx rewrite rule

I need a rewrite rule for an nginx server. I'm using Joomla 1.5 with the sh404sef component to make clean URLs. Now I have installed the GTranslate module to make the website multi-language, so after installing the module my URLs will change. For example, my original URL: http://mywebsite.com/index.php?page=shop.product_details&flypage=flypage.tpl&product_id=511&option=com_virtuemart The sh404sef component will change this to http://mywebsite.com/men-s/coverall-shirt-in-grey.html But

Proxying Jenkins with nginx

I'd like to proxy Jenkins using nginx. I already have a working version of this using this configuration file in /etc/sites-available/jenkins: server { listen 80; listen [::]:80 default ipv6only=on; location / { proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_pass http://127.0.0.1:8080; } } What I'd like to do, though, is host Jenkins at a relative URL, like /jenkins/. However, when I change my location direct

nginx rewrite / redirect

Assuming I have 1000 URLs that look like these http://www.mydomain.com/pet/cat/info http://www.mydomain.com/pet/dog/info ... http://www.mydomain.com/pet/fish/info Each URL should return a corresponding html file in the /usr/local/nginx/data directory. /usr/local/nginx/data/cat.html /usr/local/nginx/data/dog.html ... /usr/local/nginx/data/fish.html What construct should I use to map all of them at once? Can you please provide a code snippet?
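One hedged way to map all of these at once, assuming every pet name corresponds directly to an HTML file of the same name:

```nginx
location ~ ^/pet/([^/]+)/info$ {
    root /usr/local/nginx/data;
    # /pet/cat/info -> /usr/local/nginx/data/cat.html, and so on
    try_files /$1.html =404;
}
```

The regex capture picks out the pet name, and try_files resolves it against the root, so no per-pet rules are needed.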

Non-www to www domain using Nginx on Ubuntu 12.04 LTS on an EC2 instance

After seeing this post http://www.ewanleith.com/blog/900/10-million-hits-a-day-with-wordpress-using-a-15-server I changed my server from Apache2 to nginx. I am no computer geek, just savvy. I followed the steps, and after that the site was perfect except for one thing: the non-www to www redirect. I searched all over the net on how to do this. I tried the mod_rewrite approach they suggested but it just got worse. For now, it is directed to www because I use WordPress and set it in general settings http://www.pag
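The usual nginx idiom is a separate server block that only redirects the bare domain (domain name hypothetical here):

```nginx
# redirect-only block for the bare domain
server {
    listen 80;
    server_name example.com;
    return 301 $scheme://www.example.com$request_uri;
}

server {
    listen 80;
    server_name www.example.com;
    # ... normal WordPress configuration ...
}
```

This avoids mod_rewrite-style rules entirely; nginx picks the block by server_name and issues a permanent redirect.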

Why does this dynamic Nginx proxy configuration not work?

I have a nginx configuration that looks like this: location /textproxy { proxy_pass http://$arg_url; proxy_connect_timeout 1s; proxy_redirect off; proxy_hide_header Content-Type; proxy_hide_header Content-Disposition; add_header Content-Type "text/plain"; proxy_set_header Host $host; } The idea is that this proxies to a remote url, and rewrites the content header to be text/plain. For example I would call: http://nx/textproxy?url=http://foo:50070/a/b/c?arg=abc:123 And it wou

Nginx giving 400 error

I am using Nginx to handle API traffic. I checked the access log and found that Nginx sometimes returns a 400 error: GET /url to hit/ HTTP/1.1" **400 172** "-" "-" What is the 172 in the log above, and how can I solve this error in Nginx?
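Assuming the default "combined" access-log format, the field right after the status code is $body_bytes_sent, so the 172 is the number of bytes sent to the client, not an error code. For reference, this is the equivalent of nginx's built-in format (shown under a new name, since "combined" itself is predefined and cannot be redeclared):

```nginx
log_format combined_ref '$remote_addr - $remote_user [$time_local] '
                        '"$request" $status $body_bytes_sent '
                        '"$http_referer" "$http_user_agent"';
```

The empty "-" referer and user-agent fields suggest the 400s may come from malformed or header-less requests (e.g. health checks or scanners) rather than real clients.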

Fix duplicate query string in nginx

I've got a strange case where my app is for some reason appending a duplicate query string onto a URL that already has one. I don't have the access or the time to dig into the application code right now, but I do have access to the Nginx configs. What I need is a rewrite rule that will ignore the duplicate query string, i.e. http://foo.com?test=bar?test=bar will redirect to http://foo.com?test=bar Is this even possible? Thank you.
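It may be, on the assumption that the broken URLs always contain a literal second "?" inside the query string (nginx puts everything after the first "?" into $args):

```nginx
# keep only the part of the query string before the stray second "?"
if ($args ~ "^([^?]*)\?") {
    set $clean_args $1;
    # the trailing "?" stops nginx from re-appending the original args
    rewrite ^(.*)$ $1?$clean_args? redirect;
}
```

$clean_args has to be saved before the rewrite, because the rewrite's own regex overwrites $1 with the URI path.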

NGINX gzip not compressing JavaScript files

JavaScript files are not compressed by nginx's gzip; CSS files work fine. In my nginx.conf I have the following lines: gzip on; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_proxied any; gzip_buffers 16 8k; gzip_types text/plain application/x-javascript text/xml text/css; gzip_vary on;
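A likely cause is the MIME type: most nginx builds map .js to application/javascript in mime.types, while the config above only lists application/x-javascript. A hedged fix is to include the common JavaScript types:

```nginx
gzip_types text/plain text/xml text/css
           application/javascript application/x-javascript text/javascript;
```

(text/html is always compressed when gzip is on, so it never needs to be listed.)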

Serving static HTML files in Nginx without extension in url

root directory = /srv/myproject/xyz/main/ In the "main" folder I have a few *.html files, and I want all of them to be served at a URL, say /test/ (which is quite different from the directory structure). This is my very basic nginx configuration: server { listen 80; error_log /var/log/testc.error.log; location /test/ { root /srv/myproject/xyz/main/; #alias /srv/myproject/xyz/main/; default_type "text/html"; try_files $uri.html ;
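A minimal sketch, assuming the filenames under main/ match the path after /test/:

```nginx
location /test/ {
    alias /srv/myproject/xyz/main/;
    default_type "text/html";
    # try the literal file first, then the same name with .html appended
    try_files $uri $uri.html =404;
}
```

Note that alias (not root) is needed here because the URL prefix /test/ differs from the directory structure; also, try_files combined with alias was buggy in some older nginx versions, in which case an internal rewrite plus root is the common workaround.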

Nginx Relative paths for assets in Play Framework

I'm running two Play 2.3.x applications behind nginx. In nginx, application A is configured to be accessed at "/". Application B is configured to be accessed at "/appB/". I'm having some problems resolving assets for application B when using the built in routes/assets functionality (<script src="@routes.Assets.at("someScriptfile.js")") type="text/javascript"></script>. The problem here is that the URL will be absolute, for example /assets/file.png. This will result in that the prox

Nginx Proxy a rtmp stream

How can I proxy an RTMP stream? I have two Raspberry Pis streaming live video from raspicams on my LAN. Each Raspberry Pi sends the video to ffmpeg, which wraps it in FLV and sends it to crtmpserver. A third server running nginx has a static HTML page with two instances of jwplayer, each pointing to one Raspberry Pi. The setup is just like this one. The web server uses authentication and I'd like the streams not to be public either. I'm thinking of trying nginx-rtmp-module, but I am not sure if it wo

Optimize Ubuntu and nginx to handle static files

I am experimenting with nginx (first time) to serve static files (400 kB). I have installed Ubuntu 14.04 on Linode servers (2 GB RAM, 2 cores, 3 TB transfer) and nginx. Open files are set to 9000, gzip is on, processes = 2, worker connections 4000. Using JMeter with 50 users and a 10 s ramp-up I am achieving 800 ms sample times, and CPU and memory are obviously not a factor; at 100 users this increases to 5-6 seconds, and the 250 Mbps transfer-out speed would explain that. But is there optimization

conditional rewrite or try_files with NGINX?

I'm having trouble setting up a conditional rewrite, and I've been trying to use the if directive (despite all sources indicating it's "evil") with the -f switch to check for the presence of a file, but it's not working. I believe the issue/case is best explained by example, so here goes: Directory structure workspace/ myapp/ webroot/ index.php assets/ baz.js hello/ foo.js modules/ hello/ assets/ foo.js bar.js

Where is the nginx configuration in CentOS?

I installed GitLab on CentOS 7 and now I'm trying to run GitLab over HTTPS. According to this article I need to change some configuration directives in /etc/nginx/sites-enabled/gitlab, but there is no such file on my system. Could you please tell me where the config file is located in my case? I used this method to install GitLab.

Nginx HHVM timing out due to large queue

I have HHVM setup using TCP upstreams and nginx. Currently there is one app doing two Guzzle Curl requests to two other apps on the same server. Eventually HHVM just starts to queue all of the requests and then stops responding. It does not give any errors, and its status is still running. The admin side is also still running. It is not consistent when it will decide to start queuing all of the requests.

Nginx rewrite not working (Kirby CMS Cachebuster)

Nginx should rewrite /assets/css/main.1448958665.css to /assets/css/main.css, but requesting that file returns a 404. This is my Nginx config for the site: server { listen 80 default_server; server_name example.com; root /var/www/example.com; index index.php index.html index.htm; client_max_body_size 10M; rewrite ^/(content|site|kirby)$ /error last; rewrite ^/content/(.*).(txt|md|mdown)$ /error last; rewrite ^/(site|kirby)/(.*)$ /error last; locat

How can I use nginx rewrite with try_files to get react router working?

I am using the try_files directive with nginx and react to force all paths to go to /index.html and be handled by the react router so that the paths don't need to be prefaced with /#/. This works for that: location / { try_files $uri /index.html; } So a url like /invitation will go to the right place. However, there are still some sources that are generating /#/ urls and I would like to have them rewritten to the equivalent urls without the /#/ (so /#/invitation would go to /invitation)

How to change the nginx process user of the official docker image nginx?

I'm using Docker Hub's official nginx image: https://hub.docker.com/_/nginx/ The user of nginx (as defined in /etc/nginx/nginx.conf) is nginx. Is there a way to make nginx run as www-data without having to extend the docker image? The reason for this is, I have a shared volume, that is used by multiple containers - php-fpm that I'm running as www-data and nginx. The owner of the files/directories in the shared volume is www-data:www-data and nginx has trouble accessing that - errors similar to

Nginx returns 502 if service not ready on start

I'm working on a webcam I built with a Raspberry Pi. It has these components: Node.js on port 3000 FFmpeg / FFserver on port 8090 Nginx in front to proxy these services together over ports 80/443 (HTTP/HTTPS) The problem that I am having is that if FFserver is not fully ready on port 8090 when Nginx starts up it will continually return a 502 for stream.mjpeg, even though the stream is running. It's as if Nginx determines that the host does not exist and then never tries again. Once I restart

How to set different 404 error pages for different directory in nginx?

Hi, I have a website running on a domain, let's say www.example.com. I also have a web application on the same website, in a directory, say www.example.com/app/. The application in turn has subdirectories like www.example.com/app/a. I want to know how I can set different 404 error pages for these directories. For the first case I know I can do it using this: error_page 404 /error404.html; location = /error404.html { root /usr/share/nginx/html; internal; } I wanted t
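error_page can be overridden per location, so one hedged sketch (all paths hypothetical) is:

```nginx
error_page 404 /error404.html;               # site-wide default

location /app/ {
    error_page 404 /app/error404.html;       # overrides for /app/...
}

location /app/a/ {
    error_page 404 /app/a/error404.html;     # overrides for /app/a/...
}
```

The most specific matching location wins, so requests under /app/a/ get the deepest page; each custom page still needs to be servable (e.g. via a `location = ... { internal; }` block like the one shown for /error404.html).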

macOS nginx configuration file does not exist

I have installed nginx on macOS Sierra with Homebrew: brew install nginx I would like to change some configuration, but I could not find nginx.conf. I have already checked the following locations: /usr/local/etc/nginx/nginx.conf /usr/local/Cellar/nginx/1.10.3 Where is nginx.conf located on macOS Sierra?

The last & break mechanism in nginx rewrite

I am now dealing with nginx rewrites for my project but got something unexpected, so I'm sharing it here in case anyone has advice. The nginx server settings are listed below: location /download/ { rewrite ^(/download/.*)/media/(.*)\..*$ $1/mp3/$2.mp3; rewrite ^(/download/.*)/movie/(.*)\..*$ $1/avi/$2.mp3 break; rewrite ^(/download/.*)/avvvv/(.*)\..*$ $1/rmvb/$2.mp3; } From the section above, you can see that I only add one download location with three rewrite rules. I ope

"no input file specified" with php7-fpm on nginx

I am new to nginx and installing phpmyadmin on it. I'm on a Debian stretch distro. The no input file specified error I'm getting when trying to visit /phpmyadmin comes up over and over on StackOverflow but they all seem to be older posts. Trying to piece together the proper config from many different suggestions on the web has been a nightmare. I tried some of the solutions mentioned but none worked for me. Here's my complete config file: server { gzip on; gzip_types text/plain text/html te

nginx reverse HTTP proxy - what behavior when all upstreams are unavailable?

In trying to measure and increase our nginx throughput, I noticed that there might be a problem with our configuration, but I'm not sure how to test for it. We use a simple upstream config, somewhat like this: upstream myapp1 { server srv1.example.com max_fails=1 fail_timeout=3s; server srv2.example.com max_fails=1 fail_timeout=3s; server srv3.example.com max_fails=1 fail_timeout=3s; } When our backends become overloaded, the first upstream may enter unavailable state, and the ad
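When every peer in the group is marked unavailable, nginx answers 502 to clients until a fail_timeout expires and a peer is retried. One hedged way to handle (and observe) that state is a backup peer that is only used when all primaries are down (hostname hypothetical):

```nginx
upstream myapp1 {
    server srv1.example.com max_fails=1 fail_timeout=3s;
    server srv2.example.com max_fails=1 fail_timeout=3s;
    server srv3.example.com max_fails=1 fail_timeout=3s;
    # used only when all of the servers above are unavailable,
    # e.g. a host serving a static "temporarily overloaded" page
    server fallback.example.com backup;
}
```

If the backup starts receiving traffic under load tests, that confirms the primaries are all being marked failed.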

Nginx configuration always takes default block

I'm setting up nginx for multiple (3) domains. As I understood it, the server should pick the correct server block when the server_name matches. In my case I always end up in the default block. When I remove it, it takes the next block, regardless of the domain used. Here is my config: server { listen *:80 default_server; listen *:443 default_server; server_name _; return 444; } server { root /app/app-cluster/public; index index.php; server_name domain1.com;
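One common cause: name-based matching happens per listen socket, so if a request arrives on a port (e.g. 443) for which only the default_server block has a listen directive, the named vhosts are never considered. A hedged sketch of what each named block may need:

```nginx
server {
    # each name-based vhost needs listen directives matching
    # the ports the default_server listens on
    listen *:80;
    listen *:443 ssl;   # also requires ssl_certificate/_key directives
    server_name domain1.com www.domain1.com;
    root /app/app-cluster/public;
    index index.php;
}
```

It is also worth checking with `curl -v -H "Host: domain1.com" http://server-ip/` whether the Host header actually reaching nginx matches the configured server_name.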

HLS files (.m3u8, .ts) are not created with nginx_rtmp_module

I want to transcode RTSP to RTMP to HLS using ffmpeg and nginx_rtmp_module, but the HLS files (.m3u8, .ts) are not created. I'm testing on Docker (amazonlinux image). I can access and play rtmp://localhost:1935/live/camera1 with VLC Player, but cannot access http://localhost:8088/live/camera1.m3u8. docker run command: docker run -it -p 8088:8088 -p 1935:1935 -v $(pwd):/tmp/share amazonlinux bash nginx.conf: user root; worker_processes 1; error_log /var/log/nginx/error.log debug; events { w

Nginx Get response body in Kong access log

I want to retrieve request and response body logs from the Kong API manager. I have added a new plugin (udp-log) to Kong to get stream logs in a Graylog dashboard. According to this link I have tried to set up my nginx-kong.conf file in the /usr/local/kong directory, and I have added a new variable in my log-serializers/basic.lua file like this: response = { status = ngx.status, headers = ngx.resp.get_headers(), size = ngx.var.bytes_sent, req_body = ngx.var.req_body, test = ngx.var.resp_body,

Nginx will not start with host not found in upstream

I use nginx to proxy and hold persistent connections to far away servers for me. I have configured about 15 blocks similar to this example: upstream rinu-test { server test.rinu.test:443; keepalive 20; } server { listen 80; server_name test.rinu.test; location / { proxy_pass https://rinu-test; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header Host $http_host; } } The problem is if the hostname can not be resolved
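One common workaround, at the cost of losing the upstream keepalive pool, is to proxy_pass through a variable so nginx resolves the name at request time instead of at startup (resolver address hypothetical):

```nginx
server {
    listen 80;
    server_name test.rinu.test;
    # required whenever proxy_pass contains variables
    resolver 8.8.8.8 valid=30s;
    set $backend "https://test.rinu.test";
    location / {
        proxy_pass $backend;
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
    }
}
```

With this form nginx starts even if the hostname is unresolvable, and a DNS failure only produces a per-request 502 instead of preventing startup; the trade-off is that the `upstream { keepalive ...; }` block can no longer be used.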

Stop nginx from decoding URL

I run an nginx server which serves static files. Some filenames contain strings like %3a, e.g. /var/www/testfile%3a If I try to request those files, I get a 404 Not Found error. This seems to happen because nginx decodes the URL, replaces %3a with :, and then does not find a file named /var/www/testfile:. I inferred this from the following debug output from nginx: 2018/06/21 10:03:21 [debug] 32523#0: *6 http process request line 2018/06/21 10:03:21 [debug] 32523#0: *6 http request line: "GET

nginx 500 ISE on PNG upload due to permissions

I am getting a 500 ISE when uploading specifically PNGs to nginx/1.15.0, with Passenger and Sinatra. The latter two aren't reporting anything wrong. It is specific to PNG files; I get no error with JPG trials. I have tried more than one example file per suffix, same behaviour. The nginx site error log indicates that I'm having permissions issues on the nginx receiving directory for uploaded files. The full log: 2018/06/26 14:05:05 [debug] 17367#0: accept on 0.0.0.0:8080, ready: 1 2018

Nginx Reverse Proxy WebSocket Timeout

I'm using java-websocket for my websocket needs, inside a wowza application, and using nginx for ssl, proxying the requests to java. The problem is that the connection seems to be cut after exactly 1 hour, server-side. The client-side doesn't even know that it was disconnected for quite some time. I don't want to just adjust the timeout on nginx, I want to understand why the connection is being terminated, as the socket is functioning as usual until it isn't. EDIT: Forgot to post the configura

Nginx How do I preserve the requested port when using proxy pass?

In the long run what I'm trying to do is to be able to connect to any domain through any port, for example, mysite.com:8000 and then through Nginx have it get routed to an internal ip through the same port. So for example to 192.168.1.114:8000. I looked into iptables although I'm planning on having multiple domains so that really doesn't work for me in this case (feel free to correct me if I'm wrong). I made sure that the internal ip and port that I'm trying to access is connectable and runnin
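A hedged sketch of one way to do this in nginx, assuming you can enumerate the ports you want forwarded:

```nginx
server {
    # list every external port you want forwarded (hypothetical set)
    listen 8000;
    listen 8001;
    server_name mysite.com;
    location / {
        # $server_port is the port the client actually connected to,
        # so the internal host is reached on the same port
        proxy_pass http://192.168.1.114:$server_port;
        proxy_set_header Host $host:$server_port;
    }
}
```

Because proxy_pass here uses a fixed IP, no resolver directive is needed despite the variable in the port position.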

Nginx From inside of a Docker container, how do I connect to the localhost of the machine?

I have nginx running inside a Docker container and MySQL running on localhost on the host machine, and I want to connect to MySQL from within nginx. MySQL is not exposing a port to the outside world; it is bound to localhost, not to the machine's IP address. Is there any way to connect to this MySQL instance, or any other program on localhost, from within the Docker container? This question is different from "How to get the IP address of the docker host from inside a d

Nginx returns incorrect date format to controller

I have a problem with a datetimepicker. I need to pass a datetime in dd/mm/yyyy format to my controller, but nginx passes the date in mm/dd/yyyy format. Because of this my controller returns the error "The value '25/08/2019' is not valid for StartDate". How can I change the date format of my nginx proxy? P.S. I tried to debug this error on my computer and everything works fine, so the problem is in nginx.

502 Bad Gateway - NGINX no resolver defined to resolve

I have created a proxy pass for multiple URLs: listen 80; listen [::]:80; server_name ~^(.*)redzilla\.11\.75\.65\.21\.xip\.io$; location / { set $instname $1; proxy_pass http://${instname}redzilla.localhost:3000; } When I call this service from Chrome, it triggers a 502 error: http://test.redzilla.11.75.65.21.xip.io/ I added the location block below, hard-coding the URL. location /redzilla {
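The error message points at the fix: when proxy_pass contains variables, nginx cannot resolve the hostname at startup and needs a resolver directive to do it at request time (resolver address hypothetical):

```nginx
location / {
    # nginx needs a DNS resolver when proxy_pass contains variables;
    # use your internal DNS server if the name is not public
    resolver 8.8.8.8 valid=30s;
    set $instname $1;
    proxy_pass http://${instname}redzilla.localhost:3000;
}
```

The hard-coded location works because a static proxy_pass hostname is resolved once at startup via the system resolver, which is why only the variable form fails.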

Nginx configure root domain redirect to subdomain

My current setup is: forum.xyz.pl. I need xyz.pl to redirect to forum.xyz.pl. Current nginx.conf: nodebb.conf I am using AWS Route 53 and am not sure what value I should put there for the root domain either. Thanks.
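A minimal sketch of the nginx side is a redirect-only server block for the bare domain:

```nginx
server {
    listen 80;
    server_name xyz.pl;
    return 301 $scheme://forum.xyz.pl$request_uri;
}
```

On the Route 53 side, the root domain still needs its own A record (or alias record) pointing at the same server's IP, otherwise requests for xyz.pl never reach nginx to be redirected.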

How can I make nginx redirect subdomain to a folder?

How can I make nginx redirect all the requests to my subdomain to a folder? Example: http://sub2.sub1.domain.com/ that should indicate that sub2 is a folder in sub1.domain.com/sub2 How can I do this? The main objective is to hide the folder to the user. So, it should continue as http://sub2.sub1.domain.com/ My wish is to use a wildcard in sub2. UPDATE: I've tried: server { listen 80; listen [::]:80; server_name ~^(.*)\.sis\..*$; location / { proxy_pass http://sis.mydom
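One hedged approach that keeps the URL unchanged in the browser is a regex server_name with a named capture, used to build the document root (paths hypothetical):

```nginx
server {
    listen 80;
    listen [::]:80;
    # capture the left-most label of the host as the folder name (wildcard)
    server_name ~^(?<sub>[^.]+)\.sub1\.domain\.com$;
    # serve sub2.sub1.domain.com from .../sub1.domain.com/sub2/
    root /var/www/sub1.domain.com/$sub;
}
```

Because this is an internal mapping rather than a redirect, the address bar stays at http://sub2.sub1.domain.com/ and the folder is never exposed.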

Nginx reverse proxy to backend gives an error ERR_CONNECTION_REFUSED

I have an application running on a server on port 6001 (front end, built with React.js) and port 6002 (back end, built with Node.js) on an EC2 instance. When I send a request through the Ubuntu terminal on the instance with curl -X GET http://127.0.0.1:6002/api/works, it works fine and I get proper data. Now I go to a browser with the domain (http://example.com). However, only the front end gets called. When I send a request from the browser to the backend server, it gives me an error: GET http://127.0.0.1:
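The likely cause: the request to http://127.0.0.1:6002 is issued by the browser, so it points at the visitor's own machine, not the EC2 instance, hence the connection refusal. A hedged sketch is to have the front end call http://example.com/api/... instead and let nginx proxy both apps:

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:6001;   # React front end
    }
    location /api/ {
        proxy_pass http://127.0.0.1:6002;   # Node back end
    }
}
```

With this layout the backend port never needs to be opened in the EC2 security group, since only nginx talks to it over loopback.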

How to create subdomains automatically with Nginx ingress

We're searching for a way to use subdomains in a master-minion nginx ingress implementation. We've tried a lot of different approaches but haven't got it working. The documentation example works fine (https://github.com/nginxinc/kubernetes-ingress/tree/v1.8.1/examples/mergeable-ingress-types), but that example uses paths. Is there any way to do it with subdomains, or is it not possible? I will have a different subdomain, like the WordPress structure, every time you create a new page; I would like to know w

Understanding Airflow and Nginx reverse proxy configuration on Kubernetes

I am having difficulty getting Apache Airflow to connect through a reverse proxy (Nginx) when running on Kubernetes. I installed from the Helm stable/airflow chart and enabled the Ingress resource. What I am trying to do is configure the Nginx ingress controller to route public IP requests to the airflow-web ClusterIP service and on to the airflow-web pod. I have attempted to follow the official documentation and several other issues that have popped up on StackOverflow 1, 2, and 3

nginx load balancer not spreading the requests across all servers, only to 2

I have 10 servers configured in the nginx load balancer, but only 2 are getting requests... events { worker_connections 1024;} http { upstream app { #least_conn; server test_1:3500; server test_2:3500; server test_3:3500; server test_4:3500; server test_5:3500; server test_6:3500; server test_7:3500; server test_8:3500; server test_9:3500; server test_10:3500; } server { listen 3600; location /

HTTP2 Nginx server 200 response for XHR POST Request to a Static FILE

Strange behavior with HTTP/2 when trying to send a POST request to a static file on an Nginx server: over HTTP/2 Nginx immediately returns a 200 response without receiving the whole body, while over HTTP/1.1 the whole file uploads without any issue. When I change the upload path to "upload.php" everything works fine on both HTTP/2 and 1.1. The problem occurs when we try to send a POST request to "upload.bin" or "upload", i.e. a static file with or without an extension, over HTTP/2. <body> <input type=

Nginx: how can we block the X-Forwarded-For header IP (HTTPS request) with iptables?

Basic Overview We are trying to set up rate limiting on our server. We are using Nginx as the web server and fail2ban for blocking IPs with iptables. iptables can block an IP if the request hits our Nginx server directly (in this case $remote_addr is the client IP). But if it comes via some proxy server, the proxy passes the client IP in the X-Forwarded-For header and iptables is unable to detect it (in this case $remote_addr is the proxy server's IP). Is there some other way we can block the X-Forwarded-For header

Nginx Installing an SSL certificate on two Plesk ports

I have a VPS on Plesk, with an Angular interface application on the standard port and, on another port, my Strapi API. The Angular application has an SSL certificate from Let's Encrypt, but the Strapi API doesn't have one yet. How do I bind this SSL certificate to the Strapi API, so that there is HTTPS on both ports?

(newbie) socket = new WebSocket wss://mydomain.com/.... nginx or php or ruby: how does the receiving (or server) file look?

Please note: I'm new to WebSockets, but I know PHP 8 very well and have some (limited) knowledge of nginx. I want to do a test (a getting-started kind of thing): data exchange in real time via WebSocket (like a chat) => connect from JavaScript via wss: (the server will receive, run a script and return info)... In JavaScript there is this to establish a WebSocket: let socket = new WebSocket("wss://mydomain.com/websocket"); socket.onopen = function(e) { console.log("[open] Connecti
