Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller

Hello,

I am trying to configure nginx as a reverse proxy for multiple servers on my LAN. They should be exposed on my WAN under different subdomains.

Unlike the approach described in Use Nginx as Reverse Proxy for multiple servers, I want to use UNIX sockets for the inter-process communication on my server.

Based on

  1. the above post
  2. nginx reverse ssl proxy with multiple subdomains
  3. Using Nginx as Webserver
  4. Nginx to apache reverse proxy, instruct use of unix sockets
  5. Difference between socket- and port-based connection to outer NGINX?
  6. keeping in mind the solution given in How do I configure Nginx proxy_pass Node.js HTTP server via UNIX socket?

my configuration should look something like the one below, shouldn't it? To keep the main file slim, I would like to move the location blocks out into separate files.
I can find most of this on the web, but nothing about how I can reach the servers from within the LAN. Do I need to set up a local DNS server as described in Running DNS locally for home network?

main proxy file

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    #include letsencrypt.conf;
    server_name app1subdomain.domain.eu;
    *read app1location.file*
}

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    #include letsencrypt.conf;
    server_name app2subdomain.domain.eu;
    *read app2location.file*
}

location files for proxied web servers:

location / {
    proxy_pass http://unix:/home/app1/app1.com.unix_socket;
    proxy_set_header X-Real-IP $remote_addr; #Authorization
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;
    client_max_body_size 0;
    proxy_read_timeout 36000s;
    proxy_redirect off;
}

-

location / {
    proxy_pass http://unix:/home/app2/app2.com.unix_socket;
    proxy_set_header X-Real-IP $remote_addr; #Authorization
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;
    client_max_body_size 0;
    proxy_read_timeout 36000s;
    proxy_redirect off;
}


Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller

Hello,

I added include directives for the location config files, which may make it more readable, but I still have no clue how to reach the UNIX-socket-proxied web servers in the LAN.


main proxy file

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    #include letsencrypt.conf;
    server_name app1subdomain.domain.eu;
    include app1location.conf;
}

server {
    listen 80;
    listen [::]:80;
    listen 443 ssl;
    listen [::]:443 ssl;
    #include letsencrypt.conf;
    server_name app2subdomain.domain.eu;
    include app2location.conf;
}

app1location.conf (location file for proxied web server)

location / {
    proxy_pass http://unix:/home/app1/app1.com.unix_socket;
    proxy_set_header X-Real-IP $remote_addr; #Authorization
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;
    client_max_body_size 0;
    proxy_read_timeout 36000s;
    proxy_redirect off;
}

app2location.conf (location file for proxied web server)

location / {
    proxy_pass http://unix:/home/app2/app2.com.unix_socket;
    proxy_set_header X-Real-IP $remote_addr; #Authorization
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_buffering off;
    client_max_body_size 0;
    proxy_read_timeout 36000s;
    proxy_redirect off;
}


RE: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Reinis Rozitis
> I added include directives for the location config files, which may make it more readable, but I still have no clue how to reach the UNIX-socket-proxied web servers in the LAN.

It's a bit unclear what the problem is or what you want to achieve.

Is it that nginx can't connect/proxy_pass to the socket files (and if so, what's the error)?


Also, I'm not sure how the LAN goes together with unix socket files, which are meant for local inter-process communication (IPC) inside a single server instance.
Is there a single server just with nginx and some other services (node/python etc.) which create those socket files (/home/app1; /home/app2 ..), or are you trying to proxy some other applications which reside on other devices/servers inside the LAN (to expose them to the WAN)?


rr




Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller
I've just arrived at the office :(. I will try to give you more details later today.


Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller

Hi,

it got quite late, so I'll try to keep it short and simple.

My question is the outcome of my discussion on reddit: one single user per web server (and delete default Web server user) - possible and consequences?
I have a Synology NAS that runs nginx as the default web server for all of its apps. I would like to extend that setup as follows.
There is one nginx server running as root (in my understanding it is a reverse proxy), listening on ports 80/443; this is the master nginx server. Each user account that needs a website runs its own nginx server; those are not allowed to serve ports 80/443 directly but serve a UNIX socket instead, which means the config looks something like the one shown in my previous email.
The purpose is that if the user account webapp1 is compromised, it will only affect webapp1's web server; this is repeated for all accounts/websites/whatever you want to keep separated. This approach uses some more RAM than having a single nginx instance do everything directly.

Besides the question of the optimal setup to realize this, I'm wondering how I can reach the web servers locally, within my LAN, if I call them by the NAS's IP.

Hope that makes it clearer.

Thank you

Stefan




RE: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Reinis Rozitis
> I have a Synology NAS that runs nginx as the default web server for all of its apps. I would like to extend that setup as follows.
>
> The purpose is that if the user account webapp1 is compromised, it will only affect webapp1's web server; this is repeated for all accounts/websites/whatever you want to keep separated. This approach uses some more RAM than having a single nginx instance do everything directly.
>
> Besides the question of the optimal setup to realize this


While technically you could run a per-user nginx listening on a unix socket and then put a proxy on top of those, while doable it feels a bit cumbersome (at least to me).

Usually what gets compromised is the (dynamic) backend application (php/python/perl/lua etc.), not nginx/the webserver itself; also, nginx by default doesn't run as root but as 'nobody'. root is only needed on startup for the master process to open 80/443 (ports below 1024); then all the workers switch to an unprivileged user.

One way of doing this would be, instead of launching several nginx instances, to just run the backend processes (like php-fpm, gunicorn etc.) under particular users and let nginx communicate with those via sockets.
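
As a rough sketch of that approach (the pool names, socket paths and server names here are only examples, not Synology specifics), the nginx side could look like:

server {
    listen 80;
    server_name user1.domain;
    root /home/user1/www;            # example path

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # php-fpm pool "user1" runs as system user user1 and owns this socket
        fastcgi_pass unix:/run/php-fpm/user1.sock;
    }
}

server {
    listen 80;
    server_name user2.domain;
    root /home/user2/www;            # example path

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # php-fpm pool "user2" runs as system user user2
        fastcgi_pass unix:/run/php-fpm/user2.sock;
    }
}

Each pool would be defined in php-fpm with its own user = directive, so a compromise of one site's PHP stays within that system user.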


I'm not familiar with how the Synology NAS internally separates different user processes, but it has Docker support ( https://www.synology.com/en-global/dsm/feature/docker ) and even Virtual Machine Manager, which technically would provide better user/application isolation.


> I'm wondering how I can reach the web servers locally, within my LAN, if I call them by the NAS's IP.

It depends on your network topology.

Does the Synology box have only a LAN interface? Then you either need to configure port forwarding on your router, or set up a server/device which has both LAN/WAN interfaces (DMZ) and can then expose the internal websites/resources either on the TCP level (for example via iptables) or via an HTTP proxy.

If you make a virtual machine for each user, you can then assign a separate LAN or WAN IP to each instance.


But this kind of gets out of the scope of this mailing list.  

rr


Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller

Hi Reinis,

I answered inline below.

Thanks a lot for your input

> While technically you could run a per-user nginx listening on a unix socket and then put a proxy on top of those, while doable it feels a bit cumbersome (at least to me).

How do I do it exactly, regardless of whether it is cumbersome? Be it only for informational purposes, it makes the entire conversation a bit easier. Combined with the outcome of this section it could outline all possible options (incl. pros and cons).

> Usually what gets compromised is the (dynamic) backend application (php/python/perl/lua etc.), not nginx/the webserver itself; also, nginx by default doesn't run as root but as 'nobody'. root is only needed on startup for the master process to open 80/443 (ports below 1024); then all the workers switch to an unprivileged user.

So far I assumed that the workers start the backend application; the access to PHP is configured in the server block (my references are What is the easiest way to enable PHP on nginx? and Serve PHP with PHP-FPM and NGINX). My googling tells me that the PHP process usually runs with the permissions of the web server. So I need to find a way for each web application (webapp1, webapp2, etc.) to call its PHP using a unique user account. Reading nginx + php run with different user id and changing php user to run as nginx user, it must somehow be possible. Could you share more information on how to achieve that?

> One way of doing this would be, instead of launching several nginx instances, to just run the backend processes (like php-fpm, gunicorn etc.) under particular users and let nginx communicate with those via sockets. I'm not familiar with how the Synology NAS internally separates different user processes, but it has Docker support ( https://www.synology.com/en-global/dsm/feature/docker ) and even Virtual Machine Manager, which technically would provide better user/application isolation.

Unfortunately, my NAS does not support it.

> I'm wondering how I can reach the web servers locally, within my LAN, if I call them by the NAS's IP.
> It depends on your network topology. Does the Synology box have only a LAN interface? Then you either need to configure port forwarding on your router, or set up a server/device which has both LAN/WAN interfaces (DMZ) and can then expose the internal websites/resources either on the TCP level (for example via iptables) or via an HTTP proxy.

The NAS has only one LAN interface. You suggest a more complex solution than just simple NAT port forwarding, as explained in Using router and internal LAN port forwarding device - Advice please :). I have a simple router, the Zyxel NBG6616. It seems that it supports DMZ, and if by iptables you mean a static DHCP table, that is supported as well, but it doesn't look good for the HTTP proxy. I still don't understand how to forward to UNIX sockets. Do I need custom port entries in the proxy part, like NASIP:80001 -> Webapp1ViaUNIXSocket, NASIP:80002 -> Webapp2ViaUNIXSocket?

I could run a DNS server on the NAS if that simplifies it.

> If you make a virtual machine for each user, you can then assign a separate LAN or WAN IP to each instance.

VMs aren't supported, so that isn't an option.

> But this kind of gets out of the scope of this mailing list.
>
> rr



RE: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Reinis Rozitis
> How do I do it exactly, regardless of whether it is cumbersome?

Well you configure each individual nginx to listen ( https://nginx.org/en/docs/http/ngx_http_core_module.html#listen ) on a unix socket:

Config on nginx1:
..
events { }
http {
  server {
     listen unix:/some/path/user1.sock;
     ..
 }
}

Config on nginx2:
..
server {
    listen unix:/some/path/user2.sock;
   ...
}


And then on the main server you configure the per-user virtualhosts to be proxied to particular socket:

server {
        listen 80;
        server_name     user1.domain;
        location / {
                proxy_pass http://unix:/some/path/user1.sock;
        }
}
server {
        listen 80;
        server_name     user2.domain;
        location / {
                proxy_pass http://unix:/some/path/user2.sock;
        }
}


(obviously it's just a mockup and you need to add everything else like http {} blocks, root paths, SSL certificates (if available) etc)


> So far I assumed that the workers start the backend application; the access to PHP is configured in the server block (my references are What is the easiest way to enable PHP on nginx? and Serve PHP with PHP-FPM and NGINX). My googling tells me that the PHP process usually runs with the permissions of the web server.

Not exactly.

php-fpm, which is the typical way of running PHP under nginx, and nginx are separate processes/daemons, each with their own configuration; they communicate via FastCGI (http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html) over a TCP or unix socket, and both can run under different system users (php-fpm can even manage multiple pools, each under its own user and with different settings).

The guide you linked on linode.com isn't fully correct: "The listen.owner and listen.group variables are set to www-data by default, but they need to match the user and group NGINX is running as."

The users don't need to match, but the nginx user needs read/write permissions on the socket file (setting the same user just makes the guide simpler and less error prone).
You can always put the nginx and php-fpm users in a group and make the socket file group-writable (via listen.mode = 0660 in php-fpm.conf).
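
For example, a pool definition along these lines (the pool name, paths and group name are just illustrative) runs its processes as user1 while leaving the socket group-writable for whichever group the nginx workers run under:

[user1]
user = user1
group = user1
listen = /run/php-fpm/user1.sock
; let the group that the nginx workers belong to read/write the socket
listen.owner = user1
listen.group = www-data
listen.mode = 0660
pm = ondemand
pm.max_children = 5

The matching nginx server block would then point fastcgi_pass at unix:/run/php-fpm/user1.sock.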


> Unfortunately, my NAS does not support it.

While the Synologies are Linux-based, maybe running somewhat complicated setups (user/app isolation) and exposing them to the WAN is not the best option.

Also, it defeats the whole idea of DSM being a user-friendly centralized GUI tool. A regular PC/server with some native Linux distribution (Ubuntu, Debian, Fedora, openSUSE etc.) might be a better choice (and imho easier to experiment on), and you can always attach the NAS to the Linux box (via NFS, samba/cifs, WebDAV etc.).

rr


Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller

Thanks, that gets me closer to the goal :).

Let's try to summarize it (and add some more info):

  1. proxy and unix socket
    This allows permission management via user accounts, but it can get bulky as soon as you set up user accounts for permission management of each backend application, as those pose a higher risk, as indicated in the previous email.
    For the servers you make use of, that is all put in the same http{} block.
    Would there be any advantage in using separate http{} blocks, as discussed a while ago in Disallowing multiple http {} blocks in nginx.conf?

  2. harden nginx / php communication
    php-fpm is the typical tool to communicate with one or more PHP interpreters. Nginx just starts php-fpm, which in turn takes care of the PHP script interpretation by means of the interpreter processes. The interpreter processes run within a so-called pool (of processes).
    The good thing is that you can set up multiple pools, each with its own configuration, running as a different user, which allows hardening the PHP script execution.
    How do I tell the proxied servers or php-fpm to use a certain pool for a certain server?

  3. reach proxied servers within LAN
    What you originally described refers to operations described in
    1. pfSense - Reach webserver by public IP from within LAN
    2. pfSense - Can't reach internal web server / NAT Reflection, Split DNS
    3. pfSense - How to Nat a web server
    but nothing mentioned there or by you is supported by my router; at least I can declare a fixed IP for the NAS and set the NAS as primary DNS server to do:
    1. Running DNS locally for home network
    2. How To Configure BIND as a Private Network DNS Server
    So the nginx-related question: do I need to add a listener on NAS_IP:LANPort to reach the proxied web servers within the LAN?

  4. (new) how to debug
    In /etc/nginx/nginx.conf there is:
     access_log syslog:server=unix:/dev/log,facility=local7,tag=nginx_access,nohostname main;
     error_log   syslog:server=unix:/dev/log,facility=local7,tag=nginx_error,nohostname error;
    so I assume debug logging is available, although $ nginx -V 2>&1 | grep -- '--with-debug' does not return anything.
    How can I best debug points 1 to 3?

  5. Syno setup is complicated / get new hardware that allows running Linux and Docker
    I know, but I'm still hoping that there will be an ARM processor for a home server.
    I'll get new hardware in the long term, but currently I'm trying to understand the Syno setup; at least I found, most likely, all relevant locations for configuring nginx and php:

    nginx

     /etc/nginx
       /etc/nginx/app.d           symlink to /var/tmp/nginx/app.d
       /etc/nginx/conf.d          symlink to /etc/nginx/conf.d
       /etc/nginx/sites-enabled   symlink to /etc/nginx/sites-enabled
       nginx.conf                 generated by nginx-conf-generator.sh
       ...

     /etc.defaults/nginx

     /etc/init                    symlink to /usr/share/init (pre-start script)
       nginx.conf

     /usr/local/etc/nginx
       /usr/local/etc/nginx/conf.d          empty
       /usr/local/etc/nginx/sites-enabled   empty

     /usr/share/nginx
       /usr/share/nginx/html
         50x.html
         index.html

     /usr/syno/etc/rc.sysv
       nginx-conf-generator.sh

     /usr/syno/share/nginx
       /usr/syno/share/nginx/conf.d
         location configs
       *.mustache files, probably used by nginx-conf-generator.sh

     /var/lib/nginx               symlink to /var/services/tmp/nginx

     /var/tmp/nginx
       /var/tmp/nginx/app
       /var/tmp/nginx/app.d
       /var/tmp/nginx/conf.d      empty
       /var/tmp/nginx/trusted_proxy


    php (php5 is used by phpMyAdmin)

     /etc/php
       php.ini    (extension_dir = "/usr/lib/php/modules" & sendmail_path = /usr/bin/ssmtp -t)

     /etc.defaults/php
       php.ini    (extension_dir = "/usr/lib/php/modules" & sendmail_path = /usr/bin/ssmtp -t)

     /etc/init                    symlink to /usr/share/init (pre-start script)
       php_timezone_update.conf
       pkgctl-PHP5.6.conf
       pkgctl-PHP7.0.conf
       pkg-php56-fpm.conf
       pkg-php70-fpm.conf
       pkg-WebStation-php56.conf
       pkg-WebStation-php70.conf
       ...

     /lib                         symlink to /usr/lib

     /run/php-fpm
       php*-fpm*

     /usr/lib/php
       /usr/lib/php/modules       (same modules as listed in /etc/php/php.ini)
       /usr/lib/php/phpmailer
       /usr/lib/php/phpoffice

     /usr/local/bin
       /usr/local/bin/feasibilitycheck
       ...
       php70-cgi                  symlink to /var/packages/PHP7.0/target/usr/local/bin/php70-cgi
       php70-fpm                  symlink to /var/packages/PHP7.0/target/usr/local/bin/php70-fpm
       ...

     /usr/local/etc
       /usr/local/etc/php56
         /usr/local/etc/php56/conf.d
         /usr/local/etc/php56/fpm.d
         /usr/local/etc/php56/freetds
         php.ini
         php-fpm.conf             symlink to /volume1/@appstore/PHP5.6//usr/local/etc/php56/php-fpm.conf
       /usr/local/etc/php70
         /usr/local/etc/php70/conf.d
         /usr/local/etc/php70/fpm.d
         /usr/local/etc/php70/freetds   symlink to /volume1/@appstore/PHP7.0//usr/local/etc/php70/freetds
         php.ini
         php-fpm.conf             symlink to /volume1/@appstore/PHP7.0//usr/local/etc/php70/php-fpm.conf

     /usr/local/lib
       /usr/local/lib/php56
         /usr/local/lib/php56/modules   empty
       /usr/local/lib/php70
         /usr/local/lib/php70/modules   empty

     /var/packages
       /var/packages/PHP5.6
         /var/packages/PHP5.6/conf
         /var/packages/PHP5.6/etc       symlink to /usr/syno/etc/packages/PHP5.6
         /var/packages/PHP5.6/scripts
         /var/packages/PHP5.6/target    symlink to /volume1/@appstore/PHP5.6
       /var/packages/PHP7.0
         /var/packages/PHP7.0/conf
         /var/packages/PHP7.0/etc       symlink to /usr/syno/etc/packages/PHP7.0
         /var/packages/PHP7.0/scripts
         /var/packages/PHP7.0/target    symlink to /volume1/@appstore/PHP7.0


    php managed by WebStation (Synology's web site hosting package)

     /var/packages/WebStation/target/misc
       /var/packages/WebStation/target/misc/WebStation-php56
         /var/packages/WebStation/target/misc/WebStation-php56/conf.d
         extension.ini
       ...
       php56.ini
       php56_fpm.conf
       php70.ini
       php70_fpm.conf
       ...




RE: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Reinis Rozitis
> This allows permission management via user accounts, but it can get bulky as soon as you set up user accounts for permission management of each backend application, as those pose a higher risk, as indicated in the previous email.

Well you asked how to proxy unix sockets...


> that is all put in the same http{} block.

If you put everything (both the user unix sockets and also the parent proxy server) under the same http{} block then it makes no sense, since a single instance of nginx always runs under the same user (and that beats the whole user/app isolation).
In that case it's simpler to just make virtualhosts without the sockets and without the proxy.


> Nginx just starts php-fpm

No.
Depending on distribution there might be some init and/or systemd scripts which start both daemons but on its own nginx doesn’t do that.



> 4. (new) how to debug
> In /etc/nginx/nginx.conf  as there is:
> access_log syslog:server=unix:/dev/log,facility=local7,tag=nginx_access,nohostname main;
> error_log   syslog:server=unix:/dev/log,facility=local7,tag=nginx_error,nohostname error;
> so I assume Debug Logging is available although $ nginx -V 2>&1 | grep -- '--with-debug' does not return anything.

It means that nginx is logging to syslog (which then usually writes somewhere under /var/log). You can also point both logs directly at files.
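
For example (the file paths are chosen arbitrarily), both directives could be switched from syslog to plain files like this:

access_log /var/log/nginx/access.log main;   # "main" refers to the log_format defined in the http{} block
error_log  /var/log/nginx/error.log warn;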

--with-debug is only present when nginx is compiled in debug mode to log internal things and provide more detailed information in case of bugs. I doubt it will give any benefit in this case.


In general you are mixing a lot of things together, like asking about a BSD firewall, NATs, Bind and then trying to implement it on a specific linux-based ARM blackbox.
I would suggest to start experimenting/researching different technologies one by one rather than trying to achieve everything at once.


rr



Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller

>> This allows permission management via user accounts, but it can get bulky as soon as you set up user accounts for permission management of each backend application, as those pose a higher risk, as indicated in the previous email.
> Well you asked how to proxy unix sockets...

And that is the explanation of why one could/should do it this way instead of via TCP sockets, or did I miss something?


> If you put everything (both the user unix sockets and also the parent proxy server) under the same http{} block

So does it all go in the same nginx.conf but in different http{} blocks, or do I need one nginx.conf each, one for the user unix sockets and one for the parent proxy server?


> In that case it's simpler to just make virtualhosts without the sockets and without the proxy.

server {
listen ...
}

A server block is the counterpart of a virtual host in Apache, isn't it?
You're suggesting setting up virtual hosts that listen on a port to which traffic is forwarded from the router. I don't want to have multiple ports open at the router, so I would like to stick with UNIX sockets and the proxy.


> In general you are mixing a lot of things together,

No worries, those things get solved elsewhere. I mentioned them as they interfere with the proxy/virtualhost setup.




RE: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Reinis Rozitis
> So does it all go in the same nginx.conf but in different http{} blocks, or do I need one nginx.conf each, one for the user unix sockets and one for the parent proxy server?

A typical nginx configuration has only one http {} block.

You can look at some examples:
https://nginx.org/en/docs/http/request_processing.html
https://nginx.org/en/docs/http/server_names.html 
https://www.nginx.com/resources/wiki/start/topics/examples/server_blocks/


> You're suggesting setting up virtual hosts that listen on a port to which traffic is forwarded from the router. I don't want to have multiple ports open at the router, so I would like to stick with UNIX sockets and the proxy.

Unless by "router" you mean the same Synology box, you can't proxy unix sockets over TCP; they work only inside a single server/machine.

Also, you don't need to forward multiple ports, just 80 and 443 (if ssl), and have name-based virtualhosts.
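
As a sketch of those name-based virtualhosts on the single forwarded HTTPS port (host names and certificate paths are placeholders):

server {
    listen 443 ssl;
    server_name user1.domain;
    ssl_certificate     /etc/ssl/user1.domain.crt;
    ssl_certificate_key /etc/ssl/user1.domain.key;
    location / {
        proxy_pass http://unix:/some/path/user1.sock;
    }
}

# ...and a second server{} block for user2.domain pointing at user2.sock, and so on.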

rr


Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller

thank you again for your quick answer, but I'm getting lost.


> A typical nginx configuration has only one http {} block.
>
> You can look at some examples:

I'm aware of those and other examples. What confuses me is that you say that, but in the email before that one you also said:

> If you put everything (both the user unix sockets and also the parent proxy server) under the same http{} block then it makes no sense, since a single instance of nginx always runs under the same user (and that beats the whole user/app isolation).

So how must the setup look to get the whole user/app isolation?

nginx.pid  - master process
\_nginx.conf
  \_http{}  - master server
  \_http{}  - proxied/app servers

or

nginx.pid  - master process
\_nginx1.conf - master server
  \_http{}   - reverse proxy server
\_nginx2.conf - proxied servers
  \_http{}   - proxied/app servers

or?

If it is only one nginx.pid, how do I need to configure it to run nginx1.conf and nginx2.conf?



Unless by "router" you mean the same Synology box you can't proxy unix sockets over TCP, they work only inside a single server/machine.
I mean my fibre router and I'm aware that unix sockets  work only inside a single server/machine. I'll use it only to redirect to the DNS Server what will run on the Synology box


 
Also you don't need to forward multiple ports, just 80 and 443 (if ssl) and have name-based virtualhosts.

you got me, I have mistaken that, it got to late last night



Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller
good evening,

in the past we were mailing each other on a daily basis, but now it has gone
silent. Is everything alright?


Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller

Hello,

most of my questions are answered:

  1. local DNS server
    use the router's DHCP server and run a DNS server on the NAS; all unresolved queries are resolved by means of the router WAN0's DNS settings
  2. debug logging
  3. php isolation
    create a pool per web page and run them as separate users by creating a php.conf per pool
  4. nginx
    this is the only one remaining. How can I isolate the servers?

Thanks a lot

Stefan


Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller

Hello Reinis and others,

I still don't get it, as the information is rather inconsistent. I find plenty of information on running separate PHP-FPM pools with a unique user account for each, but I haven't found anything similar for nginx.

How do I make sure that the entire server is not put at risk if one web app/virtual host is compromised? If I understand the nginx worker processes correctly, a new worker process is started for each .conf file read by the nginx master process by means of include.

If I want to run each virtual host under a unique (and limited) user account to avoid cross-server hacks, the way to get there is to put the .conf of each virtual host in the home folder of its dedicated virtual host user. In addition I put a unique user directive (the virtual host user) in each virtual host's .conf file. Is that assumption correct?

thank you

Stefan





Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Francis Daly
On Fri, Oct 12, 2018 at 11:59:48PM +0200, Stefan Müller wrote:

Hi there,

I've read over this mail thread, and I confess that I'm quite confused
as to what your remaining specific nginx question is.

If it's not too awkward, could you repeat just exactly what you now wish
to know?

It may make it easier for others to give a useful direct response.

> 4. *nginx*
>    this is the only one remaining. How can I isolate the servers?

I'm not sure what you mean by "isolate the servers", that was not
already answered.

("already answered" was approximately: for each server, run one nginx as
user this-server-user, listening on a unix domain socket. Then run one
nginx initially as user root, which does proxy_pass to the appropriate
unix-domain-socket-server.)

Have I missed something; or are you asking how to do it; or are you
asking why to do it?

Thanks,

        f
--
Francis Daly        [hidden email]

Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller

Good morning Francis,

thank you for coming back on this.


In the very beginning Reinis wrote:


Well you configure each individual nginx to listen ( https://nginx.org/en/docs/http/ngx_http_core_module.html#listen ) on a unix socket:

Config on nginx1:
..
events { }
http {
  server {
     listen unix:/some/path/user1.sock;
     ..
 } 
}

Config on nginx2:
..
server {
    listen unix:/some/path/user2.sock;
   ...
}


And then on the main server you configure the per-user virtualhosts to be proxied to particular socket:

server {
	listen 80;
	server_name     user1.domain;
	location / {
		proxy_pass http://unix:/some/path/user1.sock;
	}
}
server {
	listen 80;
	server_name     user2.domain;
	location / {
		proxy_pass http://unix:/some/path/user2.sock;
	}
}


so I asked


that is all put in the same http{} block.


and he answered


If you put everything (both the user unix sockets and also the parent proxy server) under the same http{} block then it makes no sense since a single instance of nginx always runs under the same user (and beats the whole user/app isolation).

So I wonder whether I need to work with multiple .conf files, or shall I put multiple http{} blocks in the general nginx configuration, /etc/nginx/nginx.conf? I assume that Reinis indirectly told me to run multiple instances of nginx, but I haven't yet understood how. There is the master process, presumably taking care of the proxy server, but how do I start the instance (if I need to work with instances) per virtual host?


Stefan




Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Francis Daly
On Tue, Oct 16, 2018 at 09:20:33AM +0200, Stefan Müller wrote:

Hi there,

> So I wonder whether I need to work with multiple .conf files, or shall I put
> multiple http{} blocks in the general nginx configuration,
> /etc/nginx/nginx.conf? I assume that Reinis indirectly told me to run
> multiple instances of nginx, but I haven't yet understood how. There is the
> master process, presumably taking care of the proxy server, but how do I
> start the instance (if I need to work with instances) per /virtual host/?

In this design, you run multiple instances of nginx. That is: multiple
individual system processes that are totally independent of each other.

So: nginx-user1.conf includes something like

  http {
    server {
      listen unix:/some/path/user1.sock;
    }
  }

and refers to log files and tmp files and a pid file that user1 can write,
and to a document root that user1 can read (if necessary), and you run
the command "/usr/sbin/nginx -c nginx-user1.conf" as system user user1.

And then you do the same for user2, user3, etc.

And then you have one other "nginx-main.conf" which includes "listen 443
ssl" and includes proxy_pass to the individual unix:/some/path/userN.sock
"backend" servers; and you run the command "/usr/sbin/nginx -c
nginx-main.conf" as user root.


Note: the actual file names involved are irrelevant. All that matters
is that when the nginx binary is run with a "-c" option, it can read
the named file which contains the config that this instance will use.

If the nginx process starts as user root, it will change itself to run as
the other configured user-id as soon as it can; if it starts as non-root,
it will not. In the above design, all of the user-specific backend nginx
servers run as non-root.


And - the term "virtual host" usually refers to different server{} blocks
within the configuration of a single nginx instance. You (generally) don't
care about those -- the nginx binary will start the appropriate child
system-level processes to deal with the configuration that it was given.

If you are running multiple nginx system-level processes, each one has
its own idea of the virtual hosts from its configuration. With the above
design, all of the "user" nginx instances have just one server{} block,
while the "root" nginx instance probably has multiple server{} blocks.


Good luck with it,

        f
--
Francis Daly        [hidden email]

Re: Nginx as Reverse Proxy for multiple servers bound to the proxy using UNIX sockets - how to reach them in the LAN

Stefan Mueller

Hello Francis,
thank you for the liberating response :).

Unfortunately that raises some questions:

  1. documentation
    Is there any additional documentation for the -c switch? I find only:
    1. http://nginx.org/en/docs/switches.html
    2. https://stackoverflow.com/questions/19910042/locate-the-nginx-conf-file-my-nginx-is-actually-using
    but neither of them says that it will start an independent instance of nginx.

  2. command line
    I assume that the command line parameters refer to a single-instance environment. How do I use the command line parameters for a specific instance? Is it something like nginx -V "pid /var/run/nginx-user1.pid"?

  3. root and non-root
    Only the master / proxy server instance needs root access, in order to bind to ports <1024; it then changes its user-id to the one defined in the user directive in the main context of its .conf file.
    The other / backend instances don't have to be started as root, as they don't need to bind to ports; they communicate via UNIX sockets, so all permissions are managed by the user account management.
    That is the same as what you said, isn't it?

  4. all in all there are two layers of isolation
    1. dynamic content providers such as PHP
      Each "virtual host" / server{} block has its own PHP pool, so the user of pool server{}user1 cannot see the pool of server{}user2. If user1 gets hacked, the hacker won't get immediate access to user2 or the nginx master process, correct?
    2. independent instances of nginx
      In case the master process is breached for whatever reason, the hacker cannot see the other servers, as long as he doesn't get root privileges on the machine and the same exploit doesn't exist in the other servers, correct?

Stefan
