Search Results

Search found 28222 results on 1129 pages for 'machine config'.

  • How to prevent nginx from locking files on a mounted samba partition in CentOS 6

    - by Bruce Kirkpatrick
    I'm using nginx 1.3.8 inside a CentOS 6.3 VirtualBox 4.2.4 virtual machine. The system is running the latest software available via yum update. The host OS is Windows 7. The site files nginx is serving are on a mounted samba partition, which is a folder on the host Windows system. That is, inside Linux, nginx paths refer to /home/vhosts, and this is mounted from D:\vhosts\ on Windows. The samba partition is mounted as root with 777 privileges. /etc/fstab looks like this, but with the real IP, username, and password: //hostip/vhosts /home/vhosts cifs username=username,password=SECRETPASSWORD,uid=root,gid=root,file_mode=0777,dir_mode=0777,rw,_netdev 0 0 In other words, Linux/nginx reads from the Windows share, not the opposite. In /etc/samba/smb.conf, I have tried to disable all samba locking features, but it seems to have no effect even after rebooting the virtual machine: locking=no share modes=no oplocks = no level2 oplocks = no kernel oplocks =no I'm receiving "Access is denied" errors in Windows or Linux when attempting to overwrite a JavaScript file in Windows that has been accessed at least once by nginx. If I run "service nginx reload", the lock is removed and I'm able to save the file. That's why I think it is nginx causing the lock. The same problem occurs with directories; however, that may be a different issue not related to the use of samba. I'm using samba so that I can manage the source code outside of the virtual machine. Also note that after I run "service nginx reload", the file I'm editing is actually deleted automatically from the Windows host. SOLVED: I just reviewed my nginx.conf file. It appears the "open_file_cache" feature is what is causing the lock and the deleted files. When I set this option to open_file_cache off;, my problem is resolved. I will repost this as the answer when the site allows me to do so.
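
    A sketch of the change the poster describes: disabling nginx's open-file cache releases the descriptors nginx would otherwise keep open on the CIFS mount. The cache can also simply be shortened rather than disabled; the directives below are standard nginx, only the choice of values is illustrative.

        # nginx.conf, http {} block: the fix the poster arrived at
        open_file_cache off;

        # or, if some caching is still wanted, keep it short so CIFS locks are released quickly
        # open_file_cache max=1000 inactive=5s;
        # open_file_cache_valid 10s;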

    Read the article

  • Windows 7 Users unable to add Windows 2003 server printers

    - by TravBrack
    Hi there. I just rolled out a few Windows 7 x64 machines and ran into this issue where non-admin users are unable to add printers hosted on a Windows 2003 server. It works fine on a 2008 server. The issue appears to be with the Point and Print system. A user will attempt to add the printer, a prompt will come up requiring the user to elevate privileges in order to install a driver, and it will fail citing 'access denied'. I found the group policy setting Point and Print Restrictions: when the policy setting is disabled, Windows Vista computers will not show a warning or an elevated command prompt when users create a printer connection to any server using Point and Print. So I disabled it and verified that the policy was being picked up using RSoP, but it still does the same thing. I've also tried the following: recreating the printers using newer drivers; adding the printer using 32-bit drivers on the 2003 machine, then adding the 64-bit drivers on a Windows 7 machine; and adding the printer from a Windows 7 machine using Print Management. None of these things work. The security settings are no different from those of the working printers. Help?

    Read the article

  • Make server unavailable gracefully using Powershell in ARR

    - by Carl Bergquist
    We are using ARR as a reverse proxy and I would like to be able to make a server unavailable for various reasons. How can this be done using PowerShell? Edit 1: I found this tutorial, http://blogs.iis.net/anilr/archive/2009/11/09/using-arr-config-extensibility-to-gracefully-stop-server.aspx, which uses JScript, but I'm not able to translate it to PowerShell. Edit 2: Using Set-WebConfigurationProperty in the WebAdministration module I'm able to change settings for a server. I found SetState in %windir%\system32\inetsrv\config\schema\arr_schema.xml but I don't know how to invoke that method.
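
    A hedged PowerShell sketch of the same approach the JScript tutorial takes, driven through Microsoft.Web.Administration instead of AHADMIN. The farm name, server address, and the meaning of the state value are assumptions to verify against your own arr_schema.xml; only the general pattern (look up the server element, create an instance of its schema-defined SetState method, execute it) comes from the linked article.

        # Sketch only: "myFarm" and "web01" are placeholders, and the state value 1
        # (assumed to mean "drain") should be checked against arr_schema.xml.
        Add-Type -Path "$env:windir\System32\inetsrv\Microsoft.Web.Administration.dll"
        $mgr    = New-Object Microsoft.Web.Administration.ServerManager
        $farms  = $mgr.GetApplicationHostConfiguration().GetSection("webFarms").GetCollection()
        $farm   = $farms | Where-Object { $_.GetAttributeValue("name") -eq "myFarm" }
        $server = $farm.GetCollection() | Where-Object { $_.GetAttributeValue("address") -eq "web01" }
        $arr    = $server.GetChildElement("applicationRequestRouting")
        $method = $arr.Methods["SetState"]
        $call   = $method.CreateInstance()
        $call.Input.SetAttributeValue("newState", 1)
        $call.Execute()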

    Read the article

  • VMware vSwitches and Dell PowerConnect BPDU guard

    - by dunxd
    I am using two Dell PowerConnect 6248 switches to connect a VMware host vSwitch. A discussion of configuring Cisco switches for use with VMware advises setting the physical ports connected to a vSwitch to bpduguard and portfast. However, Dell switches don't have a bpduguard setting for individual ports; I can switch it off globally for all portfast ports, but I don't think I want to do that. Should I: disable STP on the vSwitch-connected ports? Leave STP on and enable portfast on the ports, and forget about bpduguard? Disable bpduguard on all portfast ports via global config? Do something else? See also: VMware vSwitches and spanning tree

    Read the article

  • RRAS VPN on Windows 2k3 AD, can access RRAS server only

    - by nopsax
    I'm setting up a test lab and here is the current configuration: 192.168.86.201 - a Windows 2003 machine acting as PDC with AD/DNS/DHCP/WINS; 192.168.86.62 - a Windows 2003 machine that is the RRAS server with IAS, and also a file/print server; 192.168.86.6 - gateway/router to the internet; 192.168.86.21 - a Windows XP workstation. Everything works on the internal network: file, print, AD, etc. Whenever a user connects via VPN to the RRAS server remotely using their domain credentials, they are assigned an IP address from the 192.168.86.201 machine along with the WINS server address, etc. The VPN user can then ping/access resources on the RRAS server, but cannot ping/access resources on any other machine by name or IP. However, if I ping by name, it does resolve to the correct IP address; there are just no replies. I did notice that on the RRAS server the 'internal' interface gets an IP address of 192.168.86.75 when a remote user connects, and the remote user is assigned, for example, 192.168.86.71. The RRAS server responds on both the .62 and .75 IP addresses. The client also unchecks the 'use remote default gateway' option. Also, I tried connecting a laptop to the physical network, joining the domain, then going remote and dialing the connection before domain login, and everything seems to work, e.g. browseable shares via Network Neighborhood. But I can't really join the domain remotely if I cannot access any other resources. I really need to monitor traffic to see what's happening to those packets, but won't be able to until this weekend. Any help is appreciated; I will provide whatever configurations are needed.

    Read the article

  • Intel Rapid Storage Technology (pre-OS) driver installation

    - by Nero theZero
    My desktop machine is built on a Gigabyte GA-Z87-UD3H, and Gigabyte provides the latest driver for Intel Rapid Storage Technology (IRST), which I installed after installing the OS. The same goes for my Lenovo ThinkPad T420. On both machines, checking the controller device under the IDE ATA/ATAPI Controllers section in Device Manager, I see the driver has been updated to the latest version. I set the SATA controller to AHCI from the BIOS. On the desktop machine I have one WD 2TB Black and one WD 3TB Green. I don't use RAID, and have no plans to in the near future, but according to Intel, IRST improves performance in single-disk scenarios too. Now I have the following questions: What does the pre-OS-install IRST driver actually provide that the post-OS driver I installed does not? There must be some difference, otherwise there wouldn't be a pre-OS version of the driver, right? In the pre-OS procedure (loading the drivers at OS-installation time), after successfully completing the OS installation, do I still need that post-OS driver? After installing from that one I got a quick-launch icon that runs the IRST configuration application; where do I get that after installing only the pre-OS driver? As it is "pre-OS", when I load it at OS-installation time, does it update anything at the BIOS level or anywhere other than the HDD? I ask because I'm going to dual-boot Windows 7 with Windows 8.1, and after installing Windows 7, when I install Windows 8.1 and load the IRST driver for it, is there any chance of any "overwriting" or OS incompatibility? In short, is there anything specific to follow while installing the second OS?

    Read the article

  • Setting up fail2ban to ban failed phpMyAdmin login attempts

    - by Michael Robinson
    We've been using fail2ban to block failed SSH attempts. I would like to set up the same thing for phpMyAdmin as well. As phpMyAdmin doesn't log authentication attempts to a file (that I know of), I'm unsure how best to go about this. Does a plugin or config setting exist that makes phpMyAdmin log authentication attempts to a file? Or is there some other place I should look for such an activity log? Ideally I will be able to find a solution that involves modifying the fail2ban config only, as I have to configure fail2ban with the same options on multiple servers, and would prefer not to also modify the various phpMyAdmin installations on those servers.
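
    A hedged sketch of one way this is commonly wired up: when PHP's syslog facility is available, phpMyAdmin reports denied logins to the system log, and fail2ban can match those lines with a small custom filter. The log path and the exact wording of the failure line are assumptions; check what your phpMyAdmin version actually writes before relying on the regex.

        # /etc/fail2ban/filter.d/phpmyadmin.conf (hypothetical filter; adjust the
        # failregex to whatever line your phpMyAdmin actually writes on a denied login)
        [Definition]
        failregex = phpMyAdmin.*user denied: .* from <HOST>
        ignoreregex =

        # /etc/fail2ban/jail.local
        [phpmyadmin]
        enabled  = true
        port     = http,https
        filter   = phpmyadmin
        # /var/log/secure on RHEL-type systems, /var/log/auth.log on Debian/Ubuntu
        logpath  = /var/log/auth.log
        maxretry = 5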

    Read the article

  • How can I exceed the 60% Memory Limit of IIS7

    - by evilknot
    Pardon if this is more stackoverflow vs. serverfault. It seems to be on the border. We have an application that caches a large amount of product data for an e-commerce application using ASP.NET caching. This is a dictionary object with 65K elements, and our calculations put the object's size at ~10GB. Problem: The amount of memory the object consumes seems to be far in excess of our 10GB calculation. BIGGEST CONCERN: We can't seem to use over 60% of the 32GB in the server. What we've tried so far: In machine.config/system.web (sf doesn't allow the tags, pardon the formatting): processModel autoConfig="true" memoryLimit="80" In web.config/system.web/caching/cache (sf doesn't allow the tags, pardon the formatting): privateBytesLimit = "20000000000" (and 0, the default of course) percentagePhysicalMemoryUsedLimit = "90" Environment: Windows 2008R2 x64 32GB RAM IIS7 Nothing seems to allow us to exceed the 60% value. See attached screenshot of taskman.
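
    Since Server Fault stripped the angle brackets, here is a reconstruction (a sketch with the values exactly as the poster describes them; the element and attribute names are standard ASP.NET configuration) of the two settings blocks being referred to:

        <!-- machine.config, inside <system.web> -->
        <processModel autoConfig="true" memoryLimit="80" />

        <!-- web.config, inside <system.web> -->
        <caching>
          <cache privateBytesLimit="20000000000"
                 percentagePhysicalMemoryUsedLimit="90" />
        </caching>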

    Read the article

  • Steps after installing vCenter Server?

    - by goober
    I'm working with two new ESX servers that I'm configuring and a new Server 2008 R2 machine that I'm using for vCenter. I took the following steps: installed the hypervisor on the 2 ESX machines; checked their setup/connectivity (appears to be fine; can ping, etc.); installed vCenter Server on the Win2k8R2 box, which included the install of a SQL Express database (we're a small shop), and FYI I changed some of the ports (443 to 8443, 80 to 8080, etc.); and installed vCenter Web Client Server on the Win2k8R2 box. Problems: my vSphere Client on my desktop fails to connect. Part of this is that it asks me for a username and password, but I don't recall specifying one when I set up the install. I receive the error "vSphere Client could not connect to [machinename]. An unknown connection error occurred. (The request failed because of a connection failure. (Unable to connect to the remote server))" I have also tried to use local machine admin credentials, including the format machinename\localuseracct, and my domain credentials, which are an admin for that box. I have also checked that the service is running. I also tried to connect via a vSphere Client installed locally on the server; it translates "localhost" to the correct name but gives the same error. I cannot register the vCenter server from the vCenter Web Client Server. I'm not sure if this is necessary, as they're both on the same machine, but it seems like the logical next step; I receive a "failed to connect" error in this case as well. FYI, both the vCenter Server and the vCenter Web Client Server are installed on the same Win2k8R2 server. What am I missing here? What is the best way to test in this case?

    Read the article

  • All traffic is passed through OpenVPN although not requested

    - by BFH
    I have a bash script on a Ubuntu box which searches for the fastest openvpn server, connects, and binds one program to the tun0 interface. Unfortunately, all traffic is being passed through the VPN. Does anybody know what's going on? The relevant line follows: openvpn --daemon --config $cfile --auth-user-pass ipvanish.pass --status openvpn-status.log There don't seem to be any entries in iptables when I enter sudo iptables --list. The config files look like this: client dev tun proto tcp remote nyc-a04.ipvanish.com 443 resolv-retry infinite nobind persist-key persist-tun persist-remote-ip ca ca.ipvanish.com.crt tls-remote nyc-a04.ipvanish.com auth-user-pass comp-lzo verb 3 auth SHA256 cipher AES-256-CBC keysize 256 tls-cipher DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:AES256-SHA There is nothing in there that would direct everything through tun0, so maybe it's a new vagary of Ubuntu? I don't remember this happening in the past.
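
    If the goal is to keep the tunnel up without letting it capture the default route, one commonly used client-side option (standard OpenVPN, though whether it fits this provider's pushed configuration is an assumption) is route-nopull, which accepts the connection but ignores the routes the server pushes:

        # appended to the client config file, or on the command line:
        #   openvpn --daemon --config $cfile --route-nopull ...
        route-nopull
        # the program bound to tun0 then needs its own route/rule (or SO_BINDTODEVICE)
        # to actually send its traffic through the tunnel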

    Read the article

  • second IP address on the same interface but on a different subnet

    - by fptstl
    Is it possible in CentOS 5.7 64-bit to have a second IP address on one interface (e.g. eth0) - an alias interface configuration - in a different subnet? Here is the original config for eth0 more etc/sysconfig/network-scripts/ifcfg-eth0 # Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express DEVICE=eth0 BOOTPROTO=static BROADCAST=192.168.91.255 HWADDR=00:1D:09:FE:DA:04 IPADDR=192.168.91.250 NETMASK=255.255.255.0 NETWORK=192.168.91.0 ONBOOT=yes And here is the config for eth0:0 more etc/sysconfig/network-scripts/ifcfg-eth0:0 # Broadcom Corporation NetXtreme BCM5721 Gigabit Ethernet PCI Express DEVICE=eth0:0 BOOTPROTO=static BROADCAST=10.10.191.255 DNS1=10.10.15.161 DNS2=10.10.18.36 GATEWAY=10.10.191.254 HWADDR=00:1D:09:FE:DA:04 IPADDR=10.10.191.210 NETMASK=255.255.255.0 NETWORK=10.39.191.0 ONPARENT=yes How should the resolv.conf file change, since there are now two different gateways? Is any other change needed?
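
    Two notes, offered as a hedged sketch rather than a definitive answer: resolv.conf is not per-interface (the nameservers listed there apply to the whole host, so the DNS1/DNS2 lines only feed into that same global file), and a second gateway is normally handled with policy routing rather than a second GATEWAY= line. Assuming the addresses above, the iproute2 commands would look roughly like this (the table name and number are arbitrary):

        # give traffic sourced from the alias address its own routing table
        echo "200 secondary" >> /etc/iproute2/rt_tables
        ip route add 10.10.191.0/24 dev eth0 src 10.10.191.210 table secondary
        ip route add default via 10.10.191.254 table secondary
        ip rule add from 10.10.191.210/32 table secondary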

    Read the article

  • NGINX + PHP FPM connect() failed (110: Connection timed out) while connecting to upstream

    - by Leonard Teo
    We're running a fairly large site using nginx and PHP-FPM and we're getting a lot of errors as the site load is quite high. We're getting "connect() failed (110: Connection timed out) while connecting to upstream"...upstream: "fastcgi://127.0.0.1:9000" Here's my config file for PHP-FPM. PHP-FPM: [www] listen = 127.0.0.1:9000 listen.allowed_clients = 127.0.0.1 user = nginx group = nginx pm = dynamic pm.max_children = 100 pm.start_servers = 20 pm.min_spare_servers = 5 pm.max_spare_servers = 35 pm.max_requests = 100 slowlog = /var/log/php-fpm/www-slow.log php_admin_value[error_log] = /var/log/php-fpm/www-error.log php_admin_flag[log_errors] = on What's the recommended config/number of servers/children for a high traffic site? We tried using Unix Sockets instead of TCP and got no noticeable improvements. Right now the errors are: connect() to unix:/var/run/php-fcgi.sock failed (11: Resource temporarily unavailable) while connecting to upstream...upstream: "fastcgi://unix:/var/run/php-fcgi.sock:"... Thanks, Leonard
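
    There is no single right number; a hedged starting point is to size pm.max_children from real per-child memory use (RAM available to PHP divided by average process size) and to raise the listen backlog so bursts queue instead of timing out. The values below are illustrative only, not a recommendation for this particular site:

        ; pool sketch - the figures are placeholders, size them from your own memory profile
        pm = dynamic
        pm.max_children = 200
        pm.start_servers = 40
        pm.min_spare_servers = 20
        pm.max_spare_servers = 60
        pm.max_requests = 500
        ; raise the accept queue too (pair with a larger net.core.somaxconn sysctl)
        listen.backlog = 1024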

    Read the article

  • Why can't I unblock postgres with shorewall?

    - by ryeguy
    I can't seem to unblock the port needed for postgres using Shorewall. I am developing a PHP app on my windows machine here, and then I upload it on my linux box to actually use it. The linux box runs the php files as well as hosts the db server. Since I need it working from both machines, in my PHP code I am referring to the database as the full IP instead of localhost. I can easily connect to postgres from my windows machine, but ironically, my PHP app can't connect to postgres even though it's on the same box. Here's what I have in /etc/shorewall/rules: #macro/action src dest PostgreSQL/ACCEPT net $FW PostgreSQL/ACCEPT loc $FW PostgreSQL/ACCEPT loc dmz PostgreSQL/ACCEPT net dmz PostgreSQL/ACCEPT loc net PostgreSQL/ACCEPT dmz $FW PostgreSQL/ACCEPT dmz loc PostgreSQL/ACCEPT dmz net PostgreSQL/ACCEPT dmz dmz Clearly I have a ton of crap there. The first line is all I needed to make it allow a connection from my windows machine. All the lines after it are me just trying everything to get it to work. What am I missing?
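
    One detail worth noting, as a hedged diagnosis: the failing connection is from the box to its own routable IP, so it travels over the loopback interface, which Shorewall normally does not filter at all; that points the suspicion at PostgreSQL's own configuration rather than the firewall. A sketch of what to check (the address is a placeholder for the box's real IP):

        # postgresql.conf - listen on the routable address, not just localhost
        listen_addresses = '*'

        # pg_hba.conf - and allow the box's own IP as a client
        host  all  all  192.0.2.10/32  md5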

    Read the article

  • Solaris non-global zone X server

    - by ankimal
    I am not even sure if this is possible, but how can I start an X server in a non-global zone? Here is what happens if I run startx from within my zone (I created the xorg.conf by running /usr/X11/bin/xorgconfig): root@foo:/usr/X11/bin# startx xauth: creating new authority file /root/.serverauth.20957 X.Org X Server 1.5.3 Release Date: 5 November 2008 X Protocol Version 11, Revision 0 Build Operating System: SunOS 5.11 snv_108 i86pc Current Operating System: SunOS dsol101 5.11 snv_111b i86pc Build Date: 07 May 2009 04:44:56PM Solaris ABI: 64-bit SUNWxorg-server package version: 6.9.0.5.11.11100,REV=0.2009.05.07 SUNWxorg-mesa package version: 6.9.0.5.11.11100,REV=0.2009.04.02 Before reporting problems, check http://sunsolve.sun.com/ to make sure that you have the latest version. Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown. (==) Log file: "/var/log/Xorg.0.log", Time: Tue Nov 10 19:17:53 2009 (==) Using config file: "/etc/X11/xorg.conf" Fatal server error: xf86OpenConsole: Cannot open /dev/fb (No such file or directory)
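
    The failure at the end is the telling part: a non-global zone has no framebuffer device (/dev/fb), so a hardware X server cannot start there. A hedged workaround sketch is to run a virtual X server inside the zone instead and reach it over the network; the paths below are assumptions about where the Solaris X packages put the binaries.

        # inside the non-global zone: a framebuffer-less X server on display :1
        /usr/X11/bin/Xvfb :1 -screen 0 1024x768x24 &
        # then point clients at it, e.g.:
        DISPLAY=:1 /usr/X11/bin/xterm &
        # (a VNC server such as Xvnc works the same way and adds remote viewing)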

    Read the article

  • rsync over ssh backup failing after relocation of server

    - by OlduvaiHand
    I've got two FreeBSD machines set up; one serves video data and the other is the backup for the first. At this point I've got around 4TB of data. I add files to the video server a few at a time, and was planning to use rsync over ssh to keep the backup machine up to date. I did the initial, large backup with both machines hooked up to the same subnet at the lab with no problems using rsync. Then, when I moved the backup machine off-site (but still on the university network), I attempted a sync without changing anything other than the IP (as the machine is now on a different subnet) and got the following error: 2010/03/22 15:55:21 [1260] rsync: connection unexpectedly closed (6340840244 bytes received so far) [receiver] 2010/03/22 15:55:21 [1260] rsync error: error in rsync protocol data stream (code 12) at io.c(601) [receiver=3.0.7] 2010/03/22 15:55:21 [1258] rsync: connection unexpectedly closed (60 bytes received so far) [generator] 2010/03/22 15:55:21 [1258] rsync error: unexplained error (code 255) at io.c(601) [generator=3.0.7] The script that handles the backup hasn't been changed, nor has the crontab that invokes it. Does anyone have any ideas about what might be causing the hiccup? I was under the impression that it might have something to do with the ssh connection timing out or something along those lines, but am not entirely clear on how to diagnose the cause of the problem.
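
    A hedged first step, since the symptoms look like a long-lived SSH session being dropped somewhere between the two subnets: add keepalives to the transport and a timeout to rsync so the failure mode at least becomes explicit. The paths and host are placeholders; the options themselves are standard rsync and OpenSSH.

        rsync -az --partial --timeout=600 \
          -e "ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=5" \
          /data/video/ backup-host:/data/video/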

    Read the article

  • Looking for a short term solution to improve website performance with additional server

    - by Tanim Mirza
    I am working with a small team that runs an internal website on PHP 5.3.9 and MySQL 5.0.77. All the files and the database are hosted on a dedicated Linux machine with the following configuration: Intel Xeon E5450, 8 CPU cores @ 3.00GHz, 2992.498 MHz, cache 6148 KB, CentOS (Red Hat Enterprise Linux Server release 5.4). We started small, then the database got bigger, and now website performance has degraded significantly. We often get server space overruns, MySQL overloaded with too many calls, etc. We don't have much experience dealing with these issues. We recently got another server that we were thinking of using to improve performance. Since it has a better configuration, some of us wanted to move everything to the new machine completely, but I am trying to find out how we can utilize both machines for optimal performance. I found options such as MySQL clustering, load balancers, etc. Any suggestions on how to utilize two machines in the short term for best performance would be great. By short term we mean something that we can deploy in a month or so. Thanks in advance for your time.

    Read the article

  • __modver_version_show undefined error when building linux kernel 3.0.4 version

    - by Jie Liu
    I tried to build the Linux kernel 3.0.4 on Ubuntu 11.10 in VirtualBox. Here are my steps: download the source code; tar xjvf linux-source-3.0.0.tar.bz2; cd linux-source-3.0.0; make menuconfig (changed nothing, used the default config and saved to .config); make. Actually I think the version should be 3.0.4, because in the Makefile I can see VERSION = 3 PATCHLEVEL = 0 SUBLEVEL = 4 EXTRAVERSION = Then at stage 2, which builds the modules, an error happened: ERROR: "__modver_version_show" [drivers/staging/rts5139/rts5139.ko] undefined! make[1]: *** [__modpost] Error 1 make: *** [modules] Error 2 Perhaps because 3.0.4 is a new release, I cannot find anyone else reporting the same problem, nor any solution to it.
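
    Since the unresolved symbol comes from a single staging driver, a hedged workaround (rather than a root-cause fix) is simply to build without it; the config symbol name below is an assumption based on the module path drivers/staging/rts5139.

        # either deselect it under "Device Drivers -> Staging drivers" in make menuconfig,
        # or flip it from the command line and rebuild:
        scripts/config --disable RTS5139
        make oldconfig && make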

    Read the article

  • Windows 7 image in VMware will allow network connections out but not HTTP

    - by Ormis
    I am currently trying to create a set of images to deploy on my network, but I've run into a snag. When I create my own Windows 7 image I can successfully use NAT for connecting to the network, but whenever I try to access a webpage I get nothing. To be more specific: all firewalls/iptables are disabled on my host machine, my virtual machine, and my network. I can do lookups and all addresses respond correctly (I'm even using Google's DNS). On the host OS I have full connectivity. On the virtual machine I can ping any device I want and all addresses resolve correctly, but within a browser I cannot reach any page via hostname or IP. It feels almost like port 80 is being blocked, but I can't find any reason this would be the case. If anyone has had this occur before, I would love some insight into the problem. I understand this question is a bit out of the norm for Stack Overflow, but I've run out of ideas. Thank you for any help you can provide.

    Read the article

  • Setting Up Multiple Domains (plus wildcard subdomains) to Point to the Same Site/VirtualHost

    - by Derek Reynolds
    I have my primary domain with wildcard subdomains set up already: username.maindomain.com and maindomain.com. I want to provide my users with additional domains that they can select: additional1.com, additional2.com, additional3.com... These additional domains would also need to support wildcard subdomains (as the subdomains route to a username). Does anyone know how to properly configure this in DNS and in the VirtualHost config? Currently I have the additional domains as A records pointing to the same IP as my main domain (with a wildcard subdomain A record for each as well). In my VirtualHost config I am placing the additional domain names in the ServerAlias directive. Let me know if any more detail is needed.
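
    The setup described is essentially the standard one; as a hedged sketch, the Apache side only needs the wildcards listed explicitly in ServerAlias (a bare additionalN.com entry does not match its subdomains), and the DNS side needs both an A record and a wildcard A record per domain, exactly as the poster already has. The document root is a placeholder.

        <VirtualHost *:80>
            ServerName  maindomain.com
            ServerAlias *.maindomain.com
            ServerAlias additional1.com *.additional1.com
            ServerAlias additional2.com *.additional2.com
            ServerAlias additional3.com *.additional3.com
            # placeholder path
            DocumentRoot /var/www/mysite
        </VirtualHost>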

    Read the article

  • eee box "dropbox" server 24/7

    - by microspino
    I'd like to create a mini Dropbox and print server on a small SOHO network of 5 users (all of them use Windows XP desktops). The device needs to run 24/7, or at least 12/7 (I can accept just workday hours too, but the other two options would be better). Dropbox mini server: I mean I will have a 90GB Dropbox on every computer on my LAN syncing with the server, and the copy on the server syncing to the web. Print server: I have a small Samsung A4 laser printer, an HP 500 DesignJet plotter, a Samsung multifunction machine (fax/print/scan/copy), a modern HP color A3 DeskJet printer and an HP LaserJet A4 color printer. All of them need to be connected to this mini server. Fax/scan server: since I have the above-mentioned fax/print/scan/copy machine, I would like people to use it from their computers through the mini server. I am thinking of a recent EeeBox machine because I have heard good things about Atom CPUs and because it seems that a recent BIOS version can switch it off and on autonomously. I'd like to hear some advice from you, ideally: whether you have something similar running for a long time; whether you disagree with this hardware choice and would suggest some other device; whether you see any issues with my printing setup; anything else ;) My budget is from zero (using the right software to build something on top of an old PC) to €500 max.

    Read the article

  • How do I install a newer version of GTK in Ubuntu without replacing the current one?

    - by William Friesen
    I am trying to compile file-roller from git, but running autogen.sh gives me this error: configure: error: Package requirements (gtk+-3.0 >= 2.91.1) were not met: No package 'gtk+-3.0' found Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix. Alternatively, you may set the environment variables GTK_CFLAGS and GTK_LIBS to avoid the need to call pkg-config. See the pkg-config man page for more details. I am running Ubuntu Maverick and don't wish to completely replace my current version of GTK, GLib, etc. I have tried to compile GTK using the --prefix argument of autogen.sh, but this gives me a similar error about my version of glib. How can I successfully compile file-roller using these new libraries without borking my install?
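
    A hedged sketch of the usual pattern, assuming you have source checkouts of glib and gtk+ alongside file-roller: install the new GLib and GTK into a private prefix, then point pkg-config (and the run-time linker) at that prefix while building file-roller. The prefix path is arbitrary; the glib step comes first because file-roller's GTK 3 also wants a newer GLib, which matches the error the poster hit.

        PREFIX=$HOME/gtk3
        # build the newer glib, then gtk+, into the private prefix
        (cd glib && ./autogen.sh --prefix=$PREFIX && make && make install)
        (cd gtk+ && ./autogen.sh --prefix=$PREFIX && make && make install)
        # now build file-roller against that copy instead of the system one
        export PKG_CONFIG_PATH=$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH
        export LD_LIBRARY_PATH=$PREFIX/lib:$LD_LIBRARY_PATH
        cd file-roller && ./autogen.sh --prefix=$PREFIX && make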

    Read the article

  • MySQL cannot resolve hostnames when checking privileges

    - by Fabio
    I'm going crazy trying to solve this. I have a MySQL installation (on the machine db.example.org) which doesn't resolve a given hostname. I granted privileges using hostnames, i.e. GRANT USAGE ON *.* TO 'user'@'host1.example.org' IDENTIFIED BY PASSWORD 'secret' GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, INDEX ON `my_database`.* TO 'user'@'host1.example.org' However, when I try to connect using mysql -u user -p -h db.example.org I obtain ERROR 1045 (28000): Access denied for user 'user'@'192.168.11.244' (using password: YES) I have already checked for correct name resolution in the DNS system: $ dig -x 192.168.11.244 ;; ANSWER SECTION: 244.11.168.192.in-addr.arpa. 68900 IN PTR host1.example.org. I've also checked for the skip-name-resolve option in the MySQL variables; in fact, I can access from another machine on the same subnet using hostname-based privileges. The only difference is that host1.example.org and db.example.org point to the same IP on the same machine, i.e. both db.example.org and host1.example.org have the IP 192.168.11.244. This way all the applications using the database can use the name db.example.org, and we can move the data to other hosts (if needed) just by changing the DNS record, leaving the application code unchanged. What should I do to solve this, or at least to understand what's happening?
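
    A hedged diagnostic sketch: MySQL matches the account against the name it gets from its own reverse-then-forward lookup of the client address, so the first step is to see which host values actually exist for the user and, if the hostname match cannot be made reliable, to add a grant for the address the error message reports (192.168.11.244). The password below is obviously a placeholder.

        -- what account rows exist, and which one would 192.168.11.244 match?
        SELECT user, host FROM mysql.user WHERE user = 'user';

        -- pragmatic fallback: grant on the literal client IP the server reports
        GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, INDEX
          ON my_database.* TO 'user'@'192.168.11.244' IDENTIFIED BY 'secret';
        FLUSH PRIVILEGES;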

    Read the article

  • PHP make install seems to end abruptly and does not update libphp5.so

    - by matt74tm
    I'm trying to compile PHP 5.3.3 and, after a lot of ups and downs, I finally got 'make' to finish, followed by 'make install', which just shows this: root@server [/tmp/php-5.3.3]# make install Installing PHP SAPI module: cgi Installing PHP CGI binary: /usr/bin/ Installing PHP CLI binary: /usr/bin/ Installing PHP CLI man page: /usr/share/man/man1/ Installing shared extensions: /usr/lib64/20090626/ Installing build environment: /usr/lib64/build/ Installing header files: /usr/include/php/ Installing helper programs: /usr/bin/ program: phpize program: php-config Installing man pages: /usr/share/man/man1/ page: phpize.1 page: php-config.1 /tmp/php-5.3.3/build/shtool install -c ext/phar/phar.phar /usr/bin ln -s -f /usr/bin/phar.phar /usr/bin/phar Installing PDO headers: /usr/include/php/ext/pdo/ It does not look like it's done, because /usr/lib64/httpd/modules/libphp5.so still shows an old date: -rwxr-xr-x 1 root root 3193768 Mar 31 2010 libphp5.so
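
    A hedged reading of the output above: the install is not ending abruptly, it simply never builds the Apache module, because the first line shows the only SAPIs configured were CGI and CLI. libphp5.so is produced and installed only when the Apache SAPI is enabled at configure time; the apxs path below is a guess for this CentOS-style layout.

        # re-run configure with the Apache 2 SAPI enabled, then rebuild
        ./configure --with-apxs2=/usr/sbin/apxs  ...your-other-options...
        make && make install
        # this is the step that installs libphp5.so into the httpd modules directory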

    Read the article

  • puppet master REST API returns 403 when running under passenger; works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in puppet install for the puppet master which is running through passenger under Nginx. However for most of the catalog, files and certitifcate request I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not stricly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any Puppet master however does not seems to be following this as I get this error on client [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? 
HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby" thefile server conf file is as follows (and goin by what they say on puppet site, It is better to regulate access in auth.conf for reaching file server and then allow file server to server all) [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3 Nginx has been compiled using the following options nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned because config print command points to /etc/puppet [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics to auth.conf for getting this to work?

    Read the article

  • I am unable to get the subdomain from the URL in NGINX

    - by Jean-Nicolas Boulay Desjardins
    I am unable to get the subdomain from the URL in NGINX. Here is my config: server { listen 80; server_name ~^(?<appname>)\.example\.com$; rewrite ^ https://$appname.example.com$request_uri? permanent; } When I go to http://bob.example.com/ I am sent to https://.example.com/ and I don't know what I am doing wrong. I am using nginx 1.2.7. I have another config for http://example.com/, so I have one server block for the domain without the subdomain and a second one with the subdomain; this question is about the subdomain block.
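
    A hedged sketch of the likely cause: the named capture group in the regex is empty, so $appname can only ever be the empty string, which matches the https://.example.com/ redirect being observed. Giving the group an actual pattern to capture should fix it; the [^.]+ choice (exactly one DNS label) is an assumption about which subdomains should be allowed.

        server {
            listen 80;
            # capture one label in front of .example.com into $appname
            server_name ~^(?<appname>[^.]+)\.example\.com$;
            rewrite ^ https://$appname.example.com$request_uri? permanent;
        }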

    Read the article
