Search Results

Search found 4834 results on 194 pages for 'nsswitch conf'.

  • Mac OS X 10.6.3: how does Apache config work?

    - by w-
    Just got a MacBook Pro 15", so I'm unfamiliar with how the filesystem is laid out. I noticed that I've got a few paths specifying httpd.conf:

        /etc/apache2/httpd.conf
        /opt/local/apache2/conf/httpd.conf
        /private/etc/apache2/httpd.conf

    The config files are different in lots of ways (user, group, server root, modules that are loaded, etc.), and the apache2 folders themselves also differ greatly. It seems that the one getting used is either /etc/apache2/httpd.conf or /private/etc/apache2/httpd.conf. I'm wondering if I might have messed up my system after installing some packages (php5, django, etc.) via MacPorts and ended up with two apache2 instances. My questions are hence: which httpd.conf is the one being used, and what are the other files for? Thanks.

    UPDATE: To clarify, I didn't explicitly install apache2 via MacPorts; I'm wondering if it was installed because it was a dependency. After more hunting around I'm learning I never should have installed PHP to begin with, because Snow Leopard already includes PHP 5.3 from the get-go: http://serverfault.com/questions/82410/apache-2-and-php-5-3-via-macports. I'll need to open another question that asks about how the Mac filesystem works. Thanks all for the replies.
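
    One way to see which httpd.conf a given Apache binary actually reads is to ask the binary itself; a quick check along these lines (paths assume the stock Apple install plus a MacPorts copy):

        # Apple's bundled Apache: prints HTTPD_ROOT and SERVER_CONFIG_FILE
        /usr/sbin/httpd -V | grep -E 'HTTPD_ROOT|SERVER_CONFIG_FILE'
        # The MacPorts copy, if one was pulled in as a dependency
        /opt/local/apache2/bin/httpd -V | grep SERVER_CONFIG_FILE
        # On Mac OS X, /etc is a symlink to /private/etc, so
        # /etc/apache2/httpd.conf and /private/etc/apache2/httpd.conf are one file
        ls -ld /etc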

  • Subversion all or nothing access to repo tree

    - by Glader
    I'm having some problems setting up access to my Subversion repositories on a Linux server. The problem is that I can only seem to get an all-or-nothing structure going: either everyone gets read access to everything, or no one gets read or write access to anything. The setup: the SVN repos are located in /www/svn/ (repoA, repoB, repoC, ...) and are served by Apache, with Locations defined in /etc/httpd/conf.d/subversion.conf as:

        <Location /svn/repoA>
            DAV svn
            SVNPath /var/www/svn/repoA
            AuthType Basic
            AuthName "svn repo"
            AuthUserFile /var/www/svn/svn-auth.conf
            AuthzSVNAccessFile /var/www/svn/svn-access.conf
            Require valid-user
        </Location>

        <Location /svn/repoB>
            DAV svn
            SVNPath /var/www/svn/repoB
            AuthType Basic
            AuthName "svn repo"
            AuthUserFile /var/www/svn/svn-auth.conf
            AuthzSVNAccessFile /var/www/svn/svn-access.conf
            Require valid-user
        </Location>
        ...

    svn-access.conf is set up as:

        [/]
        * =

        [/repoA]
        * =
        userA = rw

        [/repoB]
        * =
        userB = rw

    But checking out URL/svn/repoA as userA results in "Access Forbidden". Changing it to

        [/]
        * =
        userA = r

        [/repoA]
        * =
        userA = rw

        [/repoB]
        * =
        userB = rw

    gives userA read access to ALL repositories (including repoB) but only read access to repoA! So in order for userA to get read-write access to repoB I need to add

        [/]
        userA = rw

    which is mental. I also tried changing "Require valid-user" to "Require user userA" for repoA in subversion.conf, but that only gave me read access to it. I need a way to default-deny everyone access to every repository, giving read/write access only when explicitly defined. Can anyone tell me what I'm doing wrong here? I have spent a couple of hours testing and googling but have come up empty, so now I'm doing the post of shame.
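
    For comparison, default-deny setups like this are usually written with a single SVNParentPath block and per-repository sections in the access file. A sketch, assuming all repositories live under /var/www/svn (the path the Location blocks use); with SVNPath-per-Location, the repository names mod_authz_svn matches against may not be what the section headers assume, which produces exactly this all-or-nothing behaviour:

        # /etc/httpd/conf.d/subversion.conf
        <Location /svn>
            DAV svn
            SVNParentPath /var/www/svn
            AuthType Basic
            AuthName "svn repo"
            AuthUserFile /var/www/svn/svn-auth.conf
            AuthzSVNAccessFile /var/www/svn/svn-access.conf
            Require valid-user
        </Location>

        # /var/www/svn/svn-access.conf -- no [/] grant, so everything is
        # denied unless a repository section explicitly allows it
        [repoA:/]
        userA = rw

        [repoB:/]
        userB = rw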

  • Why am I getting "Problem loading the page" after enabling HTTPS for Apache on Windows 7?

    - by Anish
    I enabled HTTPS on the Apache server (2.2.15) on Windows 7 Enterprise by uncommenting

        Include /private/etc/apache2/extra/httpd-ssl.conf

    in C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd.conf, and modifying C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd-ssl.conf to include:

        DocumentRoot "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/htdocs"
        ServerName myserver.com:443
        ServerAdmin [email protected]
        ...
        SSLCertificateFile "SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem
        SSLCertificateKeyFile "SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/key.pem"

    Then I restart Apache (Start > All Programs > Apache Server 2.2 > Control > Restart) and go to localhost on port 443 in Firefox, where I get a directory listing:

        Index of /
        Links/
        ...

    But on display of the web page I see:

        Unable to connect. Firefox can't establish a connection to the server at localhost.
        * The site could be temporarily unavailable or too busy. Try again in a few moments.
        * If you are unable to load any pages, check your computer's network connection.
        * If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.

    I read "Why am I getting 403 Forbidden after enabling HTTPS for Apache on Mac OS X?" and added a default web server configuration block to match my DocumentRoot. The error log C:\Program Files (x86)\Apache Software Foundation\Apache2.2\logs\error.log gives the following error:

        The Apache2.2 service is running.
        (OS 5)Access is denied. : Init: Can't open server certificate file C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem

    I checked the permissions for cert.pem: all permissions (Full control, Read, Read and modify, Execute, Write) are granted to Admin, and I am currently logged in as Admin. I tried using oldcert.pem and oldkey.pem on the same server and it works fine. Is there anything that I missed?
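
    As quoted, both certificate directives have a stray 'SSLCertificateFile "' fragment pasted into the value and unbalanced quotes, which alone would keep cert.pem from ever being opened; the usual form is simply:

        SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem"
        SSLCertificateKeyFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/key.pem"

    Note also that "(OS 5)Access is denied" is a Windows ACL error reported by the service process, so the account the Apache service runs under (often SYSTEM or a dedicated service user), not the logged-in Admin, is the one that needs read access to cert.pem.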

  • CentOS tftp server is broken

    - by Mike Pennington
    I'm trying to run tftpd from xinetd on CentOS 6; however, I can only tftp from localhost. I have a file /opt/tftpboot/fw.test.conf that I can retrieve if I tftp to localhost:

        [mpenning@localhost ~]$ tftp localhost
        tftp> get fw.test.conf
        tftp> quit
        [mpenning@localhost ~]$ ls
        fw.test.conf
        [mpenning@localhost ~]$

    However, I cannot receive this file if I tftp to eth1 on this server (the address on eth1 is 172.16.1.4):

        [mpenning@localhost ~]$ sudo tshark -i eth1 udp and host 172.16.1.5
        Running as user "root" and group "root". This could be dangerous.
        Capturing on eth1
          0.000000 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
          5.000133 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
         10.000184 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
         15.000297 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
         20.000331 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000
        ^C5 packets captured
        [mpenning@localhost ~]$

    I have the following xinetd configuration:

        [root@localhost mpenning]# cat /etc/xinetd.d/tftp
        # default: off
        # description: The tftp server serves files using the trivial file transfer \
        #       protocol. The tftp protocol is often used to boot diskless \
        #       workstations, download configuration files to network-aware printers, \
        #       and to start the installation process for some operating systems.
        service tftp
        {
            socket_type  = dgram
            protocol     = udp
            wait         = yes
            user         = root
            server       = /usr/sbin/in.tftpd
            server_args  = -s /opt/tftpboot
            disable      = no
            per_source   = 11
            cps          = 100 2
            flags        = IPv4
        }
        [root@localhost mpenning]#
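
    Since the capture shows requests arriving on eth1 with no replies ever leaving, one likely culprit on CentOS is the host firewall; a quick test along these lines (assuming iptables is active):

        # Is anything dropping UDP 69?
        sudo iptables -L -n -v
        # Temporarily allow TFTP, and load the conntrack helper that tracks
        # the server's reply coming back from a different source port
        sudo iptables -I INPUT -p udp --dport 69 -j ACCEPT
        sudo modprobe nf_conntrack_tftp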

  • What is ltmain.sh, and why does automake say it is missing? What is a good auto(make/conf/etc) generator?

    - by gersh
    I just want to develop a C app on Linux with the auto(make/conf/...) stuff generated automatically. I tried generating it with EDE and Anjuta, but they don't seem to generate Makefile.am. So I tried running automake, and it says "ltmain.sh" isn't found. Is there some easy way to generate the basic build files for Linux C/C++ apps? What is the standard practice? Do most people write these files themselves?
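
    For a libtool-using project, ltmain.sh is copied into the tree by libtoolize, which autoreconf runs for you; a minimal sequence along these lines usually regenerates everything:

        # from the directory containing configure.ac and Makefile.am
        autoreconf --install
        # or the long-hand equivalent
        libtoolize --copy && aclocal && autoconf && automake --add-missing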

  • Redmine with Apache 2 + Passenger nightmare --- site is up and available, but Redmine doesn't execute

    - by CptSupermrkt
    I was determined to figure this out myself, but I've been at it for a total of more than 10 hours and I just can't figure it out. First, let me detail my environment (which I cannot change):

        Server version: Apache/2.2.15 (Unix)
        Ruby version: ruby 1.9.3p448
        Rails version: Rails 4.0.1
        Passenger version: Phusion Passenger 4.0.5
        Redmine version: 2.3.3

    I have followed the Redmine instructions all the way through the test webserver step, which confirmed the installation was successful:

        ruby script/rails server webrick -e production

    The roadblock I cannot overcome is getting Apache and Passenger to interpret and properly serve Redmine. I have searched pretty much every possible link within the first 10 pages or so of Google results, and everywhere I go I come across conflicting/contradicting/outdated information. We have a "weird" setup with Apache (which I inherited and cannot change): Redmine needs to be served through SSL, but Apache already serves another website through SSL called Twiki. By "weird" I mean that our file structure is entirely different from all the tutorials out there for this version of Apache, which have directories like "available-sites" and such. Here are abbreviated versions of some of our config files.

    /etc/httpd/conf/httpd.conf (the global configuration file --- note that NO VirtualHost is defined here):

        ServerRoot "/etc/httpd"
        ...
        LoadModule passenger_module /usr/local/pkg/ruby/1.9.3-p448/lib/ruby/gems/1.9.1/gems/passenger-4.0.5/libout/apache2/mod_passenger.so
        PassengerRoot /usr/local/pkg/ruby/1.9.3-p448/lib/ruby/gems/1.9.1/gems/passenger-4.0.5
        PassengerDefaultRuby /usr/local/pkg/ruby/1.9.3-p448/bin/ruby
        Include conf.d/*.conf
        ...
        User apache
        Group apache
        ...
        DocumentRoot "/var/www/html"

    So just to clarify, the above httpd.conf file does NOT have a VirtualHost section.

    /etc/httpd/conf.d/ssl.conf (defines the VirtualHost for SSL):

        Listen 443
        <VirtualHost _default_:443>
            SSLEngine on
            ...
            SSLCertificateFile /etc/pki/tls/certs/localhost.crt
        </VirtualHost>

    /etc/httpd/conf.d/twiki.conf (this works just fine --- note this does NOT define a VirtualHost):

        ScriptAlias /twiki/bin/ "/var/www/twiki/bin/"
        Alias /twiki/ "/var/www/twiki/"
        <Directory "/var/www/twiki/bin">
            AllowOverride None
            Order Deny,Allow
            Deny from all
            AuthType Basic
            AuthName "our team"
            AuthBasicProvider ldap
            ...a lot of ldap and authorization stuff
            Options ExecCGI FollowSymLinks
            SetHandler cgi-script
        </Directory>

    /etc/httpd/conf.d/redmine.conf:

        Alias /redmine/ "/var/www/redmine/public/"
        <Directory "/var/www/redmine/public">
            Options Indexes ExecCGI FollowSymLinks
            Order allow,deny
            Allow from all
            AllowOverride all
        </Directory>

    The amazing thing is that this doesn't completely NOT work: I can successfully open https://someserver/redmine/ over SSL, and https://someserver/twiki/ remains unaffected. This tells me that it IS possible to have two separate sites up with one SSL configuration, so I don't think that's the problem. The problem is that it opens up to the file index: I can navigate around my Redmine file structure, but no code ever gets executed. For example, Redmine ships a file called dispatch.fcgi in the public folder; https://someserver/redmine/dispatch.fcgi opens, but just as plain-text code in the browser. As I understand it, when using Passenger, the CGI and FastCGI stuff is irrelevant/unused.
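
    The Alias above only tells Apache to serve public/ as static files, which matches the symptom (a file index, dispatch.fcgi shown as text); Passenger needs to be told there is an app rooted at that URI. A sketch for the sub-URI case, assuming Passenger 4's PassengerBaseURI/PassengerAppRoot directives are available (older setups spell it RailsBaseURI):

        Alias /redmine /var/www/redmine/public
        <Location /redmine>
            PassengerBaseURI /redmine
            PassengerAppRoot /var/www/redmine
        </Location>
        <Directory /var/www/redmine/public>
            Allow from all
            Options -MultiViews
        </Directory>

    Dropping Indexes from the existing Directory block also stops Apache from falling back to the file listing.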

  • Disabled FRS replication on a DFS link, but the targets still list the replica set in their FRS conf

    - by Graeme Donaldson
    It's been a while since I've had to deal with the wonders of FRS, so I'm doing some testing to refresh my memory. This is what I've done so far. I am stuck with FRS rather than DFS-R for the moment since not all of my link targets are running R2. Created a domain-based DFS root, hosted on 4 servers. Created a DFS link under the root, targeted at 2 servers. The shares on both servers were empty. Dropped about 500MB of data into the target folder on one server and waited for replication to complete. Added/removed/modified files on both targets and confirmed that changes are replicated within a few seconds. Deleted the contents of the target folder on 1 server and waited for the other server to replicate the deletion. All of this worked perfectly, so now I want to remove my DFS link since I only created it for testing purposes. This is where it gets weird. I'm pretty sure that in the past I've disabled replication on the DFS link and after a short amount of time each target would log an info event in the FRS event log, something along the lines of "this server is no longer a member of replica set X". I have waited about 3 hours and I haven't seen this happen. ntfrsutl ds tells me that the server is not a member of any set, which is expected because when I disable replication on the link, the AD attributes on the computer object are removed. The weird part is... ntfrsutl sets still shows me the replica set, with all the properties, etc. So it seems like the FRS-related attributes of the target server's AD object are gone, but the FRS service for some reason hasn't removed the replica set. Can anyone see what I have done wrong?
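
    Since FRS only re-reads its configuration from Active Directory on its polling cycle, it may simply not have noticed the change yet; forcing an immediate poll on each former member is a quick way to rule that out (a sketch; ntfrsutl is the same tool used above):

        ntfrsutl poll /now
        ntfrsutl sets

    If the set still lingers after a forced poll, restarting the NtFrs service is the usual next step.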

  • "success=n" control syntax in pam.conf / pam.d/* files ...

    - by Jamie
    After successfully configuring Kerberos, this is what I've found in the /etc/pam.d/common-auth file:

        auth [success=2 default=ignore] pam_unix.so nullok_secure
        auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
        auth requisite pam_deny.so
        auth required pam_permit.so

    Does the success=2 control value mean that if pam_unix.so fails, authentication skips to the "auth requisite pam_deny.so" line, or to the last line?
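
    For reference, the bracketed values are jump counts taken on that result, not priorities: success=2 means "on success, skip the next two modules", while default=ignore means every other result (including failure) just falls through to the next line. Annotated:

        # success -> skip pam_winbind and pam_deny, landing on pam_permit
        # failure -> "ignore": continue with the next module (pam_winbind)
        auth [success=2 default=ignore] pam_unix.so nullok_secure
        # success -> skip pam_deny, landing on pam_permit
        auth [success=1 default=ignore] pam_winbind.so krb5_auth krb5_ccache_type=FILE cached_login try_first_pass
        # only reached if both modules above failed
        auth requisite pam_deny.so
        auth required pam_permit.so

    So if pam_unix.so fails, authentication does not jump anywhere; it simply tries pam_winbind.so next, and only reaches pam_deny.so if that fails too.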

  • PHP include path problem: same code works on Ubuntu's default Apache and PHP conf, but not on CentOS

    - by Neo
    So the same code works on my Ubuntu server, but when I upload it to my dedicated hosting server running CentOS it seems to prepend .:/usr/share/pear:/usr/share/php: to the include path. I tried setting include_path to different things but it just doesn't work. The file is in a directory called "language" in the same folder as the file that is including it, and I'm using:

        include dirname(__FILE__).DIRECTORY_SEPARATOR."language".DIRECTORY_SEPARATOR."storage.inc";

    and

        include dirname(__FILE__)."/language/language.php";

    and

        include "language/language.php";

    and a lot of other combinations, but I can't get it to find the file:

        Fatal error: require_once() [function.require]: Failed opening required '/home/neo/public_html/migration/include/class/core/storage.inc' (include_path='.:/usr/share/pear:/usr/share/php:/home/neo/public_html/migration') in /home/neo/public_html/migration/include/class/core/class_lang.inc on line 153
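
    A minimal way to see which path is actually being tried, and to take include_path out of the equation entirely (a debugging sketch; the filename is taken from the error above):

        <?php
        // dirname(__FILE__) always resolves relative to the current source
        // file, so the include no longer depends on php.ini's include_path
        $path = dirname(__FILE__) . '/language/storage.inc';
        var_dump($path, file_exists($path), is_readable($path));
        require_once $path;

    If file_exists() returns false for a path that looks right, the usual CentOS-vs-Ubuntu suspects are filename case mismatches and open_basedir restrictions.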

  • OpenVZ container is running but does not show in vzlist nor can I find the private/conf files for the container

    - by Kakeakeai
    I was creating a new OpenVZ container on one of our VPS nodes when the power went out for that machine. After bringing the machine back online I could no longer access the container with CTID=101: I cannot destroy it using "vzctl destroy 101", I cannot enter or control it, and "vzlist -a" does NOT display any containers at all (this was a fresh node and the first container was being created). At that point I decided to create a new container, assuming the old one just hadn't been saved for some reason. However, when I went to add the IP/host to the new container I got a warning that the IP is already in use, and a ping to that IP showed there was indeed a machine on it. I SSHed into the machine and discovered it is the OLD container, somehow orphaned. I cannot find it on the filesystem, I cannot find it using VZ commands, and it is set to start on node boot, so it is impossible to shut down (even SSHing in and typing "shutdown now" just reboots the container rather than shutting it down). Is this a flaw in OpenVZ, or am I missing something? I have all the outputs and logs if needed. Thank you all so much in advance.
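
    A few places worth checking, since vzlist only reports containers that have config files; a sketch, assuming the stock OpenVZ layout:

        vzctl status 101                 # what does OpenVZ itself believe?
        grep VE_PRIVATE /etc/vz/vz.conf  # where private areas live on this node
        ls /vz/private /vz/root          # default container filesystem locations
        ls /etc/vz/conf                  # the per-container 101.conf should be here

    If 101's private area survived the power cut but /etc/vz/conf/101.conf was lost, recreating a minimal 101.conf should make the container visible to vzctl/vzlist again.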

  • "lo: Disabled Privacy Extensions" and ipv6 disabling?

    - by Smartkid
    There are lots of "lo: Disabled Privacy Extensions" messages in /var/log/messages. I googled and found it is IPv6 related, so I tried to disable IPv6. I added the following lines to /etc/sysctl.conf:

        net.ipv6.conf.all.disable_ipv6=1
        net.ipv6.conf.default.disable_ipv6=1
        net.ipv6.conf.lo.disable_ipv6=1

    and blacklisted ipv6 in /etc/modprobe.d/blacklist.conf. After that, I restarted the network with /etc/init.d/networking restart. My question is: ip addr still shows an inet6 address attached to eth0, in a form like

        inet6 fe80::212:79ff:fecf:edaf/64 scope link

    Does that mean my IPv6 is not disabled?
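
    The likely missing step: /etc/sysctl.conf is only applied at boot (or via sysctl -p), and restarting networking does not re-read it. A quick check along these lines:

        sudo sysctl -p                                   # apply sysctl.conf now
        cat /proc/sys/net/ipv6/conf/all/disable_ipv6     # 1 = disabled
        ip addr show eth0 | grep inet6                   # should print nothing

    The fe80:: address shown is just the link-local address; once disable_ipv6 is actually set to 1 it disappears along with the rest.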

  • Customizing configuration with Dependency Injection

    - by mathieu
    I'm designing a small application infrastructure library aiming to simplify development of ASP.NET MVC based applications. The main goal is to enforce convention over configuration; however, I still want to make some parts "configurable" by developers. I'm leaning towards the following design:

        public interface IConfiguration
        {
            SomeType SomeValue { get; }
        }

        // this one won't get registered in the container
        protected class DefaultConfiguration : IConfiguration
        {
            public SomeType SomeValue { get { return SomeType.Default; } }
        }

        // declared inside a 3rd-party library, will get registered in the container
        protected class CustomConfiguration : IConfiguration
        {
            public SomeType SomeValue { get { return SomeType.Custom; } }
        }

    And the "service" class:

        public class Service
        {
            private IConfiguration conf = new DefaultConfiguration();

            // optional dependency; if found, will be set to CustomConfiguration by the DI container
            public IConfiguration Conf
            {
                get { return conf; }
                set { conf = value; }
            }

            public void Configure()
            {
                DoSomethingWith( Conf );
            }
        }

    There, the "configuration" part is clearly a dependency of the service class, but is this an "overuse" of DI?

  • RewriteRule not working at server level?

    - by Alexis Wilke
    I wanted to forbid some robots from doing certain things to my websites and decided to add a RewriteRule for that purpose. The rule works when put in one of my <VirtualHost *:80> blocks and looks like this:

        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} libwww-perl
        RewriteCond %{REQUEST_METHOD} POST
        RewriteRule . - [F,L]

    However, I wanted to apply that to all my websites instead of just one of them, so with the newest layout of the Apache 2 settings I put that code in the security.conf file. This file lives under /etc/apache2/conf-available/... (and yes, I have a symlink from the /etc/apache2/conf-enabled/... directory). However, if the definition is only in conf-available/security.conf, it somehow gets ignored. From the documentation, these Rewrite* directives all work at server level! Any idea what I'm missing?
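
    mod_rewrite directives defined in the main server configuration are not inherited by virtual hosts; each vhost starts with its own empty rewrite setup. One way around this is to keep the rules in security.conf and opt every vhost in (a sketch):

        <VirtualHost *:80>
            RewriteEngine On
            # pull in the RewriteCond/RewriteRule set from the server context
            RewriteOptions Inherit
            ...
        </VirtualHost>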

  • IPv6 just won't go away (12.10 server)

    - by VladoPortos
    After a very long time using the old Ubuntu LTS release I have reinstalled with the new 12.10 release, but I can't get rid of IPv6! I have done the following. In /etc/modprobe.d/blacklist.conf:

        blacklist ipv6
        blacklist ip6table_filter
        blacklist ip6_tables

    In /etc/sysctl.conf:

        net.ipv6.conf.all.disable_ipv6 = 1
        net.ipv6.conf.default.disable_ipv6 = 1
        net.ipv6.conf.lo.disable_ipv6 = 1

    But ufw happily uses the v6 protocol, and in dmesg:

        ip6_tables: (C) 2000-2006 Netfilter Core Team
        ...
        IPv6: ADDRCONF(NETDEV_CHANGE): em1: link becomes ready

    What is it going to take to get rid of IPv6? I swear the Terminator didn't put up this much of a fight.
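
    Blacklisting has no effect here because IPv6 is compiled into the Ubuntu kernel rather than built as a loadable module; the reliable switch is the kernel command line. A sketch:

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1"

        # then apply it
        sudo update-grub && sudo reboot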

  • NVIDIA X server - "sudo nvidia-xconfig" does not generate a working 'xorg.conf'

    - by Mike
    I am over 18 hours deep on this challenge. I got to this point and am stuck. Very stuck. Maybe you can figure it out? Ubuntu version: 12.04 LTS with all the updates installed.

    Problem: the default settings in /etc/X11/xorg.conf that are generated by the nvidia-xconfig tool do not allow the NVIDIA X server to connect to the driver in my "System Settings > Additional Drivers" window (that's how I understand it; lots of information below).

    Symptoms of the problem:
    1. The "System Settings > Additional Drivers" window has drivers, but the NVIDIA X server cannot connect to or utilize any of the 4 drivers. The drivers are activated, but not in use.
    2. When I go to "System Tools > Administration > NVIDIA X Server Settings" I get an error that basically tells me to create a default file to initialize the NVIDIA X server (screen shot below).
    3. These are the messages the terminal gives after running "sudo nvidia-xconfig" for the first time. It seems that the file just generated by the tool is bad/unusable.
    4. If I run "sudo nvidia-xconfig" again, I won't get an error the second time. However, when I reboot, the generated /etc/X11/xorg.conf simply puts the screen resolution at 800 x 600 (or something big like that). When I try to go to NVIDIA X Server Settings I am greeted with the same screen as the screen shot in symptom 2 (no option to change the resolution), and under "System Settings > Displays" there are no other resolutions to choose from. At this point I must delete the newly minted xorg.conf and reinstate the original in its place.

    Here are the contents of the xorg.conf that is generated (the one missing required information):

        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 304.88 (buildmeister@swio-display-x86-rhel47-06) Wed Mar 27 15:32:58 PDT 2013

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0"
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
        EndSection

        Section "Files"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "Unknown"
            HorizSync       28.0 - 33.0
            VertRefresh     43.0 - 72.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
        EndSection

        Section "Screen"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    Hardware: I ran "lspci | grep VGA". The results are:

        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [Quadro 1000M] (rev a1)

    More hardware info: RAM: 16GB; CPU: Intel Core i7-2720QM @ 2.20GHz x 8; 64-bit. This is a triple-boot computer, not a VM.

    Attempts with no success on my end:
    1. Tried to append to the xorg.conf what I perceive is the missing information; obviously it didn't fly.
    2. All the other stuff I tried got me to this point.
    3. See if this link is helpful to you (I barely get it, but I get enough to know that a smarter person might find it useful): http://manpages.ubuntu.com/manpages/lucid/man1/nvidia-xconfig.1.html
    4. I am completely new to Linux (40 hours over the past week), but not to programming. However, I am very serious about changing over to Linux. When you respond (I hope someone responds...), please respond in a way that a person new to Linux can understand.
    5. By the way, the reason I am in this mess is that I MUST have a second monitor running from my laptop, and "System Settings > Displays" doesn't recognize my second display. I know it is possible to make the second display work on my system, because when I boot from the install CD I can work on the native laptop monitor while the second monitor shows a purple screen with "Ubuntu" in the middle, so I know the VGA port is sending a signal out. If this is too much to tackle, please suggest an alternative method to get a second display. I don't want to go back to Windows, but I cannot work with a single display. I am really fudged here. I hope some smart person can help. Thanks in advance. Mike.

    EDIT #1 (more details about the graphics card). I was asked which brand of NVIDIA card I have exactly. Here is what I did to provide more info (maybe relevant, maybe not, but here is everything):
    1. Took my Lenovo W520 right apart to see if there is an identifier on the actual card, but realized that if I got deep enough to look, the laptop "wouldn't like it", so I put it back together. Identifying the card this way is not an option for me right now.
    2. (My computer is triple-boot.) I logged into Win7 and ran the 'dxdiag' command. Here is the screen shot:
    3. I tried to look on the Lenovo website for more details, but no luck. My receipt says: System Unit: W520, NVIDIA Quadro 1000M 2GB.
    4. In Win7 I went to the NVIDIA website and used the option to have my card 'scanned' by a Java applet to determine the latest driver for it. I tried the same with Ubuntu but I can't get the applet to run. The driver the applet recommends for my card on Win7 (I hope this shines some light on the specifics of the card): Quadro/NVS/Tesla/GRID Desktop Driver Release R319, version 320.00 WHQL, release date 3.5.2013.
    5. I also went through the NVIDIA driver search and looked at every combination of product type + product series + product that yields a 1000M card. My card is: Product Type: Quadro; Product Series: Quadro Series (Notebooks); Product: 1000M.

    EDIT #2 (additional symptoms). Another question that generated more symptoms I previously didn't mention was: "After generating xorg.conf by nvidia-xconfig, go to Additional Drivers; do you see nvidia-304?"
    1. I took a screen shot of "Additional Drivers" right after generating xorg.conf by nvidia-xconfig. Here it is:
    2. Then I rebooted. Now Ubuntu is at 800 x 600 resolution, and when I logged in after the computer came up I got an error (which I always get after generating xorg.conf by nvidia-xconfig and rebooting).
    3. To finally answer the question: no, there is no "nvidia-304" driver. Screen shot of Additional Drivers after generating xorg.conf by nvidia-xconfig and rebooting:
    At this point I revert to the original xorg.conf and delete the one generated by nvidia-xconfig.
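
    One detail that stands out: lspci shows two GPUs (the Intel integrated chip plus the Quadro, i.e. an Optimus laptop), and the generated Device section never says which card to use. A sketch worth trying, pinning the section to the discrete card's bus ID taken from the lspci output above (this assumes the NVIDIA chip is the one wired to the displays):

        Section "Device"
            Identifier "Device0"
            Driver     "nvidia"
            VendorName "NVIDIA Corporation"
            BusID      "PCI:1:0:0"   # from "01:00.0 VGA ... [Quadro 1000M]"
        EndSection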

  • uwsgi_params not in nginx

    - by Halit Alptekin
    First I set up nginx and uwsgi via apt-get. Then I added the lines below to the nginx conf file (/etc/nginx/conf.d/default.conf):

        server {
            listen 80;
            server_name <replace with your hostname>;
            # Replace paths for real deployments...
            access_log /tmp/access.log;
            error_log /tmp/error.log;

            location / {
                include uwsgi_params;
                uwsgi_pass 127.0.0.1:8889;
            }
        }

    I got this error:

        Starting nginx: [emerg]: open() "/etc/nginx/uwsgi_params" failed (2: No such file or directory) in /etc/nginx/conf.d/default.conf:11
        configuration file /etc/nginx/nginx.conf test failed

    If I add the uwsgi_params file from uwsgi's source, I get a simpler error. Thanks
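
    The uwsgi_params file normally ships with nginx itself, next to nginx.conf; if the package left it out, creating /etc/nginx/uwsgi_params by hand works. Roughly the stock contents (copy the file from an nginx distribution if possible):

        uwsgi_param  QUERY_STRING       $query_string;
        uwsgi_param  REQUEST_METHOD     $request_method;
        uwsgi_param  CONTENT_TYPE       $content_type;
        uwsgi_param  CONTENT_LENGTH     $content_length;

        uwsgi_param  REQUEST_URI        $request_uri;
        uwsgi_param  PATH_INFO          $document_uri;
        uwsgi_param  DOCUMENT_ROOT      $document_root;
        uwsgi_param  SERVER_PROTOCOL    $server_protocol;

        uwsgi_param  REMOTE_ADDR        $remote_addr;
        uwsgi_param  REMOTE_PORT        $remote_port;
        uwsgi_param  SERVER_ADDR        $server_addr;
        uwsgi_param  SERVER_PORT        $server_port;
        uwsgi_param  SERVER_NAME        $server_name;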

  • How Do I Restrict Repository Access via WebSVN?

    - by kaybenleroll
    I have multiple Subversion repositories which are served up through Apache 2.2 and WebDAV. They are all located in a central place, and I used this debian-administration.org article as the basis (I dropped the database authentication for a simple htpasswd file though). Since then, I have also started using WebSVN. My issue is that not all users on the system should be able to access the different repositories, and the default setup of WebSVN is to allow anyone who can authenticate. According to the WebSVN documentation, the best way around this is to use Subversion's path-based access system, so I looked to set that up using the AuthzSVNAccessFile directive. When I do this, though, I keep getting "403 Forbidden" messages. My files look like the following. I have default policy settings in a file:

        <Location /svn/>
            DAV svn
            SVNParentPath /var/lib/svn/repository
            Order deny,allow
            Deny from all
        </Location>

    Each repository gets a policy file like the one below:

        <Location /svn/sysadmin/>
            Include /var/lib/svn/conf/default_auth.conf
            AuthName "Repository for sysadmin"
            require user joebloggs jimsmith mickmurphy
        </Location>

    The default_auth.conf file contains this:

        SVNParentPath /var/lib/svn/repository
        AuthType basic
        AuthUserFile /var/lib/svn/conf/.dav_svn.passwd
        AuthzSVNAccessFile /var/lib/svn/conf/svnaccess.conf

    I am not fully sure why I need the second SVNParentPath in default_auth.conf; I just added it today because I was getting error messages after adding the AuthzSVNAccessFile directive. With a totally permissive access file

        [/]
        joebloggs = rw

    the system worked fine (and was essentially unchanged), but as soon as I start trying to add any kind of restriction, such as

        [sysadmin:/]
        joebloggs = rw

    instead, I get the 'Permission denied' errors again. The log file entries are:

        [Thu May 28 10:40:17 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET websvn:/
        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'joebloggs' GET svn:/sysadmin

    What do I need to do to get this to work? Have I configured Apache wrong, or is my understanding of the svnaccess.conf file incorrect? If I am going about this the wrong way, I have no particular attachment to my overall approach, so feel free to offer alternatives as well.

    UPDATE (20090528-1600): I attempted to implement this answer, but I still cannot get it to work properly. I know most of the configuration is correct, as I have added

        [/]
        joebloggs = rw

    at the start and 'joebloggs' then has all the correct access. When I try to go repository-specific, though, doing something like

        [/]
        joebloggs = rw

        [sysadmin:/]
        mickmurphy = rw

    I get a permission-denied error for mickmurphy (joebloggs still works), with an error similar to what I already had previously:

        [Thu May 28 10:40:20 2009] [error] [client 89.100.219.180] Access denied: 'mickmurphy' GET svn:/sysadmin

    Also, I forgot to explain previously that all my repositories are underneath /var/lib/svn/repository.

    UPDATE (20090529-1245): Still no luck getting this to work, but all the signs seem to be pointing to the issue being with path-access control in subversion not working properly. My assumption is that I have not conf
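
    Two things are in play here: mod_authz_svn decides per-repository access on the Apache side, and WebSVN has to be told to honour the same file. For the WebSVN side, a sketch (directive name per the WebSVN documentation; verify against the include/config.php shipped with your version):

        // WebSVN's include/config.php
        $config->useAuthenticationFile('/var/lib/svn/conf/svnaccess.conf');

    On the Apache side, the name in a [sysadmin:/] section has to match the repository name as mod_dav_svn sees it, i.e. the directory name directly under the SVNParentPath.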

  • ignore ipv6 router advertisements for static addresses with bonded interfaces

    - by boran
    I need to assign static IPv6 addresses (not use autoconfigured addresses, and ignore router advertisements). This can be done as follows for a standard interface like eth0:

        iface eth0 inet6 static
            address myprefix:mysubnet::myip
            gateway myprefix:mysubnet::mygatewayip
            netmask 64
            pre-up /sbin/sysctl -q -w net.ipv6.conf.$IFACE.autoconf=0
            pre-up /sbin/sysctl -q -w net.ipv6.conf.$IFACE.accept_ra=0

    However, how can this be done for bonded interfaces? Using the "all" interface does not work. The system is Ubuntu 10.04, 2.6.24-24-server. If one uses the above sysctl commands for bond0, networking hangs on boot, because /proc/sys/net/ipv6/conf/bond0 does not yet exist and cannot be written to. Once the system has booted, /proc/sys/net/ipv6/conf/bond0 exists, so one solution is to add the following to /etc/rc.local:

        /sbin/sysctl -q -w net.ipv6.conf.bond0.autoconf=0
        /sbin/sysctl -q -w net.ipv6.conf.bond0.accept_ra=0
        /etc/init.d/networking restart

    This has the desired effect (the autoconfigured v6 address disappears), but it seems like a bit of a hack. Are there better solutions?
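
    A variant that avoids both rc.local and the full networking restart is to move the sysctl calls to post-up on the bond itself, so they run once bond0 (and its /proc entries) exist; a sketch for /etc/network/interfaces:

        iface bond0 inet6 static
            address myprefix:mysubnet::myip
            gateway myprefix:mysubnet::mygatewayip
            netmask 64
            post-up /sbin/sysctl -q -w net.ipv6.conf.bond0.autoconf=0
            post-up /sbin/sysctl -q -w net.ipv6.conf.bond0.accept_ra=0

    The kernel may still pick up a router advertisement in the window before post-up runs, in which case deleting the autoconfigured address (ip -6 addr del ...) in the same post-up hook covers it.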

  • Dovecot ignoring maximum number of IMAP connections

    - by Michelle
    I have a single mailbox mail server running Dovecot/Postfix and I have two IMAP clients, Thunderbird on the PC and K9 on Android. I keep on receiving this error in my logs even after I change the 'mail_max_userip_connections' variable to 50:

        puppet dovecot: imap-login: Maximum number of connections from user+IP exceeded (mail_max_userip_connections=10): user=<[email protected]>, method=PLAIN, rip=62.242.90.2, lip=198.29.31.229, TLS

    Why does it say that it is set to 10 in the log? Is that hardcoded?

        grep -r "mail_max_userip_connections" /etc/dovecot
        /etc/dovecot/conf.d/20-managesieve.conf: #mail_max_userip_connections = 10
        /etc/dovecot/conf.d/20-pop3.conf: #mail_max_userip_connections = 3
        /etc/dovecot/conf.d/20-imap.conf: mail_max_userip_connections = 50

    I've restarted dovecot after making the changes, but this error is still logged and I can't access the mailbox. Can anyone help me understand why I can't seem to raise the maximum limit?
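
    One thing worth checking: imap-login takes its limit from the imap protocol section, so the line in 20-imap.conf only counts if it sits inside the protocol block; a sketch, with doveconf used to verify what the running config resolves to:

        # /etc/dovecot/conf.d/20-imap.conf
        protocol imap {
          mail_max_userip_connections = 50
        }

        # confirm what dovecot actually sees, then reload
        doveconf -n | grep -A3 'protocol imap'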

  • Puppet causes endless restarts of CUPS (how does one prevent this)

    - by Purfideas
    Hi! It makes sense, and is in fact suggested on this site, to have a critical file change trigger a service restart with Puppet metaparameters (such as notify or subscribe). For example:

        ## file definition for printers.conf
        file { "/etc/cups/printers.conf":
            [snip],
            source => "puppet:///module/etc/cups/printers.conf"
        }

        ## service definition for cups
        service { 'cups':
            ensure    => running,
            subscribe => File['/etc/cups/printers.conf']
        }

    But in the case of CUPS, this triggers an endless loop of restarts; the logic works like this:
    1. Change the puppetmaster's version of /etc/cups/printers.conf.
    2. The puppetmaster pushes the new version to the client, triggering a cups restart.
    3. On restart, cupsd insists on putting its own time stamp at the top of printers.conf ('Written by cupsd...').
    4. This change will be seen as out of date, so after runinterval we return to (1).

    Is there a way to suppress cupsd's need to time-stamp the file? Or is there a Puppet trick that could help here? Thanks!
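
    One common workaround is to stop managing the live file directly: let Puppet manage a pristine copy, and only push it over printers.conf (and bounce CUPS) when the pristine copy itself changes, so cupsd's rewritten timestamp never triggers a new sync. A sketch (resource names hypothetical):

        file { '/etc/cups/printers.conf.pristine':
            source => 'puppet:///module/etc/cups/printers.conf',
            notify => Exec['sync-printers-conf'],
        }

        exec { 'sync-printers-conf':
            command     => '/bin/cp /etc/cups/printers.conf.pristine /etc/cups/printers.conf',
            refreshonly => true,            # only runs when notified
            notify      => Service['cups'],
        }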

  • Read files from a directory to create a ZIP (Hadoop)

    - by Félix
    I'm looking for Hadoop examples, something more complex than the wordcount example. What I want to do is read the files in a directory in Hadoop and get a zip, so I have thought to collect all the files in the map class and create the zip file in the reduce class. Can anyone give me a link to a tutorial or example that can help me build it? I don't want anyone to do this for me; I'm asking for a link with better examples than the wordcount. This is what I have; maybe it's useful for someone:

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.util.Iterator;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipOutputStream;

        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.BytesWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.FileOutputFormat;
        import org.apache.hadoop.mapred.JobClient;
        import org.apache.hadoop.mapred.JobConf;
        import org.apache.hadoop.mapred.MapReduceBase;
        import org.apache.hadoop.mapred.Mapper;
        import org.apache.hadoop.mapred.OutputCollector;
        import org.apache.hadoop.mapred.Reducer;
        import org.apache.hadoop.mapred.Reporter;
        import org.apache.hadoop.mapred.TextInputFormat;
        import org.apache.hadoop.mapred.lib.MultipleInputs;
        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;

        public class Testing {

            private static class MapClass extends MapReduceBase
                    implements Mapper<LongWritable, Text, Text, BytesWritable> {

                // reuse objects to save overhead of object creation
                Logger log = Logger.getLogger("log_file");

                public void map(LongWritable key, Text value,
                        OutputCollector<Text, BytesWritable> output, Reporter reporter)
                        throws IOException {
                    String line = ((Text) value).toString();
                    log.info("Doing something ... " + line);
                    BytesWritable b = new BytesWritable();
                    b.set(value.toString().getBytes(), 0, value.toString().getBytes().length);
                    output.collect(value, b);
                }
            }

            private static class ReduceClass extends MapReduceBase
                    implements Reducer<Text, BytesWritable, Text, BytesWritable> {

                Logger log = Logger.getLogger("log_file");
                ByteArrayOutputStream bout;
                ZipOutputStream out;

                @Override
                public void configure(JobConf job) {
                    super.configure(job);
                    log.setLevel(Level.INFO);
                    bout = new ByteArrayOutputStream();
                    out = new ZipOutputStream(bout);
                }

                public void reduce(Text key, Iterator<BytesWritable> values,
                        OutputCollector<Text, BytesWritable> output, Reporter reporter)
                        throws IOException {
                    while (values.hasNext()) {
                        byte[] data = values.next().getBytes();
                        ZipEntry entry = new ZipEntry("entry");
                        out.putNextEntry(entry);
                        out.write(data);
                        out.closeEntry();
                    }
                    BytesWritable b = new BytesWritable();
                    b.set(bout.toByteArray(), 0, bout.size());
                    output.collect(key, b);
                }

                @Override
                public void close() throws IOException {
                    super.close();
                    out.close();
                }
            }

            /**
             * Runs the demo.
             */
            public static void main(String[] args) throws IOException {
                int mapTasks = 20;
                int reduceTasks = 1;
                JobConf conf = new JobConf(Testing.class);
                conf.setJobName("testing");
                conf.setNumMapTasks(mapTasks);
                conf.setNumReduceTasks(reduceTasks);
                MultipleInputs.addInputPath(conf, new Path("/messages"),
                        TextInputFormat.class, MapClass.class);
                conf.setOutputKeyClass(Text.class);
                conf.setOutputValueClass(BytesWritable.class);
                FileOutputFormat.setOutputPath(conf, new Path("/czip"));
                conf.setMapperClass(MapClass.class);
                conf.setCombinerClass(ReduceClass.class);
                conf.setReducerClass(ReduceClass.class);
                // Delete the output directory if it exists already
                JobClient.runJob(conf);
            }
        }

  • Steam freezes at login screen

    - by Snail284069
    I have just installed Steam on Xubuntu, and after it finished installing it went to the login screen, but the screen is frozen and I cannot press the buttons. When running Steam through the terminal it says:

        alex@Craptop:~$ steam
        Running Steam on ubuntu 14.04 32-bit
        STEAM_RUNTIME is enabled automatically
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Gtk-Message: Failed to load module "overlay-scrollbar"
        Gtk-Message: Failed to load module "unity-gtk-module"
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        [0522/174755:WARNING:proxy_service.cc(958)] PAC support disabled because there is no system implementation
        Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
        Fontconfig error: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 70: non-double matrix element
        Fontconfig warning: "/etc/fonts/conf.d/10-scale-bitmap-fonts.conf", line 78: saw unknown, expected number
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        [HTTP Remote Control] HTTP server listening on port 35849.
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Process 2764 created /alex-ValveIPCSharedObjects5
        Installing breakpad exception handler for appid(steam)/version(1400690891_client)
        Generating new string page texture 12: 48x256, total string texture memory is 49.15 KB
        Generating new string page texture 13: 256x256, total string texture memory is 311.30 KB
        Generating new string page texture 14: 128x256, total string texture memory is 442.37 KB
        Generating new string page texture 15: 384x256, total string texture memory is 835.58 KB

    and then the terminal gets stuck too, letting me type into it but not doing anything. I tried reinstalling and restarting the computer, but it keeps happening.
