Search Results

Search found 5559 results on 223 pages for 'httpd conf'.

Page 93/223 | < Previous Page | 89 90 91 92 93 94 95 96 97 98 99 100  | Next Page >

  • Apache & SVN on Ubuntu - Post-commit hook fails silently, pre-commit hook “Permission Denied”

    - by 113169587962668775787
    I've been struggling for the past couple of days to get post-commit email notifications working on my SVN server (running via HTTP with Apache2 on Ubuntu 9.10). SVN commits work fine, but for some reason the hooks are not being executed. Here are the configuration settings. Users access the repo via HTTP with the Apache dav_svn module (I created users/passwords via htpasswd in a dav_svn.passwd file). dav_svn.conf:
      <Location /svn/repos>
        DAV svn
        SVNPath /home/svn/repos
        AuthType Basic
        AuthName "Subversion Repository"
        AuthUserFile /etc/apache2/dav_svn.passwd
        Require valid-user
      </Location>
    I created a post-commit hook that writes a simple message to a file in the repository root, /home/svn/repos/hooks/post-commit:
      #!/bin/sh
      REPOS="$1"
      REV="$2"
      /bin/echo 'worked' > ${REPOS}/postcommit.log
    I set the entire repository to be owned by www-data (the Apache user) and gave the post-commit script 755 permissions. When I test the post-commit script as the www-data user in an empty environment, it works:
      sudo -u www-data env - /home/svn/repos/hooks/post-commit /home/svn/repos 7
    But when I commit from a client machine, the commit succeeds and the post-commit script does not seem to be executed. I also tried running a simple script as the pre-commit hook, and I get an error even with an empty pre-commit script: "Commit failed (details follow): Can't create null stdout for hook '/home/svn/repos/hooks/pre-commit': Permission denied". A few Google searches for this error suggest it is an issue with the Apache user (www-data) not having adequate permissions, specifically to write to /dev/null, and that post-commit fails silently because its stdout is never reported anywhere. Anyway, I've also tried giving the Apache user (www-data) ownership of the entire repository and editing the Apache virtual host to allow operations on the server root, and I'm still getting permission denied. /etc/apache2/sites-available/primarydomain.conf:
      <Directory />
        Options FollowSymLinks
        AllowOverride None
        Order allow,deny
        Allow from all
      </Directory>
    Any ideas/suggestions would be greatly appreciated! Thanks
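    A minimal, hedged sketch of checks often suggested for this class of hook failure (not from the original post; the paths are reused from the question, the fix itself is an assumption):
      # /dev/null should be a world-writable character device; hooks get their stdout pointed at it
      ls -l /dev/null                      # expected: crw-rw-rw- 1 root root 1, 3 ...
      # if it has been clobbered (e.g. by a stray redirect run as root), recreate it
      sudo rm /dev/null && sudo mknod -m 666 /dev/null c 1 3
      # make sure the hook and repository are usable by the Apache user
      sudo chown -R www-data:www-data /home/svn/repos
      sudo chmod 755 /home/svn/repos/hooks/post-commit
      # post-commit output is discarded, so capture stderr somewhere visible while debugging
      sudo -u www-data /home/svn/repos/hooks/post-commit /home/svn/repos 7 2>>/tmp/hook-debug.log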

    Read the article

  • MAC addresses on dual-NIC mainboards

    - by Tom O'Connor
    Here's a weird problem. We've got a number of devices with dual-NIC mainboards. Some are Realtek NICs, which suck. Some are Intel e1000s, which don't. I've just noticed on two machines (one with an Intel NIC, one with a Realtek) that when I put the MAC address of one machine into the dhcpd.conf file on our DHCP server to get it to PXE boot into a rebuild environment, initially everything is fine. The server gets a DHCP allocation and PXE boots into the Ubuntu preseed environment. On one or two machines, it gets as far as Ubuntu's DHCP network configuration, and fails. If I pull up a busybox shell (on tty2 on the installing machine) and run ip link, I can see that the UP flag is set on the other NIC. Here's what's in dhcpd.conf:
      host xeon16-ghz240-gb48-node1 {
        hardware ethernet BC:AE:C5:07:1F:18;
        filename "pxelinux.0";
        next-server 192.168.123.80;
      }
    This is what ip link on the evil machine looks like. Only one NIC is actually connected (deliberately). As you can see, the NIC that's in the dhcpd config is not marked as UP, and the link that is UP isn't the one in DHCP. So far I've seen this on two brands of dual-NIC configuration. Does anyone know 1) what's causing it, and 2) what we can do about it?
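    A hedged workaround sketch (my assumption, not from the post): register both onboard MACs for each host so that whichever NIC the installer brings up still matches a reservation; the second MAC below is a placeholder.
      host xeon16-ghz240-gb48-node1-nic1 { hardware ethernet BC:AE:C5:07:1F:18; filename "pxelinux.0"; next-server 192.168.123.80; }
      host xeon16-ghz240-gb48-node1-nic2 { hardware ethernet XX:XX:XX:XX:XX:XX; filename "pxelinux.0"; next-server 192.168.123.80; }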

    Read the article

  • Apache/Mongrel/Redmine installation problem (VirtualHost/ProxyPass)

    - by Riddler
    I am installing Redmine as per these step-by-step instructions: http://justnotes.co.cc/2010/02/11/how-to-install-redmine-on-ubuntu/ I am using Ubuntu 10.04.1, Apache 2.2.14 and Mongrel 1.1.5. For the VirtualHost configuration stage, I am using this:
      <VirtualHost *:80>
        ServerName myserver.lv
        ProxyPass /redmine/ http://localhost:8000/
        ProxyPassReverse /redmine/ http://localhost:8000
        ProxyPreserveHost on
        <Proxy *>
          Order allow,deny
          Allow from all
        </Proxy>
      </VirtualHost>
    But when I point my browser at http://<my-server's-ip>/redmine/ what I see is not the Redmine web application but "Index of /redmine", i.e. an index of the files from the root directory of Redmine. Any idea how to fix that? P.S. I tried removing the VirtualHost stuff altogether and instead adding the following simple clauses to apache2.conf:
      <Proxy *>
        Order allow,deny
        Allow from all
      </Proxy>
      ProxyPass /redmine/ http://localhost:8000/
      ProxyPassReverse /redmine/ http://localhost:8000/
      ProxyPreserveHost on
    As a result, the behavior changes! Now http://<my-server's-ip>/redmine/ produces the source code of Redmine's start page, so it is served, but apparently not rendered. At the same time http://<my-server's-ip>:8000/ still works perfectly fine, so Mongrel is serving the Redmine application as it should; it's just that something is wrong with my VirtualHost/proxying clauses in the .conf file.
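    A hedged troubleshooting sketch (assumptions of mine, not from the post): "Index of /redmine" usually means a filesystem Alias or the default site answered instead of the proxy, and raw source usually means the request bypassed the application server. The commands assume the stock Debian/Ubuntu Apache layout.
      sudo a2enmod proxy proxy_http        # mod_proxy and mod_proxy_http must be loaded for ProxyPass to take effect
      sudo apache2ctl -S                   # check which VirtualHost actually answers for the server's IP
      sudo a2dissite default               # only if the default site is the one matching (assumption)
      sudo /etc/init.d/apache2 reload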

    Read the article

  • On Debian, lighttpd and Apache2 sharing port 80: lighttpd throws "Address already in use" error

    - by user1960581
    I bought a Linode (linode.com) server the other day. I've been trying to run lighttpd and apache2 on the same port, using lighttpd for static files. As Linode provides only ONE IPv4 address, I tried to bind lighttpd to the IPv6 address. That's where I got the same error every single time: can't bind to port [ipv6] 80, Address already in use. I tried binding to the IPv4 address and everything worked. Please help me; this has been driving me nuts for the last two days. My lighttpd.conf file (the IPv6 address isn't the real one):
      server.modules = (
        "mod_access",
        "mod_alias",
        "mod_compress",
        "mod_redirect",
      # "mod_rewrite",
      )
      server.document-root = "/var/www"
      server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
      server.errorlog = "/var/log/lighttpd/error.log"
      server.pid-file = "/var/run/lighttpd.pid"
      server.username = "www-data"
      server.groupname = "www-data"
      server.port = 80
      server.bind = "2600:3c02::0000"
      server.use-ipv6 = "enable"
      #server.pid-file = "/var/run/lighttpd.pid"
      index-file.names = ( "index.php", "index.html", "index.lighttpd.html" )
      url.access-deny = ( "~", ".inc" )
      static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
      compress.cache-dir = "/var/cache/lighttpd/compress/"
      compress.filetype = ( "application/javascript", "text/css", "text/html", "text/plain" )
      # default listening port for IPv6 falls back to the IPv4 port
      #include_shell "/usr/share/lighttpd/use-ipv6.pl " + server.port
      include_shell "/usr/share/lighttpd/create-mime.assign.pl"
      include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
      ### ipv6 ###
      $SERVER["socket"] == "[2600:3c02::0000]:80" {
      # accesslog.filename = "var/log/lighttpd/ipv6/access.log"
      # server.document-root = "/var/www/"
      # server.error-handler-404 = "/index.php?error=404"
      }
    and the error message: can't bind to port, 2600:3c02::0000 Address already in use.
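    A hedged reading (mine, not confirmed): with server.bind plus server.use-ipv6 set and a $SERVER["socket"] block for the same [address]:80, lighttpd tries to bind that socket twice, and Apache listening on *:80 can also grab the v6 port. A sketch of things to check, with 192.0.2.10 standing in for the real IPv4 address:
      # see which process already owns port 80 on the v6 address
      netstat -tlnp | grep ':80'
      # in /etc/apache2/ports.conf, restrict Apache to the IPv4 address only
      Listen 192.0.2.10:80
      # in lighttpd.conf, keep either server.bind/server.use-ipv6 or the $SERVER["socket"] block for that address, not both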

    Read the article

  • Bind9 zone files

    - by user42780
    Well, for the better part of the last two hours I've tried to figure out what is actually wrong, but I can't seem to find anything obvious. What I'm trying to do is set up my DNS for, say, domain.com. This should include two NS records, namely ns1.domain.com and ns2.domain.com. Along with that there should be a mail record, as well as a CNAME record for www. I've been through roughly 20 how-tos in the last two hours and rewrote everything from scratch four times, and I still can't seem to find what's wrong. My only suspicions are two things: the error I get from the bind9 daemon when I stop the service, and the named.conf file. The error I get from the bind9 daemon when stopping the service is:
      * Stopping domain name service... bind9
      rndc: connection to remote host closed
      This may indicate that
      * the remote server is using an older version of the command protocol,
      * this host is not authorized to connect,
      * the clocks are not synchronized, or
      * the key is invalid.
    I honestly don't know what this means, apart from the fact that the key defined in /etc/bind/rndc.key is not in the named.conf file (yes, I did try to add it, to no avail). Here are all the zone files and configuration files: http://208.77.101.5/bind9/ If anyone could help, it would be greatly appreciated.
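    A hedged sketch of the rndc wiring that stock Debian/Ubuntu installs expect (an assumption on my part, since the actual named.conf isn't shown inline): the key file is included and a controls block names the same key.
      include "/etc/bind/rndc.key";
      controls {
        inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
      };
    The key name in braces has to match the one declared inside /etc/bind/rndc.key; "rndc-key" is the usual default but is an assumption here.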

    Read the article

  • Multiple VLANs with access to a shared PBX system

    - by Matt
    I'm new to networking and was looking for some assistance. First off, I'm using Packet Tracer to diagram my scenario, as I will be receiving my equipment to deploy next week. Hardware to be used: two Catalyst 3560 switches, all connected to a SonicWall router. I have two companies that work in the same office space. I need to keep these companies separate, each on their own VLAN. They will, however, need to share the phone system. (Packet Tracer file uploaded for those who have the time to see what I put together: http://dl.dropbox.com/u/86234623/network%20build.pkt) Here is my current test scenario. On switch 0 I have: company A on VLAN 2, computers 172.16.1.100 and 101, 255.255.0.0, FA0/10 FA0/11; company B on VLAN 3, computer 172.16.2.102, 255.255.0.0, FA0/12; the PBX on a trunk port, 172.16.0.5, 255.255.0.0, FA0/5; trunk port on FA0/1 to connect the switches. On switch 1 I have: company A on VLAN 2, computer 172.16.1.102, 255.255.0.0; company B on VLAN 3, computers 172.16.2.100 and 101, 255.255.0.0; trunk port on FA0/1 to connect the switches. I can ping the respective computers on the same VLAN but can't ping from company A to B, which is what I want. However, neither company can talk to (ping) the PBX. Here are the commands I used to configure what I have:
      switch 0
        en
        conf t
        vlan 2
        name A
        vlan 3
        name B
        int fa0/10
        switchport mode access
        switchport access vlan 2
        int fa0/11
        switchport mode access
        switchport access vlan 2
        int fa0/12
        switchport mode access
        switchport access vlan 3
        int fa0/5
        switchport trunk encapsulation dot1q
        switchport mode trunk
        switchport trunk allowed vlan 1-3
        int fa0/1 (to connect the switches)
        switchport trunk encapsulation dot1q
        switchport mode trunk
        switchport trunk allowed vlan 1-3
      Switch 1
        en
        conf t
        vlan 2
        name A
        vlan 3
        name B
        int fa0/10
        switchport mode access
        switchport access vlan 3
        int fa0/11
        switchport mode access
        switchport access vlan 3
        int fa0/12
        switchport mode access
        switchport access vlan 2
        int fa0/1 (to connect the switches)
        switchport trunk encapsulation dot1q
        switchport mode trunk
        switchport trunk allowed vlan 1-3
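    A hedged sketch of one way to handle the PBX (an assumption of mine, not a tested config): the PBX currently hangs off a trunk whose untagged traffic lands in VLAN 1, while the PCs sit in VLANs 2 and 3, so there is no path to it. One approach is to give the PBX its own VLAN and route between VLANs on the 3560, blocking A-to-B with an ACL; the addresses below are assumptions and use /24 masks instead of the /16 in the question.
      conf t
       ip routing
       vlan 4
        name PBX
       interface fa0/5
        switchport mode access
        switchport access vlan 4
       ip access-list extended BLOCK-A-B
        deny ip 172.16.1.0 0.0.0.255 172.16.2.0 0.0.0.255
        permit ip any any
       interface vlan 2
        ip address 172.16.1.1 255.255.255.0
        ip access-group BLOCK-A-B in
       interface vlan 3
        ip address 172.16.2.1 255.255.255.0
       interface vlan 4
        ip address 172.16.0.1 255.255.255.0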

    Read the article

  • How to specify search domain name of nginx resolver for proxy_pass

    - by myjpa
    Assuming my server is www.mydomain.com, on Nginx 1.0.6 I'm trying to proxy all requests to http://www.mydomain.com/fetch to other hosts; the destination URL is specified as a GET parameter named "url". For instance, when a user requests either one of: http://www.mydomain.com/fetch?url=http://another-server.mydomain.com/foo/bar http://www.mydomain.com/fetch?url=http://another-server/foo/bar it should be proxied to http://another-server.mydomain.com/foo/bar I'm using the following nginx config, and it works fine only if the url parameter contains a domain name, like http://another-server.mydomain.com/...; but it fails on http://another-server/... with the error: another-server could not be resolved (3: Host not found). nginx.conf is:
      http {
        ...
        # the DNS server
        resolver 171.10.129.16;
        server {
          listen 80;
          server_name localhost;
          root /path/to/site/root;
          location = /fetch {
            proxy_pass $arg_url;
          }
        }
    Here, I'd like to resolve every URL without a domain name as a host name under mydomain.com. In /etc/resolv.conf it is possible to specify a default search domain for the whole Linux system, but it doesn't affect the nginx resolver: search mydomain.com Is this possible in Nginx? Or alternatively, how can I "rewrite" the url parameter so that I can add the domain name?
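    A hedged sketch of the rewrite-the-parameter route (the regex and domain handling are my assumptions, not a tested config): append the search domain inside the location block when the host part of $arg_url contains no dot, then proxy to the rewritten value.
      location = /fetch {
        resolver 171.10.129.16;
        set $target $arg_url;
        # bare hostname (no dot before the first slash): append the search domain
        if ($arg_url ~* "^(https?://)([^./]+)(/.*)?$") {
          set $target "$1$2.mydomain.com$3";
        }
        proxy_pass $target;
      }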

    Read the article

  • Redmine install not working and displaying directory contents - Ubuntu 10.04

    - by Casey Flynn
    I've gone through the steps to set up and install the redmine project tracking web app on my VPS with Apache2 but I'm running into a situation where instead of displaying the redmine app, I just see the directory contents: Does anyone know what could be the problem? I'm not sure what other files might be of use to diagnose what's going on. Thanks! # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2. Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log" # with ServerRoot set to "" will be interpreted by the # server as "//var/log/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. # #<IfModule !mpm_winnt.c> #<IfModule !mpm_netware.c> LockFile /var/lock/apache2/accept.lock #</IfModule> #</IfModule> # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. 
# KeepAliveTimeout 15 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog /var/log/apache2/error.log # # LogLevel: Control the number of messages logged to the error_log. 
# Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include /etc/apache2/mods-enabled/*.load Include /etc/apache2/mods-enabled/*.conf # Include all the user configurations: Include /etc/apache2/httpd.conf # Include ports listing Include /etc/apache2/ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # # Define an access log for VirtualHosts that don't define their own logfile CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include /etc/apache2/conf.d/ # Include the virtual host configurations: Include /etc/apache2/sites-enabled/ # Enable fastcgi for .fcgi files # (If you're using a distro package for mod_fcgi, something like # this is probably already present) #<IfModule mod_fcgid.c> # AddHandler fastcgi-script .fcgi # FastCgiIpcDir /var/lib/apache2/fastcgi #</IfModule> LoadModule fcgid_module /usr/lib/apache2/modules/mod_fcgid.so LoadModule passenger_module /var/lib/gems/1.8/gems/passenger-3.0.7/ext/apache2/mod_passenger.so PassengerRoot /var/lib/gems/1.8/gems/passenger-3.0.7 PassengerRuby /usr/bin/ruby1.8 ServerName demo and my vhosts file #No DNS server, default ip address v-host #domain: none #public: /home/casey/public_html/app/ <VirtualHost *:80> ServerAdmin webmaster@localhost # ScriptAlias /redmine /home/casey/public_html/app/redmine/dispatch.fcgi DirectoryIndex index.html DocumentRoot /home/casey/public_html/app/public <Directory "/home/casey/trac/htdocs"> Order allow,deny Allow from all </Directory> <Directory /var/www/redmine> RailsBaseURI /redmine PassengerResolveSymlinksInDocumentRoot on </Directory> # <Directory /> # Options FollowSymLinks # AllowOverride None # </Directory> # <Directory /var/www/> # Options Indexes FollowSymLinks MultiViews # AllowOverride None # Order allow,deny # allow from all # </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /home/casey/public_html/app/log/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel debug CustomLog /home/casey/public_html/app/log/access.log combined # Alias /doc/ "/usr/share/doc/" # <Directory "/usr/share/doc/"> # Options Indexes MultiViews FollowSymLinks # AllowOverride None # Order deny,allow # Deny from all # Allow from 127.0.0.0/255.0.0.0 ::1/128 # </Directory> </VirtualHost>
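    A hedged sketch, not the poster's configuration: with Passenger, a directory listing at the site root usually means DocumentRoot is not pointing at the Rails application's public/ directory (here it points at /home/casey/public_html/app/public while the RailsBaseURI block references /var/www/redmine). A minimal shape, with the Redmine path below being an assumed placeholder:
      <VirtualHost *:80>
        ServerName yourserver.example                      # placeholder
        DocumentRoot /home/casey/public_html/app/redmine/public   # assumed location of the Redmine checkout
        <Directory /home/casey/public_html/app/redmine/public>
          AllowOverride all
          Options -MultiViews
        </Directory>
      </VirtualHost>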

    Read the article

  • mcelog fails to start on PUIAS 6.4 AMD hardware

    - by Predrag Punosevac
    Folks, I am a total Linux n00b. I am trying to deploy mcelog on one of my computing nodes running PUIAS 6.4 (i86_64), a free clone of Red Hat 6.4, on AMD hardware:
      [root@lov3 edac]# uname -a
      Linux lov3.mylab.org 2.6.32-358.18.1.el6.x86_64 #1 SMP Tue Aug 27 22:40:32 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux
      [root@lov3 mcelog]# lscpu
      Architecture:          x86_64
      CPU op-mode(s):        32-bit, 64-bit
      Byte Order:            Little Endian
      CPU(s):                64
      On-line CPU(s) list:   0-63
      Thread(s) per core:    2
      Core(s) per socket:    8
      Socket(s):             4
      NUMA node(s):          8
      Vendor ID:             AuthenticAMD
      CPU family:            21
      Model:                 2
      Stepping:              0
      CPU MHz:               1400.000
      BogoMIPS:              4999.30
      Virtualization:        AMD-V
      L1d cache:             16K
      L1i cache:             64K
      L2 cache:              2048K
      L3 cache:              6144K
      NUMA node0 CPU(s):     0-7
      NUMA node1 CPU(s):     8-15
      NUMA node2 CPU(s):     16-23
      NUMA node3 CPU(s):     24-31
      NUMA node4 CPU(s):     32-39
      NUMA node5 CPU(s):     40-47
      NUMA node6 CPU(s):     48-55
      NUMA node7 CPU(s):     56-63
    My mcelog.conf file is more or less default, apart from the fact that I would like to run mcelog as a daemon and to log errors. When I start mcelog:
      [root@lov3 mcelog]# mcelog --config-file mcelog.conf
      AMD Processor family 21: Please load edac_mce_amd module.
    However, the module is present:
      [root@lov3 mcelog]# locate edac_mce_amd.ko
      /lib/modules/2.6.32-358.18.1.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko
      /lib/modules/2.6.32-358.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko
    and loaded:
      [root@lov3 edac]# lsmod | grep mce
      edac_mce_amd           14705  1 amd64_edac_mod
    Is there anything that I can do to get mcelog working? The only reference I found is this thread: http://lists.centos.org/pipermail/centos/2012-November/130226.html
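    A hedged note and sketch (my assumption, not a confirmed answer): on AMD family 21 CPUs, mcelog hands decoding off to the kernel's edac_mce_amd/amd64_edac path, so the usual suggestion is to rely on EDAC rather than the mcelog daemon.
      modprobe amd64_edac_mod               # pulls in edac_mce_amd as a dependency
      edac-util -v                          # assumes the edac-utils package is installed
      grep -i "machine check" /var/log/messages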

    Read the article

  • Proftpd on Debian ignoring umask setting

    - by sodan
    I have found a solution for my problem. This is what I did: I added the following to my /etc/proftpd/proftpd.conf: <Limit SITE_CHMOD> DenyAll </Limit> I have the following problem: When I upload files to my FTP server the umask I set is totally ignored. All files have permissions 644. I use Debian 5.0.3 as operating system and proftpd 1.3.1 as ftp server. The user logging in is called mug and he is a local user (no virtual user). He is chrooted to the home directory /home/mug/ I tried the following things: 1. set umask setting in /etc/proftpd/proftpd.conf Umask 000 000 This should result in 777 for directories and 666 for files since directory umask is applied to 777 and file umask is applied to 666. After that I of course restarted the proftpd to be sure that the config is reloaded. 2. set umask for the user in /home/mug/.bashrc I added the following to the .bashrc for the user: umask 0000 After that I reloaded the .bashrc: source /home/mug/.bashrc I also checked the umask setting for the user by changing to the user and using this command: su mug umask As result I got a umask of 0000 prompted. So this worked. But still all my uploaded files are having 644 permissions set :( What am I doing wrong?

    Read the article

  • PHP Startup: Unable to load dynamic library 'C:"\php\php_mysql.dll' - The specified module could not be loaded

    - by Tiny
    I'm trying to upgrade from PHP 5.4.3 to PHP 5.4.14 in WAMP Server 2.2e. I have downloaded php-5.4.14-Win32-VC9-x86 (thread safe) and extracted it under C:\wamp\bin\php. I copied wampserver.conf from C:\wamp\bin\php\php5.4.3 to C:\wamp\bin\php\php5.4.14 and renamed php.ini-development to phpForApache.ini. The port number of the WAMP server has been changed in the httpd.conf file from its default 80 to 8087. This is mentioned here, though it is about upgrading from PHP 5.3.5 to PHP 5.4.0. After this, I restarted the WAMP server and its services all over again, and both versions appeared in the php-versions menu (which is opened when the server's tray icon is clicked). But when I attempt to enable a library like php_mysql or php_mysqli, a warning message box appears:
      PHP Startup: Unable to load dynamic library 'C:"\php\php_mysql.dll' - The specified module could not be loaded.
    I have also tried removing the semicolon before them in the php.ini file, but to no avail. I'm running Microsoft Windows XP Professional Version 2002, Service Pack 3. Where might the problem be? EDIT: I have changed extension_dir from C:\php to c:\wamp\bin\php\php5.4.14\ext\ in php.ini as the answer below indicates, and the library is now loaded correctly, but it now says: 1045 - Access denied for user 'root'@'localhost' (using password: YES), even though the user name and the password in the config.inc.php file under phpmyadmin are the same as they are in MySQL. I have also tried restarting the MySQL56 service from Control Panel - Services (Local), but it keeps giving the same error. Does someone know why this happens?

    Read the article

  • Setting up dnsmasq for a local network

    - by WishCow
    A small group of developers and I have just moved to a new office, and I'd like to set up dnsmasq on our development server, so that when we deploy web apps there we don't have to edit our own hosts files. We have a router at 192.168.3.1 which we don't have access to. I figured I'd install a DNS server on the development box, and we all record its IP as a secondary DNS server. Unfortunately I'm struggling to make this work. The name of the devel server is devbox, its IP is 192.168.3.99, and it's running the latest Ubuntu Server (Karmic). My computer is running Ubuntu Desktop (Karmic). What I'd like to achieve: let's say I have three websites, website1, website2 and website3, running on the development box. I'd like to access them at the URLs: http://website1.devbox http://website2.devbox http://website3.devbox So I have configured Apache on the devel box, installed dnsmasq, and put the following lines into its hosts file:
      192.168.3.99 website1.devbox
      192.168.3.99 website2.devbox
      192.168.3.99 website3.devbox
    and edited my own resolv.conf file to include the devel box as a nameserver:
      nameserver 192.168.3.99
    It's working fine; I can access the sites. The problem is that it doesn't scale well. I'd like all domains ending with .devbox forwarded to the development box, and this is what I'm struggling with. I have tried putting 192.168.3.99 devbox into the hosts file, and editing the line in dnsmasq.conf:
      # Add local-only domains here, queries in these domains are answered
      # from /etc/hosts or DHCP only.
      local=/devbox/
    But I cannot get it working. If I try any URL that is not explicitly present in the development box's hosts file, the DNS lookup fails. Is the local directive for something else? Am I looking in the wrong place?
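    A hedged sketch (an assumption about the directives, not taken from the post): local=/devbox/ only keeps .devbox queries from being forwarded upstream; answering every *.devbox name with the devel box itself is what the address directive does.
      # /etc/dnsmasq.conf
      local=/devbox/                  # never forward .devbox queries to the upstream DNS
      address=/devbox/192.168.3.99    # answer any *.devbox query with the devel box's IP
    followed by a restart, e.g. sudo /etc/init.d/dnsmasq restart on Karmic.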

    Read the article

  • Apache to read from /home/user/public_html on CentOS 5.7

    - by C.S.Putra
    This is my first experience using CentOS 5.7 / Linux as my web server OS, and I have just finished installing Apache. Then I created a new account using WHM. The account is now created and the domain name can be accessed. I have put the web files under /home/user/public_html/, but when I access the domain assigned to that user (which I assigned when creating the new account in WHM), it doesn't read the files. In /usr/local/apache/conf/httpd.conf:
      <VirtualHost 175.103.48.66:80>
        ServerName domain.com
        ServerAlias www.domain.com
        DocumentRoot /home/user/public_html
        ServerAdmin [email protected]
        User veevou # Needed for Cpanel::ApacheConf
        <IfModule mod_suphp.c>
          suPHP_UserGroup group1 group1
        </IfModule>
        <IfModule !mod_disable_suexec.c>
          SuexecUserGroup group1 group1
        </IfModule>
        CustomLog /usr/local/apache/domlogs/domain.com-bytes_log "%{%s}t %I .\n%{%s}t %O ."
        CustomLog /usr/local/apache/domlogs/domain.com combined
        ScriptAlias /cgi-bin/ /home/user/public_html/cgi-bin/
      </VirtualHost>
    Instead of reading from /home/user/public_html/, Apache reads the /var/www/html/ folder. How do I set up Apache so that when users access www.domain.com, they get the files under /home/user/public_html/? Please advise. Thanks
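    A hedged troubleshooting sketch (assumptions of mine, not a diagnosis): content served from /var/www/html usually means a distro-packaged Apache is the one answering on port 80 rather than cPanel's /usr/local/apache, or the vhost for domain.com never got loaded. The commands assume a stock CentOS + WHM/cPanel layout.
      ps aux | grep httpd                        # /usr/sbin/httpd is the distro build, /usr/local/apache/bin/httpd is cPanel's
      /usr/local/apache/bin/apachectl -S         # is the domain.com VirtualHost actually listed?
      /usr/local/apache/bin/apachectl configtest && /usr/local/apache/bin/apachectl restart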

    Read the article

  • Authenticate users with Zimbra LDAP Server from other CentOS clients

    - by efesaid
    I'm wondering how I can integrate my database, web, backup, etc. CentOS servers with the Zimbra LDAP server. Does it require more advanced configuration than standard LDAP authentication? My Zimbra server version is:
      [zimbra@zimbra ~]$ zmcontrol -v
      Release 8.0.5_GA_5839.RHEL6_64_20130910123908 RHEL6_64 FOSS edition.
    My LDAP server status is:
      [zimbra@ldap ~]$ zmcontrol status
      Host ldap.domain.com
              ldap        Running
              snmp        Running
              stats       Running
              zmconfigd   Running
    I have already installed the nss-pam-ldapd packages on my servers:
      [root@www]# rpm -qa | grep ldap
      nss-pam-ldapd-0.7.5-18.2.el6_4.x86_64
      apr-util-ldap-1.3.9-3.el6_0.1.x86_64
      pam_ldap-185-11.el6.x86_64
      openldap-2.4.23-32.el6_4.1.x86_64
    My /etc/nslcd.conf is:
      [root@www]# tail -n 7 /etc/nslcd.conf
      uid nslcd
      gid ldap
      # This comment prevents repeated auto-migration of settings.
      uri ldap://ldap.domain.com
      base dc=domain,dc=com
      binddn uid=zimbra,cn=admins,cn=zimbra
      bindpw **pass**
      ssl no
      tls_cacertdir /etc/openldap/cacerts
    When I run:
      [root@www ~]# id username
      id: username: No such user
    But I am sure that the username user exists on the LDAP server. EDIT: When I run an ldapsearch command I get the full result, with credentials and DNs:
      [root@www ~]# ldapsearch -H ldap://ldap.domain.com:389 -w **pass** -D uid=zimbra,cn=admins,cn=zimbra -x 'objectclass=*'
      # extended LDIF
      #
      # LDAPv3
      # base <dc=domain,dc=com> (default) with scope subtree
      # filter: objectclass=*
      # requesting: ALL
      #
      # domain.com
      dn: dc=domain,dc=com
      zimbraDomainType: local
      zimbraDomainStatus: active
      . . .
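    A hedged sketch of the client-side pieces that usually have to accompany nslcd on CentOS 6 (assumptions of mine): id(1) only consults LDAP once /etc/nsswitch.conf says so, and nslcd expects posixAccount attributes (uidNumber, gidNumber, homeDirectory), which plain Zimbra accounts may not carry.
      # /etc/nsswitch.conf
      passwd: files ldap
      shadow: files ldap
      group:  files ldap
      # then restart the lookup daemons
      service nslcd restart
      service nscd restart    # only if nscd is installed, to drop cached negative lookups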

    Read the article

  • Apache Virtual host not recognized

    - by Bozho
    I've been using one server, then I reinstalled everything on another server, and mod_jk stopped working. Here is the situation: Apache 2.0 sitting "in front"; mod_jk used to connect Apache to Tomcat; Tomcat 6.0.26 used to serve the actual requests. I followed this tutorial. The result is: accessing http://mysite.com opens the index.html in /var/www/; accessing http://mysite.com:8080/ works OK; the logs at /var/logs/apache2 show everything is OK:
      [Mon Mar 29 22:01:53.310 2010] [28349:3075389184] [info] init_jk::mod_jk.c (2830): mod_jk/1.2.26 initialized
      [Mon Mar 29 22:01:53 2010] [warn] No JkShmFile defined in httpd.conf. Using default /var/log/apache2/jk-runtime-status
      [Mon Mar 29 22:01:53 2010] [notice] Apache/2.2.9 (Debian) mod_jk/1.2.26 configured -- resuming normal operations
    I compared the server.xml, jk.conf and sites-enabled/mysite from the new server to those from the old one, and they are identical. The domain name is the same (I updated the DNS record today, and it has refreshed successfully). So the question is: what can be going wrong? Is there another place where problems would be logged, if any occur? Update: What I can be almost certain of is that the virtual host is not recognized. Requests are always forwarded to the default virtual host. So, how do I make sure the virtual host is recognized and working?
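    A hedged checklist (assumptions of mine) for requests always landing on the default virtual host with mod_jk on a Debian-style Apache; worker1 is a placeholder that has to match workers.properties:
      sudo apache2ctl -S                      # is the mysite vhost listed for *:80 with the right ServerName?
      grep -R NameVirtualHost /etc/apache2/   # a NameVirtualHost line must match the vhost's address
      sudo a2ensite mysite && sudo /etc/init.d/apache2 reload
    and the minimal shape of the site file:
      NameVirtualHost *:80
      <VirtualHost *:80>
        ServerName mysite.com
        ServerAlias www.mysite.com
        JkMount /* worker1
      </VirtualHost>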

    Read the article

  • Ubuntu 12.04 cloud edition on Amazon - Apache2 - /etc

    - by jdog
    I have setup a web server on Amazon with 3 Virtual hosts. For some reason I can't get any of the sites going on it, they all show a 404 error. /var/log/apache2/error.log shows "File does not exist: /etc/apache2/htdocs" I have checked: a2ensite all my virtual hosts actually checked softlinks in sites-enabled access rights in /var/www to 777, in case user is not www-data grep -r htdocs /etc/apache2 (returns nothing) ports.conf has NameVirtualHost directive exactly matching Virtual Hosts What else could this be? ports.conf # If you just change the port or add more ports here, you will likely also # have to change the VirtualHost statement in # /etc/apache2/sites-enabled/000-default # This is also true if you have upgraded from before 2.2.9-3 (i.e. from # Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and # README.Debian.gz NameVirtualHost 107.20.169.163:80 Listen 80 <IfModule mod_ssl.c> # If you add NameVirtualHost *:443 here, you will also have to change # the VirtualHost statement in /etc/apache2/sites-available/default-ssl # to <VirtualHost *:443> # Server Name Indication for SSL named virtual hosts is currently not # supported by MSIE on Windows XP. Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> sites-available/www.seleconlight.com <VirtualHost 107.20.169.163:80> ServerName www.seleconlight.com DocumentRoot /var/www/www.seleconlight.com CustomLog /var/log/apache2/www.seleconlight.com-access.log combined ErrorLog /var/log/apache2/www.seleconlight.com-error.log </VirtualHost>
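    A hedged guess at the cause (not confirmed in the post): on EC2 the elastic IP 107.20.169.163 is not bound on the instance itself, so a VirtualHost keyed to that address never matches and Apache falls back to the compiled-in /etc/apache2/htdocs DocumentRoot. Wildcard name-based vhosts usually sidestep this:
      # /etc/apache2/ports.conf
      NameVirtualHost *:80
      Listen 80
      # /etc/apache2/sites-available/www.seleconlight.com
      <VirtualHost *:80>
        ServerName www.seleconlight.com
        DocumentRoot /var/www/www.seleconlight.com
      </VirtualHost>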

    Read the article

  • DNS configuration issues. Clients inside network unable to resolve DNS server's name

    - by hydroparadise
    I set up the DNS service on Ubuntu 12.04 64-bit and all appears to be well, except that my DHCP clients do not recognize my DNS server's hostname. When doing an nslookup on one of my Windows clients, I get:
      C:\Users\chad>nslookup
      Default Server:  UnKnown
      Address:  192.168.1.2
    where I would expect the FQDN in the spot where UnKnown appears. The DNS server knows itself pretty well, but I think only because I have an entry in the /etc/hosts file to resolve it. There are so many places to look that I don't even know where to begin. Are there any logs I can look at? Something. Places I've looked at and configured: /etc/bind/zones/domain.com.db /etc/bind/zones/rev.1.168.192.in-addr.arpa /etc/bind/named.conf.local EDIT: '/etc/bind/zones/rev.1.168.192.in-addr.arpa'
      @ IN SOA dns-serv1.mydomain.com [email protected]. (
          2006081401;
          28800;
          604800;
          604800;
          86400 )
        IN NS dns-serv1.mydomain.com.
      2 IN PTR dns-serv1
      2 IN PTR mydomain.com
    EDIT 2: '/etc/bind/named.conf.local'
      zone "mydomain.com" {
        type master;
        file "/etc/bind/zones/mydomain.com.db";
      };
      zone "1.168.192.in-addr.arpa" {
        type master;
        file "/etc/bind/zones/rev.0.168.192.in-addr.arpa";
      };
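    A hedged sketch of a cleaner reverse zone (my assumptions: PTR targets should be fully qualified, one PTR per address, and the SOA contact below is a placeholder). Note also that named.conf.local points at rev.0.168.192.in-addr.arpa while the file shown is rev.1.168.192.in-addr.arpa, which is worth reconciling.
      @   IN SOA dns-serv1.mydomain.com. hostmaster.mydomain.com. (
              2006081402 ; serial (bumped)
              28800      ; refresh
              604800     ; retry
              604800     ; expire
              86400 )    ; minimum
          IN NS  dns-serv1.mydomain.com.
      2   IN PTR dns-serv1.mydomain.com.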

    Read the article

  • AjaxControlToolkit JavaScript is not pointing correctly on IIS7 running behind Apache mod_proxy

    - by sohum
    So here's my setup. I've got a DynDNS account since I have a dynamic IP. I have Apache listening on port 80 and IIS7 on port 8080. I don't want users to have to enter in mydyndns.dyndns.com:8080 to get to IIS7, so I've added the following code to my Apache httpd.conf file to enable a proxy/reverse proxy: <VirtualHost *:80> ProxyPass / http://localhost:8080/myASPSite/ ProxyPassReverse / http://localhost:8080/myASPSite/ ServerName myaspsite.mydomain.com </VirtualHost> I've got a CNAME record set up on my DNS so that myaspsite.mydomain.com redirects to mydyndns.dyndns.com. When I type in myaspsite.mydomain.com into my browser, everything works beautifully... mostly. IIS7 serves up the ASPX pages and visitors to the site don't know any better. A problem arises, however, when I add Ajax Control Toolkit controls into my ASPX website, because these generate JavaScript and apparently mod_proxy_html isn't geared to handle the JS URIs properly. Sure enough, when I open up the source of my ASPX page, it has script elements as follows: <script src="/myASPSite/WebResource.axd?xyz" type="text/javascript"></script> <script src="/myASPSite/ScriptResource.axd?xyz" type="text/javascript"></script> Sure enough, these scripts are attempting to be resolved at http://myaspsite.mydomain.com/myASPSite/WebResource..., which through the proxy translates to localhost:8080/myASPSite/myASPSite/.... How can I solve this problem. The couple of websites I found suggested turning on ProxyHTMLExtended but when I tried doing that, the server did not start. I'm guessing I didn't know how to do it properly. Anyone has a handy couple of config lines that I can add to my Apache conf file to get this working as I need? I'm using Apache 2.2.11. Thanks!

    Read the article

  • Firefox and Chrome keeps forcing HTTPS on Rails app using nginx/Passenger

    - by Steve
    I've got a really weird problem here where every time I try to browse my Rails app in non-SSL mode Chrome (v16) and Firefox (v7) keeps forcing my website to be served in HTTPS. My Rails application is deployed on a Ubuntu VPS using Capistrano, nginx, Passenger and a wildcard SSL certificate. I have set these parameters for port 80 in the nginx.conf: passenger_set_cgi_param HTTP_X_FORWARDED_PROTO http; passenger_set_cgi_param HTTPS off; The long version of my nginx.conf can be found here: https://gist.github.com/2eab42666c609b015bff The ssl-redirect.include file contains: rewrite ^/sign_up https://$host$request_uri? permanent ; rewrite ^/login https://$host$request_uri? permanent ; rewrite ^/settings/password https://$host$request_uri? permanent ; It is to make sure those three pages use HTTPS when coming from non-SSL request. My production.rb file contains this line: # Enable HTTP and HTTPS in parallel config.middleware.insert_before Rack::Lock, Rack::SSL, :exclude => proc { |env| env['HTTPS'] != 'on' } I have tried redirecting to HTTP via nginx rewrites, Ruby on Rails redirects and also used Rails view url using HTTP protocol. My application.rb file contains this methods used in a before_filter hook: def force_http if Rails.env.production? if request.ssl? redirect_to :protocol => 'http', :status => :moved_permanently end end end Every time I try to redirect to HTTP non-SSL the browser attempts to redirect it back to HTTPS causing an infinite redirect loop. Safari, however, works just fine. Even when I've disabled serving SSL in nginx the browsers still try to connect to the site using HTTPS. I should also mention that when I pushed my app on to Heroku, the Rails redirect work just fine for all browsers. The reason why I want to use non-SSL is that my homepage contains non-secure dynamic embedded objects and a non-secure CDN and I want to prevent security warnings. I don't know what is causing the browser to keep forcing HTTPS requests.

    Read the article

  • Recompiling Nginx 1.4.3 with "--with-http_gzip_static_module" error

    - by Elijah Paul
    I'm trying to enable the 'ngx_http_gzip_static_module' module in Nginx 1.4.3 by adding --with-http_gzip_static_module to my ./configure arguments. But I receive the following error when I try to recompile (make):
      # make
      make -f objs/Makefile
      make[1]: Entering directory `/tmp/nginx-1.4.3'
      make[1]: *** No rule to make target `src/os/unix/ngx_gcc_atomic_x86.h', needed by `objs/src/core/nginx.o'. Stop.
      make[1]: Leaving directory `/tmp/nginx-1.4.3'
      make: *** [build] Error 2
    My current config (CentOS 6.4):
      # nginx -V
      nginx version: nginx/1.4.3
      built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)
      TLS SNI support enabled
      configure arguments: --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --http-log-path=/var/log/nginx/access.log --with-http_ssl_module --prefix=/usr --add-module=./nginx-sticky-module-1.1 --add-module=./headers-more-nginx-module-0.23
    I was under the impression that this was a module that just needs to be 'enabled' as opposed to 'added'. What am I missing here?
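    A hedged sketch (assuming the missing-header error points at a stale or incompletely extracted source tree rather than at the new flag): rebuild from a fresh tarball, re-running configure with the full argument list reported by nginx -V plus the extra module. The third-party module directories referenced with ./ must also be present in the new build directory.
      cd /tmp && rm -rf nginx-1.4.3 && tar xzf nginx-1.4.3.tar.gz && cd nginx-1.4.3
      ./configure --prefix=/usr --conf-path=/etc/nginx/nginx.conf --with-http_ssl_module \
          --add-module=./nginx-sticky-module-1.1 --add-module=./headers-more-nginx-module-0.23 \
          --with-http_gzip_static_module    # plus the remaining temp/log paths shown by nginx -V
      make && sudo make install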

    Read the article

  • netconfig won't change DNS on opensuse 12.2

    - by Krystian
    I'm trying to update my dns servers after openvpn connection, but netconfig won't do that for me. Here's how I'm trying to do it [manually now]: /sbin/netconfig modify -v -i tap0 -s openvpn <<-EOF INTERFACE='tap0' DNSSERVERS='10.10.0.1' EOF And here's the verbose output: debug: lockfile created (/var/run/netconfig.pid) for PID 5530 debug: lockfile created debug: write new STATE file /var/run/netconfig//tap0/netconfig0 debug: Module order: dns-resolver dns-bind dns-dnsmasq nis ntp-runtime debug: dns-resolver module called debug: Static Fallback debug: Use NetworkManager policy merged settings debug: exec get_dns_settings: /var/run/netconfig/NetworkManager.netconfig debug: get_dns_settings: service 'NetworkManager' => rank '1' debug: get_dns_settings: DNS_SEARCHLIST_1='mydomain.com' debug: get_dns_settings: DNS_SERVERS_1='192.168.0.1' debug: exit get_dns_settings: /var/run/netconfig/NetworkManager.netconfig debug: write_resolv_conf: ' mydomain.com ' ' 192.168.0.1 ' debug: No changes for /etc/resolv.conf debug: dns-bind Module called debug: dns-dnsmasq Module called debug: nis Module called debug: Static Fallback debug: Use NetworkManager policy merged settings debug: exec get_nis_settings: /var/run/netconfig/NetworkManager.netconfig debug: exit get_nis_settings: /var/run/netconfig/NetworkManager.netconfig debug: set_nisdomainname: eth0 24 debug: set_nisdomainname: => yes debug: set_nisdomainname: old[]=, new[24]= debug: format_yp_conf called with : debug: Using static fallback debug: format_static[0] called debug: No changes for /etc/yp.conf debug: nis domainname '' is up to date debug: ntp-runtime Module called debug: Static Fallback debug: Use NetworkManager policy merged settings debug: exec get_ntp_settings: /var/run/netconfig/NetworkManager.netconfig debug: get_ntp_settings: NTP_SERVER_LIST='' debug: exit get_ntp_settings: /var/run/netconfig/NetworkManager.netconfig I've been trying to find something relevant on the web, but failed to do so. I have no other clue on how to progress with this issue. Any thoughts?

    Read the article

  • Why would I be getting IXFR and AXFR transfer denied on my DNS server?

    - by danielj
    From everything I've researched and tried, it appears that my named.conf is configured correctly, including the allow-transfer section. Here is a sample of the errors. It is only happening with a couple of my secondary servers, but it is happening for every zone for those servers that are failing. One of the servers is attempting IXFR, the other AXFR. The result is the same:
      18-Mar-2011 14:27:51.372 security: error: client 84.234.24.90#59208: zone transfer 'juansgaranton.com/IXFR/IN' denied
      18-Mar-2011 14:32:18.015 security: error: client 174.37.196.55#50783: zone transfer 'cheshirecat.net/AXFR/IN' denied
    Here is the relevant part of named.conf.
      options {
        directory "/etc/bind";
        pid-file "/var/run/named/named.pid";
        files 4096;
        allow-transfer {
          140.186.190.103;
          84.234.24.90;
          207.246.95.34;
          203.20.52.5;
          140.186.190.103;
          127.0.0.1;
          174.37.196.55;
        };
      };
      logging {
        channel "bind" {
          file "/var/log/bind.log" versions 3;
          print-time yes;
          print-severity yes;
          print-category yes;
          severity info;
        };
        category lame-servers { null; };
        category "default" { "bind"; };
      };
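    A hedged thing to check (an assumption, not a diagnosis): the options-level allow-transfer is only the default, so a per-zone allow-transfer (or a view) overrides it and can still deny these clients. A per-zone grant looks roughly like this; the file path is a placeholder.
      zone "juansgaranton.com" {
          type master;
          file "/etc/bind/db.juansgaranton.com";
          allow-transfer { 84.234.24.90; 174.37.196.55; };
      };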

    Read the article

  • How to set up IP forwarding on Nexenta (Solaris)?

    - by Gleb
    I am trying to set up IP forwarding on my Nexenta box:
      root@hdd:~# uname -a
      SunOS hdd 5.11 NexentaOS_134f i86pc i386 i86pc Solaris
    The box has two network interfaces:
      root@hdd:~# ifconfig -a
      lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
           inet 127.0.0.1 netmask ff000000
      e1000g1: flags=1001100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4,FIXEDMTU> mtu 1500 index 2
           inet 192.168.12.2 netmask ffffff00 broadcast 192.168.12.255
           ether 68:5:ca:9:51:b8
      myri10ge0: flags=1100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4> mtu 9000 index 3
           inet 10.10.10.10 netmask ffffff00 broadcast 10.10.10.255
           ether 0:60:dd:47:87:2
      lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
           inet6 ::1/128
    192.168.12.0 is my normal LAN, with 192.168.12.1 being the firewall/gateway. 10.10.10.0 is a separate LAN for iSCSI (with no internet access). I want to set up IP forwarding so that a computer on 10.10.10.0 will be able to access the internet by using 10.10.10.10 as a gateway (I don't need any port forwarding). I have turned on IP forwarding:
      root@hdd:~# routeadm
                    Configuration   Current              Current
                           Option   Configuration        System State
      ---------------------------------------------------------------
                     IPv4 routing   disabled             disabled
                     IPv6 routing   disabled             disabled
                  IPv4 forwarding   enabled              enabled
                  IPv6 forwarding   disabled             disabled
                 Routing services   "route:default ripng:default"
      Routing daemons:
                            STATE   FMRI
                         disabled   svc:/network/routing/rdisc:default
                         disabled   svc:/network/routing/route:default
                         disabled   svc:/network/routing/legacy-routing:ipv4
                         disabled   svc:/network/routing/legacy-routing:ipv6
                         disabled   svc:/network/routing/ripng:default
                           online   svc:/network/routing/ndp:default
    But when I try to start ipnat, I get an error:
      root@hdd:~# ipnat -CF -f /etc/ipf/ipnat.conf
      ioctl(SIOCGNATS): I/O error
    Here is the config:
      root@hdd:~# cat /etc/ipf/ipnat.conf
      #!/sbin/ipnat -f -
      # map e1000g1 10.10.10.10/24 -> 192.168.12.2/32
    So the question is: how do I fix this? Thanks in advance!
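    A hedged sketch (assumptions of mine about the likely cause): SIOCGNATS errors from ipnat usually mean the ipfilter kernel module/service isn't enabled yet, so there is no NAT table to talk to. On an OpenSolaris-derived system that is typically:
      svcadm enable network/ipfilter
      svcs ipfilter                         # should show online
      ipf -E                                # older fallback: enable the packet filter directly
      ipnat -CF -f /etc/ipf/ipnat.conf
      ipnat -l                              # list the active NAT rules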

    Read the article

  • How do I install the main repositories for RHEL6

    - by eisaacson
    We've set up RHEL6 on a new server. As far as we can tell, our subscription is all set up properly. However, when I run yum repolist, it doesn't show any repositories. /etc/yum.repos.d/redhat.repo is empty. I tried pasting in the content from another RHEL6 server's redhat.repo, but as soon as I run yum, it wipes it out again. I just need to get the basic Red Hat repositories set up so I can install packages. EDIT: Using the GUI, I went to System > Administration > Red Hat Subscription Manager. Under the 'Products' tab, it did not show any products. EDIT: When I run yum update, here's what I get:
      # yum update
      Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
      This system is receiving updates from Red Hat Subscription Management.
      Setting up Update Process
      No Packages marked for Update
    When I log in to the Red Hat customer portal, it shows the subscription as active. EDIT: To make sure I wasn't having a subscription issue, I re-registered and re-subscribed. I get all the same results.
      # subscription-manager register --force
      # subscription-manager subscribe --pool=*redacted*
    EDIT: contents of /etc/yum.conf:
      [main]
      cachedir=/var/cache/yum/$basearch/$releasever
      keepcache=0
      debuglevel=2
      logfile=/var/log/yum.log
      exactarch=1
      obsoletes=1
      gpgcheck=1
      plugins=1
      installonly_limit=3
    contents of /etc/yum/pluginconf.d/rhnplugin.conf:
      [main]
      enabled = 0
      gpgcheck = 1
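    A hedged sketch of checks commonly suggested when redhat.repo stays empty after subscribing (assumptions of mine; repo ids and option availability vary with the subscription-manager version):
      subscription-manager list --consumed                      # is an entitlement actually attached?
      subscription-manager repos --list                         # what repos does the entitlement expose?
      subscription-manager repos --enable=rhel-6-server-rpms    # example id, take the real one from --list
      yum clean all && yum repolist
    It may also be worth confirming that the subscription-manager yum plugin is enabled (enabled=1 in /etc/yum/pluginconf.d/subscription-manager.conf), since that plugin is what regenerates redhat.repo.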

    Read the article
