Search Results

Search found 3764 results on 151 pages for 'mod alias'.

Page 98/151 | < Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >

  • Is it possible to set up an internal test email server to keep all mail sent to it?

    - by MattGrommes
    We need to set up a test email server at work that accepts all mail submitted to it for delivery and, instead of delivering it, just dumps it into an account for later retrieval. I've been out of the email server configuration game long enough that I think that's possible, but I don't know for sure.

    As a more specific example of what we need: we have code that sends emails to outside clients in certain cases. We want to point our code at a test server that will accept those emails but not let them reach the outside world (yes, it's happened before, oops). We then need to be able to verify that email X would have gone to client Y if we had sent it to the real server.

    As a bonus, we have an error email alias on our real server that goes to the programmers, and we would like to keep receiving that mail. So anything sent to that alias on the test server should forward to our real server for delivery.

    My preference is for Postfix, but our IT staff seems set on using Sendmail (or Exchange) for everything, so hints/pointers for either server would be helpful. Thanks a lot.
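
    A minimal Postfix sketch of the trap-server idea; the hostname, trap account, and network range are placeholders, and the "*" catch-all pattern is the one documented in transport(5):

        # /etc/postfix/main.cf (sketch)
        myhostname = mailtrap.internal
        mynetworks = 127.0.0.0/8, 10.0.0.0/8
        # Route every recipient domain to local delivery so nothing relays out
        transport_maps = hash:/etc/postfix/transport
        # Accept any local part and funnel unknown users into one mailbox
        local_recipient_maps =
        luser_relay = trap

        # /etc/postfix/transport ("*" is the documented catch-all pattern)
        *    local:

    The bonus error alias would then need an exception, e.g. a transport entry routing the real server's domain out plus a normal aliases(5) entry, so that one address still relays.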

    Read the article

  • Apache 2.4, Ubuntu 12.04 Forbidden Errors

    - by tubaguy50035
    I just installed Apache 2.4 today, and I'm having some issues getting vhost configuration to work correctly. Below is the vhost conf:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot /hosting/Client/site.com/www
            ServerName site.com
            ServerAlias www.site.com
            <Directory "/hosting/Client/site.com/www">
                Options +Indexes +FollowSymLinks
                Order allow,deny
                Allow from all
            </Directory>
            DirectoryIndex index.html
        </VirtualHost>

    There is an index.html file in /hosting/Client/site.com/www. When I go to the site, I receive a 403 Forbidden error. The www-data group is the group on the www folder, to which I've already given all permissions (r/w/x). I'm really at a loss as to why this is happening. Any thoughts? If I remove the vhost and go straight to the IP address, I get the default "It works!" page, so I know Apache itself is working. The error log says "client denied by server configuration".

    apache2ctl -S dump:

        nick@server:~$ apache2ctl -S
        /usr/sbin/apache2ctl: 87: ulimit: error setting limit (Operation not permitted)
        VirtualHost configuration:
        *:80 is a NameVirtualHost
            default server site.com (/etc/apache2/sites-enabled/site.com.conf:1)
            port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1)
                alias www.site.com
            port 80 namevhost site.com (/etc/apache2/sites-enabled/site.com.conf:1)
                alias www.site.com
        ServerRoot: "/etc/apache2"
        Main DocumentRoot: "/var/www"
        Main ErrorLog: "/var/log/apache2/error.log"
        Mutex watchdog-callback: using_defaults
        Mutex default: dir="/var/lock/apache2" mechanism=fcntl
        Mutex mpm-accept: using_defaults
        PidFile: "/var/run/apache2.pid"
        Define: DUMP_VHOSTS
        Define: DUMP_RUN_CFG
        Define: ENALBLE_USR_LIB_CGI_BIN
        User: name="www-data" id=33 not_used
        Group: name="www-data" id=33 not_used

    Output of namei -mo /hosting/Client/site/www/index.html:

        f: /hosting/Client/site.com/www/index.html
        drwxr-xr-x root root /
        drwxr-xr-x root root hosting
        drwxr-xr-x root root Client
        drwxr-xr-x nick www-data site.com
        drwxr-xr-x nick www-data www
        -rw-rwxr-x nick www-data index.html
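
    Worth noting: "client denied by server configuration" on a fresh Apache 2.4 is the classic symptom of 2.2-style access directives. Unless mod_access_compat is loaded, Order/Allow are no longer honored; a sketch of the 2.4-style block:

        <Directory "/hosting/Client/site.com/www">
            Options +Indexes +FollowSymLinks
            # Apache 2.4 (mod_authz_core) replaces "Order allow,deny / Allow from all"
            Require all granted
        </Directory>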

    Read the article

  • Switching from prefork MPM to worker MPM + php-fpm on ubuntu

    - by Shane
    All the tutorials I found cover a fresh install of worker MPM + PHP-FPM. Since my WordPress blog is already up and running with prefork MPM, correct me if I'm wrong about the simulated installation process. I'm on Ubuntu, and according to some tutorials the following lines would do all the tricks:

        apt-get install apache2-mpm-worker libapache2-mod-fastcgi php5-fpm php5-gd
        a2enmod actions fastcgi alias

    Then you set up the configuration in /etc/apache2/conf.d/php5-fpm.conf:

        <IfModule mod_fastcgi.c>
            AddHandler php5-fcgi .php
            Action php5-fcgi /php5-fcgi
            Alias /php5-fcgi /usr/lib/cgi-bin/php5-fcgi
            FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -host 127.0.0.1:9000 -pass-header Authorization
        </IfModule>

    After all this, restart:

        service apache2 restart && service php5-fpm restart

    Questions:

    1) Would the whole process cause any downtime for sites previously running with prefork MPM?
    2) Do you have to change any existing configuration files (PHP, MySQL, or Apache), or do they take effect immediately after the switch without you doing anything?
    3) I already have APC up and running; do you have to re-install/re-configure it after the switch?
    4) How do you find out whether Apache is working in worker MPM mode as expected?

    Thanks a lot!
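
    For question 4, the MPM can be read straight off the binary; these are standard apachectl/apache2 flags:

        # The MPM is compiled into the binary on this Apache generation
        apache2ctl -V | grep -i mpm   # e.g. "Server MPM:     Worker"
        apache2 -l                    # compiled-in modules: worker.c vs prefork.c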

    Read the article

  • The RTL8111/8168B NIC under Linux and the r8168 driver

    - by nik
    So I've got one of the infamous Realtek RTL8168 Ethernet NICs, which have some problems under Linux. After some research, I found out I had to use the r8168 driver for this card (and not the r8169 driver, which still loads when nothing else is available), which I did. So now everything works fine... sort of.

    My download and upload rates are less than half of what I should get. When I test (with e.g. speedtest) I get something like 20M (often 15M) down and 30M up, but if I test under Windows (everything otherwise identical: same Ethernet cable, same connection, at the same time of day (well, 5 min apart)...), I get 50M up/down, which is what I expect. Where can this come from? Here's some info:

        ~ # lspci
        [...]
        06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)

        ~ # modinfo r8168
        filename:       /lib/modules/3.2.1-gentoo-r2/net/r8168.ko
        version:        8.027.00-NAPI
        license:        GPL
        description:    RealTek RTL-8168 Gigabit Ethernet driver
        author:         Realtek and the Linux r8168 crew <[email protected]>
        srcversion:     0A6E9F1D4E8E51DE4B6BEE3
        alias:          pci:v00001186d00004300sv00001186sd00004B10bc*sc*i*
        alias:          pci:v000010ECd00008168sv*sd*bc*sc*i*
        depends:
        vermagic:       3.2.1-gentoo-r2 SMP mod_unload
        [...]

        ~ # mii-tool -v
        eth0: negotiated 100baseTx-HD, link ok
          product info: vendor 00:07:32, model 17 rev 4
          basic mode:   autonegotiation enabled
          basic status: autonegotiation complete, link ok
          capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
          advertising:  100baseTx-HD 10baseT-FD 10baseT-HD flow-control
          link partner: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
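
    One detail jumps out of that mii-tool output: the link negotiated 100baseTx-HD, i.e. 100 Mbit half duplex, and only 10/100 half/full modes are being advertised, which by itself would explain the poor rates. A hedged check with ethtool (standard options, interface name taken from the output above):

        # Inspect current speed/duplex and what is being advertised
        ethtool eth0
        # Re-enable autonegotiation, advertising gigabit full duplex
        ethtool -s eth0 autoneg on speed 1000 duplex full

    Note that mii-tool predates gigabit and can misreport GbE links, so treat the ethtool output as the authoritative one.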

    Read the article

  • Nginx with http/https - Http seemed redirected to https all the time

    - by dwarfy
    I'm seeing really weird behaviour on my Ubuntu 10.04 / nginx 1.2.3 server. Basically, I changed the SSL certificates this morning, and ever since, it has been behaving weirdly on all apps. GoDaddy reports that the HTTPS/SSL setup is correct. A page still works correctly when I use HTTPS, but when I try HTTP, nginx reports an error:

        400 Bad Request
        The plain HTTP request was sent to HTTPS port

    After looking around on Google for hours, I've tried different setups (while originally my setup had worked correctly for a long time; I just renewed the certificates). I kind of found a half solution by adding this to my config:

        error_page 497 $request_uri;

    The really weird thing is that when I use this setup:

        server {
            listen 80;
            server_name john.johnrocks.eu;

            access_log /home/john/envs/john_prod/nginx_access.log;
            error_log /home/john/envs/john_prod/nginx_error.log;

            location / {
                uwsgi_pass unix:///home/john/envs/john_prod/john.sock;
                include uwsgi_params;
            }
            location /media {
                alias /home/john/envs/john_prod/johntab/www;
            }
            location /adminmedia {
                alias /home/john/envs/john_prod/johntab/www/adminmedia;
            }
        }

    I still get the same error when using HTTP (while nothing is set up for HTTPS here)?? This is driving me crazy!
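
    The "plain HTTP request was sent to HTTPS port" message (nginx's internal code 497) means some listener handling the request has SSL enabled, so it's worth checking every server block for a stray global "ssl on;" that also applies to port 80. A hedged layout that scopes SSL per socket (certificate paths are placeholders):

        server {
            listen 80;
            server_name john.johnrocks.eu;
            # plain-HTTP locations here
        }

        server {
            # "listen 443 ssl;" enables SSL for this socket only, so an
            # accidental "ssl on;" can't leak onto port 80
            listen 443 ssl;
            server_name john.johnrocks.eu;
            ssl_certificate     /path/to/chained.crt;   # placeholder
            ssl_certificate_key /path/to/private.key;   # placeholder
        }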

    Read the article

  • Gnome 3 gdm fails to start after preupgrade from fedora 14 to 15

    - by digital illusion
    I'm not able to boot Fedora 15 in runlevel 5. After all services start, when the login screen should appear, gdm just shows a waiting mouse cursor and keeps restarting itself. From /var/log/gdm/:0-greeter.log:

        Gtk-Message: Failed to load module "pk-gtk-module"
        /usr/bin/gnome-session: symbol lookup error: /usr/lib/gtk-3.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type
        /usr/libexec/gnome-setting-daemon: symbol lookup error: /usr/lib/gtk-3.0modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Where should atk_plug_get_type be defined?

    Edit: Here is a better description of the error:

        (system-config-network-gui:2643): Gnome-WARNING **: Accessibility: failed to find module 'libgail-gnome' which is needed to make this application accessible
        /usr/bin/python: symbol lookup error: /usr/lib/gtk-2.0/modules/libatk-bridge.so: undefined symbol: atk_plug_get_type

    Why are there still references to GTK 2? Did preupgrade fail? Attaching the upgrade log... it seems the gdm user was not added, but it is present in the users and groups list.

        May 26 11:25:52 sysimage sendmail[1076]: alias database /etc/aliases rebuilt by root
        May 26 11:25:52 sysimage sendmail[1076]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
        May 26 11:46:09 sysimage useradd[1793]: failed adding user 'dbus', data deleted
        May 26 11:53:37 sysimage systemd-machine-id-setup[2443]: Initializing machine ID from D-Bus machine ID.
        May 26 11:55:28 sysimage useradd[2835]: failed adding user 'apache', data deleted
        May 26 11:55:38 sysimage useradd[2842]: failed adding user 'haldaemon', data deleted
        May 26 11:55:43 sysimage useradd[2848]: failed adding user 'smolt', data deleted
        May 26 11:57:32 sysimage sendmail[3032]: alias database /etc/aliases rebuilt by root
        May 26 11:57:32 sysimage sendmail[3032]: /etc/aliases: 77 aliases, longest 23 bytes, 795 bytes total
        May 26 11:57:46 sysimage groupadd[3066]: group added to /etc/group: name=cgred, GID=482
        May 26 11:57:47 sysimage groupadd[3066]: group added to /etc/gshadow: name=cgred
        May 26 11:57:47 sysimage groupadd[3066]: new group: name=cgred, GID=482
        May 26 11:58:42 sysimage useradd[3086]: failed adding user 'ntp', data deleted
        May 26 12:00:13 sysimage dbus: avc: received policyload notice (seqno=2)
        May 26 12:15:08 sysimage useradd[4950]: failed adding user 'gdm', data deleted
        May 26 12:24:39 sysimage dbus: avc: received policyload notice (seqno=3)
        May 26 12:25:24 sysimage useradd[5522]: failed adding user 'mysql', data deleted
        May 26 12:25:37 sysimage useradd[5533]: failed adding user 'rpcuser', data deleted
        May 26 12:26:31 sysimage useradd[5592]: failed adding user 'tcpdump', data deleted

    Any suggestions before I revert the installation to F14?

    Read the article

  • Getting a "403 access denied" error instead of serving file (using django, gunicorn, nginx)

    - by Finglish
    I am attempting to use nginx to serve private files from Django. For the X-Accel-Redirect settings I followed this guide: http://www.chicagodjango.com/blog/permission-based-file-serving/

    Here is my site config file (/etc/nginx/sites-available/sitename):

        server {
            listen 80;
            listen 443 default_server ssl;
            server_name localhost;
            client_max_body_size 50M;

            ssl_certificate /home/user/site.crt;
            ssl_certificate_key /home/user/site.key;

            access_log /home/user/nginx/access.log;
            error_log /home/user/nginx/error.log;

            location / {
                access_log /home/user/gunicorn/access.log;
                error_log /home/user/gunicorn/error.log;
                alias /path_to/app;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_pass http://127.0.0.1:8000;
                proxy_connect_timeout 100s;
                proxy_send_timeout 100s;
                proxy_read_timeout 100s;
            }

            location /protected/ {
                internal;
                alias /home/user/protected;
            }
        }

    I then tried the following in my Django view to test the download:

        response = HttpResponse()
        response['Content-Type'] = "application/zip"
        response['X-Accel-Redirect'] = '/protected/test.zip'
        return response

    But instead of the file download I get:

        403 Forbidden
        nginx/1.1.19

    Please note: I have removed all the personal data from the config file, so if there are any obvious mistakes not related to my error, that is probably why. My nginx error log gives me the following:

        2012/09/18 13:44:36 [error] 23705#0: *44 directory index of "/home/user/protected/" is forbidden, client: 80.221.147.225, server: localhost, request: "GET /icbdazzled/tmpdir/ HTTP/1.1", host: "www.icb.fi"
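
    One thing stands out, hedged as a likely cause rather than a certainty: the location has a trailing slash but the alias does not. With nginx's alias semantics, /protected/test.zip then maps to /home/user/protectedtest.zip, and a bare /protected/ request hits exactly the directory-index error seen in the log. A sketch of the corrected block:

        location /protected/ {
            internal;
            # the trailing slash must mirror the location prefix
            alias /home/user/protected/;
        }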

    Read the article

  • Specific DNS sometimes resolves to wildcard, incorrectly

    - by Mojo
    I have an intermittent problem, and I'm not sure where to start trying to troubleshoot it. In our dev environment, we have two visible IP addresses on load balancers: one to the front-end, and one to a number of back-end service machines. The front-end is configured to take a wildcard DNS name to support generic "portals":

        dev.example.com     A     10.1.1.1
        *.dev.example.com   CNAME dev.example.com

    The back-end servers are all specific names within the same space:

        core.dev.example.com   A     10.1.1.2
        cms.dev.example.com    CNAME core.dev.example.com
        search.dev.example.com CNAME core.dev.example.com

    Here's the problem. Periodically a developer or a program trying to reach, say, cms.dev.example.com will get a result that points to the front-end, instead of the back-end load balancer:

        cms.dev.example.com is an alias to core.dev.example.com
        core.dev.example.com is an alias to dev.example.com   (WRONG!)
        dev.example.com 10.1.1.1

    The developers are all on Mac OS X machines, though I've seen the problem occur on an Ubuntu machine as well, using a local cloud host DNS resolver. Sometimes the developer is using a VPN, which directs the DNS to its own resolver, and sometimes he's on the local net using a DNS resolver assigned by the NAT router. Sometimes clearing the Mac OS X DNS cache, logging into the VPN, then logging out of the VPN, will make the problem go away. The origin authoritative server is on Zerigo, and a dig directly to their name servers always seems to give the correct answer. The published DNS cache time for these records is 15 minutes, but the problem has been intermittent for about a week. Any troubleshooting suggestions?
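
    When it recurs, capturing the three views side by side should show which resolver is lying; substitute whichever Zerigo NS the zone actually delegates to:

        dig +short cms.dev.example.com @<authoritative-ns>   # authoritative view
        dig +short cms.dev.example.com                       # whatever resolver the host is using
        dig +trace cms.dev.example.com                       # full delegation walk from the roots

    If the authoritative view is always right, the suspect list shrinks to the VPN resolver, the NAT router's forwarder, and the OS-level caches.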

    Read the article

  • Ubuntu second static IP, ifconfig, /etc/network/interfaces

    - by Schmoove
    I would like to add a second static IP to my local Ubuntu 11.10 desktop machine and have it automatically available after rebooting. So far I am successfully using ifconfig to temporarily set up an alias on my primary network interface:

        # ifconfig eth1:0 192.168.178.3 up
        # ifconfig eth1
        eth1      Link encap:Ethernet  HWaddr c8:60:00:ef:a3:d9
                  inet addr:192.168.178.2  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::ca60:ff:feef:a3d9/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:61929 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:64034 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:45330863 (45.3 MB)  TX bytes:28175192 (28.1 MB)
                  Interrupt:42 Base address:0x4000

        eth1:0    Link encap:Ethernet  HWaddr c8:60:00:ef:a3:d9
                  inet addr:192.168.178.3  Bcast:192.168.178.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:42 Base address:0x4000

    However, when I add the following to /etc/network/interfaces, the alias is not up and running as expected after a reboot:

        # vi /etc/network/interfaces
        auto eth1:0
        iface eth1:0 inet static
            address 192.168.178.3
            netmask 255.255.255.0

    I would like to know what to configure to get this to work. As a side note, I am running GNOME Shell.
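
    Two things worth checking, stated as assumptions rather than a definitive fix: on a desktop Ubuntu, NetworkManager usually owns the interface and ignores /etc/network/interfaces, and an alias stanza only comes up if the parent interface is also declared there. A sketch with both interfaces under ifupdown (the gateway address is assumed):

        # /etc/network/interfaces
        auto eth1
        iface eth1 inet static
            address 192.168.178.2
            netmask 255.255.255.0
            gateway 192.168.178.1    # assumed router address

        auto eth1:0
        iface eth1:0 inet static
            address 192.168.178.3
            netmask 255.255.255.0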

    Read the article

  • Making one of the folders default in Apache

    - by OmerO
    Hello, the file & directory structure of my website is as follows:

        /Library/WebServer/mysite/joomla ..
        /Library/WebServer/mysite/wiki ..
        /Library/WebServer/mysite/forum ..
        /Library/WebServer/mysite/index.php

    As you see, there are various applications, each residing in a separate folder. To define this structure, I have made this entry in the Apache http-vhosts.config file:

        ServerName mysite.com
        DocumentRoot "/Library/WebServer/mysite"

    And I already have the DirectoryIndex defined: DirectoryIndex index.html index.php, and so on. So far so good, but I want this specific functionality: when someone visits mysite, he/she should automatically be directed to /Library/WebServer/mysite/joomla (and therefore /Library/WebServer/mysite/joomla/index.php).

    I don't want to achieve that by putting redirection code inside /Library/WebServer/mysite/index.php or /Library/WebServer/mysite/index.htm, because that causes time delays (because of the redirection, of course). In this case the only proper way of achieving it seems to be setting DocumentRoot this way:

        DocumentRoot "/Library/WebServer/mysite/joomla"

    But when I set it that way, the other folders (/wiki, /forum, etc.) are simply not served by Apache. To work around that, I put in directives like:

        Alias /wiki /Library/WebServer/mysite/wiki ..
        Alias /forum /Library/WebServer/mysite/forum

    and it actually worked the way I wanted. But... I still cannot use it that way, because in this case I couldn't manage to make the wiki use short URLs (as described in link text). So, I have to set the DocumentRoot back to /Library/WebServer/mysite and should be able to assign /Library/WebServer/mysite/joomla as the "default directory" (my own terminology :) Can I do it in Apache? Is there any other way you might suggest? Thanks.
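
    One low-tech possibility that avoids an external redirect entirely: DirectoryIndex accepts a URL relative to the requested directory, so the root's index can point into the Joomla folder while DocumentRoot (and the wiki/forum paths) stay untouched. A sketch:

        DocumentRoot "/Library/WebServer/mysite"
        # Checked in order; a request for "/" is served by Joomla internally,
        # with no 30x round trip
        DirectoryIndex joomla/index.php index.html index.php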

    Read the article

  • nginx reverse proxy to apache mod_wsgi doesn't work

    - by user11243
    I'm trying to run a django site with apache mod_wsgi, with nginx as the front-end to reverse proxy into apache. In my Apache ports.conf file:

        NameVirtualHost 192.168.0.1:7000
        Listen 192.168.0.1:7000

        <VirtualHost 192.168.0.1:7000>
            DocumentRoot /var/apps/example/
            ServerName example.com

            WSGIDaemonProcess example
            WSGIProcessGroup example

            Alias /m/ /var/apps/example/forum/skins/
            Alias /upfiles/ /var/apps/example/forum/upfiles/

            <Directory /var/apps/example/forum/skins>
                Order deny,allow
                Allow from all
            </Directory>

            WSGIScriptAlias / /var/apps/example/django.wsgi
        </VirtualHost>

    In my nginx config:

        server {
            listen 80;
            server_name example.com;

            location / {
                include /usr/local/nginx/conf/proxy.conf;
                proxy_pass http://192.168.0.1:7000;
                proxy_redirect default;
                root /var/apps/example/forum/skins/;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root html;
            }
        }

    After restarting both apache and nginx, nothing works: example.com simply hangs, or serves the index.html in my /var/www/ folder. I'd appreciate any advice to point me in the right direction. I've tried several tutorials online, to no avail.
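
    Since the page being served is the stock index.html from /var/www/, a reasonable first suspicion is that this server block never wins the request at all, e.g. a default vhost still enabled or this file not actually included. A hedged sanity pass (the sites-enabled path assumes a Debian-style layout; a hand-built nginx may keep everything in /usr/local/nginx/conf):

        nginx -t                                        # confirm the running config parses and includes this block
        ls /etc/nginx/sites-enabled/                    # look for a stray "default" vhost shadowing example.com
        curl -H 'Host: example.com' http://127.0.0.1/   # bypass DNS and test name-based routing directly

    Also note that the root directive inside the proxied location / has no effect once proxy_pass handles the request; the static skins would need their own location block to be served by nginx.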

    Read the article

  • Keytool and SSL Apache config

    - by Safari
    I have a question about SSL certificates. I generated a certificate using this keytool command:

        keytool -genkey -alias myalias -keyalg RSA -keysize 2048

    and I used this command to export the certificate:

        keytool -export -alias myalias -file certificate.crt

    So I have a .crt file. Now I would like to configure my Apache SSL module. I need to use keytool; at the moment I can't use OpenSSL. How can I configure the module if I have only this certificate.crt file? I see these sections in my ssl.conf:

        # Server Certificate:
        # Point SSLCertificateFile at a PEM encoded certificate. If
        # the certificate is encrypted, then you will be prompted for a
        # pass phrase. Note that a kill -HUP will prompt again. A new
        # certificate can be generated using the genkey(1) command.
        #SSLCertificateFile /etc/pki/tls/certs/localhost.crt

        # Server Private Key:
        # If the key is not combined with the certificate, use this
        # directive to point at the key file. Keep in mind that if
        # you've both a RSA and a DSA private key you can configure
        # both in parallel (to also allow the use of DSA ciphers, etc.)
        #SSLCertificateKeyFile /etc/pki/tls/private/localhost.key

        # Server Certificate Chain:
        # Point SSLCertificateChainFile at a file containing the
        # concatenation of PEM encoded CA certificates which form the
        # certificate chain for the server certificate. Alternatively
        # the referenced file can be the same as SSLCertificateFile
        # when the CA certificates are directly appended to the server
        # certificate for convinience.
        #SSLCertificateChainFile /etc/pki/tls/certs/server-chain.crt

    How can I configure the correct section?
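
    The sticking point is that mod_ssl needs the private key as a PEM file, and keytool -export only ever emits the certificate; the key stays locked inside the keystore. What keytool alone can do (Java 6+) is rewrite the keystore as PKCS#12, which other tooling can then unpack:

        # Convert the JKS keystore to PKCS#12, keeping the key+cert pair
        keytool -importkeystore -srckeystore keystore \
                -destkeystore keystore.p12 -deststoretype PKCS12 \
                -srcalias myalias

    Extracting a PEM SSLCertificateKeyFile out of that .p12 still takes OpenSSL (or an equivalent), so some tool of that kind is hard to avoid here.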

    Read the article

  • Hyper-V vss-writer not making current copies

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again.

    The first script — the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host

        # First, delete any shadow copies of the drives. System Drives needs to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:

        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent

        # make sure the path already exists
        set verbose on

        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive

        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}

        create
        end backup

        # Backup is exposed as drive X: make sure your drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next script is just a robocopy, and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct.

    While the Hyper-V writer is active, two things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done, all fine; the Windows guests show "creating vss snapshot-sets" or something similar. Then X: gets exposed and I can copy the .vhd files over.

    The problem is that, for some reason, the VHD files I copy over seem to be old copies: they are missing files, users and updates that are on the actual machines. I also tried putting the machines in a saved state manually; it didn't change the outcome. I hope someone here has an idea of how to solve this.

    Read the article

  • How can I make non-anti-aliased text look good in Firefox on Mac OS X?

    - by cosmic.osmo
    After being a Windows user for the last 10 years, I got a MacBook Pro, which I'm working on configuring to my liking. I find small-size anti-aliased text blurry and hard to read, so I typically disable anti-aliasing. I've found the settings in the General control panel, and used TinkerTool to increase the anti-alias threshold size to 18pt. Mac OS X and other applications appear to respect these settings.

    A problem appears when I use Firefox. By default, it's configured to ignore the Mac OS anti-alias settings. This is changed by going to about:config and setting gfx.use_text_smoothing_setting = true (the default is false). However, even with this setting, it appears Firefox still renders the fonts under the assumption that they will be anti-aliased, which results in very odd and uneven spacing, as you can see in the example screenshots (pay attention to the placement of the "s" in "Disable"):

        With anti-aliasing: [screenshot]
        Without anti-aliasing: [screenshot]

    How can I configure Firefox to both not use anti-aliasing and use correct font spacing? I'm using Mac OS X Lion and Firefox 5.

    Read the article

  • Graphite not running

    - by River
    I'm currently trying to install graphite 0.9.9 on a gentoo box using these instructions from the graphite wiki. Essentially, it fronts graphite using apache and mod_wsgi. Everything seems to have gone well, except that apache / the graphite webapp never seems to return a response to the web browser (the browser waits for the page indefinitely). I've turned on the graphite debug info, but the only message in the log files is this, repeated over and over again in info.log (with the pid always changing):

        Thu Feb 23 01:59:38 2012 :: graphite.wsgi - pid 4810 - reloading search index

    These instructions have worked for me before to set up graphite on an Ubuntu machine. I suspect that mod_wsgi is dying, but I have confirmed that mod_wsgi works fine when not serving the graphite webapp. This is what my graphite.conf vhost file looks like:

        WSGISocketPrefix /etc/httpd/wsgi/

        <VirtualHost *:80>
            ServerName # Server name
            DocumentRoot "/opt/graphite/webapp"
            ErrorLog /opt/graphite/storage/log/webapp/error.log
            CustomLog /opt/graphite/storage/log/webapp/access.log common

            # I've found that an equal number of processes & threads tends
            # to show the best performance for Graphite (ymmv).
            WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
            WSGIProcessGroup graphite
            WSGIApplicationGroup %{GLOBAL}
            WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}

            WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi

            Alias /content/ /opt/graphite/webapp/content/
            <Location "/content/">
                SetHandler None
            </Location>

            # XXX In order for the django admin site media to work you
            # must change @DJANGO_ROOT@ to be the path to your django
            # installation, which is probably something like:
            # /usr/lib/python2.6/site-packages/django
            Alias /media/ "/usr/lib64/python2.6/site-packages/django/contrib/admin/media/"
            <Location "/media/">
                SetHandler None
            </Location>

            # The graphite.wsgi file has to be accessible by apache. It won't
            # be visible to clients because of the DocumentRoot though.
            <Directory /opt/graphite/conf/>
                Order deny,allow
                Allow from all
            </Directory>
        </VirtualHost>
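
    To separate a mod_wsgi problem from a webapp problem, the 0.9.x tree ships a Django manage.py that can serve the app directly; if the page also hangs here, Apache is off the hook (the port and user are arbitrary choices for this test):

        cd /opt/graphite/webapp/graphite
        sudo -u apache python manage.py runserver 0.0.0.0:8085

    The endlessly repeated "reloading search index" with changing pids also suggests the daemon processes keep getting recycled; checking the Apache error log for segfaults around those timestamps would confirm whether the daemon is crashing on import.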

    Read the article

  • Time Machine is getting stuck at "Preparing to Back Up" and my Trash isn't emptying

    - by zarose
    I have encountered two separate problems, but I am putting them in the same question in case they are related.

    First, my Trash would not empty. It seems to be getting stuck on certain files: I will reset my MacBook and some of the files will be deleted, and then if I remove a file or two at random, more can be deleted. Some of these files had strange characters in their names. I tried changing the names to single characters, but this did not help.

    Next, I attempted to back up my MacBook using Time Machine. I plugged in the HDD I've been using for this, but every time I try to start the backup, Time Machine gets stuck at "Preparing to Back Up". I definitely need to know how to fix the Time Machine problem, but I am curious how to solve the Trash problem as well, and whether or not these problems are related.

    EDIT: Console.app logged the following this morning before I left on a trip. I did not bring the HDD with me.

        6/5/12 7:41:28.312 AM com.apple.backupd: Starting standard backup
        6/5/12 7:41:46.877 AM com.apple.backupd: Error -35 while resolving alias to backup target
        6/5/12 7:41:58.368 AM com.apple.backupd: Backup failed with error: 19
        6/5/12 7:59:08.999 AM com.apple.backupd: Starting standard backup
        6/5/12 7:59:10.187 AM com.apple.backupd: Backing up to: /Volumes/Seagate 3TB Mac/Backups.backupdb
        6/5/12 7:59:13.308 AM com.apple.backupd: Event store UUIDs don't match for volume: Macintosh HD
        6/5/12 7:59:13.331 AM com.apple.backupd: Event store UUIDs don't match for volume: Blank
        6/5/12 7:59:13.683 AM com.apple.backupd: Deep event scan at path:/ reason:must scan subdirs|new event db|
        6/5/12 8:23:31.807 AM com.apple.backupd: Backup canceled.
        6/5/12 8:23:33.373 AM com.apple.backupd: Stopping backup to allow backup destination disk to be unmounted or ejected.
        6/5/12 9:51:21.572 PM com.apple.backupd: Starting standard backup
        6/5/12 9:51:22.515 PM com.apple.backupd: Error -35 while resolving alias to backup target
        6/5/12 9:51:32.741 PM com.apple.backupd: Backup failed with error: 19
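
    For the Trash half, a hedged Terminal fallback when Finder refuses (destructive, and the sudo form bypasses permission problems on the oddly named files):

        # Empty the current user's Trash and their per-volume trash folders
        sudo rm -rf ~/.Trash/*
        sudo rm -rf /Volumes/*/.Trashes/`id -u`/*

    The "Deep event scan" line in the log is also telling: when the event store UUIDs don't match, Time Machine rescans the whole disk, which can keep it at "Preparing to Back Up" for hours, so letting one run finish uninterrupted is worth trying before anything drastic.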

    Read the article

  • In Icinga (Nagios), how do I configure hosts with multiple IPs?

    - by gertvdijk
    I'm setting up Icinga (a Nagios fork) and I have some machines with multiple interfaces. Some services listen on only one of them, and to check them correctly I'd like to know whether a single host in Icinga can have multiple IP addresses configured. Here's a minimal example.

    Remote server:

        eth0: 1.2.3.4   (public IP)
        eth1: 10.1.2.3  (private IP, secure tunnel)

        Apache listening on 1.2.3.4:80 (public only)
        OpenSSH listening on 10.1.2.3:22 (internal network only)
        Postfix SMTP listening on 0.0.0.0:25 (all interfaces)

    Icinga server:

        eth0: 10.2.3.4  (private IP, internet access)

    Now if I define a host:

        define host {
            use       generic-host
            host_name server1
            alias     server1.gertvandijk.net
            address   10.1.2.3
        }

    this will not check the HTTP status correctly. And defining an additional host:

        define host {
            use       generic-host
            host_name server1-public
            alias     server1.gertvandijk.net
            address   1.2.3.4
        }

    will check everything, but shows up as two independent hosts. Now I want to 'aggregate' these two hosts to show up as a single host, while keeping an easy configuration to check the services on their proper addresses.

    What is the most elegant, configuration-line-saving solution to this? I read about several plugins available to work around this, but I can't figure out the current way to address it. Solutions go back to 2003, but I'm running Icinga 1.7.1, already capable of the address6 option, yet that triggers IPv6-only resolving on the hostname... Ideally, I wish to configure Icinga to be intelligent enough to know that the Postfix instance running on 10.1.2.3:25 is the same as 1.2.3.4:25 and thus not trigger two alarms. I guess this must have been tackled before and sysadmins have it set up by now. Please share your solution to this. Thanks! :)
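
    One pattern that keeps a single host object is a custom variable for the second address; custom variables (any directive starting with "_") become $_HOST<NAME>$ macros in Nagios/Icinga 1.x. A sketch, with an assumed command definition that targets an explicit IP via check_http's -I flag:

        define host {
            use              generic-host
            host_name        server1
            address          10.1.2.3       ; tunnel address, used by default
            _PUBLIC_ADDRESS  1.2.3.4        ; custom variable, name is arbitrary
        }

        define service {
            host_name           server1
            service_description HTTP
            check_command       check_http_addr!$_HOSTPUBLIC_ADDRESS$
        }

        define command {
            command_name check_http_addr
            ; -I overrides the target address instead of $HOSTADDRESS$
            command_line $USER1$/check_http -I $ARG1$
        }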

    Read the article

  • Nagios send mail when server is down

    - by tzulberti
    I am using nagios 3.06 to monitor the servers. When a service is critical, it sends a mail, but when a server is down no mail is sent. Even if all the services go to a critical state, no mail is sent. I have the following configuration:

        define command {
            command_name notify-host-by-email
            command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$" "******** Nagios ****\n\n Host: $HOSTNAME$\n Description: the server is down"
        }

        define command {
            command_name notify-service-by-email
            command_line python /etc/nagios3/send_mail.py "[Nagios] $HOSTNAME$: $SERVICEDESC$ ($NOTIFICATIONTYPE$)" "***** Nagios *****\n\nNotification Type: $NOTIFICATIONTYPE$\nService: $SERVICEDESC$\nHost: $HOSTALIAS$\nAddress: $HOSTADDRESS$\nState: $SERVICESTATE$\nDate/Time: $LONGDATETIME$\nAdditional Info:$SERVICEOUTPUT$"
        }

    The Python script sends a mail. It works if I execute it from the command line, but it doesn't send an email from nagios. What am I doing wrong?

    UPDATE: The contact data is:

        define contact {
            contact_name                  root
            alias                         Root
            service_notification_period   24x7
            host_notification_period      24x7
            service_notification_options  w,u,c,r
            host_notification_options     d,r
            service_notification_commands notify-service-by-email
            host_notification_commands    notify-host-by-email
            email                         [email protected]
        }

        define contactgroup {
            contactgroup_name admins
            alias             Nagios Administrators
            members           root
        }
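
    Since service notifications reach this contact but host notifications don't, the host objects themselves are the next place to look: each host (or its template) must carry its own contact assignment and have host notifications enabled, independently of the services. A sketch of the fields to verify, names assumed:

        define host {
            use                   generic-host
            host_name             someserver
            contact_groups        admins   ; hosts need their own contact mapping
            notifications_enabled 1
            notification_interval 30
            notification_period   24x7
            notification_options  d,u,r    ; must include d for DOWN alerts
        }

    Failing that, /var/log/nagios3/nagios.log shows whether a HOST NOTIFICATION was attempted at all, which separates "never triggered" from "triggered but the command failed".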

    Read the article

  • phpmyadmin “Forbidden: You don't have permission to access /phpmyadmin on this server.”

    - by Caterpillar
    I need to modify the file /etc/httpd/conf.d/phpMyAdmin.conf in order to allow remote users (not only localhost) to log in:

        # phpMyAdmin - Web based MySQL browser written in php
        #
        # Allows only localhost by default
        #
        # But allowing phpMyAdmin to anyone other than localhost should be considered
        # dangerous unless properly secured by SSL

        Alias /phpMyAdmin /usr/share/phpMyAdmin
        Alias /phpmyadmin /usr/share/phpMyAdmin

        <Directory "/usr/share/phpMyAdmin/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            Order Allow,Deny
            Allow from all
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/>
            <IfModule mod_authz_core.c>
                # Apache 2.4
                <RequireAny>
                    Require ip 127.0.0.1
                    Require ip ::1
                </RequireAny>
            </IfModule>
            <IfModule !mod_authz_core.c>
                # Apache 2.2
                Order Deny,Allow
                Allow from All
                Allow from 127.0.0.1
                Allow from ::1
            </IfModule>
        </Directory>

        # These directories do not require access over HTTP - taken from the original
        # phpMyAdmin upstream tarball
        #
        <Directory /usr/share/phpMyAdmin/libraries/>
            Order Deny,Allow
            Deny from All
            Allow from None
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/lib/>
            Order Deny,Allow
            Deny from All
            Allow from None
        </Directory>

        <Directory /usr/share/phpMyAdmin/setup/frames/>
            Order Deny,Allow
            Deny from All
            Allow from None
        </Directory>

        # This configuration prevents mod_security at phpMyAdmin directories from
        # filtering SQL etc. This may break your mod_security implementation.
        #
        #<IfModule mod_security.c>
        #    <Directory /usr/share/phpMyAdmin/>
        #        SecRuleInheritance Off
        #    </Directory>
        #</IfModule>

    When I open the phpMyAdmin web page, I am not prompted for a user and password; instead I get the error message:

        Forbidden: You don't have permission to access /phpmyadmin on this server.

    My system is Fedora 20.
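
    Fedora 20 ships Apache (httpd) 2.4, where the main <Directory> block's 2.2-style Order/Allow lines are only honored if mod_access_compat happens to be loaded; note how the stock file already guards the setup/ block with mod_authz_core for exactly this reason. A 2.4-style version of the main block, as a sketch:

        <Directory "/usr/share/phpMyAdmin/">
            Options Indexes FollowSymLinks MultiViews
            AllowOverride all
            # Apache 2.4 form of "Order Allow,Deny / Allow from all"
            Require all granted
        </Directory>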

    Read the article

  • Enable SSL with Jetty 8

    - by Jerec TheSith
    I received certificates from GoDaddy and I'm trying to enable SSL with Jetty, but I receive an error 107 (SSL protocol error) when connecting to https://server.com:8443. I generated the keystore using these commands:

        keytool -keystore keystore -import -alias gd_bundle -trustcacerts -file gd_bundle.crt
        keytool -keystore keystore -import -alias server.com -trustcacerts -file server.com.crt

    and placed it in /opt/jetty/etc/. I used the following configuration in jetty.xml:

        <Call name="addConnector">
          <Arg>
            <New class="org.eclipse.jetty.server.ssl.SslSelectChannelConnector">
              <Arg>
                <New class="org.eclipse.jetty.http.ssl.SslContextFactory">
                  <Set name="keyStore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set>
                  <Set name="keyStorePassword">**password1**</Set>
                  <Set name="keyManagerPassword">**password1**</Set>
                  <Set name="trustStore"><SystemProperty name="jetty.home" default="."/>/etc/keystore</Set>
                  <Set name="trustStorePassword">**password1**</Set>
                </New>
              </Arg>
              <Set name="port">8443</Set>
              <Set name="maxIdleTime">30000</Set>
              <Set name="Acceptors">2</Set>
              <Set name="statsOn">false</Set>
              <Set name="lowResourcesConnections">20000</Set>
              <Set name="lowResourcesMaxIdleTime">5000</Set>
            </New>
          </Arg>
        </Call>

    Am I missing something in Jetty's configuration?
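
    A keystore built only from two -import commands contains trusted certificates but no private key, and without a key entry the connector has nothing to serve, which matches a low-level SSL protocol error. Worth verifying with standard keytool usage:

        # The server.com entry must say "PrivateKeyEntry", not "trustedCertEntry"
        keytool -list -v -keystore /opt/jetty/etc/keystore

    The usual sequence is keytool -genkeypair for server.com first (that keystore is what produces the CSR sent to GoDaddy), and then importing gd_bundle.crt and the signed server.com.crt into that same keystore, the latter under the same alias as the key pair.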

    Read the article

  • 400 error with nginx subdomains over https

    - by aquavitae
    Not sure what I'm doing wrong, but I'm trying to serve gunicorn/django through nginx using only https. Here is my nginx configuration:

        upstream app_server {
            server unix:/srv/django/app/run/gunicorn.sock fail_timeout=0;
        }

        server {
            listen 80;
            return 301 https://$host$request_uri;
        }

        server {
            listen 443;
            server_name app.mydomain.com;

            ssl on;
            ssl_certificate /etc/nginx/ssl/nginx.crt;
            ssl_certificate_key /etc/nginx/ssl/nginx.key;

            client_max_body_size 4G;
            access_log /srv/django/app/logs/nginx-access.log;
            error_log /srv/django/app/logs/nginx-error.log;

            location /static/ {
                alias /srv/django/app/data/static/;
            }

            location /media/ {
                alias /wrv/django/app/data/media/;
            }

            location / {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto https;
                proxy_set_header Host $http_host;
                proxy_pass http://app_server;
            }
        }

    I get a 400 error on app.mydomain.com, but the app is published on mydomain.com. Is there an error in my configuration?
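
    If the 400 comes from Django rather than nginx (the access log shows which side answered), the usual suspect in newer Django versions is ALLOWED_HOSTS, which responds with a bare 400 when the Host header isn't whitelisted. A hedged settings.py line, domain names taken from the question:

        # settings.py -- allow both the bare domain and the subdomain
        ALLOWED_HOSTS = ['mydomain.com', 'app.mydomain.com']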

    Read the article

  • Accessing a webpage folder with .htaccess in it via apache webdav?

    - by pingo
    I have set up WebDAV access to let an external user upload the content of his web page to his folder on my server, which Apache serves to the web; this way he can update his web page via WebDAV. The problem is that the user requires a .htaccess file, and the .htaccess breaks WebDAV, probably because it overrides settings (new files can no longer be uploaded via WebDAV once the .htaccess below exists). I am running Apache 2.2.17 and this is my WebDAV config:

        Alias /folderDAV "d:/wamp/www/somewebsite/"
        <Location /folderDAV>
            Order Allow,Deny
            Allow from all
            Dav On
            AuthType Digest
            AuthName DAV-upload
            AuthUserFile "D:/wamp/passtore/user.passwd"
            AuthDigestProvider file
            require valid-user
        </Location>

    This config is part of my naive solution to the problem: the idea was to give the web-page folder an alias where WebDAV is enabled, and then set AllowOverride to None so the .htaccess would have no effect. Of course, I then found out that the AllowOverride directive is not valid in <Location />.

    The .htaccess file looks like this:

        # opencart settings
        Options +FollowSymlinks
        Options -Indexes
        <FilesMatch "\.(tpl|ini)">
            Order deny,allow
            Deny from all
        </FilesMatch>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]
        ErrorDocument 403 /403.html
        deny from 1.1.1.1/19
        allow from 2.2.2.2

    What would be the solution here? I would like the web page to be accessible from the web while also being able to access and modify it via Apache's WebDAV (with digest auth). How would I do that? If possible, I would also like a solution that keeps the .htaccess, so that the user still has the power to set up access rules for his web page.
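
    Since .htaccess files are looked up by filesystem path, the DAV alias still picks them up; but the part most likely breaking uploads is the rewrite: a PUT to a not-yet-existing file passes the !-f test and gets rewritten to index.php. One hedged fix that keeps the .htaccess is to exclude DAV traffic inside it:

        RewriteEngine On
        RewriteBase /
        # Leave WebDAV methods and the DAV alias untouched
        RewriteCond %{REQUEST_METHOD} !^(PUT|PROPFIND|PROPPATCH|MKCOL|COPY|MOVE|LOCK|UNLOCK)$
        RewriteCond %{REQUEST_URI} !^/folderDAV/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)\?*$ index.php?_route_=$1 [L,QSA]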

    Read the article

  • 404 with serving static files in a custom nginx configuration

    - by code90
    In my nginx configuration, I have the following:

        location /admin/ {
            alias /usr/share/php/wtlib_4/apps/admin/;

            location ~* .*\.php$ {
                try_files $uri $uri/ @php_admin;
            }
            location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }
        }

        location ~ ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
            alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
        }

        location ~ ^/admin/modules/([^/]+)(.*)$ {
            try_files $uri @php_admin_modules;
        }

        location @php_admin {
            if ($fastcgi_script_name ~ /admin(/.*\.php)$) {
                set $valid_fastcgi_script_name $1;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/apps/admin$valid_fastcgi_script_name;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

        location @php_admin_modules {
            if ($fastcgi_script_name ~ /admin/modules/([^/]+)(.*)$) {
                set $byr_module $1;
                set $byr_rest $2;
            }
            fastcgi_pass $byr_pass;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4/modules/$byr_module/admin$byr_rest;
            fastcgi_param REDIRECT_STATUS 200;
            include /etc/nginx/fastcgi_params;
        }

    The following requested URL ends up with a 404:

        http://www.{domainname}.com/admin/modules/cms/styles/cms.css

    This is the error log:

        [error] 19551#0: *28 open() "/usr/share/php/wtlib_4/apps/admin/modules/cms/styles/cms.css" failed (2: No such file or directory), client: xxx.xxx.xxx.xxx, server: {domainname}.com, request: "GET /admin/modules/cms/styles/cms.css HTTP/1.1", host: "www.{domainname}.com"

    The following URLs work fine:

        http://www.{domainname}.com/admin/modules/store/?a=manage
        http://www.{domainname}.com/admin/modules/cms/?a=cms.load

    Can anyone see what the problem could be? Thanks.

    PS. I am trying to migrate existing sites from Apache to nginx.
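
    The error log shows the request was resolved inside the /admin/ block: the nested static-file regex matched and combined with the apps/admin/ alias, so the server-level ^/admin/modules/ regex never got a chance. One hedged fix is to give the module assets a more specific nested location ahead of the generic one, since nested regex locations are tried in order of appearance:

        location /admin/ {
            alias /usr/share/php/wtlib_4/apps/admin/;

            # Must come before the generic extension regex below
            location ~* ^/admin/modules/([^/]+)(.*\.(html|js|json|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air))$ {
                alias /usr/share/php/wtlib_4/modules/$1/admin/$2;
            }

            location ~* .*\.php$ { try_files $uri $uri/ @php_admin; }
            location ~* \.(js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ {
                expires 7d;
                access_log off;
            }
        }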

    Read the article

< Previous Page | 94 95 96 97 98 99 100 101 102 103 104 105  | Next Page >