Search Results

Search found 9559 results on 383 pages for 'mail rule'.

Page 312/383 | < Previous Page | 308 309 310 311 312 313 314 315 316 317 318 319  | Next Page >

  • phpBB configuration problem under Nginx

    - by zvikico
    Hi, I have a phpBB site running with Nginx (PHP via FastCGI). It works OK. However, some specific actions like moving or deleting a topic fail. Instead, I'm redirected to the forum index. I think it is a problem with the URLs redirection or rewriting. My rewrite rule looks like this: if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?q=$1 last; break; } Any help would be appreciated. My full configuration file is: server { listen 80; server_name forum.xxxxx.com; access_log /xxxxx/access.log; error_log /xxxxx/error.log; location = / { root /xxxxx/phpBB3/; index index.php; } location / { root /xxxxx/phpBB3/; index index.php index.html; if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?q=$1 last; break; } } error_page 404 /index.php; error_page 403 /index.php; error_page 500 502 503 504 /index.php; # serve static files directly location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico)$ { access_log off; expires 30d; root /xxxxx/phpBB3/; break; } # hide protected files location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ { deny all; } location ~ \.php$ { fastcgi_pass 127.0.0.1:8888; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /xxxxx/phpBB3/$fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; } }
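
    One possible simplification, offered as an untested sketch rather than a verified fix: on nginx builds that support try_files, the if/rewrite pair can be replaced, which sidesteps several rewrite edge cases that commonly break phpBB actions done via POST. The paths and the q= parameter below simply mirror the configuration above.

        location / {
            root /xxxxx/phpBB3/;
            index index.php index.html;
            # serve the file or directory if it exists, otherwise hand the
            # request to phpBB's front controller with the original query string
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }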

    Read the article

  • Ubuntu Postfix Gmail SMTP Relay Not Working

    - by Nick DeMayo
    I currently have postfix set up to relay messages from my websites through gmail, and up until recently it was working perfectly. However, within the last week or so (not really sure when) I started getting the below error whenever attempting to send an email: Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[2001:4860:800a::6c]:587: Network is unreachable Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[173.194.76.109]:587: Connection refused Jul 20 07:40:46 localhost postfix/smtp[11958]: connect to smtp.gmail.com[173.194.76.108]:587: Connection refused Here is my configuration file: # See /usr/share/postfix/main.cf.dist for a commented, more complete version # Debian specific: Specifying a file name will cause the first # line of that file to be used as the name. The Debian default # is /etc/mailname. #myorigin = /etc/mailname smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu) biff = no # appending .domain is the MUA's job. append_dot_mydomain = no # Uncomment the next line to generate "delayed mail" warnings #delay_warning_time = 4h #readme_directory = no # TLS parameters smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key smtpd_use_tls=yes smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for # information on enabling SSL in the smtp client. myhostname = [my domain name] alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases #myorigin = /etc/mailname mydestination = [my host name], localhost.localdomain, localhost relayhost = [smtp.gmail.com]:587 mynetworks = 127.0.0.0/8 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = loopback-only inet_protocols = all ########################################## ##### non debconf entries start here ##### ##### client TLS parameters ##### smtp_tls_loglevel=1 smtp_tls_security_level=encrypt smtp_sasl_auth_enable=yes smtp_sasl_password_maps=hash:/etc/postfix/sasl/passwd smtp_sasl_security_options = noanonymous ##### map username@localhost to [email protected] ##### smtp_generic_maps=hash:/etc/postfix/generic Nothing changed on my server, as far as I know...any ideas what could have caused it to stop working?
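
    The first log line shows Postfix trying Gmail's IPv6 address and getting "Network is unreachable", which often happens when a host advertises IPv6 but has no working IPv6 route. A hedged workaround, assuming nothing else relies on IPv6 delivery, is to restrict Postfix to IPv4. Note that this does not explain the "Connection refused" lines for the IPv4 addresses, which may point to outbound filtering of port 587 by the provider.

        # /etc/postfix/main.cf -- force IPv4 so Postfix stops preferring the IPv6 record
        inet_protocols = ipv4

        # reload Postfix to apply the change
        sudo postfix reload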

    Read the article

  • Can't remotely connect through SQL Server Management Studio

    - by FAtBalloon
    I have set up a SQL Server 2008 Express instance on a dedicated Windows 2008 Server hosted by 1and1.com. I cannot connect remotely to the server through Management Studio. I have taken the steps below and am out of further ideas. I have researched the site and cannot figure anything else out, so please forgive me if I missed something obvious, but I'm going crazy. Here's the lowdown. The SQL Server instance is running and works perfectly when working locally. In SQL Server Management Studio, I have checked the box "Allow Remote Connections to this Server". I have removed any external hardware firewall settings from the 1and1 admin panel. Windows Firewall on the server has been disabled, but just for kicks I added an inbound rule that allows all connections on port 1433. In SQL Native Client configuration, TCP/IP is enabled. I also made sure the "IP1" entry with the server's IP address had a 0 for dynamic port, but I deleted it and added 1433 in the regular TCP Port field. I also set the "IPALL" TCP Port to 1433. In SQL Native Client configuration, SQL Server Browser is also running, and I also tried adding an alias in the configuration. I restarted SQL Server after I set these values. Doing a "netstat -ano" on the server machine returns TCP 0.0.0.0:1433 LISTENING and UDP 0.0.0.0:1434 LISTENING. I do a port scan from my local computer and it says that the port is FILTERED instead of LISTENING. I also tried to connect from Management Studio on my local machine and it is throwing a connection error. I tried the following server names with SQL Server and Windows Authentication marked in the database security: ipaddress\SQLEXPRESS,1433 ipaddress\SQLEXPRESS ipaddress ipaddress,1433 tcp:ipaddress\SQLEXPRESS tcp:ipaddress\SQLEXPRESS,1433
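
    A quick way to separate a firewall problem from a SQL Server problem, sketched here as a suggestion rather than a fix, is to test raw TCP reachability from the remote client; a "filtered" result or a timeout (rather than an outright refusal) usually means something between the client and the server, such as the 1and1 external firewall, is still dropping port 1433. The server address below is a placeholder.

        # from the remote client (PowerShell 4+; on older clients "telnet <server-ip> 1433" works too)
        Test-NetConnection -ComputerName <server-ip> -Port 1433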

    Read the article

  • missing network usage through iptables

    - by Purres
    I inserted an iptables rule to track inbound usage to a certain IP address. The VPS server's IP is 192.168.1.5 and the guest OS's IP is 192.168.1.115. I ran 'yum update' inside the guest OS to get some network traffic. Then I ran iptables -vnL from the hypervisor. However, it only showed network usage to the host, not to the guest. Chain INPUT (policy ACCEPT 0 packets, 0 bytes) pkts bytes target source destination 0 0 0.0.0.0/0 0.0.0.0/0 destination IP range 192.168.1.115-192.168.1.115 1853 114K 0.0.0.0/0 0.0.0.0/0 destination IP range 192.168.1.5-192.168.1.5 I ran tcpdump and the log showed that there are data packets going to the guest OS. 16:17:43.932514 IP mirrordenver.fdcservers.net.http > 192.168.1.115.34471: Flags [.], seq 17694667:17696115, ack 1345, win 113, options [nop,nop,TS val 1060308643 ecr 1958781], length 1448 16:17:43.932559 IP 192.168.1.115.34471 > mirrordenver.fdcservers.net.http: Flags [.], ack 17696115, win 15287, options [nop,nop,TS val 1958869 ecr 1060308643], length 0 Why can't the guest OS's network usage be tracked? iptables -L returns the INPUT chain as follows: Chain INPUT (policy ACCEPT) target prot opt source destination all -- anywhere anywhere destination IP range 192.168.1.115-192.168.1.115 all -- anywhere anywhere destination IP range 192.168.1.5-192.168.1.5 all -- anywhere anywhere
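
    One likely explanation, offered as a hedged guess: counters in the INPUT chain only cover traffic addressed to the host itself, while traffic the host bridges or routes onward to a guest traverses the FORWARD chain. A sketch of rules that would count the guest's traffic (the guest IP is taken from the question):

        # count packets forwarded to and from the guest
        iptables -I FORWARD -d 192.168.1.115 -j ACCEPT
        iptables -I FORWARD -s 192.168.1.115 -j ACCEPT
        # watch the per-rule counters
        iptables -vnL FORWARD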

    Read the article

  • How do you get Windows 7 to show time remaining in the battery meter?

    - by MrDaniel
    Running Microsoft Windows 7 Home Premium on an HP laptop. The power meter in the system tray never shows the time remaining; it only ever shows a percentage-remaining number. The Windows help documentation on the "battery meter" seems to indicate that it should display a time-remaining indicator; is this accurate? The relevant passage reads: How accurate is the battery meter? The accuracy of what the battery meter reports—what percentage of a full charge remains and how long you can use your laptop before you must plug it in—depends on several factors. Most of these factors fall into the following two categories: What you use the laptop for. Because some activities drain the battery faster than others (for example, watching a DVD consumes more power than reading and writing e-mail), alternating between activities that have significantly different power requirements changes the rate at which your laptop uses battery power. This can vary the estimate of how much battery charge remains. Battery hardware and sensor circuitry. Newer, "smart" batteries are equipped with circuitry that calculates the measurements of charge remaining and reports the information to the battery meter. Older batteries use less sophisticated circuitry and might be less accurate.

    Read the article

  • Outlook 2010, 2007 Sync problems after migration from SMTP to Exchange

    - by kirgy
    Our organization recently switched from an SMTP server to an Exchange server; since then, several users' Outlook clients are not synchronizing their email as expected with the Exchange server. Our move from the SMTP server to the Exchange server consisted of adding the new Exchange account alongside the existing SMTP account, then drag-dropping/copy-pasting folders client-side in Outlook's folder pane from the SMTP account to the newly created Exchange account. The problem happens when a user moves an email to a folder from their inbox or another folder. At this point the email disappears from the Outlook client side. Re-syncing the folder, send/receive, closing/opening Outlook and even system reboots do not make this email reappear. The Outlook web interface (OWA) reports the email is in fact in the folder they placed it in, and is not deleted. Doing a "search all mail items" for the emails shows that the email is still there, neither deleted nor removed. To add to the confusion, when new folders are created and the email is placed in these folders, the synchronization happens without any issue both client side and server side. As the emails are appearing server side, we are fairly confident this is a client-side issue. We have tried adding/removing accounts on one system, which resulted in the same issue. This was a very long and slow process due to the sheer volume of emails (20gig+ from most users). We have tried reinstalling Outlook and restoring accounts from back-ups, which has not resolved the issue. We also tried upgrading one system from Outlook 2007 to Outlook 2010, which, again, did not resolve the issue. We experienced issues with a lot of emails disappearing during the copy-over process, so I'm not convinced it was the best migration route, but nonetheless we are where we are. Can anyone suggest potential avenues of solutions to resolve this issue? Thank you. Systems: Windows 7 (10 systems) Windows XP (2 systems) Outlook 2007 (2 systems) Outlook 2010 (7 systems) Problem Outlook systems: Windows XP, Outlook 2007 x 1 Windows 7, Outlook 2007 x1 Windows 7, Outlook 2010 x 2

    Read the article

  • Bind9 zone files

    - by user42780
    Well, for the better part of the last two hours I've tried to figure out what is actually wrong, but I can't seem to find anything obvious. What I'm trying to do is set up my DNS for, say, domain.com. This should include two NS records, namely ns1.domain.com and ns2.domain.com. Along with that there should be an MX record, as well as a CNAME record for www. I've been through roughly 20 how-tos in the last two hours, rewritten everything from scratch four times, and I still can't seem to find what's wrong. My suspicion comes down to two things: the error I get from the bind9 daemon when I stop the service, and the named.conf file. The error I get from the bind9 daemon when stopping the service is: * Stopping domain name service... bind9 rndc: connection to remote host closed This may indicate that * the remote server is using an older version of the command protocol, * this host is not authorized to connect, * the clocks are not syncronized, or * the key is invalid. I honestly don't know what this means, apart from the key defined in /etc/bind/rndc.key not being in the named.conf file (yes, I did try to add it, to no avail). Here are all the zone files and configuration files: http://208.77.101.5/bind9/ If anyone could help, it would be greatly appreciated.
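
    The rndc message on "stop" usually just means named.conf has no controls/key stanza matching /etc/bind/rndc.key, so rndc cannot authenticate to the daemon; it is separate from any zone-file problem. A hedged sketch of the stanza (the key name must match the one inside rndc.key, which on Debian is typically "rndc-key"):

        // named.conf -- let rndc on localhost control the daemon
        include "/etc/bind/rndc.key";

        controls {
            inet 127.0.0.1 port 953
                allow { 127.0.0.1; } keys { "rndc-key"; };
        };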

    Read the article

  • Vserver: secure mails from a hacked webservice

    - by lukas
    I plan to rent and set up a vServer with Debian xor CentOS. I know from my host that the vServers are virtualized with linux-vserver. Assume there is a lighttpd and some mail transfer agent running, and we have to ensure that if lighttpd is hacked, the stored e-mails are not easily readable. To me this sounds impossible, but maybe I missed something, or at least you guys can validate the impossibility... :) I think there are basically three obvious approaches. The first is to encrypt all the data. Nevertheless, the server would have to store the key somewhere, so an attacker (w|c)ould figure that out. Secondly, one could isolate the critical services like lighttpd. Since I am not allowed to do 'mknod' or remount /dev in a linux-vserver, it is not possible to set up a nested vServer with lxc or similar techniques. The last approach would be a chroot, but I am not sure whether it would provide enough security. Furthermore, I have not yet tried whether I am even able to chroot in a linux-vserver...? Thanks in advance!
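
    A partial mitigation along the lines of the second approach, sketched under the assumption that the MTA stores mail under a dedicated system user (the vmail user and path below are placeholders): run lighttpd and the mail store as different unprivileged users and keep the spool unreadable to the web server's user. An attacker who only controls the lighttpd process then still needs a local privilege escalation to read the mail, though a root compromise obviously defeats this.

        # keep the spool owned by and readable only to the mail user
        chown -R vmail:vmail /var/vmail
        chmod -R 0700 /var/vmail
        # lighttpd keeps running as www-data (or another unprivileged user)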

    Read the article

  • How to configure a Router (TL-WR1043ND) to work in WDS mode?

    - by LanceBaynes
    I have a WRT160NL router (192.168.1.0/24 - OpenWrt 10.04) as the AP. It's: - WAN port: connected to the ISP - WLAN: working as an AP, using 64-bit WEP, SSID "MYWORKINGSSID", channel 5, password "MYPASSWORDHERE" - Its IP address is 192.168.1.1 OK! It's working great! But: I have a TL-WR1043ND router that I want to configure as a "WDS". (My purpose is to extend the wireless range of the original WRT160NL.) Here is how I configure the TL-WR1043ND: 1) I enable WDS bridging. 2) In the "Survey" I select my already working network. 3) I set up the encryption (exactly the same as the already working one). 4) I choose channel 5. 5) I type the SSID. 6) I disable the DHCP server on it. After I reboot the router and connect to this router (TL-WR1043ND) over wireless, I try to ping google.com. From the ping I see that I can reach this router, which is fine, but it seems that this router can't connect to the original one, the WRT160NL (so I don't get a ping reply from Google). The encryption settings/password are good; I checked them many, many times. What could be the problem? I'm thinking it could be a routing problem, but what should I add to the "Static Routing" menu? I tried changing the IP address of the TL-WR1043ND to 192.168.1.2. So if this is a routing issue, then I should add a static routing rule that says: if the destination is anything, forward the packet to 192.168.1.1. p.s.: I updated the firmware to the latest version. It's still the same. p.s.2: The HW version of the TL-WR1043ND is 1.8. p.s.3: Could the problem be that I use different routers? (If I bought another TL-WR1043ND and used it instead of the WRT160NL, with stock firmware rather than OpenWrt, would it work? Is "WDS" different on different routers?) p.s.4: I will try to check the router logs tonight and paste them here! :\

    Read the article

  • Can't find standalone Chrome Gmail client that I know exists

    - by Carson
    I'm on Windows. A couple of years ago, when I switched from Outlook to Gmail (Google Apps), Google provided this awesome little standalone Gmail client that was just a single-purpose Chrome install. It launched like a normal application and stayed updated when I updated Chrome. It was Chrome in a separate application that launched only Gmail, stayed logged in really well, and "felt" like a Gmail mail client, with the Gmail interface. It had its own little red envelope icon; it was a Windows app. (I remember there was no Mac equivalent.) I found it while looking through the "this is how you get your company to switch to Gmail" documentation that Google provided. I just repaved my box and now I'm looking for this thing again, and I had no idea it would be impossible to find. I've spent literally 2 hours looking, searching, googling, etc. I'm losing my mind. Anyone know how I can get my hands on this? I used it all day every day for 2 years, so I know it exists :), but I cannot find it. Any assistance would be gratefully received.

    Read the article

  • How can I set up a 404 error page when people access http://ftp.mydomain.com?

    - by Tim B.
    I am a freelance videographer/developer, and part of my job involves transferring large files over FTP to production houses/television stations. While the majority of people in my industry understand the difference between FTP and HTTP, I've experienced several interactions in the past couple months of people who still open Internet Explorer and try to access http://ftp.mydomain.com, receive an error page served by HostGator, and tell me that they cannot access my FTP server. Instead of spending time delivering instructions via e-mail, I'd much prefer to serve up a custom error page in this instance that instructs them how to download and use an FTP client. I tried setting up a sub-domain in Cpanel hoping I could simply drop in an .htaccess file with the error page, but I got this error: ftp.mydomain.com domainadmin-domainexistsglobal I also tried creating a custom error page in PHP which reads the site URL and serves up the custom content only when http://ftp.mydomain.com is accessed. Unfortunately, the error page works for every subdomain except that one. I'm not entirely sure this is even technically possible, which is why I bring it to the good people of StackOverflow to help. Thanks!
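
    If the host's Apache configuration can be edited directly (outside cPanel), a minimal sketch of a dedicated vhost for the ftp hostname would look like the following; the DocumentRoot path is a placeholder, and the ErrorDocument line makes any other path under that hostname fall back to the same instruction page.

        <VirtualHost *:80>
            ServerName ftp.mydomain.com
            DocumentRoot /home/username/ftp-help
            DirectoryIndex index.html
            ErrorDocument 404 /index.html
        </VirtualHost>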

    Read the article

  • What are the right reverse PTR, domain keys, and SPF settings for two domains running the same appli

    - by James A. Rosen
    I just read Jeff Atwood's recent post on DNS configuration for email and decided to give it a go on my application. I have a web-app that runs on one server under two different IPs and domain names, on both HTTP and HTTPS for each: <VirtualHost *:80> ServerName foo.org ServerAlias www.foo.org ... </VirtualHost> <VirtualHost 1.2.3.4:443> ServerName foo.org ServerAlias www.foo.org </VirtualHost> <VirtualHost *:80> ServerName bar.org ServerAlias www.bar.org ... </VirtualHost> <VirtualHost 2.3.4.5:443> ServerName bar.org ServerAlias www.bar.org </VirtualHost> I'm using GMail as my SMTP server. Do I need the reverse PTR and SenderID records? If so, do I put the same ones on all of my records (foo.org, www.foo.org, bar.org, www.bar.org, ASPMX.L.GOOGLE.COM, ASPMX2.GOOGLEMAIL.COM, ..)? I'm pretty sure I want the domain-keys records, but I'm not sure which domains to attach them to. The Google mail servers? foo.org and bar.org? Everything?
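
    For the SPF part specifically, and hedged as a sketch rather than a definitive answer: SPF is published as a TXT record on each domain that appears as the envelope sender, not on Google's server names, and when Gmail/Google Apps does the sending the usual record delegates to Google's include. Reverse PTR records, by contrast, belong to whoever owns the sending IP addresses (Google, in this setup), so there is normally nothing to set up on foo.org or bar.org for that.

        ; hypothetical zone entries -- replace with the real zones
        foo.org.  IN  TXT  "v=spf1 include:_spf.google.com ~all"
        bar.org.  IN  TXT  "v=spf1 include:_spf.google.com ~all"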

    Read the article

  • How to count the most recent value based on multiple criteria?

    - by Andrew
    I keep a log of phone calls like the following, where the F column is LVM = Left Voice Mail, U = Unsuccessful, S = Successful:

             A    B       C     D           E        F
        1    1    Smith   John  11/21/2012  8:00 AM  LVM
        2    2    Smith   John  11/22/2012  8:15 AM  U
        3    3    Harvey  Luke  11/22/2012  8:30 AM  S
        4    4    Smith   John  11/22/2012  9:00 AM  S
        5    5    Smith   John  11/23/2012  8:00 AM  LVM

    This is a small sample. I actually have over 700 entries. In my line of work, it is important to know how many unsuccessful calls (LVM or U) I have made since the last successful one (S). Since values in the F column can repeat, I need to take both the B and C columns into consideration. Also, since I can make a successful call with a client and then be trying to contact them again, I need to be able to count from the last successful call. My G column is completely open, which is where I would like to put a running total for each client (ideally G5 would = 1 while G4 = 0, G3 = 0, G2 = 2, and G1 = 1, but I want these values calculated automatically so that I do not have to scroll through 700 names).
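
    One possible approach, sketched against the sample rows above (it assumes the data starts in row 1 with no header row and that column A is a strictly increasing call number): an array formula entered in G1 with Ctrl+Shift+Enter and filled down. For the client on that row, it counts the non-S calls whose call number is greater than that client's most recent S call so far.

        =COUNTIFS($B$1:B1,B1,$C$1:C1,C1,$F$1:F1,"<>S",$A$1:A1,">"&MAX(IF(($B$1:B1=B1)*($C$1:C1=C1)*($F$1:F1="S"),$A$1:A1,0)))

    With the five sample rows this yields 1, 2, 0, 0, 1 for G1 through G5, matching the values described.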

    Read the article

  • Making Apache 2.2 on SuSE Linux Case In-Sensitive. Which is a better approach?

    - by pingu
    Problem: http://<server>/home/APPLE.html http://<server>/hoME/APPLE.html http://<server>/HOME/aPPLE.html http://<server>/hoME/aPPLE.html All of the above should resolve to http://<server>/home/apple.html. I implemented two solutions and both are working fine. I am not sure which one is better for performance; please suggest. Also, the directive CheckCaseOnly on never worked. Option 1: a) Enable mod_speling in /etc/sysconfig/apache2 - APACHE_MODULES="rewrite speling apparmor......" b) Add the directive CheckSpelling on (either in .htaccess or in httpd.conf). In httpd.conf: <Directory srv/www/htdcos/home> Order allow,deny CheckSpelling on Allow from all </Directory> or in .htaccess inside /srv/www/htdcos/home (your content folder): CheckSpelling on Option 2: a) Enable mod_rewrite b) Write the rule in the vhost (you cannot use RewriteMap in a directory context; check the Apache docs): <VirtualHost _default_:80> <IfModule mod_rewrite.c> Options +FollowSymLinks RewriteEngine on RewriteMap lc int:tolower RewriteCond %{REQUEST_URI} [A-Z] RewriteRule (.*) ${lc:$1} [R=301,L] </IfModule> </VirtualHost> This changes the entire request URI to lowercase. I want this to happen only for a specific folder, but RewriteMap doesn't work in .htaccess. I am a novice with regex and mod_rewrite. I need a RewriteCond which checks only /css/. Can anybody help?
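
    Regarding restricting the rewrite to one folder while keeping RewriteMap in the vhost (where it has to live): a hedged sketch, with /home/ standing in for whatever prefix actually needs it, is to add one more RewriteCond in front of the existing pair. As a design note, mod_speling pays for its convenience with an extra directory scan on every miss, whereas the rewrite approach is a pure string transform plus one external 301; the rewrite variant at least lets you scope it like this.

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteMap lc int:tolower
            # only touch requests under this prefix (placeholder -- adjust as needed)
            RewriteCond %{REQUEST_URI} ^/home/ [NC]
            RewriteCond %{REQUEST_URI} [A-Z]
            RewriteRule (.*) ${lc:$1} [R=301,L]
        </IfModule>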

    Read the article

  • what port should I open for mysql master-master replication?

    - by Vanddel
    I have two servers running php5-fpm and a load balancer running nginx, the three servers share /var/www/drupal using nfs. nfs is working correctly. I replicated the two servers' database using mysql master master replication. everything was working fine till I added my iptables rules. In my iptables script, I first drop all chains then I accept the ones I want, other than that there are no other drop statements. I opened port 3306 for mysql replication like this : (the rule is on both servers ) iptables -A INPUT -p tcp -s $ip_Of_Other_Server --dport 3306 -j ACCEPT iptables -A OUTPUT -p tcp -d $ip_Of_Other_Server --sport 3306 -j ACCEPT The problem is, when I run both servers and I try to log in using my account on drupal it doesn't log in although I find a successful log in attempt in drupal logs. When I run only one server of them I can log in normally. when I allow everything in my iptables rules it works normally. I believe there's some port I need to open using iptables for the replication to work correctly but I can't find which one to open.
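
    Two things worth checking, offered as hedged guesses rather than a diagnosis. First, a ruleset that drops everything by default usually also needs loopback and established/related traffic accepted, otherwise replies to connections the servers themselves open (replication included) get dropped. Second, the existing OUTPUT rule matches --sport 3306, which covers replies this host sends as a MySQL server but not the connection this host opens as a replication client to the other server's 3306 (that one uses an ephemeral source port and destination port 3306).

        # baseline before the per-port rules
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT  -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # allow this host to reach the other server's MySQL as a client
        iptables -A OUTPUT -p tcp -d $ip_Of_Other_Server --dport 3306 -j ACCEPT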

    Read the article

  • Problems sending email using .Net's SmtpClient

    - by Jason Haley
    I've been looking through questions on Stackoverflow and Serverfault but haven't found the same problem mentioned - though that may be because I just don't know enough about how email works to understand that some of the questions are really the same as mine ... here's my situation: I have a web application that uses .Net's SmtpClient to send email. The configuration of the SmtpClient uses a smtp server, username and password. The SmtpClient code executes on a server that has an ip address not in the domain the smtp server is in. In most cases the emails go without a problem - but not AOL (and maybe others - but that is one we know for sure right now). When I look at the headers in the message that was kicked back from AOL it has one less line than the successful messages hotmail gets: AOL Bad Message: Received: from WEBSVRNAME ([##.###.###.###]) by domainofsmtp.com with MailEnable ESMTP; Mon, 18 Jun 2012 09:48:24 -0500 MIME-Version: 1.0 From: "[email protected]" <[email protected]> ... Good Hotmail Message: Received: from mail.domainofsmtp.com ([###.###.###.###]) by subdomainsof.hotmail.com with Microsoft SMTPSVC(6.0.3790.4900); Thu, 21 Jun 2012 09:29:13 -0700 Received: from WEBSVRNAME ([##.###.###.###]) by domainofsmtp.com with MailEnable ESMTP; Thu, 21 Jun 2012 11:29:03 -0500 MIME-Version: 1.0 From: "[email protected]" <[email protected]> ... Notice the hotmail message headers has an additional line. I'm confused as to why the Web server's name and ip address are even in the headers since I thought I was using the SmtpClient to go through the smtp server (hence the need for the username and password of a valid email box). I've read about SPFs, DKIM and SenderID's but at this point I'm not sure if I would need to do something with the web server (and its ip/domain) or the domain the smtp is coming from. Has anyone had to do anything similiar before? Am I using the smtp server as a relay? Any help on how to describe what I'm doing would also help.

    Read the article

  • Running multiple services on different servers with IPv6 and a FQDN

    - by Mark Henderson
    One of the things NAT has permitted us to do in the past decade is split physical services onto different servers whilst hiding behind a single interface. For example, I have example.com behind a NAT on 192.0.2.10. I port-forward :80 and :443 to my web server. I'm also port forward :25 to my mail server, and :3389 to a terminal server and :8080 to the web interface of my computer that downloads torrents, and the story goes on. So I have 5 port forwardings going to 4 different computers on example.com. Then, I go and get me some neat IPv6. I assign example.com an IPv6 address of 2001:db8:88:200::10. That's great for my websites, but I want to go to example.com:8080 to get to my torrents, or example:3389 to log on to my terminal server. How can I do this with IPv6, as there is no NAT. Sure, I could create a bunch of new DNS entries for each new service, but then I have to update all my clients who are used to just typing example.com to get to either the website or the terminal server. My users are dumber than two bricks so they won't remember to connect to rdp.example.com. What options do I have for keeping NAT-style functionality with IPv6? In case you haven't figured it out, the above scenario is not a real scenario for me, or perhaps anyone yet, but it's bound to happen eventually. You know, with devops and all.

    Read the article

  • mod_wsgi, .htaccess and rewriterule

    - by hadaraz
    I'm using several django projects running on the same apache instance through mod_wsgi, configured with virtualhost for each site, see the httpd.conf here. For one of the sites I want to use static-cache (staticgenerator), so I set up a directory with .htaccess file which contains: RequestHeader unset X-Forwarded-Host RewriteEngine on RewriteBase / RewriteCond %{REQUEST_FILENAME} !-d RewriteCond %{REQUEST_FILENAME}/index.html !-f RewriteRule ^(.*) http://127.0.0.1:3456/$1 [P] where 3456 is the django port on the server. Using this rewrite rule, the request is always forwarded to the mod_wsgi handler, even if the file or directory exists, and if the file index.html exists the request shows as request-path/index.html. I tried another setup: RequestHeader unset X-Forwarded-Host RewriteEngine on RewriteBase / RewriteCond $1 !-d RewriteCond $1index.html !-f RewriteRule ^(.*) http://127.0.0.1:3456/$1 [P] but got almost the same results. All requests are transferred to the mod_wsgi handler, but the request path is now the original one. To sum it up: What is the correct RewriteCond to use here? How do you transfer a request to the mod_wsgi handler? Is it the right way? If that's not the way to do it, then how do you serve static files from a directory when they exist, and when they don't you serve from apache/mode_wsgi? Thanks for your help.
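
    A hedged variant that often behaves better in per-directory (.htaccess) context, because it tests the path exactly as Apache would map it under the document root rather than the partially processed filename; the port and the index.html convention are carried over from the question, not verified:

        RequestHeader unset X-Forwarded-Host
        RewriteEngine on
        RewriteBase /
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI}/index.html !-f
        RewriteRule ^(.*)$ http://127.0.0.1:3456/$1 [P]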

    Read the article

  • Serving images from another hostname vs Apache overload for the rewrites

    - by luison
    We are trying to further improve the speed of some sites with older HTML, and also to obtain better SEO results. We have now applied some minification measures, combined HTML, CSS, etc. We use a small virtualized infrastructure and we've always wanted to use a light + standard HTTP server configuration, so the first one can serve images and static content while the other handles PHP, rewrites, etc. We can easily do that now with a VM using the same files and vhost configuration (bind mounts) on Apache, but with hardly any modules loaded. This means the light httpd will have a smaller footprint, which would allow us to serve more and faster, have more MinSpareServers running, etc. So, as browsers also benefit from loading static content from different hostnames, we've thought about building a rewrite rule on our main server (main.com) to "redirect" all images and CSS (*.jpg, *.gif, *.css, etc.) to the same files at, say, cdn.main.com, thus allowing the browser to open more connections. The question is: assuming we already have a very complex rewrite ruleset (we manually manipulate many old URLs for SEO), will it be worth it? I mean, will the additional load on main's Apache of having to redirect main.com/image.jpg (I understand we'll have to do a 301) to cdn.main.com/image.jpg, plus cdn.main.com then having to serve it, be larger than the gain we would be achieving in the browser? Could the excess of 301s for all images on a page be penalized by Google? How do large companies work this out? Does the original code already include images linked from the CDN with absolute paths? EDIT: Just to clarify, our concern is not so much with server performance or bandwidth. We could obviously employ an external CDN server, but we have plenty of CPU and bandwidth. Our concern is with how to have "old" sites with plenty of semi-static HTML content benefit from splitting connections for images and static content via Apache, without having to change the HTML to absolute paths (i.e. image.jpg to cdn.main.com/image.jpg happening on the server, not in the code).
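
    For what it is worth, a redirect-based version would look roughly like the sketch below (hostnames are placeholders), but each 301 costs every client a full extra round trip per asset, which tends to cancel out the parallel-connection gain; the more common pattern is to emit cdn.main.com URLs directly in the HTML, via a template helper or an output filter, rather than redirecting.

        # hypothetical: bounce static assets to the cdn hostname
        RewriteCond %{HTTP_HOST} ^(www\.)?main\.com$ [NC]
        RewriteRule \.(jpe?g|gif|png|css|js)$ http://cdn.main.com%{REQUEST_URI} [R=301,L]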

    Read the article

  • How to test email spam scores with amavis?

    - by CaptSaltyJack
    I'd like a way to test a spam message to see the spam score SpamAssassin gives it. The SA db files (bayes_toks, etc.) reside in /var/lib/amavis/.spamassassin. I've been testing emails by doing this: sudo su amavis -c 'spamassassin -t msgfile' Though this yields some strange results, such as: Content analysis details: (3.7 points, 5.0 required) pts rule name description ---- ---------------------- -------------------------------------------------- 3.5 BAYES_99 BODY: Bayes spam probability is 99 to 100% [score: 1.0000] -0.0 NO_RELAYS Informational: message was not relayed via SMTP 0.0 LONG_TERM_PRICE BODY: LONG_TERM_PRICE 0.2 BAYES_999 BODY: Bayes spam probability is 99.9 to 100% [score: 1.0000] -0.0 NO_RECEIVED Informational: message has no Received headers 0.2 is an awfully low score for BAYES_999! But this is the first time I've used amavis; previously I've always just used SpamAssassin directly as a content filter in Postfix, but apparently running amavis/SpamAssassin is more efficient. So, with amavis in the picture, how can I run a test on a message to see its spam score breakdown? Another email I ran a test on got this result: 2.0 BAYES_80 BODY: Bayes spam probability is 80 to 95% [score: 0.8487] It doesn't make sense that BAYES_80 can yield a higher score than BAYES_999. Help!

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR) A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. There's the backup server.. A Dell R715 with 2x 16 core AMD 62xx CPUs, and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards, and an Intel X520-SR dual port 10GE NIC. We were also sold Commvault Backup (non-NDMP). Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 server, and runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), the rated maximum write speed on the tape devices is 140MB/s, per drive, so that's 492GB/hr (or double it, for the total throughput). So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), and it's like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that the 343GB/hr is a theoretical maximum for read performance on the NAS, then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems to only ever give me 200-250GB/hr throughput, and out of experimentation, I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**" #-#-# Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread, and stream that to a Tape device, whilst reading from some other directory with the other thread, and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried: Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons Setting Prefer Mounted Volumes = no in the Job Definition Setting multiple devices in the Autochanger resource. Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster, with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern Tape solution.. 
Configuration Files: *bacula-dir.conf* # # Default Bacula Director Configuration file Director { # define myself Name = backuphost-1-dir DIRport = 9101 # where we listen for UA connections QueryFile = "/etc/bacula/scripts/query.sql" WorkingDirectory = "/var/lib/bacula" PidDirectory = "/var/run/bacula" Maximum Concurrent Jobs = 20 Password = "yourekiddingright" # Console password Messages = Daemon DirAddress = 0.0.0.0 #DirAddress = 127.0.0.1 } JobDefs { Name = "DefaultFileJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = File Messages = Standard Pool = File Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" } JobDefs { Name = "DefaultTapeJob" Type = Backup Level = Incremental Client = backuphost-1-fd FileSet = "Full Set" Schedule = "WeeklyCycle" Storage = "SpectraLogic" Messages = Standard Pool = AllTapes Priority = 10 Write Bootstrap = "/var/lib/bacula/%c.bsr" Prefer Mounted Volumes = no } # # Define the main nightly save backup job # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir Job { Name = "BackupClient1" JobDefs = "DefaultFileJob" } Job { Name = "BackupThisVolume" JobDefs = "DefaultTapeJob" FileSet = "SpecialVolume" } #Job { # Name = "BackupClient2" # Client = backuphost-12-fd # JobDefs = "DefaultJob" #} # Backup the catalog database (after the nightly save) Job { Name = "BackupCatalog" JobDefs = "DefaultFileJob" Level = Full FileSet="Catalog" Schedule = "WeeklyCycleAfterBackup" # This creates an ASCII copy of the catalog # Arguments to make_catalog_backup.pl are: # make_catalog_backup.pl <catalog-name> RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog" # This deletes the copy of the catalog RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup" Write Bootstrap = "/var/lib/bacula/%n.bsr" Priority = 11 # run after main backup } # # Standard Restore template, to be changed by Console program # Only one such job is needed for all Jobs/Clients/Storage ... # Job { Name = "RestoreFiles" Type = Restore Client=backuphost-1-fd FileSet="Full Set" Storage = File Pool = Default Messages = Standard Where = /srv/bacula/restore } FileSet { Name = "SpecialVolume" Include { Options { signature = MD5 } File = /mnt/SpecialVolume } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } # List of files to be backed up FileSet { Name = "Full Set" Include { Options { signature = MD5 } File = /usr/sbin } Exclude { File = /var/lib/bacula File = /nonexistant/path/to/file/archive/dir File = /proc File = /tmp File = /.journal File = /.fsck } } Schedule { Name = "WeeklyCycle" Run = Full 1st sun at 23:05 Run = Differential 2nd-5th sun at 23:05 Run = Incremental mon-sat at 23:05 } # This schedule does the catalog. 
It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup" Run = Full sun-sat at 23:10 } # This is the backup of the catalog FileSet { Name = "Catalog" Include { Options { signature = MD5 } File = "/var/lib/bacula/bacula.sql" } } # Client (File Services) to backup Client { Name = backuphost-1-fd Address = localhost FDPort = 9102 Catalog = MyCatalog Password = "surelyyourejoking" # password for FileDaemon File Retention = 30 days # 30 days Job Retention = 6 months # six months AutoPrune = yes # Prune expired Jobs/Files } # # Second Client (File Services) to backup # You should change Name, Address, and Password before using # #Client { # Name = backuphost-12-fd # Address = localhost2 # FDPort = 9102 # Catalog = MyCatalog # Password = "i'mnotjokinganddontcallmeshirley" # password for FileDaemon 2 # File Retention = 30 days # 30 days # Job Retention = 6 months # six months # AutoPrune = yes # Prune expired Jobs/Files #} # Definition of file storage device Storage { Name = File # Do not use "localhost" here Address = localhost # N.B. Use a fully qualified name here SDPort = 9103 Password = "lalalalala" Device = FileStorage Media Type = File } Storage { Name = "SpectraLogic" Address = localhost SDPort = 9103 Password = "linkedinmakethebestpasswords" Device = Drive-1 Device = Drive-2 Media Type = LTO5 Autochanger = yes } # Generic catalog service Catalog { Name = MyCatalog # Uncomment the following line if you want the dbi driver # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport = dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63" } # Reasonable message delivery -- send most everything to email address # and to the console Messages { Name = Standard mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r" operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r" mail = root@localhost = all, !skipped operator = root@localhost = mount console = all, !skipped, !saved # # WARNING! the following will create a file that you must cycle from # time to time as it will grow indefinitely. However, it will # also keep all your messages if they scroll off the console. # append = "/var/lib/bacula/log" = all, !skipped catalog = all } # # Message delivery for daemon messages (no job). 
Messages { Name = Daemon mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r" mail = root@localhost = all, !skipped console = all, !skipped, !saved append = "/var/lib/bacula/log" = all, !skipped } # Default pool definition Pool { Name = Default Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year } # File Pool definition Pool { Name = File Pool Type = Backup Recycle = yes # Bacula can automatically recycle Volumes AutoPrune = yes # Prune expired volumes Volume Retention = 365 days # one year Maximum Volume Bytes = 50G # Limit Volume size to something reasonable Maximum Volumes = 100 # Limit number of Volumes in Pool } Pool { Name = AllTapes Pool Type = Backup Recycle = yes AutoPrune = yes # Prune expired volumes Volume Retention = 31 days # one Moth } # Scratch pool definition Pool { Name = Scratch Pool Type = Backup } # # Restricted console used by tray-monitor to get the status of the director # Console { Name = backuphost-1-mon Password = "LastFMalsostorePasswordsLikeThis" CommandACL = status, .status } bacula-sd.conf # # Default Bacula Storage Daemon Configuration file # Storage { # definition of myself Name = backuphost-1-sd SDPort = 9103 # Director's port WorkingDirectory = "/var/lib/bacula" Pid Directory = "/var/run/bacula" Maximum Concurrent Jobs = 20 SDAddress = 0.0.0.0 # SDAddress = 127.0.0.1 } # # List Directors who are permitted to contact Storage daemon # Director { Name = backuphost-1-dir Password = "passwordslinplaintext" } # # Restricted Director, used by tray-monitor to get the # status of the storage daemon # Director { Name = backuphost-1-mon Password = "totalinsecurityabound" Monitor = yes } Device { Name = FileStorage Media Type = File Archive Device = /srv/bacula/archive LabelMedia = yes; # lets Bacula label unlabeled media Random Access = Yes; AutomaticMount = yes; # when device opened, read it RemovableMedia = no; AlwaysOpen = no; } Autochanger { Name = SpectraLogic Device = Drive-1 Device = Drive-2 Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d" Changer Device = /dev/sg4 } Device { Name = Drive-1 Drive Index = 0 Archive Device = /dev/nst0 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } Device { Name = Drive-2 Drive Index = 1 Archive Device = /dev/nst1 Changer Device = /dev/sg4 Media Type = LTO5 AutoChanger = yes RemovableMedia = yes; AutomaticMount = yes; AlwaysOpen = yes; RandomAccess = no; LabelMedia = yes } # # Send all messages to the Director, # mount messages also are sent to the email address # Messages { Name = Standard director = backuphost-1-dir = all } bacula-fd.conf # # Default Bacula File Daemon Configuration file # # # List Directors who are permitted to contact this File daemon # Director { Name = backuphost-1-dir Password = "hahahahahaha" } # # Restricted Director, used by tray-monitor to get the # status of the file daemon # Director { Name = backuphost-1-mon Password = "hohohohohho" Monitor = yes } # # "Global" File daemon configuration specifications # FileDaemon { # this is me Name = backuphost-1-fd FDport = 9102 # where we listen for the director WorkingDirectory = /var/lib/bacula Pid Directory = /var/run/bacula Maximum Concurrent Jobs = 20 #FDAddress = 127.0.0.1 FDAddress = 0.0.0.0 } # Send all messages except skipped files back to Director Messages { Name = 
Standard director = backuphost-1-dir = all, !skipped, !restored }

    Read the article

  • iOS 6 in-app email does not send from within any app that supports it

    - by Joe Termine
    A strange problem -- Last night I upgraded to the final release of iOS 6 on my iPhone 4S and my iPad 2. When I open an app that allows you to send emails from within the app (e.g. adobe Reader, TurboScan, etc.) -- doesn't matter which one -- I am prompted with the email dialog from within the app, I can compose the message, but when I go to send one of two things will happen: either the email sending sound will "swoosh" and the dialog will close (leading me to think it worked) or some apps with good error handling will say there is an "error sending email." The error logs on my device are not reporting errors. It's just that the email doesn't really send. I have two Exchange mail boxes on these devices. One connects to a corporate network hosting on-premise exchange 2007 and the other connects to Gmail over the exchange interface. Have attempted to delete and re-pair these accounts (one at a time) without any change. I'm wondering if others are experiencing this problem, or whether I should just wipe the devices and chalk it up to (another) failed upgrade. Thoughts much appreciated. Joe

    Read the article

  • Recompiling Nginx 1.4.3 with "--with-http_gzip_static_module" error

    - by Elijah Paul
    I'm trying to enable the 'ngx_http_gzip_static_module' module in Nginx 1.4.3 by adding --with-http_gzip_static_module to my ./configure arguments. But I receive the following error when I try to recompile (make): # make make -f objs/Makefile make[1]: Entering directory `/tmp/nginx-1.4.3' make[1]: *** No rule to make target `src/os/unix/ngx_gcc_atomic_x86.h', needed by `objs/src/core/nginx.o'. Stop. make[1]: Leaving directory `/tmp/nginx-1.4.3' make: *** [build] Error 2 My current config (CentOS 6.4): # nginx -V nginx version: nginx/1.4.3 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) TLS SNI support enabled configure arguments: --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --http-log-path=/var/log/nginx/access.log --with-http_ssl_module --prefix=/usr --add-module=./nginx-sticky-module-1.1 --add-module=./headers-more-nginx-module-0.23 I was under the impression that this was a module that just needed to be 'enabled' as opposed to 'added'. What am I missing here?
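
    The usual cause of that make error, offered as a hedged suggestion, is running make against a stale objs/Makefile; ./configure has to be re-run (ideally in a freshly unpacked source tree) with the full original argument list plus the new flag. The two --add-module paths below assume the module sources sit in the build directory exactly as they did for the original build shown by nginx -V.

        ./configure --conf-path=/etc/nginx/nginx.conf --pid-path=/var/run/nginx.pid \
          --error-log-path=/var/log/nginx/error.log \
          --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
          --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
          --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
          --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
          --http-log-path=/var/log/nginx/access.log \
          --with-http_ssl_module --prefix=/usr \
          --with-http_gzip_static_module \
          --add-module=./nginx-sticky-module-1.1 \
          --add-module=./headers-more-nginx-module-0.23
        make && make install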

    Read the article

  • Exim 4 Virtual Domains and Catchall on Debian (Squeeze)

    - by parazuce
    Hello, I've been at it for about 4 hours now. Searching as well as trying different tutorials. Here's my setup: I have 2 domains, both under my own DNS server (MX records setup as well). I have exim4 successfully running, and it is able to send messages from both of those domains. I have tested this using sendmail, and manually setting the "From" attribute. Exim successfully delivers mail to users no matter which domain was specified. I'm fine with that, but I'm having an issue editing virtual domains, and adding custom delivery options (such as a catch all). I've been searching for about 4 hours, and I can't find any up-to-date documentation on how to do this. The old methods would be to add a line such as: domainlist local_domains = @:localhost:dsearch;/etc/exim4/virtual Once that line was added, I made a directory at /etc/exim4/virtual, then created files inside such as example.com which would then contain rules for delivery under that domain. This did not work, however. Searching further, I've found that exim no longer supports dsearch (I guess because they claim it never has?) This is where I'm stuck. I'm on a "split" configuration as well.
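
    A hedged sketch of the pattern most Debian split-config tutorials use, assuming the Exim build lists dsearch and lsearch among its lookups (exim4 -bV shows them) and that /etc/exim4/virtual/example.com contains lines like "info: [email protected]" with an optional "*: [email protected]" catch-all line; the router name and file locations are conventions, not requirements, and this is untested here.

        # router dropped into /etc/exim4/conf.d/router/350_exim4-config_virtual
        # (the virtual domains also need to be added to the local_domains list,
        #  e.g. via dc_other_hostnames or a MAIN_LOCAL_DOMAINS override)
        virtual_aliases:
          driver = redirect
          allow_defer
          allow_fail
          domains = dsearch;/etc/exim4/virtual
          data = ${expand:${lookup{$local_part}lsearch*@{/etc/exim4/virtual/$domain}}}
          retry_use_local_part
          pipe_transport = address_pipe
          file_transport = address_file
          no_more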

    Read the article

  • Windows 7: Windows Firewall: Logging/Notifying on Outgoing Request Attempts

    - by Maxim Z.
    I'm trying to configure Windows Firewall with Advanced Security to log and tell me when programs are trying to make outbound requests. I previously tried installing ZoneAlarm, which worked wonders for me with this in XP, but now I'm unable to install ZA on Win7. My question is: if I set all outbound connections to auto-block, is it possible to somehow monitor a log or get notifications when a program tries to make an outbound request, so that I can then create a specific rule for the program and block it? Thanks! UPDATE: I've enabled all the logging options available through the Properties window of the Windows Firewall with Advanced Security console, but I am only seeing logs in the %systemroot%\system32\LogFiles\Firewall\pfirewall.log file, not in the Event Viewer, as the first answer suggested. However, the logs that I can see only tell me the request's or response's destination IP and whether the connection was allowed or blocked; they don't tell me which executable it came from. I want to find out the file path of the executable that each blocked request comes from. So far, I haven't been able to.
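
    Two built-in hooks may get part of the way there, offered as a hedged pointer rather than a full ZoneAlarm replacement: the firewall's own text log records drops (without the program name), while Windows Filtering Platform auditing writes drop events to the Security log in Event Viewer as Event ID 5152, and those events do include the Application Name (the executable's path).

        rem log dropped connections to pfirewall.log (elevated prompt)
        netsh advfirewall set allprofiles logging droppedconnections enable

        rem record drops in the Security event log, including the executable path
        auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:enable /failure:enable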

    Read the article
