Search Results

Search found 29495 results on 1180 pages for 'cross site scripting'.


  • Cygwin's RSYNC for large data transfer

    - by Tim Brigham
    I'm using rsync from Cygwin to do a large-scale data transfer from an aging HP MSA 1000 to a new DAS attached to a different server. I have a daemon running on the remote server in read-only mode and a local copy writing the files to disk. One of my servers is an image repository with over a million files spread across about 300 directories; each file averages only a couple hundred kilobytes. More so than any other box, this one is proving problematic. The rsync process will work for a while - sometimes 20 minutes, sometimes an hour - and then it simply quits and sits idle at a given file name. I have verified that the file isn't corrupt on the remote server and that the file is successfully created on the local drive. I ran the rsync client in -vv mode, which returns nothing. I checked the logs created by the daemon. I looked at the network utilization on the interface, which is sitting idle. I looked at the AV settings to see if anything could pose a problem there. I even updated to the latest release of Cygwin. What do I need to do in order to keep this connection up?
    EDIT: The client system is using the command:
      rsync.exe server::Drives/f/Repo/ /cygdrive/T/Repo --archive -P -vv
    The server is using the command:
      rsync.exe --daemon --no-detach --config "rsyncd.conf"
    The contents of rsyncd.conf:
      use chroot = false
      strict modes = false
      hosts allow = 192.168.100.9
      log file = c:/rsyncd.log
      uid = 0
      gid = 0
      [Drives]
      path = /cygdrive
      read only = yes
    EDIT: The file server is 2003, the disk type on the array is GPT, and the size of the array is about 4 TB.
    EDIT: Stranger still - it looks like the process reliably errors out at about 175,000 files. Rsync runs fine when I pick the directories it has problems with one at a time.
    EDIT: rsync version 3.0.9, protocol version 30. Copyright (C) 1996-2011 by Andrew Tridgell, Wayne Davison, and others. Web site: http://rsync.samba.org/ Capabilities: 64-bit files, 64-bit inums, 32-bit timestamps, 64-bit long ints, no socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace, append, ACLs, xattrs, iconv, symtimes. A similar failure occurred when going from the same set of files with Cygwin to a Linux install; it just didn't happen until several hours later than normal.
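    A minimal retry sketch (not from the original post): --timeout makes rsync abort when no data moves for the given number of seconds, so a shell loop can restart it instead of leaving it hung. The paths and module name are the ones quoted above; the 300-second value is an assumption.
      until rsync.exe --archive -P --timeout=300 server::Drives/f/Repo/ /cygdrive/T/Repo; do
          echo "rsync stalled or exited non-zero, retrying..." >&2
          sleep 30
      done
    Because rsync skips files that already match, each retry resumes roughly where the previous run stopped.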

    Read the article

  • Serving mp3s to mobile devices is flooding nginx with partial requests

    - by drumfire
    I am serving mp3s with a minimalistic nginx server. What I see in my log files is that there are a lot of requests, in particular from AppleCoreMedia and sometimes Android user agents, that flood the server with short requests. Sometimes they keep requesting the same partial content for a very long time - sometimes more than an hour. For example:
      "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
      "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
      "GET /somefile.mp3 HTTP/1.1" 206 33041 "AppleCoreMedia/1.0.0.9B206 (iPhone; U; CPU OS 5_1_1 like Mac OS X; en_us)"
      [...]
    I also get a lot of these, though not as many:
      "-" 400 0 "-"
      "-" 400 0 "-"
    The IP addresses are always from clients that start downloading shortly after that request, and they usually have roughly the same user agent as in the first example. I have enabled server throttling and connection limits in nginx to at least somewhat limit the huge number of log entries from the same IPs. I saw the same behaviour on the previous server, which used Apache, and there it caused a performance problem: when Apache could no longer handle the connections from the growing number of clients, that server was effectively DDoSed, so I installed nginx on a better server and moved the site. There was no bandwidth issue with already-connected clients, and I don't know whether the already-connected clients were using more than one connection at a time. Please tell me: Are clients that appear to get stuck on a download a Bad Thing™? I have heard people say their mobile bandwidth use was much higher than they could account for; I'm thinking this type of client behaviour can explain that - and it costs us more bandwidth too. Which up-to-date alternatives exist that can handle serving this type of data better than plain HTTP? Useful general insights for someone who just came into this field straight out of the late 90s are also welcome. :-)
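    A hedged sketch of the per-IP limits nginx can put on repeated Range requests; the zone names and numbers are invented, and releases older than 1.1.8 declare the connection zone with limit_zone instead of limit_conn_zone.
      # in the http {} block
      limit_conn_zone $binary_remote_addr zone=mp3conn:10m;
      limit_req_zone  $binary_remote_addr zone=mp3req:10m rate=2r/s;

      # in the server {} block
      location ~ \.mp3$ {
          limit_conn       mp3conn 2;             # at most two parallel downloads per IP
          limit_req        zone=mp3req burst=10;  # smooth out bursts of Range requests
          limit_rate_after 1m;                    # first megabyte at full speed
          limit_rate       192k;                  # then roughly the encoded bitrate
      }
    Throttling after the first megabyte mostly hits clients that keep re-requesting the same ranges rather than listeners streaming normally.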

    Read the article

  • Nginx giving a lot of 502 errors

    - by Loki
    A while ago I installed nginx and everything seemed to be working fine, but recently I found out that about 20% of the time users are getting 502 errors. This is also noticeable when Google tries to crawl my site in Webmaster Tools (out of 10000 posts, approx. 2000 502 errors). At first I was thinking of disabling nginx, but I'd really like to keep using it. I'm running it on a server with 2GB RAM and 4 reserved CPU cores, with WHM/cPanel installed and Mod_Ruid2 enabled + DSO as a PHP handler with APC caching installed. Is there anything I can change in the config that can fix this? I have installed Nginx Admin in WHM and here is what's in the configuration editor screen:
      user nobody;
      worker_processes 4;
      error_log /var/log/nginx/error.log info;
      worker_rlimit_nofile 20480;
      events {
          worker_connections 10240;  # increase for busier servers
          use epoll;                 # you should use epoll here for Linux kernels 2.6.x
      }
      http {
          server_name_in_redirect off;
          server_names_hash_max_size 10240;
          server_names_hash_bucket_size 1024;
          include mime.types;
          default_type application/octet-stream;
          server_tokens off;
          disable_symlinks if_not_owner;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          keepalive_timeout 5;
          gzip on;
          gzip_vary on;
          gzip_disable "MSIE [1-6]\.";
          gzip_proxied any;
          gzip_http_version 1.1;
          gzip_min_length 1000;
          gzip_comp_level 6;
          gzip_buffers 16 8k;
          gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;
          ignore_invalid_headers on;
          client_header_timeout 3m;
          client_body_timeout 3m;
          send_timeout 3m;
          reset_timedout_connection on;
          connection_pool_size 256;
          client_header_buffer_size 256k;
          large_client_header_buffers 4 256k;
          client_max_body_size 200M;
          client_body_buffer_size 128k;
          request_pool_size 32k;
          output_buffers 4 32k;
          postpone_output 1460;
          proxy_temp_path /tmp/nginx_proxy/;
          client_body_in_file_only on;
          log_format bytes_log "$msec $bytes_sent .";
          include "/etc/nginx/vhosts/*";
      }
    I hope someone can help me out. Thanks in advance!
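    A 502 from nginx in this kind of cPanel setup generally means the Apache back-end behind it did not answer in time, so the interesting settings live in the per-site files pulled in by include "/etc/nginx/vhosts/*". A hedged sketch of what those proxy settings might look like - the back-end port is an assumption, and the real culprit may be on the Apache/PHP side (compare Apache's error log and MaxClients against the timestamps of the 502s).
      location / {
          proxy_pass            http://127.0.0.1:8081;  # assumed Apache port behind nginx
          proxy_connect_timeout 30s;
          proxy_read_timeout    120s;
          proxy_buffers         16 32k;
      }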

    Read the article

  • CNAME to another domain fails on some office networks, why?

    - by crashalpha
    Our domain "aspenfasteners.com" is hosted by Volusion. We have CNAME records "find" and "search" which point to site indexing accounts on www.picosearch.com. These addresses fail on SOME private office networks which have their own DNS. We suspect the problem comes from Volusion's own name servers, n2.volusion.com and n3.volusion.com. Volusion support on problems this technical is non-existant. We have tried an NSLOOKUP on find.aspenfasteners.com with level 2 debugging info, and we got the results below. Is it possible that the local DNS is recursing to Volusion's name servers, and that while Volusion DOES return the canonical name, they do NOT resolve the address? Can anybody with expertise in this sort of stuff PLEASE look at the NSLOOKUP below and tell me if we are right, because Volusion is giving me absolutely NO support on this topic. I need proof of where the problem lies. Thanks VERY much! Carlo find.aspenfasteners.com Server: mtl-srm-dbsv-01.fastenerwholesale.com Address: 192.168.0.44 SendRequest(), len 61 HEADER: opcode = QUERY, id = 8, rcode = NOERROR header flags: query, want recursion questions = 1, answers = 0, authority records = 0, additional = 0 QUESTIONS: find.aspenfasteners.com.fastenerwholesale.com, type = A, class = IN ------------ Got answer (138 bytes): HEADER: opcode = QUERY, id = 8, rcode = NXDOMAIN header flags: response, auth. answer, want recursion, recursion avail. questions = 1, answers = 0, authority records = 1, additional = 0 QUESTIONS: find.aspenfasteners.com.fastenerwholesale.com, type = A, class = IN AUTHORITY RECORDS: -> fastenerwholesale.com type = SOA, class = IN, dlen = 46 ttl = 3600 (1 hour) primary name server = mtl-srm-dbsv-01.fastenerwholesale.com responsible mail addr = admin.fastenerwholesale.com serial = 10219 refresh = 900 (15 mins) retry = 600 (10 mins) expire = 86400 (1 day) default TTL = 3600 (1 hour) ------------ SendRequest(), len 41 HEADER: opcode = QUERY, id = 9, rcode = NOERROR header flags: query, want recursion questions = 1, answers = 0, authority records = 0, additional = 0 QUESTIONS: find.aspenfasteners.com, type = A, class = IN ------------ Got answer (141 bytes): HEADER: opcode = QUERY, id = 9, rcode = NXDOMAIN header flags: response, auth. answer questions = 1, answers = 1, authority records = 1, additional = 1 QUESTIONS: find.aspenfasteners.com, type = A, class = IN ANSWERS: -> find.aspenfasteners.com type = CNAME, class = IN, dlen = 17 canonical name = www.picosearch.com ttl = 3600 (1 hour) AUTHORITY RECORDS: -> com type = SOA, class = IN, dlen = 43 ttl = 900 (15 mins) primary name server = ns3.volusion.com responsible mail addr = admin.volusion.com serial = 1 refresh = 900 (15 mins) retry = 600 (10 mins) expire = 86400 (1 day) default TTL = 3600 (1 hour) ADDITIONAL RECORDS: -> ns3.volusion.com type = A, class = IN, dlen = 4 internet address = 65.61.137.154 ttl = 900 (15 mins) * mtl-srm-dbsv-01.fastenerwholesale.com can't find find.aspenfasteners.com: Non-existent domain
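    One way to narrow down whether the office resolvers or Volusion's name servers are at fault is to query each directly and compare answers; a hedged sketch using the records named above, run from one of the affected office networks.
      nslookup -type=CNAME find.aspenfasteners.com n2.volusion.com
      nslookup -type=A     find.aspenfasteners.com n2.volusion.com
      nslookup -type=A     find.aspenfasteners.com 8.8.8.8
    If Volusion's servers return only the CNAME with no referral for www.picosearch.com, then any office resolver that does not chase the CNAME on its own - which is what the trace above suggests - will report NXDOMAIN, matching the failures being seen.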

    Read the article

  • How much did it cost our competitor to DDoS us at 50 Gbps for two weeks?

    - by MiniQuark
    I know that this question may sound like an invalid serverfault question, but I believe that it's quite valid: the amount of time and effort that a sysadmin should spend on DDoS protection is a direct function of typical DDoS prices. Let me rephrase this: protecting a web site against small attacks is one thing, but resisting 50 Gbps of UDP flood is another and requires time & money. Deciding whether or not to spend that time & money depends on whether such an attack is likely or not, and this in turn depends on how cheap and simple such an attack is for the attacker. So here's the full story: our company has been victim to a massive DDoS attack (over 50 Gbps of UDP traffic, full-time during 2 weeks). We are pretty sure that it's one of our competitors, and we actually know which one, because we were the only two remaining competitors on a very big request for proposal, and the DDoS attack magically stopped the day we won (double hurray, by the way)! These people have proved in the past that they are very dishonest, but we know that they are not technical at all, so we believe that they simply paid for some botnet DDoS service. I would like to know how much these services typically cost, for such a large scale attack. Please do not give any link to such services, I would really hate to give these people any publicity. I understand that a hacker could very well do this for free, but what's a typical price for such an attack if our competitors paid for it through some kind of botnet service? It is really starting to scare me (if we're talking thousands of dollars here, then I am really going to freak off: who knows, they might just hire a hit-man one day?). Of course we filed a complaint, but the police says that they cannot do much about it (DDoS attacks are virtually untraceable, so they say), and our suspicions are not enough to justify them raiding our competitor's offices to search for proofs. For your information, we now changed our infrastructure to be able to sustain such attacks: we now use a major CDN service so that our servers are not directly affected by DDoS attacks. Requests for dynamic pages do get proxied to our servers, but for low level attacks (UDP flood, or Syn floods, for example) we only receive legitimate trafic, so we're fine. If they decide to launch higher level attacks (HTTP flood or slowloris attacks for example), most of the load should be handled by the CDN... at least I hope so! Thank you very much for your help.

    Read the article

  • Nginx Proxying to Multiple IP Addresses for CMS' Website Preview

    - by Matthew Borgman
    First-time poster, so bear with me. I'm relatively new to Nginx, but have managed to figure out what I've needed... until now. Nginx v1.0.15 is proxying to PHP-FPM v.5.3.10, which is listening at http://127.0.0.1:9000. [Knock on wood] everything has been running smoothly in terms of hosting our CMS and many websites. Now, we've developed our CMS and configured Nginx such that each supported website has a preview URL (e.g. http://[WebsiteID].ourcms.com/) where the site can be, you guessed it, previewed in those situations where DNS doesn't yet resolve to our server, etc. Specifically, we use Nginx's Map module (http://wiki.nginx.org/HttpMapModule) and a regular expression in the server_name of the CMS' server{ } block to 1) lookup a website's primary domain name from its preview URL and then 2) forward the request to the "matched" primary domain. The corresponding Nginx configuration: map $host $h { 123.ourcms.com www.example1.com; 456.ourcms.com www.example2.com; 789.ourcms.com www.example3.com; } and server { listen [OurCMSIPAddress]:80; listen [OurCMSIPAddress]:443 ssl; root /var/www/ourcms.com; server_name ~^(.*)\.ourcms\.com$; ssl_certificate /etc/nginx/conf.d/ourcms.com.chained.crt; ssl_certificate_key /etc/nginx/conf.d/ourcms.com.key; location / { proxy_pass http://127.0.0.1/; proxy_set_header Host $h; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } (Note: I do realize that the regex in the server_name should be "tighter" for security reasons and match only the format of the website ID (i.e. a UUID in our case).) This configuration works for 99% of our sites... except those that have a dedicated IP address for an installed SSL certificate. A "502 Bad Gateway" is returned for these and I'm unsure as to why. This is how I think the current configuration works for any requests that match the regex (e.g. http://123.ourcms.com/): Nginx looks up the website's primary domain from the mapping, and as a result of the proxy_pass http://127.0.0.1 directive, passes the request back to Nginx itself, which since the proxied request has a hostname corresponding to the website's primary domain name, via the proxy_set_header Host $h directive, Nginx handles the request as if it was as direct request for that hostname. Please correct me if I'm wrong in this understanding. Should I be proxying to those website's dedicated IP addresses? I tried this, but it didn't seem to work? Is there a setting in the Proxy module that I'm missing? Thanks for the help. MB
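    One hedged workaround for the dedicated-IP sites is a second map that sends the proxied request to that site's own IP instead of 127.0.0.1 (the example IP below is invented); if those sites' server blocks only listen on their dedicated address, nothing answers on 127.0.0.1:80 and a 502 is exactly what nginx would return.
      map $host $backend {
          default          127.0.0.1;
          123.ourcms.com   203.0.113.10;   # hypothetical dedicated IP of www.example1.com
      }

      location / {
          proxy_pass http://$backend;
          proxy_set_header Host $h;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    Because the map values are literal IPs, no resolver directive is needed for the variable proxy_pass.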

    Read the article

  • Long access times and errors in IIS application

    - by Jens Olsson
    Hi, I am having an issue with an IIS application (details of the environment at the end of the message). The web site works great most of the time and I cannot reproduce any error in our test system. On the live system, however, with an average of 5-15 requests per second, some requests (about 0.05%) take over 300 seconds to complete; the other requests complete within 5-10 seconds. It seems as if all the erroneous requests end up with a Timer_EntityBody error in the error log. I have never seen this as an end user, but I guess they receive some kind of error message. I am trying to find out what could be causing this behaviour; any ideas are welcome. I have read that there can be an MTU issue if ICMP and MTU protocols are blocked in the firewall - does that sound reasonable? I have also read that updating to IIS 7 should do the trick - does that sound reasonable? I think the problem has another cause, but I have no idea what. I have tried running the performance monitor, monitoring for database locks and active transaction counts. I can see some of these in the perfmon log for the MSSQL server (another machine), for example:
      Active transactions sometimes peaks, sometimes for long periods
      Lock waits per second sometimes peaks
      Transactions per second sometimes peaks
      Page IO Latch wait sometimes peaks
      Lock wait time (ms) sometimes peaks
    But I cannot see that any of these correlate to the errors in the IIS error log. On the IIS server machine perfmon also shows some values peaking a few times a day - Request execution time and Avg disk queue length - and neither of those correlates to the errors in the IIS error log either. In the log lines below I have anonymized some parts by replacing them with HIDDEN. The following can be seen in the access log:
      2010-10-01 08:35:05 W3SVC1301873091 **HIDDEN** POST /**HIDDEN**/Modules/BalanceModule.aspx - 80 - **HIDDEN** Mozilla/4.0+(compatible;+MSIE+7.0;+Windows+NT+5.1;+.NET+CLR+2.0.50727;+.NET+CLR+3.0.4506.2152;+.NET+CLR+3.5.30729;+.NET4.0C;+.NET4.0E) ASP.NET_SessionId=**HIDDEN** 400 0 64 0 2241 127799
    At the same time the following can be seen in the error log:
      2010-10-01 08:35:05 **HIDDEN** 1999 **HIDDEN** 80 HTTP/1.0 POST /**HIDDEN**/Modules/BalanceModule.aspx - 1301873091 Timer_EntityBody Test+Pool
    I can tell the following about the environment: Server: Windows Server 2003 x64 SP2 running on VMware. HTTP server: IIS v6.0 with ASP.NET 2.0.50727. Antivirus: Trend Micro OfficeScan (is it a good idea to have this on a server?)

    Read the article

  • nginx can't see MySQL

    - by user135235
    I have a fully working Joomla 2.5.6 install driven by a local MySQL server, but I'd like to test nginx to see if it's a faster web serving experience than Apache. \ PHP 5.4.6 (PHP54w) \ CentOS 6.2 \ Joomla 2.5.6 \ PHP54w-fpm.i386 (FastCGI process manager) \ php -m shows: mysql & mysqli modules loaded Nginx seems to have installed fine via yum, it can process a PHP-info file via FastCGI perfectly OK (http://37.128.190.241/php.php) but when I stop Apache, start nginx instead and visit my site I get: "Database connection error (1): The MySQL adapter 'mysqli' is not available." I've tried adjusting my Joomla configuration.php to use mysql instead of mysqli but I get the same basic error, only this time "Database connection error (1): The MySQL adapter 'mysql' is not available" of course! Can anyone think what the problem might be please? I did try explicitly setting extension = mysqli.so and extension = mysql.so in my php.ini to try and force the issue (despite php -m showing they were both successfully loaded anyway) - no difference. I have a pretty standard nginx default.conf: server { listen 80; server_name www.MYDOMAIN.com; server_name_in_redirect off; access_log /var/log/nginx/localhost.access_log main; error_log /var/log/nginx/localhost.error_log info; root /var/www/html/MYROOT_DIR; index index.php index.html index.htm default.html default.htm; # Support Clean (aka Search Engine Friendly) URLs location / { try_files $uri $uri/ /index.php?q=$uri&$args; } # deny running scripts inside writable directories location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ { return 403; error_page 403 /403_error.html; } location ~ \.php$ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/fastcgi.conf; } # caching of files location ~* \.(ico|pdf|flv)$ { expires 1y; } location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ { expires 14d; } } Snip of output from phpinfo under nginx: Server API FPM/FastCGI Virtual Directory Support disabled Configuration File (php.ini) Path /etc Loaded Configuration File /etc/php.ini Scan this dir for additional .ini files /etc/php.d Additional .ini files parsed /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/phar.ini, /etc/php.d/zip.ini Snip of output from phpinfo under Apache: Server API Apache 2.0 Handler Virtual Directory Support disabled Configuration File (php.ini) Path /etc Loaded Configuration File /etc/php.ini Scan this dir for additional .ini files /etc/php.d Additional .ini files parsed /etc/php.d/curl.ini, /etc/php.d/fileinfo.ini, /etc/php.d/json.ini, /etc/php.d/mysql.ini, /etc/php.d/mysqli.ini, /etc/php.d/pdo.ini, /etc/php.d/pdo_mysql.ini, /etc/php.d/pdo_sqlite.ini, /etc/php.d/phar.ini, /etc/php.d/sqlite3.ini, /etc/php.d/zip.ini Seems that with Apache, PHP is loading substantially more additional .ini files, including ones relating to mysql (mysql.ini, mysqli.ini, pdo_mysql.ini) than nginx. Any ideas how I get nginix to also call these additional .ini's ? Thanks in advance, Steve
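    Under Apache, PHP parses mysql.ini and mysqli.ini from /etc/php.d, but the FPM process is not picking them up, so the adapter genuinely is not loaded in that SAPI. A hedged sequence to try, assuming the Webtatic php54w packages named above (the module package name is an assumption):
      service php-fpm restart       # FPM only scans /etc/php.d when it starts
      # if the mysql/mysqli ini files are still not listed for FPM, make sure the module
      # package matching the php54w build is actually installed:
      yum install php54w-mysql && service php-fpm restart
    Then reload http://37.128.190.241/php.php and check whether the "Additional .ini files parsed" line now includes mysqli.ini under the FPM/FastCGI Server API.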

    Read the article

  • Old CRT screen's high resolution doesn't work anymore on Windows 7

    - by Mixxiphoid
    One year ago I decided to switch from Windows XP to Windows 7. I had a 17" CRT monitor with a screen resolution of 1600x1200 which worked fine on Windows XP. While installing Windows 7 everything went well until Windows 7 was going to install the video card its driver. Windows 7 puts the screen to its recommended resolution and my screen became black. I waited a few minutes to be sure the installation was finished. I turned off the computer by hand and restarted the computer on a resolution of 800x640. When windows 7 was done installing I went to screen resolutions and the resolution of 1600x1200 was on the top of the list with '(recommended)' next to it. I tried putting it on 1600x1200 but again my screen went black. I installed all windows 7 updates including the video card driver from the NVidia site (NOT from Windows 7). I tried about everything to make it work on 1600x1200 but with no succes. The highest resolution I got with the crt monitor was 1280x1024. I had a TFT screen which had 1280x1024 as max resolution and had better colors, so I used that one till today. My video card is 9600GT and my power supply is beyond sufficient. I even tried to install the driver I had on XP to see if it worked, but no results. I tried classic mode on windows 7, changed the dpi, the frequentie and the monitor settings, but nothing worked. I really like a vertical resolution of 1200, but it seems today I'm bound to all those standard monitors with a resolution of 1980x1024... Can anybody explain to me what the cause is that it worked on Windows XP but not on Windows 7? And maybe a solution to the problem (I actually gave up on getting it fixed...) Thanks a lot in advance. SOLUTION I downloaded the according monitor driver and installed it. Next I rebooted my computer on low resolution (800x640) and connected the CRT monitor. When Windows 7 booted successfully I went to computer management and 'update the driver' of my monitor. I manually selected 'Generic PnP monitor' and made that one active. I want to advanced settings at 'Screen resolution' and selected the mode '1600x1200 (32-bit) 80 Hertz (95 Hertz did not work). Now I had my resolution on 1600x1200. I repeated the earlier step to select the original monitor again instead of the Generic monitor. Quite a way to solve this problem, but it worked! Thanks a lot you all.

    Read the article

  • How to Set Up Ubuntu Mail Server with Google Apps?

    - by Apreche
    I have a domain, let's call it foobar.com. All of the MX records for foobar.com point to Google's mail servers because I am using Google Apps for your domain to manage it. It's great because everyone gets all the advantages of GMail, but our e-mail addresses aren't @gmail.com. I also have a server. Primarily, it's a web server, but it also serves other things. One of the things it serves is the web site for foobar.com and also sites for various virtual hosts such as shop.foobar.com and forum.foobar.com. The server is running Ubuntu 8.04, because I like using LTS releases in production. The thing is, there are various applications running on the server that need the ability to send out emails. Various applications, like the cron jobs, send me e-mails in case of errors. Some of the web applications need to send e-mail to users when they forget their passwords, to confirm new registered users, etc. Lastly, it's nice to be able to send e-mail from the command line using the mail command, or mutt. How can I setup the mail on the web server to go through the Google apps mail servers? I don't need the web server to receive mail, though that would be cool. I do need it to be able to send mail as any legitimate address @foobar.com. That way the forum application can send mails with [email protected] in the from field, and the ecommerce application will have [email protected] in the from field. Also, by sending the mail through the Google servers, we can avoid a lot of the problems with the e-mails being blocked by various spam filters on the web. Google's SMTP servers are trusted a lot more than mine would be. I'm pretty good with administering Linux systems, but I am absolutely brain dead when it comes to e-mail. I need step by step directions from beginning to end on how to set this up. I need to know every thing to install, and every single change to the configuration files that is necessary. I have tried following various howtos and guides in the past, but none of them were quite right. Either they didn't work at all, or they offered a configuration that is not what I wanted. Please help. Thanks.
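    A minimal sketch of one common approach, assuming Postfix as the MTA (install with "apt-get install postfix libsasl2-modules" and pick "Satellite system"); the account and password are placeholders, and Google rewrites the envelope sender to the authenticated account unless the other addresses ([email protected], [email protected]) are added as "Send mail as" aliases in Google Apps.
      # /etc/postfix/main.cf
      relayhost = [smtp.gmail.com]:587
      smtp_use_tls = yes
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt

      # /etc/postfix/sasl_passwd  (then: postmap /etc/postfix/sasl_passwd && /etc/init.d/postfix restart)
      [smtp.gmail.com]:587    [email protected]:password
    With this in place, cron, the forum, the shop, and the command-line mail/mutt tools all hand mail to the local Postfix, which relays it out through Google's servers.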

    Read the article

  • SSL support with Apache and Proxytunnel

    - by whuppy
    I'm inside a strict corporate environment. https traffic goes out via an internal proxy (for this example it's 10.10.04.33:8443) that's smart enough to block ssh'ing directly to ssh.glakspod.org:443. I can get out via proxytunnel. I set up an apache2 VirtualHost at ssh.glakspod.org:443 thus: ServerAdmin [email protected] ServerName ssh.glakspod.org <!-- Proxy Section --> <!-- Used in conjunction with ProxyTunnel --> <!-- proxytunnel -q -p 10.10.04.33:8443 -r ssh.glakspod.org:443 -d %host:%port --> ProxyRequests on ProxyVia on AllowCONNECT 22 <Proxy *> Order deny,allow Deny from all Allow from 74.101 </Proxy> So far so good: I hit the Apache proxy with a CONNECT and then PuTTY and my ssh server shake hands and I'm off to the races. There are, however, two problems with this setup: The internal proxy server can sniff my CONNECT request and also see that an SSH handshake is taking place. I want the entire connection between my desktop and ssh.glakspod.org:443 to look like HTTPS traffic no matter how closely the internal proxy inspects it. I can't get the VirtualHost to be a regular https site while proxying. I'd like the proxy to coexist with something like this: SSLEngine on SSLProxyEngine on SSLCertificateFile /path/to/ca/samapache.crt SSLCertificateKeyFile /path/to/ca/samapache.key SSLCACertificateFile /path/to/ca/ca.crt DocumentRoot /mnt/wallabee/www/html <Directory /mnt/wallabee/www/html/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> <!-- Need a valid client cert to get into the sanctum --> <Directory /mnt/wallabee/www/html/sanctum> SSLVerifyClient require SSLOptions +FakeBasicAuth +ExportCertData SSLVerifyDepth 1 </Directory> So my question is: How to I enable SSL support on the ssh.glakspod.org:443 VirtualHost that will work with ProxyTunnel? I've tried various combinations of proxytunnel's -e, -E, and -X flags without any luck. The only lead I've found is Apache Bug No. 29744, but I haven't been able to find a patch that will install cleanly on Ubuntu Jaunty's Apache version 2.2.11-2ubuntu2.6. Thanks in advance.

    Read the article

  • System Reserved partition no longer marked as System

    - by Mark
    I recently posted a question to Super User about accidentally marking my external HDD's partition as Active and how I could undo my accidental mistake. I followed the instructions provided and they worked fine. This involved some command line magic and from what I understand, I did not have to really do this, but I just wanted to get things back to how they were originally. After making the fix things went back to normal in disk management. After I restarted my computer though i had an issue: BOOTMGR is missing Press Ctrl+Alt+Del to restart Rugh roh! I brought my laptop to work so I could search for a solution on my work computer and I found a nice guide on fixing the issue. To summarize the instructions, I had to reboot with my Windows 7 install disc and click the Repair button. Once there I could then repair the start-up options. One of the commenters on the site claimed you need to do this twice, as the first time the "repair" doesn't actually fix it. I found this to be true as well. I tried to repair it and it did some work, then rebooted. I then got the same error again. I booted from the CD again and repaired the start-up options then after this second time Windows started to boot up. Before the restart I got a nice info window telling me that it did make repairs to the boot info (this was promising). I've been using Windows 7 for a few days now with no problem, but I just recently noticed that I now can see the System Reserved partition in Computer: (click for full size) I immediately went to disk management to see what was up. I noticed that my System Reserved partition is no longer marked as System and instead I believe the repair operation made my C: drive the system partition. I'm not fully aware of what the System partition really is but I briefly read that its a Windows 7 thing that gets created on install of Win7 that writes some BitLocker encryption stuff to a isolated partition as well as some boot files. (click for full size) How can I undo this and make the System Partition marked as System instead of my OS C: partition? How can I make it so that I don't see this partition in Computer (I believe fixing #1 will fix this) What are the implications of what the current state is and the fact that I can now browse into this new partition? Thanks in advance.
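    A hedged sketch of the usual way to push the boot files (and with them the System flag) back onto the 100 MB partition; the disk and partition numbers and the temporary letter are assumptions, so confirm them with "list partition" first, and keep the install DVD handy in case the next boot needs another Startup Repair.
      diskpart
        select disk 0
        list partition
        select partition 1          (the System Reserved partition)
        active                      (mark it Active again)
        assign letter=R             (temporary letter so bcdboot can write to it)
        exit
      bcdboot C:\Windows /s R:      (puts the boot files back on System Reserved)
      diskpart
        select volume R
        remove letter=R             (removing the letter also hides it from Computer again)
    After a reboot, Disk Management should show System Reserved as Active/System and the C: volume as Boot only.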

    Read the article

  • Nginx performance degrades over time

    - by Rylea Stark
    So here's the situation, I run a small cluster, Dedicated box for MySQL, and a dedicated PHP-FPM/NGINX box, Nginx talks to php-fpm via socket, As far as i can tell the problem does not lie in php-fpm, it lies somewhere in my configuration. What happens, is the site loads instant for a few moments after starting and slowly starts to degrade to load times of greater than 2 seconds, eventually taking 12 seconds to complete a load, PHP is configured to close a child after 175 requests, and spawn 20 at start and have a max of 60. Not really sure where the bottle neck is, most of my code is optimized and works flawlessly, but these issues with nginx will most likely force me to switch back over to Apache, And I really dont want to do that, NGINX.conf configuration below. user www-data; worker_processes 4; worker_cpu_affinity 0001 0010 0100 1000; error_log /var/log/nginx/error.log; pid /var/run/nginx.pid; events { worker_connections 512; multi_accept on; use epoll; } http { include /etc/nginx/mime.types; access_log /var/log/nginx/access.log; resolver_timeout 5s; satisfy all; ## Size Limits limit_zone brainbug $binary_remote_addr 5m; client_body_buffer_size 8k; client_header_buffer_size 75M; client_max_body_size 1k; large_client_header_buffers 2 1k; ## Timeouts client_body_timeout 60; client_header_timeout 60; keepalive_timeout 60; send_timeout 60; ## General Options ignore_invalid_headers on; recursive_error_pages on; sendfile on; server_name_in_redirect off; server_tokens off; ## TCP options tcp_nodelay on; #tcp_nopush on; output_buffers 128 512k; gzip on; gzip_http_version 1.0; gzip_comp_level 7; gzip_proxied any; gzip_min_length 0; gzip_buffers 32 32k; gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg image/png image/gif; ## Disable GZIP for MSIE 1-6 gzip_disable "MSIE [1-6].(?!.*SV1)"; ## Set a vary header so downstream proxies don't send cached gzipped content to IE6 gzip_vary on; include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; }
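    Two of the client buffer directives in the config above look transposed, which is worth ruling out before blaming PHP-FPM; a hedged correction (the exact values are only suggestions):
      client_max_body_size        75M;   # was 1k - any request body over 1 KB gets rejected
      client_header_buffer_size   1k;    # was 75M - that much is requested per header read
      large_client_header_buffers 4 8k;  # was 2 1k, which is below nginx's own default
    If load times still creep up after that, graphing php-fpm's status page (pm.status_path) over time would show whether the slowdown is in the pool rather than in nginx.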

    Read the article

  • Firefox can't establish a connection to the server at www.google.com

    - by Tom
    My home page in Firefox [v4.0] and Internet Explorer [v9.0.8112.16421, Update Versions RTM (KB982861)] is currently set to Google but when I depress the quick start icon to start up either browser, I am getting the following immediate results: Unable to connect (In Firefox) Firefox can't establish a connection to the server at www.google.com. The site could be temporarily unavailable or too busy. Try again in a few moments. If you are unable to load any pages, check your computer's network connection. If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web. Internet Explorer cannot display the webpage What you can try: Diagnose Connection Problems More information This problem can be caused by a variety of issues, including: Internet connectivity has been lost. The website is temporarily unavailable. The Domain Name Server (DNS) is not reachable. The Domain Name Server (DNS) does not have a listing for the website's domain. There might be a typing error in the address. If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section. For offline users You can still view subscribed feeds and some recently viewed webpages. To view subscribed feeds: Click the Favorites button , click Feeds, and then click the feed you want to view. To view recently visited webpages (might not work on all pages): Press Alt, click File, and then click Work Offline. Click the Favorites button, click History, and then click the page you want to view. Thankfully, I am able to use one browser that I have installed on my computer (Mathon v3.0.20.5000) to search online for technical assistance in this matter. I have seen several WinSock error issues mentioned; but, they are pointing to Windows XP and I am using Windows 7 Pro and remain uncertain whether anything identified as a fix for one OS will work in another. Things I've tried: HiJackThis Complete scan with Avira AntiVirus Premium. What am I overlooking? What should I do to address this problem?
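    Since another browser on the same machine still works, this looks more like a local stack or proxy problem than an outage; a commonly suggested first step on Windows 7 is to reset the DNS cache and the Winsock/TCP-IP stack from an elevated command prompt, then reboot (a hedged suggestion, not a guaranteed fix).
      ipconfig /flushdns
      netsh winsock reset
      netsh int ip reset
    It is also worth confirming that no stray proxy is set under Internet Options > Connections > LAN settings, since Firefox's default "Use system proxy settings" would inherit it.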

    Read the article

  • Getting a boot error when starting computer

    - by Rob Avery IV
    I was in the middle of watching a movie on Netflix when suddenly everything started crashing. First explorer.exe closed down, then Google Chrome. I had multiple things running in the background (Steam, Raptr, etc.), and individually each of those apps closed down as well. When they did, a small dialog box popped up for each of them, one at a time, saying that it was missing a file, couldn't run anymore, or something similar, along with some jumbled-up "code" of numbers and letters that I couldn't read. Ever since then, every time I turn my computer on it runs for a few seconds and gives this error: "Reboot and select proper boot device or insert boot media in selected boot device and press a key_". No matter how many times I reboot, it always gives me the same error. A day after this happened I was able to start the computer, but before it booted it told me that I hadn't shut down the computer properly and asked how I wanted to run the OS (Run Windows in Safe Mode, Run Windows Normally, etc.). Once I logged in, everything went SUPER slow and crashed almost instantly. The only thing I opened was Microsoft Security Essentials, and I got in about two clicks before it was "Not Responding". After that the whole computer froze and I had to restart it. Now it's back to saying what it originally said: "Reboot and select proper boot device or insert boot media in selected boot device and press a key_". I built this PC back in February 2012. Here are the specs:
      OS: Windows 7 Ultimate
      CPU: AMD 8-core
      GPU: Nvidia GeForce GTX 560 Ti
      RAM: 16GB
      Hard drive: Hitachi Deskstar 750GB
    I'm usually very good about taking care of my PC. I don't download anything that isn't from a trusted site or source, I don't open spam email, and I don't visit harmful sites (porn, movie streaming and the like). I'm very careful with what I do on this PC and don't do many different things with it. I use it pretty often, especially for video games and doing homework in Eclipse. Also good to note: I don't have Norton or any other third-party antivirus installed. I do have Microsoft Security Essentials installed but never ran a scan. Thanks!
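    If the BIOS still detects the drive, the standard Windows 7 repair-console steps are worth a try from the install DVD's command prompt - a hedged sketch, not a guaranteed fix, since the symptoms (apps dying over missing files, then no boot device) would also fit the Deskstar failing, so check its SMART data from a bootable diagnostic as well.
      bootrec /FixMbr
      bootrec /FixBoot
      bootrec /RebuildBcd
      chkdsk C: /r      (remaps bad sectors; can take hours on a 750 GB drive)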

    Read the article

  • Package problems after upgrade

    - by Dan
    I installed a new upgrade on ubuntu which seemed to fail near the end. Now I'm being told that an error has occurred and to please run apt-get to see what's wrong. After some further tries with that I eventually gave up. It seems there's a left over latex package(?) somewhere and I can't seem to get rid of or fix it. Here's an example: blank@ubuntu:~$ sudo apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done You might want to run 'apt-get -f install' to correct these. The following packages have unmet dependencies: tex-common : Breaks: texlive-common (< 2010) but 2009-15 is installed texlive-base : Depends: texlive-binaries (>= 2012.20120516) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-doc-base : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-extra-utils : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed texlive-font-utils : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-generic-recommended : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-latex-base : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-latex-base-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-latex-extra : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-latex-extra-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-latex-recommended : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-latex-recommended-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-luatex : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-pictures : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-pictures-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-pstricks : Depends: texlive-binaries (>= 2012-0) but 2009-11ubuntu2 is installed Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed texlive-pstricks-doc : Depends: texlive-common (>= 2012.20120516) but 2009-15 is installed zlib1g : Breaks: texlive-binaries (< 2009-12) but 2009-11ubuntu2 is installed zlib1g:i386 : Breaks: texlive-binaries (< 2009-12) but 2009-11ubuntu2 is installed E: Unmet dependencies. Try using -f. I've seen similar errors on the site here, but not close enough that i could get it fixed. Any help would be great, Thanks
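    A hedged order of operations for untangling the half-upgraded TeX Live packages; the removal step takes the stale 2009-era texlive-common and texlive-binaries out of the way so apt can bring in the matching 2012 versions, and apt lists exactly what it intends to remove before doing anything.
      sudo dpkg --configure -a
      sudo apt-get -f install
      # if the 2009 packages still block the upgrade:
      sudo apt-get remove texlive-common texlive-binaries
      sudo apt-get install texlive texlive-latex-extra
      sudo apt-get dist-upgrade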

    Read the article

  • Bluetooth not detecting any devices in Windows 7

    - by underDog
    My Lenovo ThinkPad E320 Laptop running Windows 7 64bit has recently been refusing to detect any Bluetooth devices. I have tried to connect, using 'Add devices' under 'devices and printers' to two different Bluetooth mice and my HTC Wildfire android (2.2.1) phone and none of them are detected in the 'Add a device' dialog. History - Bluetooth initially seemed OK when I first got this laptop. I was able to connect to and use my android phone as a remote with no issues. I got my first Bluetooth mouse, it paired, but after each restart, or even after sleeping, it would not 're-connect' (even though it was listed under Bluetooth devices and supposedly 'working'), and I would need to remove the device and add it again. A week or two ago it stopped working all together. It is not detected at all. I gave up on the mouse and bought another (Lenovo ThinkPad brand) only to find that it was not detected either. I subsequently tested my Android phone and discovered it would not detect either. One thing of note is under 'Devices and Printers' there is listed a 'HID Keyboard Device' which under properties is listed as a 'Bluetooth HID Device'. This was not previously there before this problem started. Each time I remove it, or uninstall it from Device Manager it will quickly re-install itself, even with all my Bluetooth devices switched off. My (Google and searching this site) research of this issue has not yielded any definitive answers. I have changed the setting under Device Manager - Bluetooth - Properties - Power Management - 'Allow the computer to turn off this device to save power' to off. I have attempted to un-install and re-install the Bluetooth hardware, including the 'remove drivers' option and downloading and running the Lenovo Bluetooth installer package (found @ http://support.lenovo.com/en_US/downloads/detail.page?DocID=DS014997). Bluetooth is turned on. All items under Bluetooth properties (Discovery and Connections) are checked. I have tried changing the batteries. I'm not sure what else I can try, apart from perhaps doing a fresh install of Windows. Any suggestions?

    Read the article

  • Joining Windows 7 Professional to a Windows Server 2003 R2 x64 domain fails

    - by Vinko Vrsalovic
    I have a windows 7 professional (spanish) laptop trying to join a Windows Server 2003 (english) domain. It detect correctly the SRV record, finding the proper domain controller, but then the join fails with the error message (snippet, because the error is in spanish) An Active Directory Domain Controller for This Domain Could Not be Contacted The DNS is correctly set, and client can ping by name and IP the server, the server can ping the client by IP. I've tested with the FW down to no avail. A host of other XP Pro clients are connected to the domain. I've restarted Net Logon and checked that Windows Time is up. Also the times are in sync between the server and the client. I'll put below diagnostics output. I'm wondering if there's anything special to be done on either the server or the client to have a Win 7 Pro join a 2k3 R2 domain. The following diagnostic information follows: netdiag /q for the DC dcdiag on the DC ipconfig /all on the Win 7 client netdiag /q on the DC: .................................. Computer Name: HI-X2 DNS Host Name: hi-x2.hi.local System info : Microsoft Windows Server 2003 R2 (Build 3790) Processor : EM64T Family 6 Model 23 Stepping 10, GenuineIntel List of installed hotfixes : KB923561 KB924667-v2 KB925398_WMP64 KB925902 KB926122 KB927891 KB929123 KB930178 KB932168 KB936357 KB938127 KB941569 KB942830 KB942831 KB943055 KB943460 KB944338-v2 KB944653 KB945553 KB946026 KB948496 KB950760 KB950762 KB950974 KB951066 KB951748 KB952004 KB952069 KB952954 KB954155 KB954550-v7 KB955069 KB955759 KB956572 KB956802 KB956803 KB956844 KB958469 KB958644 KB958869 KB959426 KB960225 KB960803 KB960859 KB961063 KB961118 KB961501 KB967715 KB967723 KB968389 KB968816 KB969059 KB969947 KB970238 KB970430 KB970483 KB971032 KB971468 KB971657 KB971737 KB971961 KB971961-IE8 KB972270 KB973037 KB973354 KB973507 KB973540 KB973687 KB973815 KB973825 KB973869 KB973904 KB973917-v2 KB974112 KB974318 KB974392 KB974571 KB975025 KB975467 KB975560 KB975713 KB976662-IE8 KB977290 KB977816 KB977914 KB978037 KB978262 KB978338 KB978542 KB978601 KB978706 KB979306 KB979309 KB979683 KB980182 KB980182-IE8 KB980232 KB980302-IE8 KB981332-IE8 KB981350 Q147222 Per interface results: Adapter : Local Area Connection Host Name. . . . . . . . . : hi-x2.hi.local IP Address . . . . . . . . : 10.0.1.199 Subnet Mask. . . . . . . . : 255.0.0.0 Default Gateway. . . . . . : 10.0.1.1 Dns Servers. . . . . . . . : 10.0.1.199 WINS service test. . . . . : Skipped Global results: [WARNING] You don't have a single interface with the 'WorkStation Service', 'Messenger Service', 'WINS' names defined. DNS test . . . . . . . . . . . . . : Passed PASS - All the DNS entries for DC are registered on DNS server '10.0.1.199'. IP Security test . . . . . . . . . : Skipped The command completed successfully dcdiag on the DC: Domain Controller Diagnosis Performing initial setup: Done gathering initial info. Doing initial required tests Testing server: Default-First-Site-Name\HI-X2 Starting test: Connectivity ......................... HI-X2 passed test Connectivity Doing primary tests Testing server: Default-First-Site-Name\HI-X2 Starting test: Replications ......................... HI-X2 passed test Replications Starting test: NCSecDesc ......................... HI-X2 passed test NCSecDesc Starting test: NetLogons ......................... HI-X2 passed test NetLogons Starting test: Advertising ......................... HI-X2 passed test Advertising Starting test: KnowsOfRoleHolders ......................... 
HI-X2 passed test KnowsOfRoleHolders Starting test: RidManager ......................... HI-X2 passed test RidManager Starting test: MachineAccount ......................... HI-X2 passed test MachineAccount Starting test: Services ......................... HI-X2 passed test Services Starting test: ObjectsReplicated ......................... HI-X2 passed test ObjectsReplicated Starting test: frssysvol ......................... HI-X2 passed test frssysvol Starting test: frsevent ......................... HI-X2 passed test frsevent Starting test: kccevent ......................... HI-X2 passed test kccevent Starting test: systemlog ......................... HI-X2 passed test systemlog Starting test: VerifyReferences ......................... HI-X2 passed test VerifyReferences Running partition tests on : ForestDnsZones Starting test: CrossRefValidation ......................... ForestDnsZones passed test CrossRefValidation Starting test: CheckSDRefDom ......................... ForestDnsZones passed test CheckSDRefDom Running partition tests on : DomainDnsZones Starting test: CrossRefValidation ......................... DomainDnsZones passed test CrossRefValidation Starting test: CheckSDRefDom ......................... DomainDnsZones passed test CheckSDRefDom Running partition tests on : Schema Starting test: CrossRefValidation ......................... Schema passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Schema passed test CheckSDRefDom Running partition tests on : Configuration Starting test: CrossRefValidation ......................... Configuration passed test CrossRefValidation Starting test: CheckSDRefDom ......................... Configuration passed test CheckSDRefDom Running partition tests on : hi Starting test: CrossRefValidation ......................... hi passed test CrossRefValidation Starting test: CheckSDRefDom ......................... hi passed test CheckSDRefDom Running enterprise tests on : hi.local Starting test: Intersite ......................... hi.local passed test Intersite Starting test: FsmoCheck ......................... hi.local passed test FsmoCheck ipconfig /all on the Windows 7 client: Configuraci¢n IP de Windows Nombre de host. . . . . . . . . : hi-p6 Sufijo DNS principal . . . . . : Tipo de nodo. . . . . . . . . . : h¡brido Enrutamiento IP habilitado. . . : no Proxy WINS habilitado . . . . . : no Adaptador de LAN inal mbrica Conexi¢n de red inal mbrica: Sufijo DNS espec¡fico para la conexi¢n. . : Descripci¢n . . . . . . . . . . . . . . . : Intel(R) WiFi Link 5100 AGN Direcci¢n f¡sica. . . . . . . . . . . . . : 00-22-FB-63-47-A0 DHCP habilitado . . . . . . . . . . . . . : no Configuraci¢n autom tica habilitada . . . : s¡ Direcci¢n IPv4. . . . . . . . . . . . . . : 10.0.1.42(Preferido) M scara de subred . . . . . . . . . . . . : 255.255.255.0 Puerta de enlace predeterminada . . . . . : 10.0.1.1 Servidores DNS. . . . . . . . . . . . . . : 10.0.1.199 NetBIOS sobre TCP/IP. . . . . . . . . . . : habilitado Adaptador de Ethernet Conexi¢n de  rea local: Estado de los medios. . . . . . . . . . . : medios desconectados Sufijo DNS espec¡fico para la conexi¢n. . : Descripci¢n . . . . . . . . . . . . . . . : Realtek PCIe GBE Family Controller Direcci¢n f¡sica. . . . . . . . . . . . . : 00-1E-33-1F-35-B1 DHCP habilitado . . . . . . . . . . . . . : s¡ Configuraci¢n autom tica habilitada . . . : s¡ Adaptador de t£nel isatap.{8926581E-09AC-4123-906B-DA6386AD2D60}: Estado de los medios. . . . . . . . . . . 
: medios desconectados Sufijo DNS espec¡fico para la conexi¢n. . : Descripci¢n . . . . . . . . . . . . . . . : Adaptador ISATAP de Microsoft Direcci¢n f¡sica. . . . . . . . . . . . . : 00-00-00-00-00-00-00-E0 DHCP habilitado . . . . . . . . . . . . . : no Configuraci¢n autom tica habilitada . . . : s¡ Adaptador de t£nel Teredo Tunneling Pseudo-Interface: Sufijo DNS espec¡fico para la conexi¢n. . : Descripci¢n . . . . . . . . . . . . . . . : Teredo Tunneling Pseudo-Interface Direcci¢n f¡sica. . . . . . . . . . . . . : 00-00-00-00-00-00-00-E0 DHCP habilitado . . . . . . . . . . . . . : no Configuraci¢n autom tica habilitada . . . : s¡ Direcci¢n IPv6 . . . . . . . . . . : 2001:0:5ef5:73ba:1cec:3883:f5ff:fed5(Preferido) V¡nculo: direcci¢n IPv6 local. . . : fe80::1cec:3883:f5ff:fed5%13(Preferido) Puerta de enlace predeterminada . . . . . : :: NetBIOS sobre TCP/IP. . . . . . . . . . . : deshabilitado
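    Two quick checks from the Windows 7 client can show whether it really reaches the 2003 DC during the join; a hedged sketch using the names from the diagnostics above.
      nslookup -type=SRV _ldap._tcp.dc._msdcs.hi.local 10.0.1.199
      nltest /dsgetdc:hi.local
    If the SRV query answers but nltest still cannot locate a domain controller, the search usually narrows to name resolution on the wireless adapter (the only connected one here) or to LDAP/Kerberos traffic being filtered between the client and the server.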

    Read the article

  • Apache won't follow Symlink

    - by Marvin Dickhaus
    I have a LAMP server (Ubuntu 12.10) setup on my development machine. It is a T60 modified with an SSD. The server base is in /var/www. Apache has the following config: DocumentRoot /var/www <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews SymLinksIfOwnerMatch AllowOverride all Order allow,deny allow from all </Directory> I'm currently developing a SilverStripe CMS featured site. The folder for the server is /var/www/sfk/. The framework and all cms relavant features are in their respective folders. The only folder that need to be modified would be the /var/www/sfk/mysite folder. Because of that I want to keep the mysite folder under my home directory and symlink it into the server folder. So here is what I've done: ln -s ~/sfk/mysite/ /var/www/sfk/ sudo chgrp www-data /var/www/sfk/mysite -R ls tells me the following: /var/www/sfk (exerpt) drwxr-xr-x 3 marvin www-data 4096 Nov 16 16:53 assets drwxr-xr-x 12 marvin www-data 4096 Nov 16 16:53 cms drwxr-xr-x 29 marvin www-data 4096 Nov 16 16:53 framework -rw-r--r-- 1 marvin www-data 2410 Nov 16 16:53 index.php lrwxrwxrwx 1 marvin www-data 24 Nov 20 17:45 mysite -> /home/marvin/sfk/mysite/ -rw-rw-r-- 1 marvin www-data 514 Nov 16 16:55 _ss_environment.php drwxr-xr-x 4 marvin www-data 4096 Nov 16 16:53 themes and ls /var/www/sfk/mysite/ drwxrwxr-x 6 marvin www-data 4096 Nov 16 00:15 code drwxrwxr-x 2 marvin www-data 4096 Nov 16 11:51 _config -rwxrwxr-x 1 marvin www-data 2685 Nov 16 15:39 _config.php drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 css drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 images drwxrwxr-x 2 marvin www-data 4096 Nov 16 00:15 javascript drwxrwxr-x 5 marvin www-data 4096 Nov 16 00:15 templates This is literally the same setup I have on my desktop machine. The problem I have is that the mysite/ folder is just not recognized. I'm thankful for every advice I get. I'm frustrated because I'm stuck with this issue for hours.
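    Since the link target lives under /home/marvin, the www-data user also needs execute (traverse) permission on every directory above it - and an encrypted (ecryptfs) home directory blocks this entirely. A hedged check and fix:
      sudo -u www-data ls /home/marvin/sfk/mysite/    # "Permission denied" confirms a traverse problem
      chmod o+x /home/marvin /home/marvin/sfk
      # or, without opening the home directory to everyone:
      sudo adduser www-data marvin
      chmod g+x /home/marvin /home/marvin/sfk
    If the permissions check out, the other usual suspect is the Options line: SymLinksIfOwnerMatch only follows links whose target is owned by the same user as the link itself, so the ownership of the symlink and of ~/sfk/mysite must match.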

    Read the article

  • Some emails are being delivered, some returned

    - by Tom Broucke
    I have my own VPS where a site is running (control panel: directadmin). When I send mails, some are being delivered (hotmail, gmail, [email protected] ,...), others are not ([email protected]), others are delivered after being greylisted ([email protected]). /var/log/exim/mainlog What could be the cause of this? Is the problem Sender-Side or Receiver-Side? case 1: [email protected] (delivered) 2012-06-20 15:02:03 1ShKXr-0005Sc-7g <= [email protected] U=apache P=local S=1319 T="Password reset" from <[email protected]> for [email protected] 2012-06-20 15:02:03 1ShKXr-0005Sc-7g gmail-smtp-in-v4v6.l.google.com [2a00:1450:8005::1b] Network is unreachable 2012-06-20 15:02:03 1ShKXr-0005Sc-7g => [email protected] F=<[email protected]> R=lookuphost T=remote_smtp S=1355 H=gmail-smtp-in-v4v6.l.google.com [173.194.67.27] X=TLSv1:RC4-SHA:128 C="250 2.0.0 OK 1340196103 cp4si34336466wib.14" 2012-06-20 15:02:03 1ShKXr-0005Sc-7g Completed case 2: [email protected] (not being delivered) 2012-06-21 09:57:14 1ShcGQ-0007No-5H <= [email protected] H=localhost ([91.230.245.141]) [127.0.0.1] P=esmtpa A=login:[email protected] S=740 [email protected] T="hey" from <[email protected]> for [email protected] 2012-06-21 09:57:14 1ShcGQ-0007No-5H ** [email protected] F=<[email protected]> R=virtual_aliases: 2012-06-21 09:57:14 1ShcGQ-0007Nt-7Z <= <> R=1ShcGQ-0007No-5H U=mail P=local S=1546 T="Mail delivery failed: returning message to sender" from <> for [email protected] 2012-06-21 09:57:14 1ShcGQ-0007No-5H Completed 2012-06-21 09:57:14 1ShcGQ-0007Nt-7Z => info <[email protected]> F=<> R=virtual_user T=virtual_localdelivery S=1643 2012-06-21 09:57:14 1ShcGQ-0007Nt-7Z Completed case 3: [email protected] (greylisted) 2012-06-21 15:29:02 1ShhRW-000862-BV <= [email protected] H=localhost ([91.230.245.141]) [127.0.0.1] P=esmtpa A=login:[email protected] S=782 [email protected] T="testmail squirrel" from <[email protected]> for [email protected] 2012-06-21 15:29:02 1ShhRW-000862-BV SMTP error from remote mail server after RCPT TO:<[email protected]>: host mx-cluster-b1.one.com [195.47.247.194]: 450 4.7.1 <[email protected]>: Recipient address rejected: Greylisted for 5 minutes 2012-06-21 15:29:02 1ShhRW-000862-BV == [email protected] R=lookuphost T=remote_smtp defer (-44): SMTP error from remote mail server after RCPT TO:<[email protected]>: host mx-cluster-b2.one.com [195.47.247.195]: 450 4.7.1 <[email protected]>: Recipient address rejected: Greylisted for 5 minutes Notice that the "from" in case1 differs in case2: [email protected] or [email protected]. Thanks for your time!
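    In the failing case the message dies on the virtual_aliases router before Exim ever attempts a remote delivery, which usually means the server believes otherdomain.com is one of its own local/virtual domains and then finds no matching alias. A hedged way to confirm the routing on a DirectAdmin box (the file path is DirectAdmin's usual location):
      exim -bt [email protected]              # which router/transport would handle it
      exim -d+route -bt [email protected]      # same, with routing debug output
      grep -i otherdomain.com /etc/virtual/domains    # is the domain listed as locally hosted?
    If the domain shows up there (perhaps left over from an old account), removing it from the local/virtual domain list lets Exim route that mail out via an MX lookup like the others.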

    Read the article

  • Computer freezes for about 30 seconds, suspicion on SSD

    - by Robert vE
    My computer freezes for about 30 seconds, this happens occasionally. When it happens I can still move the mouse, sometimes even alternate between tabs in google chrome. If I try to open windows explorer nothing happens. Also chrome rapports "waiting for cache". It also happens in starcraft II, during which the sounds loops. I have made a Trace as this topic describes: How do I troubleshoot a Windows 7 freeze or slowness? Trace: https://docs.google.com/open?id=0B_VkKdh535p6NklhSDdBLURUMnc I have looked at it, but I couldn't figure it out. My system specs are: AMD Athlon X4 651 Asus Ati HD6670 ADATA SSD sp900 Asus f1a55 mainboard 4 GB crucial 1333 ram 500 watt atx ps I'm running Windows 7, fully updated. Any help is much appreciated. Update: I tried something before your reply that may have helped the problem. I don't know for sure if it has, it's too soon to tell. A bit of history first. I had problems installing win7 on my ssd from the start. In IDE mode it worked, but I had the same problems as above. AHCI was a total fail, with it on before install as well as turning it on after install (including tweaking register). I didn't bother installing the AMD chipset/AHCI as it was reported to have no TRIM function and thus make problems worse. Eventually I did install the AMD SMbus driver as the stability issues were driving me crazy. It worked, no more issues, until I installed some extra drivers and software. Audio/LAN/ASUS suite, I don’t see the relation, but somehow it screwed up my system again. As a last effort I posted here on this site. After which the thought occurred to me turn on AHCI again as by now I had all necessary drivers installed anyway. (plus all windows updates downloaded/installed in the meantime) I did and stability didn’t seem great the first few reboots, but eventually everything seemed to work great. I tried to play starcraft II – an almost guaranteed freeze before – and I had no problems. I’m basically crossing my fingers and hope the problem is gone for good. I still think it has something to do with my SSD. In my research into the problem I noticed a lot of these issues with sandforce 2281 firmware, the exact same firmware I have. People reported the same problem that I had, freezes. Additionally they reported that during a freeze the hdd light stayed on, I noticed after I read this that this happened with my computer as well. None of this is conclusive evidence that my SSD is really to fault, but it is suspicious. And why turning on AHCI would fix it I don’t know. Thank you Tom for taking a look, if the problem returns I will certainly do what you advised.

    Read the article

  • Ubuntu getting wrong hostname from DHCP

    - by sam
    When provisioning new Ubuntu Precise (12.04) servers, the hostname they get seems to be generated from the DNS search path rather than from a reverse lookup on the address. Take the following configuration. BIND is configured with both the forward and the reverse name.

    Forward zone:

    $TTL 600
    $ORIGIN srv.local.net.
    @              IN  SOA  ns0.local.net. hostmaster.local.net. ( 2014082101 10800 3600 604800 600 )
    @              IN  NS   ns0.local.net.
    @              IN  MX   5 mail.local.net.
    my-new-server  IN  A    10.32.2.30

    Reverse zone:

    @     IN  SOA  ns0.local.net. hostmaster.local.net. ( 2014082101 10800 3600 604800 600 )
    @     IN  NS   ns0.local.net.
    $ORIGIN 32.10.in-addr.arpa.
    30.2  IN  PTR  my-new-server.srv.local.net.

    dhcpd is configured to hand out static leases based on MAC addresses:

    subnet 10.32.2.0 netmask 255.255.254.0 {
      option subnet-mask 255.255.254.0;
      option routers 10.32.2.1;
      option domain-name-servers 10.32.2.1;
      option domain-name "util.of1.local.net of1.local.net srv.local.net";
      site-option-space "pxelinux";
      option pxelinux.magic f1:00:74:7e;
      if exists dhcp-parameter-request-list {
        option dhcp-parameter-request-list = concat(option dhcp-parameter-request-list,d0,d1,d2,d3);
      }
      group {
        option pxelinux.configfile "pxelinux.cfg/pxeboot";
        host my-new-server {
          fixed-address my-new-server.srv.local.net;
          hardware ethernet aa:aa:aa:bb:bb:bb;
        }
      }
    }

    So the hostname should be my-new-server.srv.local.net; however, a freshly built Ubuntu 12.04 node ends up as my-new-server.util.of1.local.net. Lucid (10.04) hosts get the correct hostname; the problem only appears on Precise/12.04. Forward and reverse lookups both return the correct result:

    Sams-MacBook-Pro:~ sam$ host my-new-server
    my-new-server.srv.local.net has address 10.32.2.30
    Sams-MacBook-Pro:~ sam$ host my-new-server.srv.local.net
    my-new-server.srv.local.net has address 10.32.2.30
    Sams-MacBook-Pro:~ sam$ host 10.32.2.30
    30.2.32.10.in-addr.arpa domain name pointer my-new-server.srv.local.net.

    The contents of the hosts file are wrong too:

    127.0.0.1 localhost
    127.0.1.1 my-new-server.util.of1.local.net of1.local.net srv.local.net my-new-server

    So when the hosts file is created, the entire DNS search path is put on the local address line, and the FQDN according to the server becomes the short hostname plus the first domain in the search path. Is there a way to get around this behaviour, or to fix it so the hostname is picked up correctly? It gets the first part of the hostname right; the rest is wrong.
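
    For what it's worth, the suspicious part is that option domain-name carries a whole space-separated search list; whatever assembles /etc/hosts during provisioning appears to paste that entire string in and treat its first token as the node's domain. A sketch of the two usual fixes (the option names are standard ISC dhcpd/dhclient, but the exact values are assumptions based on the config above):

    # On the DHCP server (dhcpd.conf): keep domain-name to a single domain and
    # move the search list into domain-search (RFC 3397)
    option domain-name "srv.local.net";
    option domain-search "util.of1.local.net", "of1.local.net", "srv.local.net";

    # Or on the clients (/etc/dhcp/dhclient.conf): pin the domain so the value
    # in the lease is ignored when the FQDN is assembled
    supersede domain-name "srv.local.net";

    Either way the reverse lookup should then match what ends up in /etc/hosts, assuming nothing else rewrites that file after boot.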

    Read the article

  • SQL Server database filled the hard drive and freeing up space isn't possible

    - by Jon
    I have a SQL Server 2008 database on a 1 TB drive and it has filled the drive; there are only 4 KB free. The MDF file is 323 GB and the LDF is 653 GB. The disk this database sits on holds nothing but the MDF and LDF, so freeing space on that drive is impossible. The main disk is smaller, but it has enough room to take the MDF, in case that helps. The server is overseas at a customer site and adding more disk space is not possible at the moment. Deleting records is also not possible, because the database is in a failed state (due to the lack of disk space) and doesn't respond to most commands.

    The database is currently in full recovery mode, which is why the LDF is so large. It really doesn't need full recovery, so going forward we plan to switch it to simple mode, which will save a lot of space. I also don't care about losing the LDF file, but I need all of the data. I've spent a lot of time looking for a way out of this, but everything I've found starts with either freeing up disk space or adding more, neither of which is an option right now. I'm stuck and any help would be greatly appreciated.

    I get the following when trying to bring the database online:

    Msg 945, Level 14, State 2, Line 3
    Database 'DBNAME' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
    Msg 5069, Level 16, State 1, Line 3
    ALTER DATABASE statement failed.
    Msg 1101, Level 17, State 12, Line 3
    Could not allocate a new page for database 'DBNAME' because of insufficient disk space in filegroup 'DEFAULT'. Create the necessary space by dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    I've found the following solutions, but none of them work with no disk space left on that drive, and since the database is in a failed state I can't run most commands:

    - DBCC SHRINKFILE: can't be run because 'use DBNAME' fails
    - Detaching the database and then changing the location of the MDF/LDF files: fails because the database is in an offline mode, so detach can't be run

    I'm at a loss about what else to try. Thanks.
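
    Since there is room on the other drive and the log contents are expendable, one approach that is commonly suggested for this situation is to repoint the LDF to the drive that has space, bring the database up, then switch to simple recovery and shrink the log. This is a sketch only: the logical file name DBNAME_log and the target path are assumptions, and an ALTER may still be refused while the database is in a failed state, so copy the MDF/LDF somewhere safe first if you possibly can.

    -- Find the logical file names first (run in sqlcmd or SSMS)
    SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('DBNAME');

    -- Repoint the log file to the drive with free space (metadata-only change)
    ALTER DATABASE DBNAME MODIFY FILE (NAME = DBNAME_log, FILENAME = 'D:\SQLLogs\DBNAME_log.ldf');

    -- Move the .ldf to that path, then try to bring the database back
    ALTER DATABASE DBNAME SET ONLINE;

    -- Once it is online, stop the log growth and reclaim the space
    ALTER DATABASE DBNAME SET RECOVERY SIMPLE;
    DBCC SHRINKFILE (DBNAME_log, 1024);

    If SET ONLINE still fails, rebuilding the log via CREATE DATABASE ... FOR ATTACH_REBUILD_LOG is the usual last resort, at the cost of any in-flight transactions.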

    Read the article

  • NIC is receiving, but not transmitting at all?

    - by Shtééf
    I'm trying to fix a very strange problem remotely on a machine at a customer site. The machine is a Dell PowerEdge, I believe a 1950 (not verified, but the lspci output matches the specs I found). It has two similar NICs, identified by lspci as Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet (rev 12) and using the bnx2 driver. (I suspect these are on-board and on the same controller, which is what I'm accustomed to for this type of machine.)

    The primary interface eth0 works perfectly, and is in fact how I am ssh'd in. The secondary interface eth1, however, is not transmitting. I can see this in the ifconfig output, where the TX counter is always 0. It is receiving, though: tcpdump shows ARP requests coming in from the ISP's gateway on the other side. The interface is physically connected to a Siemens BSTU4 modem configured by the ISP. The link is set to 10 Mbps full duplex without negotiation, as the ISP requested, and a small /30 subnet is configured. For the sake of anonymity, let's say the machine is 3.3.3.2/30 and the ISP's gateway is .1. The machine has no firewall settings whatsoever.

    Even running something like arping -I eth1 3.3.3.1, with tcpdump running alongside, shows no traffic whatsoever being transmitted on the interface. (But the other side keeps steadily sending ARP requests, and that is all that can be seen.) What could be causing this? Here is some output, anonymized, which may hopefully help:

    $ ethtool eth1
    Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes: 10baseT/Half 10baseT/Full 100baseT/Half 100baseT/Full 1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes: Not reported
        Advertised auto-negotiation: No
        Speed: 10Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 1
        Transceiver: internal
        Auto-negotiation: off
        Supports Wake-on: d
        Wake-on: d
        Link detected: yes

    $ ip link show eth1
    3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
        link/ether 00:15:c5:xx:xx:xx brd ff:ff:ff:ff:ff:ff

    $ ip -4 addr show eth1
    3: eth1: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
        inet 3.3.3.2/30 brd 3.3.3.3 scope global eth1

    $ ip -4 route show match 3.3.3.0/30
    3.3.3.0/30 dev eth1 proto kernel scope link src 3.3.3.2
    default via 10.0.0.5 dev eth0
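
    A few generic checks worth running next from the shell; nothing here is specific to this box, just standard ethtool/tc/tcpdump usage, and the gateway address is the placeholder from above:

    # Driver-level TX counters; tx_errors or tx_carrier_errors climbing while
    # tx_packets stays at 0 points below the IP layer (driver, firmware, cabling)
    ethtool -S eth1 | grep -i tx

    # Rule out a stuck transmit queue on the interface
    tc -s qdisc show dev eth1

    # Force traffic out while watching the wire; if no outgoing frame ever
    # appears, packets are being dropped before they reach the NIC
    tcpdump -n -e -i eth1 &
    arping -c 4 -I eth1 3.3.3.1

    If the TX counters never move at all, reloading the bnx2 module or updating the NIC firmware would be the next thing to try, keeping in mind that both ports use bnx2, so a module reload will also take down eth0 and the ssh session with it.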

    Read the article

  • dig @my-server-ip mydomain.com works from inside, not from outside?

    - by x4954
    My server has two IPs: x.x.x.73 and x.x.x.248. I can access my site via these IPs using a web browser. Now, from a CentOS machine (not my server), using a terminal, if I run:

    dig @x.x.x.73 mydomain.com
    dig @x.x.x.248 mydomain.com

    I get the result: "Connection timed out; no server could be reached." Could somebody please tell me how to fix this? Thank you.

    More information: if I log in to my server using ssh and run the same commands:

    dig @x.x.x.73 mydomain.com
    dig @x.x.x.248 mydomain.com

    I can see my zone shown as expected:

    ; <<>> DiG 9.3.6-P1-RedHat-9.3.6-16.P1.el5_7.1 <<>> @x.x.x.73 mydomain.com
    ; (1 server found)
    ;; global options: printcmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12757
    ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 2, ADDITIONAL: 2

    ;; QUESTION SECTION:
    ;mydomain.com. IN A

    ;; ANSWER SECTION:
    mydomain.com. 38400 IN A x.x.x.73
    mydomain.com. 38400 IN A x.x.x.248

    ;; AUTHORITY SECTION:
    mydomain.com. 38400 IN NS ns2.mydomain.com.
    mydomain.com. 38400 IN NS ns1.mydomain.com.

    ;; ADDITIONAL SECTION:
    ns1.mydomain.com. 38400 IN A x.x.x.73
    ns2.mydomain.com. 38400 IN A x.x.x.248

    ;; Query time: 20 msec
    ;; SERVER: x.x.x.73#53(x.x.x.73)
    ;; WHEN: Sun Jan 15 11:46:30 2012
    ;; MSG SIZE rcvd: 129

    BIND version 9.3.6, CentOS 5. Logged in to the server over ssh, doing a "dig google.com" also shows the expected results.
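
    Answering locally but timing out from outside usually comes down to named not listening on the public addresses, or port 53 being filtered. A few checks, assuming a stock BIND/iptables setup on CentOS 5; file paths and rules may differ on your box (with bind-chroot the config lives under /var/named/chroot):

    # Is named bound to the public IPs, or only to 127.0.0.1?
    netstat -lntup | grep :53

    # listen-on / allow-query decide which interfaces answer and who may ask
    grep -E 'listen-on|allow-query' /etc/named.conf

    # Is UDP/TCP port 53 actually open in the firewall?
    iptables -L -n | grep -w 53

    If listen-on is restricted to 127.0.0.1 or allow-query to localhost, adding the public addresses/networks and opening udp and tcp 53 should make the external dig behave like the local one.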

    Read the article

< Previous Page | 1117 1118 1119 1120 1121 1122 1123 1124 1125 1126 1127 1128  | Next Page >