Search Results

Search found 5560 results on 223 pages for 'brute force attacks'.


  • Google-Bot fell in love with my 404-page

    - by 32bitfloat
    Every day my access log looks kind of like this:

        66.249.78.140 - - [21/Oct/2013:14:37:00 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /vuqffxiyupdh.html HTTP/1.1" 404 1189 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    or this:

        66.249.78.140 - - [20/Oct/2013:09:25:29 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.75.62 - - [20/Oct/2013:09:25:30 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.78.140 - - [20/Oct/2013:09:25:30 +0200] "GET /zjtrtxnsh.html HTTP/1.1" 404 1186 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    The bot fetches robots.txt twice and then requests a file (zjtrtxnsh.html, vuqffxiyupdh.html, ...) which cannot exist and must return a 404 error. The same procedure repeats every day; only the nonexistent HTML filename changes. The content of my robots.txt:

        User-agent: *
        Disallow: /backend
        Sitemap: http://mysitesname.de/sitemap.xml

    The sitemap.xml is readable and valid, so there seems to be no reason why the bot should want to force a 404 error. How should I interpret this behaviour? Does it point to a mistake I've made, or should I just ignore it?
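
    A quick sanity check on the premise above: confirm that the server really answers these probes with a 404 status and not a "soft 404" (a 200 carrying an error page), which is exactly what random-URL probes are designed to detect. A minimal sketch using curl, with one of the probed filenames from the log:

        curl -sI http://mysitesname.de/zjtrtxnsh.html | head -n 1
        # expected: HTTP/1.1 404 Not Found
        # a 200 here would mean a soft 404 rather than a real error page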

    Read the article

  • apache 2.4, mod_proxy_fcgi not honouring .htaccess, work around needed

    - by user229874
    I am using Apache 2.4.7 with mod_proxy_fcgi to pass PHP through to php-fpm (this will be used in a shared hosting environment). The .htaccess works fine for non-PHP files, but once a request hits the rewrite rule that proxies PHP requests through, the .htaccess is ignored. I know why it is happening. The question is: how do I work around it? How do I force Apache to treat a request for a PHP file as a request for a local file, and then proxy it through? I have spent substantial time researching this problem, and the following "answers" keep being given as solutions:

    1) "Use the Apache configuration instead of .htaccess." This is a valid solution, but not for a shared hosting environment (I am not going to give shared hosting customers access to the Apache configuration ;)).

    2) "Don't use .htaccess, as it has performance/security/other issues." Well, how else would shared hosting customers control access and URL rewriting on their sites? Besides, if .htaccess were not a requirement I would simply use nginx.

    3) "Put the rewrite rule for the proxy inside of " - this is incorrect, and it does not work.

    This behaviour appears to be not a bug but a "feature", as per https://issues.apache.org/bugzilla/show_bug.cgi?id=54887
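
    For reference, later Apache releases (2.4.10 and up, so not the 2.4.7 in the question) can hand PHP to php-fpm through a handler instead of a rewrite-based proxy, which keeps per-directory .htaccess processing intact. A sketch, assuming php-fpm listens on 127.0.0.1:9000:

        <FilesMatch "\.php$">
            # evaluated per-request after .htaccess has been read,
            # unlike an early RewriteRule [P] or ProxyPassMatch
            SetHandler "proxy:fcgi://127.0.0.1:9000"
        </FilesMatch>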

    Read the article

  • Ubuntu Server mdadm drbd ocfs2 kvm hangs under heavy file reading

    - by Stefano Annese
    I have deployed four Ubuntu 10.04 servers. They are coupled two by two in a cluster scenario. On both sides we have software RAID1 disks, drbd8, and OCFS2, and on top of that several KVM machines run with qcow2 disks. I followed this: Link. corosync is just used for DRBD and OCFS; the KVM machines are run "manually". When it works it is fine: good performance, good I/O. But at a given moment one of the two clusters started hanging. Then we tried with just one server turned on, and it hangs the same way. It seems to happen when a heavy READ occurs in one of the virtual machines, that is, during an rsync backup. When this happens the virtual machines are no longer reachable, and the real server responds to pings with a long delay, but no screen and no ssh is available. All we can do is force a shutdown (hold the button) and restart, and when it comes up again the RAID that drbd relies on is resyncing. We see this every time it hangs. After a couple of weeks of pain on one side, this morning the other cluster hung as well, but it has a different motherboard, RAM, and KVM instances. What is similar is the rsync-reading scenario and the Western Digital RAID Edition disks on both sides. Can anybody give me some input to solve this issue?

    Read the article

  • How do I stop mail.app and its nag screen from opening upon login in OS X if I don't use it

    - by user26453
    Currently, whenever I start OS X (10.6.2), Mail.app starts up with a "Welcome to Mail" dialog, asking me to create an account by inputting a name, email address, and password. If I cancel this dialog, the app just hangs and I have to force quit it. I do not use Mail.app and I do not want it to start up with OS X. I have checked the login items and it is NOT present in the login items list for my account. I have also ctrl+clicked the Dock icon that appears and confirmed there is no option enabled for "Run at Login". If I go ahead and just click through the dialogs for a new account, I can get through to actually using Mail and accessing its preferences. I cannot find a startup option in Mail's preferences. After I have completed this, if I restart, Mail does NOT open automatically. However, as soon as I delete the account that I created, it once again goes back to popping up a "Welcome to Mail" dialog every time I start up and log in. As best as I can tell, OS X checks whether an account exists in the Mail app, and if one does not, it will always show the "Welcome to Mail" dialog on login, regardless of whether the Mail app is set to run via login items, etc. This is incredibly frustrating given that I have no intention of using the Mail app. I realize I can easily leave the account info in there (perhaps even disable the account via preferences), but this behavior is ridiculous.

    Read the article

  • Incorrect deployment of WSGI app to AWS using Elastic Beanstalk

    - by Dzmitry Zhaleznichenka
    cross-link to AWS forums

    I have developed a simple Python web service using WSGI and would like to deploy it to the AWS cloud using Elastic Beanstalk. My problem is that I cannot get all the options I specify in the Elastic Beanstalk configuration to be correctly applied in the cloud. For deployment, I use the Elastic Beanstalk CLI utility. I ran the eb init command and set up the required parameters. After this, a directory named .elasticbeanstalk was created in my source tree. It has two config files that are used for deployment, namely config and optionsettings. The latter contains, among other options, the WSGI configuration that is supposed to update /etc/httpd/conf.d/wsgi.conf on the instances. After some of my adjustments the file has the following settings:

        [aws:elasticbeanstalk:application:environment]
        DJANGO_SETTINGS_MODULE =
        PARAM1 =
        PARAM2 =
        PARAM4 =
        PARAM3 =
        PARAM5 =

        [aws:elasticbeanstalk:container:python]
        WSGIPath = handler.py
        NumProcesses = 2
        StaticFiles = /static=
        NumThreads = 10

        [aws:elasticbeanstalk:container:python:staticfiles]
        /static = static/

        [aws:elasticbeanstalk:hostmanager]
        LogPublicationControl = false

        [aws:autoscaling:launchconfiguration]
        InstanceType = t1.micro
        EC2KeyName = zmicier-aws

        [aws:elasticbeanstalk:application]
        Application Healthcheck URL =

        [aws:autoscaling:asg]
        MaxSize = 10
        MinSize = 1
        Custom Availability Zones =

        [aws:elasticbeanstalk:monitoring]
        Automatically Terminate Unhealthy Instances = true

        [aws:elasticbeanstalk:sns:topics]
        Notification Endpoint =
        Notification Protocol = email

    It turns out that not all of these options are honored when I start the environment or update it. When I update NumThreads or NumProcesses, the respective parameters get changed in wsgi.conf as expected. But whatever I write to the WSGIPath and StaticFiles parameters, the respective values in wsgi.conf never change; they remain

        Alias /static /opt/python/current/app/
        WSGIScriptAlias / /opt/python/current/app/application.py

    which drives me nuts. Moreover, when I deploy my application using git aws.push with the following contents in the .ebextensions/python.config file, none of the options I specify in it affect the deployment:

        option_settings:
          - namespace: aws:elasticbeanstalk:container:python
            option_name: WSGIPath
            value: mysite/wsgi.py
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumProcesses
            value: 5
          - namespace: aws:elasticbeanstalk:container:python
            option_name: NumThreads
            value: 25
          - namespace: aws:elasticbeanstalk:container:python:staticfiles
            option_name: /static/
            value: app/static/

    I wonder what I should do to force AWS to use all the parameters I specify in the configuration, namely the WSGIPath and the path to my static data.

    Read the article

  • ASA Slow IPSec Performance with Inconsistent Window Size

    - by Brent
    I have an IPSec link between two sites over ASA 5520s running 8.4(3), and I am getting extremely poor performance when traffic passes over the IPSec VPN. CPU on the devices is ~13%, memory at 408 MB, and active VPN sessions 2. The load on both of the devices is particularly low. Latency between the two sites is ~40 ms. Screenshot of a Wireshark capture of a file transfer between the two hosts over the IPSec VPN, performing at 10 Mbps; note the changing window size: http://imgur.com/wGTB8Cr Screenshot of a Wireshark capture of a file transfer between the two hosts not going over IPSec, performing at 55 Mbps with a constant window size: http://imgur.com/EU23W1e I'm seeing an inconsistent window size when transferring over the IPSec VPN, ranging from 46,796 to 65,535. When performing at 55+ Mbps, the window size is consistently 65,535. Does this indicate a problem in my configuration of the IPSec VPN on the ASA, or a layer 1/2 issue? Using ping xxxxxx -f -l, I finally get a non-fragmented packet at 1418 bytes, so 1418 + 28 for IP/ICMP headers = 1446. I know that I have 1500 set on the ASA and on Ethernet. I do have "Force Maximum segment size for TCP proxy connection to be" set to "1380" bytes under Configuration > Advanced > TCP Options on the ASA. Using iperf, I get a "TCP Window Full" every few seconds and ~3 Mbps performance: http://imgur.com/elRlMpY Show run on the ASA: http://pastebin.com/uKM4Jh76 Show crypto accelerator stats: http://pastebin.com/xQahnqK3
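
    For context, the 1380-byte setting mentioned above is the ASA's TCP MSS clamp: a 1500-byte MTU minus 20 (IP) minus 20 (TCP) leaves 1460, and roughly 80 further bytes of IPSec overhead lead to the conventional 1380. A sketch of inspecting and setting it from the ASA CLI:

        show running-config sysopt
        sysopt connection tcpmss 1380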

    Read the article

  • Apache configuration with virtual hosts and SSL on a local network

    - by Petah
    I'm trying to set up my local Apache configuration like so:

        http://localhost/ should serve ~/
        http://development.somedomain.co.nz/ should serve ~/sites/development.somedomain.co.nz/
        https://development.assldomain.co.nz/ should serve ~/sites/development.assldomain.co.nz/

    I only want to allow connections from our local network (the 192.168.1.* range) and myself (127.0.0.1). I have set up my hosts file with:

        127.0.0.1       localhost
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost
        127.0.0.1       development.somedomain.co.nz
        127.0.0.1       development.assldomain.co.nz
        127.0.0.1       development.anunuseddomain.co.nz

    My Apache configuration looks like:

        Listen 80
        NameVirtualHost *:80

        <VirtualHost development.somedomain.co.nz:80>
            ServerName development.somedomain.co.nz
            DocumentRoot "~/sites/development.somedomain.co.nz"
            DirectoryIndex index.php
            <Directory ~/sites/development.somedomain.co.nz>
                Options Indexes FollowSymLinks ExecCGI Includes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost localhost:80>
            DocumentRoot "~/"
            ServerName localhost
            <Directory "~/">
                Options Indexes FollowSymLinks ExecCGI Includes
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <IfModule mod_ssl.c>
            Listen *:443
            NameVirtualHost *:443
            AcceptMutex flock
            <VirtualHost development.assldomain.co.nz:443>
                ServerName development.assldomain.co.nz
                DocumentRoot "~/sites/development.assldomain.co.nz"
                DirectoryIndex index.php
                SSLEngine on
                SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
                SSLCertificateFile /Applications/XAMPP/etc/ssl.crt/server.crt
                SSLCertificateKeyFile /Applications/XAMPP/etc/ssl.key/server.key
                BrowserMatch ".*MSIE.*" \
                    nokeepalive ssl-unclean-shutdown \
                    downgrade-1.0 force-response-1.0
                <Directory ~/sites/development.assldomain.co.nz>
                    SSLRequireSSL
                    Options Indexes FollowSymLinks ExecCGI Includes
                    AllowOverride All
                    Order allow,deny
                    Allow from all
                </Directory>
            </VirtualHost>
        </IfModule>

    http://development.somedomain.co.nz/, http://localhost/, and https://development.assldomain.co.nz/ work fine. The problem is that when I request http://development.anunuseddomain.co.nz/ or http://development.assldomain.co.nz/, it responds with the same content as http://development.somedomain.co.nz/. I want it to deny all requests that do not match a virtual host's ServerName, and all requests to an https host that are made over plain http. PS: I'm running XAMPP on Mac OS X 10.5.8.
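
    On the "deny anything that doesn't match" part: with name-based vhosts, Apache serves the first vhost defined for the address set when no ServerName matches, so a catch-all default placed before the others can refuse unknown Host headers. A minimal sketch in the same Apache 2.2 syntax, assuming the vhosts are declared with a shared *:80 address set:

        <VirtualHost *:80>
            # first vhost for *:80 acts as the default for unmatched hosts
            ServerName default.invalid
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>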

    Read the article

  • gpg symmetric encryption using pipes

    - by Thomas
    I'm trying to generate keys to lock my drive (using dm-crypt with LUKS) by pulling data from /dev/random and then encrypting it with GPG. The guide I'm using suggests the following command:

        dd if=/dev/random count=1 | gpg --symmetric -a > ./[drive]_key.gpg

    If you run it without a pipe and feed it a file, it pops up an (n?)curses prompt for you to type in a passphrase. However, when I pipe in the data, it prints the following message four times and sits there frozen:

        pinentry-curses: no LC_CTYPE known assuming UTF-8

    It also says:

        can't connect to '/root/.gnupg/S.gpg-agent': File or directory doesn't exist

    however, I am assuming this has nothing to do with it, since it shows up even when the input is from a file. So I guess my question boils down to this: is there a way to force gpg to accept the passphrase from the command line, or in some other way get this to work, or will I have to write the data from /dev/random to a temporary file and then encrypt that file? (Which, as far as I know, should be alright, given that I'm doing this from the LiveCD and haven't yet created the swap, so there should be no way for it to be written to disk.)
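
    gpg can read the passphrase from a file descriptor instead of prompting, which sidesteps pinentry while stdin is busy carrying the key data. A sketch, assuming a classic gpg 1.x as typically found on LiveCDs; the passphrase file path is a hypothetical placeholder:

        dd if=/dev/random count=1 \
          | gpg --symmetric -a --batch --passphrase-fd 3 3< /tmp/passphrase \
          > ./drive_key.gpg
        # stdin (fd 0) carries the random data; fd 3 carries the passphrase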

    Read the article

  • Cache updates when migrating DNS from one provider to another

    - by JohnCC
    This may be a Windows DNS specific question or a general DNS best-practice question; I'm not sure! We migrated our third-party DNS provision from provider A to provider B. I noticed that our internal recursive Windows DNS servers still had NS records cached for our domains pointing to provider A's servers, even though I changed the nameservers with our registrar several days ago, and even though selecting the properties of the cached records shows a TTL of 1 day. After 24 hours, when the NS records in this cache have expired, will the DNS server go back to the TLD server for an update on the authority, or will it go by preference to dns1.providera.com, since that is what it has cached? In this case I arranged to leave provider A's servers up for a week to allow changes to propagate, so dns1.providera.com is still active and would still provide NS and SOA records saying that dns1.providera.com. is in charge of this domain. Given this, would the Windows DNS server ever go back to the TLD and pick up the authority changes, or would it just assume all is well and renew the timestamps on its cached NS records? I wonder what would be the best approach to ensuring that caches pick this up. Should I:

    (1) Leave provider A's servers in place and active and wait for caches to catch up: basically what we're doing now, which seems to have issues, perhaps specifically for Windows servers, or perhaps more widely.

    (2) Leave provider A's servers in place but change the NS and/or SOA information they provide, to tell caches that new servers are in charge.

    (3) Remove provider A's servers after 2*TTL to force remaining caches to update.

    The issue with (2) is that on provider A's system I can't seem to change the NS or SOA information to anything other than their servers. The issue with (3) is that I'm not sure how a DNS server would behave in this case. When it couldn't reach the cached name servers, would it flush its cache and try a full recursive lookup, or would it just return an error, forcing the user to clear the cache manually? Thanks in advance!
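
    One way to watch the cache catch up is to compare what the internal resolver holds against what the TLD currently delegates. A sketch using dig, from any machine that has it; the domain, resolver address, and TLD server are placeholders (a.gtld-servers.net serves com/net, so for another TLD substitute one of its own servers):

        dig +norecurse @192.168.0.10 example.com NS           # what the internal cache holds
        dig +norecurse @a.gtld-servers.net example.com NS     # what the TLD delegates now

    When the two answers agree, the cached delegation has expired and been refetched from the TLD.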

    Read the article

  • Wrong CSS MIME type with Roundcube 0.5 beta and nginx

    - by Julien Vehent
    I'm running into a CSS problem. This is a setup based on Debian Squeeze (nginx/0.7.67, php5/cgi) on which I installed the latest Roundcube 0.5 beta. PHP is processed properly and login works fine, but the CSS files are not loaded, and Firefox throws the following errors:

        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/common.css?s=1290600165 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0

        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/mail.css?s=1290156319 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0

    As far as I understand, nginx doesn't see the .css extension (because of the ?s= argument) and thus sets the MIME type to the default value, text/html. Should I fix this in nginx (and how?), or is it Roundcube-related?

    Edit: It seems that it's nginx-related. The content type isn't set for any type other than text/html. I had to manually include the following declarations to force the CSS and JS content types. That's ugly, and I never had this problem before... any idea?

        location ~ \.css {
            add_header Content-Type text/css;
        }
        location ~ \.js {
            add_header Content-Type application/x-javascript;
        }
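
    For what it's worth, nginx matches locations against the URI path only, so the ?s= query string does not hide the .css extension; the usual cause of everything coming back as text/html is a configuration that never loads nginx's type map, leaving default_type in charge. A sketch of the standard declaration, which would replace the per-extension add_header workaround above:

        http {
            include       /etc/nginx/mime.types;    # maps .css to text/css, .js to application/x-javascript
            default_type  application/octet-stream;
            # ... rest of the existing configuration
        }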

    Read the article

  • Identify differences between MP3 files

    - by Thingomy
    I have two old, similar directory trees with MP3 files in them. I am happily using tools like diff and rsync to identify and merge the files that are only present on one side or are identical, and I'm left with a bunch of files that are bitwise different. Running diff over a pair of actually-different files (with the -a flag to force text analysis) produces incomprehensible gibberish. I have listened to files from both sides, and they both seem to play fine (but at nearly 10 minutes per song, listening to each of them twice, I haven't done many). I suspect the differences are due to some player in the past "enhancing" my collection by messing about with ID3 tags, but I can't be certain. Even if I identify differences in the ID3 tags, I would like to confirm that no cosmic ray or file-copy error has damaged any of the files. One method that occurs to me is finding the byte locations of the differences and ignoring all changes in the first ~10 KB of each file, but I don't know how to do this. I have on the order of a hundred or so files that differ across the directory tree. I found "How to compare mp3, flac audio data in a file, ignoring header data (ID3 tag) etc.?" but I can't run AllDup since I'm on Linux only, and from the sound of it, it would only partially solve my issues anyway.
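
    One way to compare only the audio frames and ignore tag differences is to hash the decoded stream. A sketch assuming ffmpeg is available; paths are placeholders:

        ffmpeg -v quiet -i old-tree/song.mp3 -map 0:a -f md5 -
        ffmpeg -v quiet -i new-tree/song.mp3 -map 0:a -f md5 -
        # matching MD5 output means the audio data decodes identically,
        # even if the ID3 tags at the start or end of the files differ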

    Read the article

  • Expert iptables help needed?

    - by Asad Moeen
    After a detailed analysis, I collected these details. I am under a UDP flood which is application-dependent: I run a game server, and an attacker is flooding me with "getstatus" queries, which the game server answers; the replies generate output to the attacker's IP as high as 30 MB/s, plus server lag. Here are the packet details: the packet starts with 4 bytes of 0xff and then getstatus, so theoretically the packet looks like "\xff\xff\xff\xffgetstatus ". I have tried a lot of iptables variations, like state matching and rate limiting, but those didn't work. Rate limiting works well, but only when the server is not started; as soon as the server starts, no iptables rule seems to block it. Anyone got more solutions? Someone asked me to contact the provider and get it handled at the network/router level, but that looks very odd and I believe they might not do it, since it would also affect other clients. Responding to all those answers, I'd say: Firstly, it's a VPS, so the provider can't do it for me. Secondly, I don't care if something is coming in, but since the traffic is application-generated, there has to be an OS-level solution to block the outgoing packets; at least the outgoing ones must be stopped. Also, it's not DDoS, since just 400 KB/s of input generates 30 MB/s of output from my game server; that never happens in a DDoS, where a provider/hardware-level solution would be the right call, but this one is different. And yes, banning his IP stops the flood of outgoing packets, but he has many more IP addresses, as he spoofs his original, so I just need something to block him automatically. I have even tried a lot of firewalls, but as you know they are just front-ends to iptables, so if something doesn't work in iptables, what would a firewall do? These were the rules I tried:

        iptables -A INPUT -p udp -m state --state NEW -m recent --set --name DDOS --rsource
        iptables -A INPUT -p udp -m state --state NEW -m recent --update --seconds 1 --hitcount 5 --name DDOS --rsource -j DROP

    They work for attacks on unused ports, but when the server is listening and responding to the attacker's incoming queries, they never work. Okay Tom.H, your rules were working when I modified them somewhat, like this:

        iptables -A INPUT -p udp -m length --length 1:1024 -m recent --set --name XXXX --rsource
        iptables -A INPUT -p udp -m string --string "xxxxxxxxxx" --algo bm --to 65535 -m recent --update --seconds 1 --hitcount 15 --name XXXX --rsource -j DROP

    They worked very well for about 3 days: the string "xxxxxxxxxx" would be rate-limited and blocked if someone flooded, and the clients were not affected. But just today, I tried updating the chain to remove a previously blocked IP, and for that I had to flush the chain and restore the rules (iptables -X and iptables -F). Some clients were already connected to the servers, including me. Restoring the rules now blocks some of the clients' strings completely, while others are not affected. Does this mean I need to restart the server, or why else would this happen, given that the last time the rules were working there was no one connected?
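
    Since the attack is reflective (a small spoofed query triggers a large reply), one more angle is rate-limiting the replies themselves per destination, so spoofed sources get cut off automatically without manual banning. A sketch using the hashlimit match; the port 27960 and the thresholds are assumptions to substitute with the real game port and tuned values:

        iptables -A OUTPUT -p udp --sport 27960 \
          -m hashlimit --hashlimit-above 20/sec --hashlimit-burst 40 \
          --hashlimit-mode dstip --hashlimit-name status-replies -j DROP
        # tracks reply rate per destination IP; normal clients stay under the
        # limit, while a flooded reflection target trips it and gets dropped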

    Read the article

  • Mac OS Leopard: SyncServer process constantly using 100% CPU

    - by macca1
    I am running Leopard, upgraded from Tiger. I've noticed that every once in a while the SyncServer process starts up and eats all the CPU. The fans start going at full blast and the laptop slows to a crawl. I need to force quit the process from Activity Monitor to get it under control. It disappears for a while, but eventually gets started again. I do have an iPhone that I sync, so I'm wondering if SyncServer might be an Apple process checking for my phone being plugged in.

    Edit: Tried iSync and the manual resetsync as suggested, but got this output:

        Vince-2:~ vince$ /System/Library/Frameworks/SyncServices.framework/Versions/A/Resources/resetsync.pl full
        2010-03-12 08:03:50.230 perl[176:10b] SyncServer is unavailable: exception when connecting: connection timeout: did not receive reply
        PerlObjCBridge: NSException raised while sending reallyResetSyncData to NSObject object
            name: "ISyncServerUnavailableException"
            reason: "Can't connect to the sync server: NSPortTimeoutException: connection timeout: did not receive reply ((null))"
            userInfo: ""
            location: "/System/Library/Frameworks/SyncServices.framework/Versions/A/Resources/resetsync.pl line 16"
        ** PerlObjCBridge: dying due to NSException
        Vince-2:~ vince$

    And while that ran, SyncServer started spinning up to 95-100% CPU, just like it always does.

    Read the article

  • Is there an IE8 setting or policy to make it work like IE7 with respect to persistent connections?

    - by Stephen Pace
    I am working with a commercial application running on XP using IIS 5.1. Periodically the application returns the IIS error "There are too many people accessing the Web site at this time." This is caused by Microsoft artificially limiting the number of connections (10) under IIS 5.1 on Windows XP, but in this case there is really only one user (albeit with a few tabs open at a time). Microsoft suggests you can reduce the problem by turning off HTTP keep-alives for that particular web site (http://support.microsoft.com/kb/262635):

        If you use IIS 5.0 on Windows 2000 Professional or IIS 5.1 on Microsoft Windows XP Professional, disable HTTP keep-alives in the properties of the Web site. When you do this, a limit of 10 concurrent connections still exists, but IIS does not maintain connections for inactive users.

    I may do that; however, I'm worried about performance degradation. I also notice that IE8 appears to handle this differently than IE7: by default, IE6 and IE7 use 2 persistent connections while IE8 uses 6. Perhaps in this case IE8 itself is generating multiple connections in an attempt to be faster, but those additional connections are overwhelming the artificially limited IIS 5.1 on XP? Assuming that is the case, is there an Internet Explorer option, registry setting, or policy I can set to force IE8 to behave like IE7 with respect to persistent connections? I would not set this for all users, but for the small number of users that use this application, it might solve their intermittent problem until the application can be rehosted on Windows Server 2008. Thanks.
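
    IE8's per-server connection count can be pinned through a feature-control registry key, which would bring it back to IE7's value of 2 for the affected machines. A sketch of a .reg file to treat as a starting point to verify, not a blessed setting:

        Windows Registry Editor Version 5.00

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_MAXCONNECTIONSPERSERVER]
        "iexplore.exe"=dword:00000002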

    Read the article

  • Mac: window manager frozen, have ssh access

    - by Bernd
    I have a Mac which regularly runs into a problem: the user interface stops responding, showing a "frozen" screen. The mouse still moves, but clicking does not trigger anything. This happens about once a week. The solution so far has been to force the Mac off and reboot it. I have ssh root access to the Mac. Killing (kill -9) the active application has no visible effect on what is shown on the screen. Any ideas on how to diagnose this? Is there a way to restart the window manager from the ssh shell? Killing /System/Library/Frameworks/ApplicationServices.framework/Frameworks/CoreGraphics.framework/Resources/WindowServer seems not to be possible. The Mac is an early-2008 iMac and runs Lion with the latest updates. /Library/Logs/DiagnosticReports is empty.

    Update: The problem persists after updating to Mountain Lion. The WindowServer process is in "uninterruptible wait" state (the "U" flag in the ps output is set):

        imac:~ root# ps ax|awk "NR==1|| /WindowServer/"|grep -v awk
          PID TT  STAT      TIME COMMAND
           86 ??  Us    50:51.69 /System/Library/Frameworks/ApplicationServices.framework/Frameworks/CoreGraphics.framework/Resources/WindowServer -daemon

    Any idea how to diagnose what is blocking the process? Any idea for "waking up" the process?

    Read the article

  • Official end of support date for Internet Explorer 6 on Windows XP

    - by scunliffe
    If I read the docs on Windows service pack support policies, the specific Internet Explorer lifecycle support page, and the Wikipedia page, I've deduced that IE6 support ends/ended at:

        Windows 2000: ended (date unknown)
        Windows XP SP0 (RTM): ended (Home: 30-Aug-2003, Pro: 30-Sep-2004)
        Windows XP SP1: ended (Home: 11-Jul-2004, Pro: 11-Jul-2004)
        Windows XP SP2: Home: 13-Jul-2010, Pro: 13-Jul-2010
        Windows XP SP3 (released 21-Apr-2008): Home: ???, Pro: ???

    What isn't clear is the Windows XP SP3 scenario. In "human" terms, when is the end of support for IE6 on Windows XP SP3? E.g., what if there is never a Windows XP SP4... or, heaven forbid, an SP4 is released? I realize this doesn't force people to upgrade, etc.; however, I'm trying to get a semi-official word on when IE6 moves into the "not supported" category. I'm not interested in philosophical answers, e.g. "big enterprise won't upgrade but they will expect support into 2017" stuff... I just want the clear answer in terms of official Microsoft support.

    Read the article

  • Apache Tomcat load balancing and clustering on Ubuntu

    - by user740010
    I am facing a problem clustering Tomcat with Apache as a load balancer using mod_jk on Ubuntu. I have installed apache2 on Ubuntu 11.04, downloaded tomcat7, created two copies, and kept them at two different locations:

        1st one is at /home/net4u/vishal/test/tomcatA
        2nd one is at /home/net4u/vishal/test1/tomcatB

    I have made the following changes to the server.xml file in the /conf folder:

        1. <Server port="8205" shutdown="SHUTDOWN">
        2. <Connector port="8280" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
        3. <Connector port="8209" protocol="AJP/1.3" redirectPort="8443" />
           <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB">
        4. <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

    I have modified the other Tomcat (tomcatA) similarly. The content of the server.xml is as follows:

        <!--The connectors can use a shared executor, you can define one or more named thread pools-->
        <!--
        <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
            maxThreads="150" minSpareThreads="4"/>
        -->

        <!-- A "Connector" represents an endpoint by which requests are received
             and responses are returned. Documentation at :
             Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
             Java AJP Connector: /docs/config/ajp.html
             APR (HTTP/AJP) Connector: /docs/apr.html
             Define a non-SSL HTTP/1.1 Connector on port 8080 -->
        <Connector port="8280" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />

        <!-- A "Connector" using the shared thread pool-->
        <!--
        <Connector executor="tomcatThreadPool"
                   port="8080" protocol="HTTP/1.1"
                   connectionTimeout="20000"
                   redirectPort="8443" />
        -->

        <!-- Define a SSL HTTP/1.1 Connector on port 8443
             This connector uses the JSSE configuration, when using APR, the
             connector should be using the OpenSSL style configuration
             described in the APR documentation -->
        <!--
        <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                   maxThreads="150" scheme="https" secure="true"
                   clientAuth="false" sslProtocol="TLS" />
        -->

        <!-- Define an AJP 1.3 Connector on port 8009 -->
        <Connector port="8109" protocol="AJP/1.3" redirectPort="8443" />

        <!-- An Engine represents the entry point (within Catalina) that processes
             every request. The Engine implementation for Tomcat stand alone
             analyzes the HTTP headers included with the request, and passes them
             on to the appropriate Host (virtual host).
             Documentation at /docs/config/engine.html -->

        <!-- You should set jvmRoute to support load-balancing via AJP ie :
        <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
        -->
        <Engine name="Catalina" defaultHost="localhost" jvmRoute="tomcatB">

            <!--For clustering, please take a look at documentation at:
                /docs/cluster-howto.html (simple how to)
                /docs/config/cluster.html (reference documentation) -->
            <!-- uncomment for clustering-->
            <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

            <!-- Use the LockOutRealm to prevent attempts to guess user passwords
                 via a brute-force attack -->
            <Realm className="org.apache.catalina.realm.LockOutRealm">
                <!-- This Realm uses the UserDatabase configured in the global JNDI
                     resources under the key "UserDatabase". Any edits
                     that are performed against this UserDatabase are immediately
                     available for use by the Realm. -->
                <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
                       resourceName="UserDatabase"/>
            </Realm>

            <Host name="localhost" appBase="webapps"
                  unpackWARs="true" autoDeploy="true">

                <!-- SingleSignOn valve, share authentication between web applications
                     Documentation at: /docs/config/valve.html -->
                <!--
                <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
                -->

                <!-- Access log processes all example.
                     Documentation at: /docs/config/valve.html
                     Note: The pattern used is equivalent to using pattern="common" -->
                <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                       prefix="localhost_access_log." suffix=".txt"
                       pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/>

            </Host>
        </Engine>

    I have installed libapache2-mod-jk.

    Step 1: I created the file /etc/apache2/mods-enabled/jk.load with the following content:

        LoadModule jk_module /usr/lib/apache2/modules/mod_jk.so

    and created /etc/apache2/mods-enabled/jk.conf:

        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile /var/log/apache2/jk.log
        JkMount /ecommerce/* worker1
        JkMount /images/* worker1
        JkMount /content/* worker1

    Step 2: I created the file /etc/apache2/workers.properties with the following content:

        workers.tomcat_home=/home/vishal/Desktop/test/tomcatA
        workers.java_home=/usr/lib/jvm/default-java
        ps=/
        worker.list=tomcatA,tomcatB,loadbalancer

        worker.tomcatA.port=8109
        worker.tomcatA.host=localhost
        worker.tomcatA.type=ajp13
        worker.tomcatA.lbfactor=1

        worker.tomcatB.port=8209
        worker.tomcatB.host=localhost
        worker.tomcatB.type=ajp13
        worker.tomcatB.lbfactor=1

        worker.loadbalancer.type=lb
        worker.loadbalancer.balanced_workers=tomcatA,tomcatB
        worker.loadbalancer.sticky_session=1

    I tried the same thing on a Windows machine and it works.
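
    Incidentally, the jk.conf shown mounts worker1, which never appears in the worker.list of workers.properties; a minimal sketch of mounts that reference the balancer actually defined there (the URL prefix is a placeholder):

        JkWorkersFile /etc/apache2/workers.properties
        JkLogFile     /var/log/apache2/jk.log
        JkMount       /myapp/* loadbalancer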

    Read the article

  • How to diagnose occasional sudden resets?

    - by steve314
    I have a Windows XP system, and I have recently upgraded it by adding two 1 GB sticks of RAM to the 2x0.5 GB already present. Since then, about once per day (the system is used 8+ hours per day), the system has suddenly and unexpectedly reset. On a couple of occasions, the system has frozen completely, responding only to the power button being held in for several seconds to force power off. Nothing at all ever appears in the system event log that might indicate a possible cause; everything suggests business as usual. It sounds like faulty memory, but memtest86+ says otherwise: a full test, taking over an hour, found no issues. The next likely suspicion is that I knocked something while installing the RAM. Trouble is, everything I can think of to test seems fine. I've opened up the case and prodded a few things around, hoping to get better contact on connections etc., but there's no sign yet as to whether that has made a difference. I thought about a malware-related timing fluke, but again, as far as I can tell I'm all clear. All I can think of to add to my checklist (mainly things that I can't easily check) is:

    The power supply is (1) only 350 W, (2) not necessarily the best quality, and (3) powering a Prescott P4 640 3.2 GHz. Could it be borderline overloaded or about to die? How do I check?

    Is it possible that the CPU isn't getting cooled properly? I haven't had the fan go past normal tickover even when doing video encoding, and the only sane temperature reading from SpeedFan is pretty steady at 36 Celsius, so probably not.

    Any thoughts? Is there a standard procedure for diagnosing this kind of fault?
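
    One cheap addition to the checklist: stop Windows from rebooting automatically on a bugcheck, so that a blue screen (if one is occurring) stays visible instead of looking like a spontaneous reset. A sketch, from a command prompt with administrative rights:

        wmic recoveros set AutoReboot = False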

    Read the article

  • What could cause the file command in Linux to report a text file as data?

    - by Jonah Bishop
    I have a couple of C++ source files (one .cpp and one .h) that are being reported as type data by the file command in Linux. When I run the file -bi command against these files, I'm given this output (same output for each file): application/octet-stream; charset=binary Each file is clearly plain-text (I can view them in vi). What's causing file to misreport the type of these files? Could it be some sort of Unicode thing? Both of these files were created in Windows-land (using Visual Studio 2005), but they're being compiled in Linux (it's a cross-platform application). Any ideas would be appreciated. Update: I don't see any null characters in either file. I found some extended characters in the .cpp file (in a comment block), removed them, but file still reports the same encoding. I've tried forcing the encoding in SlickEdit, but that didn't seem to have an effect. When I open the file in vim, I see a [converted] line as soon as I open the file. Perhaps I can get vim to force the encoding?
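
    A single control byte or byte-order mark anywhere in a file is enough for file to fall back to data. A sketch for locating the offending bytes, assuming GNU grep with PCRE support:

        grep -nP '[^\x09\x0A\x0D\x20-\x7E]' foo.cpp   # lines containing non-ASCII or control bytes
        head -c 4 foo.cpp | xxd                       # look for a BOM: ef bb bf (UTF-8), ff fe (UTF-16)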

    Read the article

  • CentOS 6.0 yum audacity dependency

    - by Kaemic
    I'm trying to install Audacity on CentOS using yum, and I can't force yum to resolve all dependencies. Here's what I got (when I download the RPM file and click-install it, I get the same dependency problem):

        # yum --disablerepo=c6-media install audacity-1.2.3-2.2.el4.rf.i386.rpm
        Loaded plugins: fastestmirror, refresh-packagekit
        Loading mirror speeds from cached hostfile
         * base: mirror.karneval.cz
         * centosplus: mirror.karneval.cz
         * extras: centos.vieth-server.de
         * rpmforge: fr2.rpmfind.net
         * updates: mirror.karneval.cz
        Setting up Install Process
        Examining audacity-1.2.3-2.2.el4.rf.i386.rpm: audacity-1.2.3-2.2.el4.rf.i386
        Marking audacity-1.2.3-2.2.el4.rf.i386.rpm to be installed
        Resolving Dependencies
        --> Running transaction check
        ---> Package audacity.i386 0:1.2.3-2.2.el4.rf set to be updated
        --> Processing Dependency: wxGTK >= 2.4.0 for package: audacity-1.2.3-2.2.el4.rf.i386
        --> Processing Dependency: libwx_gtk-2.4.so.0 for package: audacity-1.2.3-2.2.el4.rf.i386
        --> Processing Dependency: libwx_gtk-2.4.so.0(WXGTK_2.4) for package: audacity-1.2.3-2.2.el4.rf.i386
        --> Running transaction check
        ---> Package audacity.i386 0:1.2.3-2.2.el4.rf set to be updated
        --> Processing Dependency: libwx_gtk-2.4.so.0 for package: audacity-1.2.3-2.2.el4.rf.i386
        --> Processing Dependency: libwx_gtk-2.4.so.0(WXGTK_2.4) for package: audacity-1.2.3-2.2.el4.rf.i386
        ---> Package wxGTK.i686 0:2.8.12-1.el6.rf set to be updated
        --> Finished Dependency Resolution
        Error: Package: audacity-1.2.3-2.2.el4.rf.i386 (/audacity-1.2.3-2.2.el4.rf.i386)
               Requires: libwx_gtk-2.4.so.0
        Error: Package: audacity-1.2.3-2.2.el4.rf.i386 (/audacity-1.2.3-2.2.el4.rf.i386)
               Requires: libwx_gtk-2.4.so.0(WXGTK_2.4)
         You could try using --skip-broken to work around the problem
         You could try running: rpm -Va --nofiles --nodigest

    Can someone help me with this?
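
    The unresolved symbol comes from whichever package ships libwx_gtk-2.4.so.0, so it is worth asking the enabled repositories directly. A sketch:

        yum --disablerepo=c6-media provides 'libwx_gtk-2.4.so.0'
        # if nothing provides it, this EL4-era audacity RPM is the wrong vintage
        # for CentOS 6, and a rebuilt or newer audacity package is the likelier fix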

    Read the article

  • Problem with Windows Service and network printers.

    - by Mohammadreza
    I have a Windows Service application that every now and then should print some documents. As far as I know, to print those documents, my service must run with a user account other than Local Service or Network Service. So I have created a user account, added it to the Administrators group, and run the service with it. With locally installed printers I don't have any problems, because those printers are automatically installed for all accounts. To be able to print with the network printers, I have created another application that syncs the installed printers of the currently logged-in user with the user account that my service uses, via the rundll32.exe printui.dll,PrintUIEntry command. In Vista and Windows 7 I don't have any problems with syncing the printers, because every time a printer should be installed, an authentication window opens and asks for the appropriate user account to install that printer (the service user account does not exist on the network printers' computers). But in XP, a dialog with the caption "Connecting to {printername}" appears and stops responding; or sometimes it installs the printer, but every time the service tries to print, a Win32Exception with the message "A StartDocPrinter call was not issued" is thrown, and in the user account that runs the sync application a duplicate printer is shown, which I cannot delete except by force (using the registry). Am I even doing the right thing by printing documents from a Windows Service this way? If yes, how can I solve the above-mentioned problem? And if not, what the heck should I do? Thanks.
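
    For services, printer connections generally need to be per-machine rather than per-user; printui has a global-add switch for exactly that. A sketch, run once from an administrative account (server and printer names are placeholders):

        rundll32 printui.dll,PrintUIEntry /ga /n "\\printserver\printername"
        rem per-machine (/ga) connections are picked up when the spooler restarts
        net stop spooler
        net start spooler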

    Read the article

  • Linksys wireless router will not hardware reset.

    - by Jack M.
    Hello, all. I'm unable to make my router perform a hardware reset, and I cannot understand why. All was working well, except that my iPhone could not connect to the wireless. I found that the router was only allowing AES encryption in WPA2 Personal mode, so I upgraded the firmware. I updated the firmware to ver. 1.06.1, and everything went screwy. The router no longer shows up in the WiFi list (as "Linksys" or its previous network name). Wiring into the router gives me an IP address from my ISP (24.121.121.XXX). When I attempt a hardware reset, the power light never starts flashing and the router does not seem to reboot; my wired machine stays online with no interruption in WoW. Pulling the power cord to force a reset returns it to the same state. I even went so far as to pull up my previous IP address (from DynDNS) and try to connect to that, but it won't even ping. What I'm trying to find out is: did the new firmware fry the thing, or is there some way to fix this? Thanks in advance for any help.

    Read the article

  • apt-get : Size mismatch

    - by Cédric Girard
    I created a private deb repository to distribute a piece of software and its updates to 600 Ubuntu netbooks. Each time the network is connected, my script tries to do an apt-get update. But sometimes (quite often, in fact) I get this:

        Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch

    The server is Apache 2.2, HTTPS only. There are no errors in its logs. Here is the script:

        apt-get update
        apt-get dist-upgrade --force-yes --yes

    Here is the complete output of apt-get:

        Ign https://myserver maverick Release.gpg
        Ign https://myserver/ubuntu/ maverick/main Translation-en
        Ign https://myserver maverick Release
        Ign https://myserver maverick/main i386 Packages/DiffIndex
        Ign https://myserver maverick/main i386 Packages
        Ign https://myserver maverick/main i386 Packages
        Hit https://myserver maverick/main i386 Packages
        Reading package lists...
        Reading package lists...
        Building dependency tree...
        Reading state information...
        The following packages will be upgraded:
          majdb utilitaires voosicomat
        3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 6207kB/6273kB of archives.
        After this operation, 0B of additional disk space will be used.
        WARNING: The following packages cannot be authenticated!
          utilitaires voosicomat majdb
        Get:1 https://myserver/ubuntu/ maverick/main voosicomat all 2.0.1 [4755kB]
        Get:2 https://myserver/ubuntu/ maverick/main majdb all 1.0.17 [1452kB]
        Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch
        Fetched 7091kB in 21s (324kB/s)
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Regards
    Cédric
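
    Size mismatches on the client side are often a stale or truncated index or partial download in apt's cache, so clearing it before the update is a cheap thing to add to the script on the netbooks. A sketch:

        apt-get clean
        rm -rf /var/lib/apt/lists/partial/*
        apt-get update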

    Read the article

  • How to choose which network connection provides the default gateway in Windows XP

    - by Cathy
    I have a laptop with an integrated NIC and a WiFi connection. Both the wired and wireless networks I am using can access the Internet. Win XP is routing all traffic through the wireless network. I want to force it to route everything through the wired network when it is available (i.e. when I am sitting at my desk with the laptop docked) and through the wireless when that is the only option (i.e. when I have undocked my laptop and carried it to a conference room, or if I am out of the office working on a different WiFi network). The wireless connection cannot be established until after I am logged into Windows, so it's always the second network to become available to the OS. I have manually overridden the metric values in the TCP/IP configurations so that the NIC has metric 10 and the WiFi has metric 20. However, Windows is still picking the WiFi adapter's address as the Default Gateway, so this isn't helping. If I manually disable and re-enable the WiFi adapter, then it will switch the default gateway to the wired network and stay that way until I shutdown Windows. How can I tell Windows XP not to replace the default gateway when the WiFi connection is first enabled?

    Read the article

  • Make isolinux 4.0.3 chainload itself

    - by chainloader
    I have a bootable ISO which boots into isolinux 4.0.3, and I want to make it chainload itself (my actual goal is to chainload isolinux.bin v4.0.1-debian, which should start up the Ubuntu 10.10 live CD, but for now I just want to make it chainload itself). I can't get isolinux to chainload any isolinux.bin, no matter what version; it either freezes or shows a "checksum error" message. I'm using VMware to test the ISO. Things I have tried:

        .com32 /boot/isolinux/chain.c32 /boot/isolinux/isolinux-debug.bin

    (chainload self), which shows:

        Loading the boot file...
        Booting...
        ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al
        isolinux: Starting up, DL = 9F
        isolinux: Loaded spec packet OK, drive = 9F
        isolinux: Main image LBA = 53F00100

    ...and the machine freezes. Then I tried this (chainloading from GRUB4DOS 0.4.5b):

        chainloader /boot/isolinux/isolinux-debug.bin

    Result:

        Error 13: Invalid or unsupported executable format

    Next try (GRUB4DOS 0.4.5b again):

        chainloader --force /boot/isolinux/isolinux-debug.bin
        boot

    Result:

        ISOLINUX 4.03 2010-10-22 Copyright (C) 1994-2010 H. Peter Anvin et al
        isolinux: Starting up, DL = 9F
        isolinux: Loaded spec packet OK, drive = 9F
        isolinux: No boot info table, assuming single session disk...
        isolinux: Spec packet missing LBA information, trying to wing it...
        isolinux: Main image LBA = 00000686
        isolinux: Image checksum error, sorry...

        Boot failed: press a key to retry...

    I have tried other things, but all of them failed miserably. Any suggestions?

    Read the article
