Search Results

Search found 3105 results on 125 pages for 'stupid mistake'.

Page 78/125 | < Previous Page | 74 75 76 77 78 79 80 81 82 83 84 85  | Next Page >

  • Are relative-path symlinks reliable on Rackspace Cloud Sites?

    - by Jakobud
    Rackspace's Cloud Sites have a lot of stupid limitations. For example, no SSH (in or out), no shell, no rsync, etc. (even through cron). Recently I learned that you can't reliably use symlinks in Cloud Sites. Apparently this is because the absolute path of your sites could change at any moment, since it's a shared hosting environment split up between many disks/servers. I guess different accounts' sites get moved from disk to disk whenever Rackspace decides to, supposedly to increase efficiency across the board. After talking with a Rackspace tech, he said they cannot guarantee that symlinks would always work. Obviously that makes sense for a symlink that uses an absolute path like this:
        /mnt/disk-34566/home/user34566/files/sites/www.mysite.com/mydir
    If your files get moved to a different disk (or whatever they do), the absolute path would be different and the link would now be broken. So next I asked the Rackspace tech whether relative-path symlinks were reliable. Say I have the following link:
        files/sites/www.mysite.com/mylink --> ../www.myothersite.com/anotherdir
    You can see that the symlink simply points to a nearby directory's sub-directory. He said they cannot guarantee that even those would always work. Since it uses a relative path to another nearby directory, I'm not sure how it could ever break from something Rackspace would do. Do relative symlinks somehow rely on absolute paths underneath? Or is Rackspace using some weird custom filesystem where they would break from absolute path changes? It seems like a relative-path symlink would be fine and would only break if the user did something to mess up the directories involved. But when the techs say they "don't officially support symlinks of any kind", that makes me hesitant to use them for large commercial websites in Cloud Sites. Can anyone with Rackspace experience give input on this topic?
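
    A minimal sketch of what a relative symlink actually stores (paths here are illustrative, not Rackspace's real layout): the target is saved as a literal relative string, so no absolute path is involved and the link only breaks if the relative position of the two directories changes.

        cd files/sites/www.mysite.com
        ln -s ../www.myothersite.com/anotherdir mylink   # stores the literal string "../www.myothersite.com/anotherdir"
        readlink mylink    # prints the relative target; no absolute path is recorded
        ls -l mylink/      # resolved relative to the directory containing the link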

    Read the article

  • Dynamic virtual host configuration in Apache

    - by Kostas Andrianopoulos
    I want to make a virtual host in Apache with dynamic configuration for my websites. For example, something like this would be perfect:

        <VirtualHost *:80>
            AssignUserId $domain webspaces
            ServerName $subdomain.$domain.$tld
            ServerAdmin admin@$domain.$tld
            DocumentRoot "/home/webspaces/$domain.$tld/subdomains/$subdomain"
            <Directory "/home/webspaces/$domain.$tld/subdomains/$subdomain">
                ....
            </Directory>
            php_admin_value open_basedir "/tmp/:/usr/share/pear/:/home/webspaces/$domain.$tld/subdomains/$subdomain"
        </VirtualHost>

    $subdomain, $domain and $tld would be extracted from the HTTP_HOST variable using a regex at request time. No more loads of configuration, no more reloading Apache every x minutes, no more stupid logic. Notice that I use mpm-itk (the AssignUserId directive) so each virtual host runs as a different user; I do not intend to change this part. So far I have tried:
    - mod_vhost_alias, but this allows dynamic configuration of only the document root.
    - mod_macro, but this still requires the arguments of the vhost to be declared explicitly for each vhost.
    - I have read about mod_vhs and other modules which store configuration in an SQL or LDAP server, which is not acceptable as there is no need for stored configuration: those three arguments can be generated at runtime.
    - I have seen some Perl suggestions like this, but as the author states, $s->add_config would add a directive after every request, leading to a memory leak, and $r->add_config does not seem to be a feasible solution.
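
    For reference, the one piece stock Apache can do dynamically is the document root, via mod_vhost_alias (which, as noted above, does not help with AssignUserId or open_basedir). A hedged sketch, assuming a Debian-style layout and a sub.domain.tld naming scheme:

        # %1, %2, %3 are the dot-separated parts of the Host header (sub, domain, tld)
        sudo a2enmod vhost_alias
        printf '%s\n' \
          '<VirtualHost *:80>' \
          '    ServerAlias *' \
          '    UseCanonicalName Off' \
          '    VirtualDocumentRoot /home/webspaces/%2.%3/subdomains/%1' \
          '</VirtualHost>' | sudo tee /etc/apache2/sites-available/wildcard.conf
        sudo a2ensite wildcard && sudo apache2ctl configtest && sudo apache2ctl graceful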

    Read the article

  • Apache debugging: where to find error logs?

    - by AP257
    I'm new to Apache and web serving generally, so apologies if this is a very stupid question. I want to configure a new sub-domain on a working site and install a forum there. I'm using a Debian server that already has Apache, mod_wsgi and a bunch of virtual hosts successfully running on it. I first installed my forum app (Django's OSQA). Following the OSQA instructions, I then created an Apache config file that specified ServerName as the new sub-domain. I also created a .wsgi file for the app, and pointed WSGIScriptAlias at it. I then restarted Apache. However, when I go to the new sub-domain, I get a 404 error message. Two questions: Is there a step missing above? Or is simply creating a new Apache config file in sites-available enough to 'tell' Apache about a new sub-domain? If there's something else going wrong, how can I debug it? The ErrorLog and CustomLog specified in the config file are both blank. apache2.conf, which I guess is Apache-wide configuration, specifies ErrorLog /var/log/apache2/error.log, but this is yet another blank file.
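
    On Debian, a config file in sites-available is not active until it has been enabled (symlinked into sites-enabled), and `apache2ctl -S` shows which virtual hosts Apache has actually loaded. A first debugging pass might look roughly like this (the site name is a placeholder):

        sudo a2ensite forum.example.com          # symlink the new config into sites-enabled/
        sudo apache2ctl configtest               # catch syntax errors before reloading
        sudo apache2ctl -S                       # list every vhost and ServerName Apache knows about
        sudo /etc/init.d/apache2 reload
        sudo tail -f /var/log/apache2/error.log  # watch this while re-requesting the sub-domain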

    Read the article

  • How can I setup apache+mod_proxy so when I connect to mod_proxy on interface X, it sends the traffic

    - by aspitzer
    We use a service that allots us X requests per IP and allows us to set up 5 IPs with that limit (I know, it seems stupid that they couldn't just raise the limit 5x on one IP). Pretend I have a Linux box with the following address on the internet: 66.249.90.104 (that is a Google IP, not mine, so feel free to try to hack into it :)). I set up Apache with mod_proxy as a forward proxy (ProxyRequests On), i.e. you can point Firefox at 66.249.90.104:8080 as a proxy, and all Firefox traffic comes out as 66.249.90.104. So far so good. Problem: now I add more alias interfaces so the total looks like this:
        eth0    66.249.90.104
        eth0:1  66.249.90.105
        eth0:2  66.249.90.106
        eth0:3  66.249.90.107
        eth0:4  66.249.90.108
    I run Apache with mod_proxy (a single Apache instance) bound to all interfaces, but no matter which address I connect to as the forwarding proxy, all traffic goes out to the internet as 66.249.90.104. I have also tried running 5 different Apaches, each binding only to its own interface, but that still sends the outbound requests through 66.249.90.104. I was hoping to get it to work as follows: I connect to 66.249.90.108 and make a proxy request, and it goes out as 66.249.90.108; I connect to 66.249.90.107 and make a proxy request, and it goes out as 66.249.90.107; and so on. Has anyone else had to deal with this issue? The fallback solution would be to just run Apache on 5 separate boxes, but I would prefer it to all work on one box. Thanks!

    Read the article

  • MsMpEng.exe (Windows Defender?) uses a lot of CPU at startup and runs two instances on a single core

    - by dlamblin
    I'm using Windows XP Professional SP2 on a single-core AMD64 processor, and I've got two instances of MsMpEng.exe starting up when I boot and log in. They use 64MB and 32MB of RAM and 140MB and 80MB of virtual memory, and they fluctuate around 80% CPU usage for about 5 minutes at startup. They are (I read) associated with Windows Defender, but I'm concerned about the following:
    - There are two of them; everything I've read generally reports only one.
    - They might be scanning each other, and I want that to stop.
    - They might be getting scanned by avgrsx.exe (AVG Free 8, which uses about 16MB of virtual memory).
    - They might also be scanning moe.exe (associated with MS Live Mesh, which I'm considering getting rid of).
    - Lastly, I have Microsoft Security Essentials; I don't know the process name associated with it.
    My main concern (apart from the double instances) is that these are all trying to scan each other at once, except maybe moe.exe. That might seem legitimate, but it is likely a useless drain on resources. Have I made a mistake in having all of these installed, or is there a way to tell them not to do whatever they're doing that takes 5+ minutes at startup? [I also have Google Desktop, but I'm keeping that.] Comment if none of this makes sense to you.

    Read the article

  • Apache Logs - Not Showing Requested URL or User IP

    - by iarfhlaith
    Hey all, I'm having a problem with a server that keeps falling over. Looking through the Apache error logs, it appears to be caused by a rogue PHP script. I'm trying to track this down using Apache's error_log and access_log, but the server log format isn't giving me the detail I need. I suspect the log format isn't sufficient, but I've reviewed the Apache documentation and included the switches I think I need. Here's my LogFormat configuration in the httpd.conf file:

        LogFormat "%h %l %u %t \"%r\" %s %b %U %q %T \"%{Referer}i\" \"%{User-Agent}i\"" extended
        CustomLog logs/access_log extended

    Using the %U %q %T switches I expected to see the requested URL, the query string, and the time it took to serve the request, but I'm not seeing any of this information when I tail the log. Here's an example:

        127.0.0.1 - - [01/Jun/2010:14:12:04 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"
        127.0.0.1 - - [01/Jun/2010:14:12:05 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"
        127.0.0.1 - - [01/Jun/2010:14:12:06 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"
        127.0.0.1 - - [01/Jun/2010:14:12:07 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"
        127.0.0.1 - - [01/Jun/2010:14:12:08 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"
        127.0.0.1 - - [01/Jun/2010:14:12:09 +0100] "OPTIONS * HTTP/1.0" 200 - * 0 "-" "Apache/2.2.15 (Unix) mod_ssl/2.2.15 OpenSSL/0.9.8e-fips-rhel5 mod_bwlimited/1.4 (internal dummy connection)"

    Have I made a mistake in configuring the LogFormat, or is it something else? Also, each request appears to come from localhost; how come it's not giving me the remote user's IP address? Thanks, Iarfhlaith
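
    Worth noting that the entries shown above are Apache's own internal dummy connections (the User-Agent says so): they really do come from 127.0.0.1, their request line is "OPTIONS *" so %U logs as "*", the query string is empty, and %T is 0, which is why the extra fields look blank. A hedged way to confirm the extended format works for real traffic, and which CustomLog is actually writing the file you are tailing (paths are assumptions for a typical layout):

        # list every log declaration; a vhost with its own CustomLog ignores the global one
        grep -Rn "LogFormat\|CustomLog" /usr/local/apache/conf/ /etc/httpd/ 2>/dev/null

        # generate one real request and look at how it is logged
        curl -s "http://localhost/some-page.php?foo=bar" > /dev/null
        tail -n 1 /usr/local/apache/logs/access_log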

    Read the article

  • Microsoft Web Farm Framework 502 error

    - by hermiod
    My apologies if this is a really stupid question, but I have installed and set up the new Microsoft Web Farm Framework in a UAT environment with a controller machine, a primary server, and a secondary server. I have installed one of our in-house applications on the primary server and it has been successfully provisioned across to the secondary. I have tried accessing the server via the URL http://mis-uat-002/testfarm/reviewer, where mis-uat-002 is the name of the farm controller, testfarm is the name of the farm, and reviewer is the name of the application. When using this URL I get a "502 - Web server received an invalid response while acting as a gateway or proxy server." error. If I access the application directly via http://mis-uat-003/reviewer (mis-uat-003 is the primary farm server, reviewer is the application) then it works with no problem. Is there some additional setup that needs to be done in order for the farm controller to do its job? It should be noted that this is a UAT environment, so it is not on a domain of any sort. However, if this is successful and WFF comes out with v1.0 fairly soon, we would be looking to run it on a Windows 2008 domain.

    Read the article

  • Server Directory Not Accessible

    - by GusDeCooL
    I'm seeing strange behaviour on the live server that doesn't happen on my local server. My local server is a Mac, and the live server is Linux. Consider these two URLs; the first works correctly, but the second returns a 404:
        http://redddor.babonmultimedia.com/assets/images/map-1.jpg
        http://redddor.babonmultimedia.com/assets/modules/evogallery/check.php
    I'm pretty sure the file is there and there is no typo, so how come it gives me a 404? There is only one .htaccess, in the server root, and its configuration is this:

        # For full documentation and other suggested options, please see
        # http://svn.modxcms.com/docs/display/MODx096/Friendly+URL+Solutions
        # including for unexpected logouts in multi-server/cloud environments
        # and especially for the first three commented out rules
        #php_flag register_globals Off
        #AddDefaultCharset utf-8
        #php_value date.timezone Europe/Moscow

        Options +FollowSymlinks
        RewriteEngine On
        RewriteBase /

        <IfModule mod_security.c>
        SecFilterEngine Off
        </IfModule>

        # Fix Apache internal dummy connections from breaking [(site_url)] cache
        RewriteCond %{HTTP_USER_AGENT} ^.*internal\ dummy\ connection.*$ [NC]
        RewriteRule .* - [F,L]

        # Rewrite domain.com -> www.domain.com -- used with SEO Strict URLs plugin
        #RewriteCond %{HTTP_HOST} .
        #RewriteCond %{HTTP_HOST} !^www\.example\.com [NC]
        #RewriteRule (.*) http://www.example.com/$1 [R=301,L]

        # Exclude /assets and /manager directories and images from rewrite rules
        RewriteRule ^(manager|assets)/*$ - [L]
        RewriteRule \.(jpg|jpeg|png|gif|ico)$ - [L]

        # For Friendly URLs
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?q=$1 [L,QSA]

        # Reduce server overhead by enabling output compression if supported.
        #php_flag zlib.output_compression On
        #php_value zlib.output_compression_level 5
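
    Two hedged things worth checking before digging into MODX itself: the Mac filesystem is case-insensitive while Linux is not, so the path must exist with exactly that casing on the live box; and because `^(manager|assets)/*$` only matches the bare directory name (not paths inside it), a file that isn't found under /assets falls through to the friendly-URL rule and gets answered by index.php. For example:

        # on the live server, from the web root: fails if any path component differs in case
        ls -l assets/modules/evogallery/check.php

        # look at the response headers; a MODX-generated error page usually differs
        # in size and headers from Apache's stock 404
        curl -I http://redddor.babonmultimedia.com/assets/modules/evogallery/check.php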

    Read the article

  • EC2: How dangerous is it to turn off fsck for EBS volumes?

    - by Janine
    I have been tearing my hair out trying to figure out why my EC2 instances (made from my own custom AMIs) were taking many tries to come up properly. They would fail with the following error:

        fsck.ext3: No such file or directory while trying to open /dev/sdf

    for both of the EBS volumes I was attaching during startup. Finally, I figured out the problem. I had put this in /etc/fstab:

        /dev/sdf    /export     ext3    defaults    1 2
        /dev/sdi    /export2    ext3    defaults    1 2

    The 2 tells the system to fsck the drives on the way up. Changing this to

        /dev/sdf    /export     ext3    defaults    1 0
        /dev/sdi    /export2    ext3    defaults    1 0

    avoids the problem completely, but now the volumes are never going to be fsck'd. How much does this matter? Once the instance goes into production it's going to be running pretty much 24/7, so not many fscks would be happening anyway, but still... this just feels like a bad idea. I have not been able to find anyone else even reporting this problem (there are people with the same error message, but different causes). It seems unbelievable that I could be the only person to ever make this mistake, but perhaps I'm just talented that way. :) If there is another solution to the problem I would love to hear it; I have not been able to find one.
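
    The sixth fstab field only controls the boot-time check; with it set to 0 you can still manage the periodic checks yourself with tune2fs, or run fsck by hand during a maintenance window. Roughly, as a sketch:

        # show when the filesystem was last checked and the current check policy
        sudo tune2fs -l /dev/sdf | grep -iE 'mount count|check'

        # explicitly disable the count/interval based checks...
        sudo tune2fs -c 0 -i 0 /dev/sdf

        # ...or keep them and fsck manually while the volume is unmounted
        sudo umount /export && sudo fsck.ext3 -f /dev/sdf && sudo mount /export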

    Read the article

  • Changing dual-monitor settings without closing the laptop lid on OS X

    - by hekevintran
    I have a Unibody MacBook hooked up to an external display. By default when I boot up, the system will go to dual-monitor mode. I want to use only the external display. The Apple supplied solution to this problem is to close the lid of the laptop which puts the machine into sleep mode and then move the mouse around to wake it up again. Because the machine is being woken up with the lid closed, when the displays are detected the system finds only the external. After the system is functional again, you can open the lid if you want and the laptop screen will be non-functional until you either tell the system to detect displays from the system preferences or you turn off the external display. Every time I want to use only the external display, I must reach my hand over to close the lid, wait for the machine to sleep, jigger the mouse, wait for the machine to wake up, and finally open the lid again because I don't want the machine to overheat. I feel that this is very stupid to have to do. Why is there no button or menu option that says "don't use this screen"? Is there any third-party software way to change the screen setup that does not involve physically closing the lid and playing a game of "are you sleeping" in order to switch such a simple software setting? We are in the 21st Century and honestly this is childish.

    Read the article

  • Bugzilla email issue

    - by xian
    My Bugzilla system keeps hitting the following error:

        There was an error sending mail from '[email protected]' to '[email protected]': Can't send data

    I think there is some problem with my settings and configuration. First, the urlbase: I have tried setting it to bugzilla.example.com, http://127.0.0.1:81/ and http://10.0.0.236/ (my laptop's IP address; I use this laptop to run Bugzilla), but the error persists. What should I actually put in the urlbase field? Under Parameters -> Email: for mail_delivery_method I chose SMTP, for mailfrom I put bugzilla-daemon, and for smtpserver I tried leaving it blank and also setting it to 220.181.12.12, but neither solved my problem. For MySQL, the following are the data and commands I used:

        C:\mysql\bin>mysql --user=root -p mysql
        Enter password: 1234

    (When I installed MySQL on my laptop it asked me to choose a username and password; I entered the username 'cvuser' and the password '1234', but here it never asks me for a username.)

        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 1
        Server version: 5.5.15 MySQL Community Server (GPL)

        mysql> GRANT ALL PRIVILEGES ON bugs.* TO 'bugs'@'localhost' IDENTIFIED BY '123456';
        Query OK, 0 rows affected (0.03 sec)

    In C:\Bugzilla\localconfig I put the following:

        #
        # How to access the SQL database:
        #
        $db_host = "localhost";    # where is the database?
        $db_port = 3306;           # which port to use
        $db_name = "bugs";         # name of the MySQL database
        $db_user = "bugs";         # user to attach to the MySQL database
        #
        # Enter your database password here. It's normally advisable to specify
        # a password for your bugzilla database user.
        # If you use apostrophe (') or a backslash (\) in your password, you'll
        # need to escape it by preceding it with a \ character. (\') or (\\)
        #
        $db_pass = '123456';

    Can someone tell me where my mistake is? I have googled this issue for a few days but still cannot find the solution.
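
    For what it's worth, urlbase is just the full URL users reach Bugzilla at (for example http://10.0.0.236/bugzilla/ if that is how you browse to it) and is unrelated to mail delivery; "Can't send data" usually means the SMTP conversation itself failed, so it helps to prove from that machine that a real relay accepts mail at all. A minimal check, with the relay name as a placeholder for your actual mail server:

        telnet smtp.example.com 25
        # expect a "220 ..." banner; then try typing:
        #   EHLO mylaptop
        #   QUIT
        # if the banner never appears, Bugzilla's smtpserver setting cannot work either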

    Read the article

  • Server Restarts and Respective Orders

    - by TheD
    EDIT: Not meaning to be disrespectful to any of the answers, but the main question was whether to reboot the DC at the beginning of a cycle and then all the other servers, or to reboot it at the end once all the others are back online. Is there a reason for doing it either way? I'm still not sure based on the current responses. This will most likely seem like a fairly simple, maybe even stupid, question, but it's something I have been wondering about. As part of a regular process for clients, servers are restarted remotely after patching, and every client tends to have a similar order, but there always seems to be a small debate about when you should reboot the DC. For example, with 4 servers (1 DC, 1 Exchange, 1 BESX and 1 miscellaneous box, let's say with some CRM software installed), is it best to reboot the DC first, then Exchange, then BESX and so on, or to reboot all the other servers first and the DC last? Perhaps it doesn't matter at all and it's just a case of how you have always done it. Would it change in a Hyper-V environment, for example with a physical DC and one VHost with all your other servers virtualised on that host? Rebooting the VHost and virtual machines first and then the DC at the end, or vice versa? Thanks!

    Read the article

  • Maxtor 500GB external hard drive not being detected but power is going to it?

    - by ClarkeyBoy
    I have 2 Maxtor OneTouch 4 Lite 500GB external hard drives (part no. 9NT2A4-500). They both used to work fine on my old laptop (an Acer), but I have not used them for about a year, since that laptop was stolen and I got this one (also an Acer, an Aspire 7738G). I have one plugged into the mains with one of the leads I believe was supplied with them. It appears to be receiving power, as it is warm and the power light (on the unit itself) is on; the mains adapter is also fairly warm. I also have it plugged into my laptop with a USB lead which I have tested on my MP3 player (so I know it works). However, the drive is not showing up on my computer. I have tried checking for new hardware, installing the software that was supplied with it, checking drive letters in case it is registered as C: or something stupid, checking for problems, etc. I can't find any cause for this. It does appear to be starting up and, possibly, shutting down and restarting constantly (that's what it sounds like, although I can't be certain). I have had both hard drives stored in different places for the last year and they're both doing the same thing. If it were only one, I'd guess it had been damaged or corrupted, but since it is both, I doubt that is it. The only things in common between them are the leads and the laptop; however, I know the USB lead works and assume the mains lead works, since there is power going to the unit. Has anyone come across this before, or does anyone have any idea of the cause or solution? Any help would be greatly appreciated. Regards, Richard

    Read the article

  • Linksys router cannot change default password

    - by Jessica M.
    My wireless internet suddenly stopped working today. I have Windows 7 and a Linksys WRT54G router. I tried to log into the Linksys router setup page by typing 192.168.1.1 into Firefox, and it prompted me for a username and password as usual. The problem is that when I tried to enter my regular username and password, it did not work. I finally got in when I came across this post; the very last reply solved my problem. It suggested I try username: root / password: admin. For some reason the username and password had been changed. When I tried username: root / password: admin, it worked and allowed me into the Linksys setup page. The problem is I can't change the username or password anymore; every time I want to log into the Linksys setup page I have to enter username: root / password: admin. I also can't change the "WPA shared key" (password). For the security settings I selected WPA2 Personal + AES. The post also said "If the firmware was upgraded to non-linksys firmware - the default will be different". The problem is I didn't update anything, and I'm worried that someone installed a virus or somehow changed the firmware on my router. How did I get non-Linksys firmware on my router? EDIT: I figured out how to change the password for the Linksys setup page (Administration -> Management -> Password). I still don't understand whether my router firmware was changed, who changed it, or if it happened by mistake.

    Read the article

  • Windows 8: Clean Install Fails BEFORE country Screen

    - by Mark
    G'day, I've been trying to (clean) install Windows 8 on my PC for over two weeks now, and it's really getting old. You can see here that I've outlined my issue to the Windows Technical Community, which has resulted in... absolutely no help at all. http://answers.microsoft.com/en-us/windows/forum/windows_8-windows_install/windows-8-clean-install-fails-before-country/64229d0a-0220-45a9-bdc6-c41062df8a75 Tl;dr? Yeah, me too. Basically, I've shrunk the HDD, and it's working (I'm on that PC, in XP). I've tried both the x64 and the x86 DVDs, AND another HDD with Windows 8 installed on it from my test machine, and they ALL fail to load. (I get the slanty Windows logo, and after about 10 seconds, BOOM, sad-face error screen.) I really don't want to have to remove the video card, or the unevenly/not-partnered DIMM in the second memory channel, because the case is... stupid, and in an awkward spot, but I'm running out of ideas! PS. I tried turning on AHCI tonight. All that resulted in was XP wigging out about new drivers and explorer.exe crashing. Fun times! :(

    Read the article

  • Building a Media Center PC with Comcast Cable...?

    - by Rob
    Alright, so this might be a stupid question, but I've never been all that much into TV. I currently have Comcast cable. I've just got the 'basic' 2-60 package or whatever; I've always just plugged the cable into the back of my TV and have never had a cable box. Recently, Comcast has been pulling channels off of my line-up. Most recently, they stole the TV Guide channel from me. I'm told this is part of a push to get customers to switch to their digital line-up. But I'm also told it requires some sort of digital receiver for each TV you've got. I don't want to buy a bunch of these digital receivers, and I don't want to pay the monthly rental fee... but I have heard how awesome media center PCs are and about some really cool things they can do. And I've got loads of PC parts sitting around. So, can someone guide me through this a bit? Are there computer video cards or TV tuners that are going to work with Comcast's digital cable? What kind of price range are we looking at?

    Read the article

  • SVNParentPath directory authorization

    - by James
    The question is a bit stupid, but I can't get it sorted. I have a server with SVN that uses the SVNPath directive in httpd.conf, and all works fine with path authorizations. Now I'm installing a second server where I'm going to use the SVNParentPath directive, and I've got it all running except that I can't get the authorization part quite right. From what I understand it's the same as when you use SVNPath, but you need to specify the repo name before the folder name. My SVNParentPath is /srv/svn/, and I created a directory /srv/svn/testproj and then ran svnadmin create /srv/svn/testproj. Now I'm configuring my authorization file:

        [/]
        * =
        svnadmin = rw
        adusgi = rw

        [testproj:/svn/testproj]
        demada = rw
        degari = rw
        scarja = rw

    Now if I try to commit to /svn/testproj using user svnadmin or adusgi, all is fine. If I try for example demada, it doesn't work (I've run the htpasswd2 commands for the user, obviously). The directory is correct, or at least that's how I use the directory with the SVNPath server that's already running. The part I think I'm getting wrong is the repo name; I just used the directory name, but what am I really supposed to put there? Thank you, James
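
    With SVNParentPath, the part before the colon in a section header is the repository's directory name under /srv/svn, and the part after the colon is a path inside that repository, so the stanza above would normally be written [testproj:/] rather than using the URL path. A sketch of the corrected section (the authz file location is whatever your AuthzSVNAccessFile points at; /srv/svn/authz below is just a placeholder):

        printf '%s\n' \
          '[testproj:/]' \
          'demada = rw' \
          'degari = rw' \
          'scarja = rw' >> /srv/svn/authz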

    Read the article

  • Deleted vmware ESXi snapshot file - any way to recover?

    - by Mark Allison
    Hi there, I wanted to make some changes to a file server VM today on ESXi 4. The machine is a Debian Lenny guest with two virtual disks: one is 8GB and the other is 500GB (data). In order to protect the machine from unwanted changes, I made a snapshot of it. I went ahead and made my changes, and it didn't work out well. So I powered off the VM and went into Snapshot Manager to revert to the snapshot. However, by mistake I reverted to an older snapshot, not the one I had just made. I then (idiotically) deleted the snapshot I had just made in Snapshot Manager. This has resulted in me losing about a year's worth of data. Is there any way to recover this deleted snapshot file? I'm using VMware ESXi 4. When I browse the VMware repository I can see various vmdk files; is it possible the data I need is still there? What should I look for? Thanks, Mark.
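
    Recovery chances are unfortunately slim, but before anything else writes to that datastore it is worth taking a read-only inventory of what is still in the VM's folder: any leftover delta files (name-000001.vmdk and the like) are snapshot data, and the small descriptor .vmdk files are plain text and show how the disks chain together. Datastore and VM names below are placeholders:

        # from the ESXi tech support shell (or browse the folder in the datastore browser)
        ls -lh /vmfs/volumes/datastore1/fileserver/
        # descriptor files are tiny and human-readable; parentFileNameHint shows the chain
        cat /vmfs/volumes/datastore1/fileserver/fileserver-000001.vmdk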

    Read the article

  • Google-Bot fell in love with my 404-page

    - by 32bitfloat
    Every day my access log looks something like this:

        66.249.78.140 - - [21/Oct/2013:14:37:00 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.78.140 - - [21/Oct/2013:14:37:01 +0200] "GET /vuqffxiyupdh.html HTTP/1.1" 404 1189 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    or this:

        66.249.78.140 - - [20/Oct/2013:09:25:29 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.75.62 - - [20/Oct/2013:09:25:30 +0200] "GET /robots.txt HTTP/1.1" 200 112 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        66.249.78.140 - - [20/Oct/2013:09:25:30 +0200] "GET /zjtrtxnsh.html HTTP/1.1" 404 1186 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

    The bot fetches robots.txt twice and then tries to access a file (zjtrtxnsh.html, vuqffxiyupdh.html, ...) which cannot exist and must return a 404 error. The same procedure happens every day; only the non-existent HTML filename changes. The content of my robots.txt:

        User-agent: *
        Disallow: /backend
        Sitemap: http://mysitesname.de/sitemap.xml

    The sitemap.xml is readable and valid, so there seems to be no reason why the bot should want to force a 404 error. How should I interpret this behaviour? Does it point to a mistake I've made, or should I ignore it?
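
    These random-filename requests are generally Googlebot deliberately probing a URL that should not exist, to check that the site returns a real 404 rather than a "soft 404" (a 200 with an error page), so by itself it is nothing to fix. The one thing worth verifying is that the status codes really are what the log says:

        curl -s -o /dev/null -w '%{http_code}\n' http://mysitesname.de/zjtrtxnsh.html   # should print 404
        curl -s -o /dev/null -w '%{http_code}\n' http://mysitesname.de/sitemap.xml      # should print 200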

    Read the article

  • Is it possible to install all packages from an APT repository?

    - by Kristoffer Hagen
    Is it possible to install all packages from an APT repository? I know it is possible to do manually, but then you would need to know all the package names, and I don't. Any suggestions? Thanks. Update: Well, you guys are going to kill me for this, but the reason for my madness is that I want to install all the packages from BackTrack into my Ubuntu installation. I really don't like the idea of having it in a VM, and having a separate partition for it is even more out of the question. I know that the folks at BackTrack don't like it when people leech their repositories, but that's what you get for releasing open source software. Stupid? Maybe. A valid reason? Probably not. Do I still want it? Yes. Another edit: I have now given up on this, as it seems impossible to get it to work even by manually installing packages.
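
    For anyone attempting the same thing anyway: once a repository is in sources.list and `apt-get update` has run, its package names sit in the corresponding Packages index under /var/lib/apt/lists, so all of them can be fed to apt-get in one go. A rough sketch (the index filename is a placeholder for whichever file belongs to that repo):

        awk '/^Package: /{print $2}' /var/lib/apt/lists/repo.example.org_dists_main_binary-i386_Packages \
          | sort -u \
          | xargs sudo apt-get install -y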

    Read the article

  • Error in Apache: /var/run/apache2 not found

    - by Julen
    This is more of a self-answered question, but since it drove me crazy I would like to share it with the community, and maybe someone can tell me why it happened or what caused it. The thing is, I wanted to install on my Ubuntu 10.4 machine a CGI app, one of the samples that come with the gSOAP toolkit. My intention was to access it from an ASP.NET machine. Regular Ubuntu does not come with Apache, so I installed it from Synaptic. Pretty easy. I followed this "How to Install Apache2 webserver with PHP, CGI and Perl Support in Ubuntu Server" guide. Instead of apache.conf I tweaked httpd.conf, since a colleague here used that file to get his Apache running. I was able to access his CGI from my ASP.NET client, but mysteriously I could not access mine; I was always getting "The request failed with HTTP status 503: Service Temporarily Unavailable". Checking Apache's error.log I found these messages:

        No such file or directory: unable to connect to cgi daemon after multiple tries: /home/julen/htdocs/cgi-bin/calcserver

    and, whenever I restarted Apache, this other message:

        No such file or directory: Couldn't bind unix domain socket /var/run/apache2/cgisock. cgid daemon failed to initialize

    I am pretty new to Ubuntu and I didn't want to think that Apache and Synaptic had made a mistake in the installation process, but it is true that /var/run/apache2 was missing, whereas on my colleague's computer it was not. I tried to find an "elegant" solution, but only found a post from 2006 with a slight reference to it. Finally I decided to create the folder myself (as root), and then everything worked fine. Hope this helps others if they encounter a similar problem. Still, I wonder why the folder was not created in the first place. Best, Julen.
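
    For anyone hitting the same error, the manual fix is exactly what is described above; in command form it is just a matter of recreating the directory mod_cgid expects for its socket and restarting Apache (or, alternatively, pointing the ScriptSock directive at a directory that does exist):

        sudo mkdir -p /var/run/apache2
        sudo /etc/init.d/apache2 restart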

    Read the article

  • How can I download a copy of an S3 public data set?

    - by tripleee
    I was naively assuming I could do something like

        s3cmd sync s3://snap-d203feb5 /var/tmp/copy

    but I seem to have the wrong idea of how to go about this. I cannot even get a simple thing to work:

        vnix$ s3cmd ls s3://snap-d203feb5
        Bucket 'snap-d203feb5':
        ERROR: Bucket 'snap-d203feb5' does not exist

    I guess the identifier I have is not for a "bucket" but for a "public data set". How do I go from one to the other? Do I have to start up an EC2 instance and create a bucket for this? How? The instructions at http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/using-public-data-sets.html seem to assume I want to use the data in an EC2 instance, but in this case I'd just like to browse a bit, at least for a start. By the by, copy/pasting the "US Snapshot ID" causes a nasty traceback from Python; they publish the ID with a weird Unicode (I presume) dash which cannot directly be copy/pasted. Am I making a mistake when I copy it? And what's the significance of "US" in there? Can't I use the data outside North America?
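
    The snap-... identifier is indeed an EBS snapshot, not an S3 bucket, so s3cmd will never see it; the way to browse the data is to create a volume from the snapshot, attach it to an EC2 instance in the same availability zone, and mount it. With the current AWS CLI the steps look roughly like this (volume/instance IDs, zone and device names are placeholders):

        aws ec2 create-volume --snapshot-id snap-d203feb5 --availability-zone us-east-1a
        aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
            --instance-id i-0123456789abcdef0 --device /dev/sdf
        # then, on the instance itself:
        sudo mkdir -p /mnt/dataset && sudo mount /dev/xvdf /mnt/dataset
        ls /mnt/dataset | head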

    Read the article

  • Reasonable automatic HTML to PDF conversion (in UNIX/Linux environment)

    - by Alex Balashov
    Is there a way to generate PDF documents from HTML files automatically in Linux where the PDF offers some kind of reasonable level of resemblance to the input file? A command-line tool - as opposed to an interactive GUI of some kind - is key. I have tried htmldoc and some related cousins, of course. But these tools are hopelessly stone-age; htmldoc doesn't support CSS at all. You won't find a lot of HTML documents these days that don't have at least some CSS styling. I don't really care about stupid effects or minor embellishments, but the issue is that CSS is at the core of most layouts these days; not many folks are using 6 layers of nested tables anymore. So, if the conversion tool has no grasp of CSS whatsoever, it's not just a matter of "the document doesn't look quite right"; it is likely to not meet the minimum standard of usability at all. It has been suggested to me by some folks to try to use the Gecko rendering engine to generate images that can be converted to PDFs, but I have no idea how one would go about doing this, let alone easily. I have no trouble believing that there are good commercial tools that do this, but I'm really looking for an open-source package if possible, as the endeavour itself is an open-source one and doesn't pay. Thanks in advance!
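
    The open-source tool that fits this description most closely is wkhtmltopdf, which drives the WebKit rendering engine from the command line, so CSS support is essentially what a real browser provides. Basic usage is a one-liner; the second line shows a couple of the layout options (file names are placeholders):

        wkhtmltopdf input.html output.pdf
        wkhtmltopdf --page-size A4 --margin-top 15mm http://example.com/report.html report.pdf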

    Read the article

  • gentoo zlib removed = portage corrupted

    - by Shamanu4
    Hello, I have not been using Gentoo for long and I have made a mistake: I removed the zlib package from the system. Now my portage system is corrupted:

        # emerge --sync
        Traceback (most recent call last):
          File "/usr/bin/emerge", line 36, in <module>
            from _emerge.main import emerge_main
          File "/usr/lib64/portage/pym/_emerge/main.py", line 41, in <module>
            from _emerge.actions import action_config, action_sync, action_metadata, \
          File "/usr/lib64/portage/pym/_emerge/actions.py", line 44, in <module>
            from _emerge.depgraph import backtrack_depgraph, depgraph, resume_depgraph
          File "/usr/lib64/portage/pym/_emerge/depgraph.py", line 40, in <module>
            from _emerge.FakeVartree import FakeVartree
          File "/usr/lib64/portage/pym/_emerge/FakeVartree.py", line 11, in <module>
            from portage.dbapi.vartree import vartree
          File "/usr/lib64/portage/pym/portage/dbapi/vartree.py", line 56, in <module>
            import re, shutil, stat, errno, copy, subprocess
          File "/usr/lib64/python2.6/subprocess.py", line 430, in <module>
            import pickle
          File "/usr/lib64/python2.6/pickle.py", line 1258, in <module>
            import binascii as _binascii
        ImportError: libz.so.1: cannot open shared object file: No such file or directory

    How can I reinstall the zlib package and repair the system?
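
    Since emerge itself cannot run without libz.so.1, the usual way out is to put a zlib build back on the system outside of portage first, and then immediately re-emerge it so the package manager owns the files again. A hedged sketch only; the version, download URL and library directory are assumptions that need adjusting for your profile/arch:

        cd /tmp
        wget http://zlib.net/fossils/zlib-1.2.5.tar.gz
        tar xzf zlib-1.2.5.tar.gz && cd zlib-1.2.5
        ./configure --prefix=/usr --libdir=/usr/lib64    # lib64 on amd64; plain /usr/lib on x86
        make && make install
        ldconfig
        emerge --oneshot sys-libs/zlib                   # let portage take ownership of the files again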

    Read the article

  • Windows 8.1 - Won't boot (0xc000000f), bcdedit fails

    - by user3014097
    I’m pretty much stuck at this point. So, backstory: Windows was installed on one of the SSDs currently in my tower. I bought a new SSD to install Windows (8.1 64 bit) on. Windows installation went fine, booted up, and formatted the old SSD from within Windows (this seems to have been a mistake, but I didn’t realize that at the time). Despite formatting the old SSD, whenever I tried to boot I was told that there were 2 Windows installations. Apparently, when I formatted the old drive, not all of the partitions were removed. So, I booted up with the repair utility, went into cmd, and deleted the non-primary partitions on the old SSD (there were 2 – think they were system and recovery, although I’m forgetting now). Reboot – computer won’t boot. Getting the 0xc000000f “The boot selection failed because a required device is inaccessible” error. Troubleshooting so far: Automatic repair doesn’t fix anything (I’ve never had luck with it though) If I go to install a new version of Windows, the drives and partitions are all there. The SSD is functioning, I at least know that. I’ve essentially gone through this guide: https://neosmart.net/wiki/recovering-windows-bootloader/ Unfortunately, I’m not getting anywhere. I’m not even entirely sure how to describe the errors I’m getting, so I’ve just included pictures of every step (I can't actually post them though so I just included a photobucket link). http://s319.photobucket.com/user/DGalt11/library/Computer%20Issue Am I completely screwed here (i.e. reformat and reinstall?)? Thanks

    Read the article
