Search Results

Search found 499 results on 20 pages for 'robots'.

Page 8/20 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Robots &amp; Pencils Bring iOS Dev Camp/Dev School to Winnipeg

    - by D'Arcy Lussier
    My buddy Paul Thorsteinson from Robots and Pencils has come up with an elaborate way to collect his Mac power adaptor that I keep forgetting to mail to him – he’s coming to town with Jonathan Rasmusson to run an iPhone Dev Camp and two-day Dev School here in Winnipeg! From the email he sent me: We are going to be bringing our successful iOS dev school out to the 'Peg in October as well has hosting a dev camp on the Friday night (comparable to a .net user group type deal).  If you know any peeps in Manitoba who are interested in these, please pass along!  .Net developers are welcome to come and heckle as well ;) Winnipeg iPhone Dev Camp October 26th Marlborough Hotel, 5:30pm Cost: $10 http://ios-dev-camp-winnipeg-eorg.eventbrite.com/ ^for devs of any level interested in meeting other devs hearing talks of all levels.  Food and networking Winnipeg iPhone Dev School October 27th, 28th, Marlborough Hotel Cost: $899 + GST http://academy.robotsandpencils.com/training ^For devs looking to get their feet wet in iOS dev Paul has spoken at Prairie Dev Con before and is vastly knowledgeable in mobile development. You can see his work in Spy vs Spy, Catch the Princess, World Explorer for Minecraft, Deco Windshield (yes they run their entire business on their iPad), Anthm, Own This World and too many other apps. If you’re into iOS development, looking to get in, or wanting to improve your skills, consider these great professional development opportunities! D

    Read the article

  • How to restrict access to all .txt files in Apache except robots.txt?

    - by user3162764
    I am configuring apache2 on Debian and would like to allow only robots.txt to be accessed by search engines, while all other .txt files are restricted. I tried adding the following to .htaccess, but no luck: <Files robots.txt> Order Allow,Deny Allow from All </Files> <Files *.txt> Order Deny,Allow Deny from All </Files> Can anyone help or give me some hints? I am a newcomer to Apache, thanks a lot.
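
    For what it's worth, a sketch of one arrangement that should behave as intended with Apache 2.2-style access control (the directives already used above): <Files> and <FilesMatch> sections are merged in the order they appear, with later sections overriding earlier ones, so the blanket .txt denial has to come before the robots.txt exception rather than after it. On Apache 2.4 the equivalent would use Require all denied / Require all granted.

      # Deny every .txt file first...
      <FilesMatch "\.txt$">
          Order Allow,Deny
          Deny from all
      </FilesMatch>
      # ...then re-allow robots.txt; being later, this section wins for that file.
      <Files "robots.txt">
          Order Deny,Allow
          Allow from all
      </Files>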

    Read the article

  • Are programmers a bunch of heartless robots who lack empathy? [closed]

    - by Graviton
    OK, the provocative title got your attention. My experience as a programmer, and in dealing with my fellow programmers, is that a programmer is usually someone so consumed by his programming work, so absorbed in his algorithmic constructions, that he has little passion or time left for anything else, including empathy for other people and love and care for the people he loves or should love (such as his spouse, parents, kids, colleagues, etc.). The better a person is in terms of programming powers, the more deficient he is in terms of love and care, because both honing programming skills and caring for the people around you take time, and one has only so much time to allocate among so many different things. Also, programming (especially an INTERESTING programming job, like writing an AI to predict future search trends) is a highly consuming job; it doesn't just consume you from 9 to 5, it also consumes you after 5 and practically every second of your waking hours, because a good programmer can't just magically switch off his thinking hat after the office lights go off (if you can, then I don't really think you are a passionate programmer, and the prerequisite of a good programmer is passion). So a good programmer is necessarily someone who can't love as much as others do, because the very nature of the programming job prevents him from loving others as much as he wants to. Do you concur with my observation/reasoning?

    Read the article

  • Captcha in my Joomla site is not blocking spam robots

    - by jax
    In my Joomla install I have removed email registration and instead added a captcha field to the PHP code using the recaptcha.net method. For some reason I am still getting what I think are spam users (robots), but I don't know how they would get around the captcha field. Anything I should check?
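
    One thing worth checking is whether the reCAPTCHA answer is actually verified on the server after the form is posted; if only the widget is rendered, bots can submit the form regardless. A sketch using the classic recaptchalib.php API from that era; the key placeholder and where this sits in your Joomla override are assumptions:

      <?php
      // Server-side verification of the reCAPTCHA response (classic API).
      require_once('recaptchalib.php');
      $privatekey = 'YOUR_PRIVATE_KEY'; // assumption: replace with your real key
      $resp = recaptcha_check_answer($privatekey,
                                     $_SERVER['REMOTE_ADDR'],
                                     $_POST['recaptcha_challenge_field'],
                                     $_POST['recaptcha_response_field']);
      if (!$resp->is_valid) {
          // Reject the registration instead of continuing.
          die('Captcha verification failed: ' . $resp->error);
      }
      // ...only now create the user account

    It is also worth confirming that the check is enforced on the registration handler itself, not just on the page that shows the form, since bots often post directly to the handler URL.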

    Read the article

  • Why are C, C++, and LISP so prevalent in embedded devices and robots?

    - by David
    It seems that the software language skills most sought for embedded devices and robots are C, C++, and LISP. Why haven't more recent languages made inroads into these applications? For example, Erlang would seem particularly well-suited to robotic applications, since it makes concurrent programming easier and allows hot swapping of code. Python would seem to be useful, if for no other reason than its support of multiple programming paradigms. I'm even surprised that Java hasn't made a foray into general robotic programming. I'm sure one argument would be, "Some newer languages are interpreted, not compiled" - implying that compiled languages are quicker and use fewer computational resources. Is this still the case, in a time when we can put a Java Virtual Machine on a cell phone or a SunSpot? (and isn't LISP interpreted anyway?)

    Read the article

  • How to prevent robots from automatically filling up a form?

    - by sombe
    I'm trying to come up with a good enough anti-spam mechanism to prevent automatically generated input. I've read that techniques like captchas and 1+1=? questions work well, but they also present an extra step that impedes the free, quick use of the application (I'm not looking for anything like that, please). I've tried setting some hidden fields in all of my forms with display: none; however, I'm almost certain a robot (which is essentially a program) can be configured to detect that form field id and simply not fill it. Do you implement or know of a good method against automatic form-filling robots? Is there something that can be done seamlessly with HTML and/or server-side processing that is (almost) foolproof? (And please no JS; one could simply disable it, and there goes my anti-spam method.) By the way, I'm trying not to rely on sessions for this (like counting how many times a button is clicked to prevent overloads).
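
    One seamless, JS-free option that fits these constraints is a honeypot field: an extra input that humans never see (hidden via a stylesheet class rather than an inline display:none, which some bots look for) and that the server rejects if it comes back non-empty. A rough sketch; the field name, class name and action URL below are made up:

      <style>
        /* in production this rule lives in an external stylesheet */
        .extra-contact { position: absolute; left: -9999px; }
      </style>
      <form method="post" action="/submit">
        <!-- honeypot: humans never see it, naive bots fill it in -->
        <label class="extra-contact" for="website">Leave this field empty</label>
        <input class="extra-contact" type="text" name="website" id="website"
               tabindex="-1" autocomplete="off">
        <!-- real fields -->
        <input type="text" name="email">
        <input type="submit" value="Send">
      </form>

    On the server side, discard any submission where the honeypot field is non-empty; pairing it with a minimum-time-to-submit check (a timestamp baked into a hidden field and verified on submission, which avoids sessions) catches bots that strip unknown fields.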

    Read the article

  • mod_rewrite settings causes server to throw HTTP 500 errors instead of 404

    - by FractalizeR
    Hello. I have a server with VBulletin forum (working under Apache 2.2, CentOS). The default settings for it in .htaccess are as follows: RewriteEngine on RewriteCond %{HTTP_HOST} ^gsmforum\.ru RewriteRule (.*) http://www.gsmforum.ru/$1 [R=301,L] # If you are having problems or are using VirtualDocumentRoot, uncomment this line and set it to your vBulletin directory. RewriteBase / RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] # Forum RewriteRule ^threads/.* showthread.php [QSA] RewriteRule ^forums/.* forumdisplay.php [QSA] RewriteRule ^members/.* member.php [QSA] RewriteRule ^blogs/.* blog.php [QSA] ReWriteRule ^entries/.* entry.php [QSA] RewriteCond %{REQUEST_FILENAME} -s [OR] RewriteCond %{REQUEST_FILENAME} -l [OR] RewriteCond %{REQUEST_FILENAME} -d RewriteRule ^.*$ - [NC,L] # MVC RewriteRule ^(?:(.*?)(?:/|$))(.*|$)$ $1.php?r=$2 [QSA] If I try to access any non-existent URL on forum like www.example.com/ajdsjaskasajs, server throws HTTP 500 error. Apache log says: [Sun Apr 25 17:24:32 2010] [error] [client 82.211.152.12] Request exceeded the limit of 10 internal redirects due to probable configuration error. Use 'LimitInternalRecursion' to increase the limit if necessary. Use 'LogLevel debug' to get a backtrace., referer: http://www.gsmforum.ru/forumdisplay.php?424-%CD%EE%E2%EE%F1%F2%E8-%EF%F0%EE%E3%F0%E0%EC%EC%E0%F2%EE%F0%EE%E2 If I switch LogLevel to Debug I get something like this: [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php.php.php.php.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php.php.php.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php.php.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt.php [Sun Apr 25 17:30:46 2010] [debug] core.c(3059): [client 95.25.70.85] redirected from r->uri = /robots.txt [root@server2 logs]# tail httpd_error.log [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php.php.php.php.php.php.php, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php.php.php.php.php.php, referer: 
http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php.php.php.php.php, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php.php.php.php, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php.php.php, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php.php, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript.php, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru [Sun Apr 25 17:31:27 2010] [debug] core.c(3059): [client 217.118.79.27] redirected from r->uri = /clientscript/vbulletin_css/style-d95b06dc-00001.css, referer: http://74.125.77.132/search?q=cache:bGPJ8XkSvlMJ:www.gsmforum.ru/showthread.php%3Ft%3D62479+%D0%A3%D0%BC%D0%B5%D0%BD%D1%8C%D1%88%D0%B5%D0%BD%D0%B8%D0%B5+%D0%BF%D0%B8%D0%BD%D0%B3%D0%B0+3G+%D0%BC%D0%BE%D0%B4%D0%B5%D0%BC&cd=3&hl=ru&ct=clnk&gl=ru If I remove or comment the last (#MVC) line from .htaccess all is fine. Can you advise me what is the problem with mod_rewrite settings? Why does the last line cause infinite recursion?
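
    The loop comes from the last rule feeding on its own output: /robots.txt is rewritten to robots.txt.php, that file does not exist, so the per-directory ruleset runs again and appends .php once more, until Apache hits its internal redirect limit and answers 500. One way to break it (a sketch, assuming the controller scripts live in the document root as the other rules suggest) is to rewrite only when the target script actually exists, which also restores a plain 404 for junk URLs:

      # MVC: only rewrite when a matching controller script exists.
      # $1 is usable in the condition because RewriteCond is evaluated
      # after the rule's pattern has matched.
      RewriteCond %{DOCUMENT_ROOT}/$1.php -f
      RewriteRule ^(?:(.*?)(?:/|$))(.*|$)$ $1.php?r=$2 [QSA,L]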

    Read the article

  • Does the Instant Preview in Google Webmaster Tools take robots.txt into account?

    - by rockyraw
    Is that the way to go if I want to visually see what Googlebot sees? I'm trying to check a folder which I have just blocked in my robots.txt. If I fetch the folder as Googlebot, it fetches OK, so that doesn't tell me anything about whether the block is working. I know there's a tool to check for blocking, but it is dependent on the robots.txt content you give it as input. Therefore I've tried the Instant Preview, and I don't get a preview of what the bot sees ("pre-render"), so I think that means the robots.txt blocks it; however, I don't see that the bot tried beforehand to access my updated robots.txt, so I'm not sure how it knows that this folder is blocked? (It does preview another new folder that is not blocked.)

    Read the article

  • Evidence for automatic browsing - Log file analysis

    - by Nilani Algiriyage
    I'm analyzing web server logs in both Apache and IIS log formats. I want to find evidence of automatic browsing, like web robots, spiders, bots, etc. I used the Python robot-detection 0.2.8 package for detecting robots in my log files, but I know there may be other robots (automated programs) that have traversed the web site which robot-detection cannot identify. So I want to ask: are there any specific clues that can be found in log files that human users do not leave but automated software would? Do they follow a specific navigation pattern? I saw some requests for favicon.ico - does this indicate automated browsing? I found this article and this question with some valuable points.
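
    Beyond matching known user-agent strings, a few behavioural clues tend to separate automated clients from people: touching /robots.txt at all, never requesting CSS/JS/image assets, sending no referer, and very regular request timing. (A favicon.ico request is usually the browser itself, so on its own it is weak evidence either way.) A rough sketch of scoring some of these signals from an Apache combined-format log; the thresholds are arbitrary assumptions to tune against your own traffic:

      import re
      from collections import defaultdict

      # Minimal parser for Apache combined-format log lines.
      LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<url>\S+)[^"]*" '
                        r'(?P<status>\d{3}) \S+ "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"')
      ASSET = re.compile(r'\.(css|js|png|jpe?g|gif|ico)(\?|$)', re.I)

      def score_clients(path):
          stats = defaultdict(lambda: {'requests': 0, 'assets': 0, 'robots_txt': 0,
                                       'no_referer': 0, 'bot_ua': 0})
          with open(path) as fh:
              for line in fh:
                  m = LINE.match(line)
                  if not m:
                      continue
                  s = stats[m.group('ip')]
                  s['requests'] += 1
                  if ASSET.search(m.group('url')):
                      s['assets'] += 1
                  if m.group('url').startswith('/robots.txt'):
                      s['robots_txt'] += 1
                  if m.group('referer') in ('', '-'):
                      s['no_referer'] += 1
                  if re.search(r'bot|crawl|spider|slurp', m.group('agent'), re.I):
                      s['bot_ua'] += 1
          suspects = []
          for ip, s in stats.items():
              # Heuristic: self-declared bot UA, touched robots.txt, or many page
              # hits with no static assets and no referers at all.
              if s['bot_ua'] or s['robots_txt'] or (s['requests'] >= 20 and
                      s['assets'] == 0 and s['no_referer'] == s['requests']):
                  suspects.append((ip, s))
          return suspects

      if __name__ == '__main__':
          for ip, s in score_clients('access.log'):
              print(ip, s)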

    Read the article

  • Disable outbound links without letting others know

    - by tadoman
    Is there a way I can tell Google not to follow external links (pointing to other sites) without letting others know? I know you can disable outbound links by putting rel=nofollow on them or something in robots.txt, but that's something others can see as well. I'm just wondering if there's a way to tell Google not to follow those links without letting others know... like a setting in Webmaster Tools or something similar. (There's definitely one way: I could set an exception in my server's conf file to check whether the user agent is "googlebot" and then serve a different version of robots.txt, so that when a different user checked that link it would return a different robots.txt than the one served to Googlebot. However, I'm not too sure Google would be too happy about this.) Thank you

    Read the article

  • Blocking 'good' bots in nginx with multiple conditions for certain off-limits URLs where humans can go

    - by Glenn Plas
    After 2 days of searching/trying/failing I decided to post this here, I haven't found any example of someone doing the same nor what I tried seems to be working OK. I'm trying to send a 403 to bots not respecting the robots.txt file (even after downloading it several times). Specifically Googlebot. It will support the following robots.txt definition. User-agent: * Disallow: /*/*/page/ The intent is to allow Google to browse whatever they can find on the site but return a 403 for the following type of request. Googlebot seems to keep on nesting these links eternally adding paging block after block: my_domain.com:80 - 66.x.67.x - - [25/Apr/2012:11:13:54 +0200] "GET /2011/06/ page/3/?/page/2//page/3//page/2//page/3//page/2//page/2//page/4//page/4//pag e/1/&wpmp_switcher=desktop HTTP/1.1" 403 135 "-" "Mozilla/5.0 (compatible; G ooglebot/2.1; +http://www.google.com/bot.html)" It's a wordpress site btw. I don't want those pages to show up, even though after the robots.txt info got through, they stopped for a while only to begin crawling again later. It just never stops .... I do want real people to see this. As you can see, google get a 403 but when I try this myself in a browser I get a 404 back. I want browsers to pass. root@my_domain:# nginx -V nginx version: nginx/1.2.0 I tried different approaches, using a map and plain old nono if's and they both act the same: (under http section) map $http_user_agent $is_bot { default 0; ~crawl|Googlebot|Slurp|spider|bingbot|tracker|click|parser|spider 1; } (under the server section) location ~ /(\d+)/(\d+)/page/ { if ($is_bot) { return 403; # Please respect the robots.txt file ! } } I recently had to polish up my Apache skills for a client where I did about the same thing like this : # Block real Engines , not respecting robots.txt but allowing correct calls to pass # Google RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Googlebot/2\.[01];\ \+http://www\.google\.com/bot\.html\)$ [NC,OR] # Bing RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ bingbot/2\.[01];\ \+http://www\.bing\.com/bingbot\.htm\)$ [NC,OR] # msnbot RewriteCond %{HTTP_USER_AGENT} ^msnbot-media/1\.[01]\ \(\+http://search\.msn\.com/msnbot\.htm\)$ [NC,OR] # Slurp RewriteCond %{HTTP_USER_AGENT} ^Mozilla/5\.0\ \(compatible;\ Yahoo!\ Slurp;\ http://help\.yahoo\.com/help/us/ysearch/slurp\)$ [NC] # block all page searches, the rest may pass RewriteCond %{REQUEST_URI} ^(/[0-9]{4}/[0-9]{2}/page/) [OR] # or with the wpmp_switcher=mobile parameter set RewriteCond %{QUERY_STRING} wpmp_switcher=mobile # ISSUE 403 / SERVE ERRORDOCUMENT RewriteRule .* - [F,L] # End if match This does a bit more than I asked nginx to do but it's about the same principle, I'm having a hard time figuring this out for nginx. So my question would be, why would nginx serve my browser a 404 ? Why isn't it passing, The regex isn't matching for my UA: "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.30 Safari/536.5" There are tons of example to block based on UA alone, and that's easy. It also looks like the matchin location is final, e.g. it's not 'falling' through for regular user, I'm pretty certain that this has some correlation with the 404 I get in the browser. As a cherry on top of things, I also want google to disregard the parameter wpmp_switcher=mobile , wpmp_switcher=desktop is fine but I just don't want the same content being crawled multiple times. 
Even though I ended up adding wpmp_switcher=mobile via the google webmaster tools pages (requiring me to sign up ....). that also stopped for a while but today they are back spidering the mobile sections. So in short, I need to find a way for nginx to enforce the robots.txt definitions. Can someone shell out a few minutes of their lives and push me in the right direction please ? I really appreciate ANY response that makes me think harder ;-)
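
    A likely reason for the 404 in a normal browser: the regex location wins for /2011/06/page/3/, and when $is_bot is 0 nothing inside that block tells nginx what to serve; there is no physical file at that path on a WordPress install, and the usual front-controller fallback in location / is never reached because the regex location has taken over. A sketch that keeps the 403 for bots and hands everyone else back to index.php (the try_files line mirrors a standard WordPress setup and is an assumption about yours):

      # reuse the existing "map $http_user_agent $is_bot { ... }" in the http block
      location ~ ^/\d+/\d+/page/ {
          if ($is_bot) {
              return 403;   # bots ignoring the robots.txt rule
          }
          # humans fall through to the normal WordPress front controller,
          # which this regex location would otherwise bypass
          try_files $uri $uri/ /index.php?$args;
      }

    The wpmp_switcher=mobile case can be folded into the same block by also testing $arg_wpmp_switcher before the return.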

    Read the article

  • Why isn't my website ranked by Alexa? [closed]

    - by arshen
    I created a website with WordPress and posted 10+ articles over a period of two months, but Alexa doesn't rank my website. I tried changing my theme, URL and other related things, and submitted my website URL manually to the Alexa dashboard. I get around 200 page views a day, but it's still not ranked. My website URL: http://daskaht.ir robots file: http://daskhat.ir/robots.txt Alexa page: www.alexa.com/siteinfo/daskhat.ir domain whois: whois.domaintools.com/daskhat.ir and website SEO rank: www.woorank.com/en/www/daskhat.ir

    Read the article

  • Getting a lot of slash-underscore ("/_") errors from Webmaster Tools

    - by Vermino
    I'm using a WordPress site and I thought I had gotten all the kinks out of it. For some reason Webmaster Tools is crawling my website and showing a lot of 404 errors from "/_"-style additional pages that I've never created. I just can't figure out what is exposing these to Google's crawlers and then displaying a 404. My robots.txt: http://www.redcherryshrimp.net/robots.txt My sitemap, created by the Yoast plugin: http://www.redcherryshrimp.net/sitemap_index.xml I have the Yoast (creates the sitemap) and Jetpack plugins installed.

    Read the article

  • Is it possible to tell a search engine not to index a specific section of an HTML page? [closed]

    - by Justin
    Possible Duplicate: Preventing robots from crawling specific part of a page I know you can use robots.txt to ignore entire pages or sections of your site, but is there a way to tell crawlers like Googlebot to ignore specific sections of an HTML page? I found this blog post that discusses one method, but it appears to work only for the Google Search Appliance, not Googlebot. Is there some method, at least for Google, to do this?
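
    For Googlebot itself there is no supported markup for skipping part of a page (the googleon/googleoff comments from that post are honoured only by the Search Appliance). The workaround usually suggested is to move the section into its own document that is pulled in via an iframe (or client-side include) and to disallow that document's path in robots.txt. A sketch with made-up paths:

      <!-- in the page: the part that should stay out of the index -->
      <iframe src="/noindex/related-offers.html" width="100%" height="200"
              frameborder="0" title="Related offers"></iframe>

      # in robots.txt: keep crawlers away from the iframed content
      User-agent: *
      Disallow: /noindex/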

    Read the article

  • Basic SEO Strategies - How to Get Your Website at the Top of the Search Results

    Search Engine Optimisation means making changes on your website so that it can be crawled more easily by search engines and found in search listings. By making changes on your website you can increase targeted visitors to your site. Search Engines use robots to crawl websites and the robots are only able to read content that is text. Your website results are displayed based on how relevant the content is in relation to the keyword that is being searched for in the search engine.

    Read the article

  • Apache + Codeigniter + New Server + Unexpected Errors

    - by ngl5000
    Alright here is the situation: I use to have my codeigniter site at bluehost were I did not have root access, I have since moved that site to rackspace. I have not changed any of the PHP code yet there has been some unexpected behavior. Unexpected Behavior: http://mysite.com/robots.txt Both old and new resolve to the robots file http://mysite.com/robots.txt/ The old bluehost setup resolves to my codeigniter 404 error page. The rackspace config resolves to: Not Found The requested URL /robots.txt/ was not found on this server. **This instance leads me to believe that there could be a problem with my mod rewrites or lack there of. The first one produces the error correctly through php while it seems the second senario lets the server handle this error. The next instance of this problem is even more troubling: 'http://mysite.com/search/term/9 x 1-1%2F2 white/' New site results in: Bad Request Your browser sent a request that this server could not understand. Old site results in: The actual page being loaded and the search term being unencoded. I have to assume that this has something to do with the fact that when I went to the new server I went from root level htaccess file to httpd.conf file and virtual server default and default-ssl. Here they are: Default file: <VirtualHost *:80> ServerAdmin webmaster@localhost ServerName mysite.com DocumentRoot /var/www <Directory /> Options +FollowSymLinks AllowOverride None </Directory> <Directory /var/www> Options -Indexes +FollowSymLinks -MultiViews AllowOverride None Order allow,deny allow from all RewriteEngine On RewriteBase / # force no www. (also does the IP thing) RewriteCond %{HTTPS} !=on RewriteCond %{HTTP_HOST} !^mysite\.com [NC] RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L] # index.php remove any index.php parts RewriteCond %{THE_REQUEST} /index\.(php|html) RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L] # codeigniter direct RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico) RewriteRule ^.*$ index.php [L] </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. 
LogLevel warn CustomLog ${APACHE_LOG_DIR}/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Default-ssl File <IfModule mod_ssl.c> <VirtualHost _default_:443> ServerAdmin webmaster@localhost ServerName mysite.com DocumentRoot /var/www <Directory /> Options +FollowSymLinks AllowOverride None </Directory> <Directory /var/www> Options -Indexes +FollowSymLinks -MultiViews AllowOverride None Order allow,deny allow from all RewriteEngine On RewriteBase / RewriteCond %{SERVER_PORT} !^443 RewriteRule ^ https://mysite.com%{REQUEST_URI} [R=301,L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^(.+)\.(\d+)\.(js|css|png|jpg|gif)$ $1.$3 [L] # index.php remove any index.php parts RewriteCond %{THE_REQUEST} /index\.(php|html) RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L] # codeigniter direct RewriteCond $0 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico) RewriteRule ^.*$ index.php [L] </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog ${APACHE_LOG_DIR}/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog ${APACHE_LOG_DIR}/ssl_access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> # SSL Engine Switch: # Enable/Disable SSL for this virtual host. SSLEngine on # Use our self-signed certificate by default SSLCertificateFile /etc/apache2/ssl/certs/www.mysite.com.crt SSLCertificateKeyFile /etc/apache2/ssl/private/www.mysite.com.key # A self-signed (snakeoil) certificate can be created by installing # the ssl-cert package. See # /usr/share/doc/apache2.2-common/README.Debian.gz for more info. # If both key and certificate are stored in the same file, only the # SSLCertificateFile directive is needed. # SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem # SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key # Server Certificate Chain: # Point SSLCertificateChainFile at a file containing the # concatenation of PEM encoded CA certificates which form the # certificate chain for the server certificate. Alternatively # the referenced file can be the same as SSLCertificateFile # when the CA certificates are directly appended to the server # certificate for convinience. #SSLCertificateChainFile /etc/apache2/ssl.crt/server-ca.crt # Certificate Authority (CA): # Set the CA certificate verification path where to find CA # certificates for client authentication or alternatively one # huge file containing all of them (file must be PEM encoded) # Note: Inside SSLCACertificatePath you need hash symlinks # to point to the certificate files. Use the provided # Makefile to update the hash symlinks after changes. #SSLCACertificatePath /etc/ssl/certs/ #SSLCACertificateFile /etc/apache2/ssl.crt/ca-bundle.crt # Certificate Revocation Lists (CRL): # Set the CA revocation path where to find CA CRLs for client # authentication or alternatively one huge file containing all # of them (file must be PEM encoded) # Note: Inside SSLCARevocationPath you need hash symlinks # to point to the certificate files. 
Use the provided # Makefile to update the hash symlinks after changes. #SSLCARevocationPath /etc/apache2/ssl.crl/ #SSLCARevocationFile /etc/apache2/ssl.crl/ca-bundle.crl # Client Authentication (Type): # Client certificate verification type and depth. Types are # none, optional, require and optional_no_ca. Depth is a # number which specifies how deeply to verify the certificate # issuer chain before deciding the certificate is not valid. #SSLVerifyClient require #SSLVerifyDepth 10 # Access Control: # With SSLRequire you can do per-directory access control based # on arbitrary complex boolean expressions containing server # variable checks and other lookup directives. The syntax is a # mixture between C and Perl. See the mod_ssl documentation # for more details. #<Location /> #SSLRequire ( %{SSL_CIPHER} !~ m/^(EXP|NULL)/ \ # and %{SSL_CLIENT_S_DN_O} eq "Snake Oil, Ltd." \ # and %{SSL_CLIENT_S_DN_OU} in {"Staff", "CA", "Dev"} \ # and %{TIME_WDAY} >= 1 and %{TIME_WDAY} <= 5 \ # and %{TIME_HOUR} >= 8 and %{TIME_HOUR} <= 20 ) \ # or %{REMOTE_ADDR} =~ m/^192\.76\.162\.[0-9]+$/ #</Location> # SSL Engine Options: # Set various options for the SSL engine. # o FakeBasicAuth: # Translate the client X.509 into a Basic Authorisation. This means that # the standard Auth/DBMAuth methods can be used for access control. The # user name is the `one line' version of the client's X.509 certificate. # Note that no password is obtained from the user. Every entry in the user # file needs this password: `xxj31ZMTZzkVA'. # o ExportCertData: # This exports two additional environment variables: SSL_CLIENT_CERT and # SSL_SERVER_CERT. These contain the PEM-encoded certificates of the # server (always existing) and the client (only existing when client # authentication is used). This can be used to import the certificates # into CGI scripts. # o StdEnvVars: # This exports the standard SSL/TLS related `SSL_*' environment variables. # Per default this exportation is switched off for performance reasons, # because the extraction step is an expensive operation and is usually # useless for serving static content. So one usually enables the # exportation for CGI and SSI requests only. # o StrictRequire: # This denies access when "SSLRequireSSL" or "SSLRequire" applied even # under a "Satisfy any" situation, i.e. when it applies access is denied # and no other module can change it. # o OptRenegotiate: # This enables optimized SSL connection renegotiation handling when SSL # directives are used in per-directory context. #SSLOptions +FakeBasicAuth +ExportCertData +StrictRequire <FilesMatch "\.(cgi|shtml|phtml|php)$"> SSLOptions +StdEnvVars </FilesMatch> <Directory /usr/lib/cgi-bin> SSLOptions +StdEnvVars </Directory> # SSL Protocol Adjustments: # The safe and default but still SSL/TLS standard compliant shutdown # approach is that mod_ssl sends the close notify alert but doesn't wait for # the close notify alert from client. When you need a different shutdown # approach you can use one of the following variables: # o ssl-unclean-shutdown: # This forces an unclean shutdown when the connection is closed, i.e. no # SSL close notify alert is send or allowed to received. This violates # the SSL/TLS standard but is needed for some brain-dead browsers. Use # this when you receive I/O errors because of the standard approach where # mod_ssl sends the close notify alert. # o ssl-accurate-shutdown: # This forces an accurate shutdown when the connection is closed, i.e. 
a # SSL close notify alert is send and mod_ssl waits for the close notify # alert of the client. This is 100% SSL/TLS standard compliant, but in # practice often causes hanging connections with brain-dead browsers. Use # this only for browsers where you know that their SSL implementation # works correctly. # Notice: Most problems of broken clients are also related to the HTTP # keep-alive facility, so you usually additionally want to disable # keep-alive for those clients, too. Use variable "nokeepalive" for this. # Similarly, one has to force some clients to use HTTP/1.0 to workaround # their broken HTTP/1.1 implementation. Use variables "downgrade-1.0" and # "force-response-1.0" for this. BrowserMatch "MSIE [2-6]" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 # MSIE 7 and newer should be able to use keepalive BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown httpd.conf File Just a lot of stuff from html5 boiler plate, I will post it if need be Old htaccess file <IfModule mod_rewrite.c> # index.php remove any index.php parts RewriteCond %{THE_REQUEST} /index\.(php|html) RewriteRule (.*)index\.(php|html)(.*)$ /$1$3 [r=301,L] RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico) RewriteRule ^(.*)/$ /$1 [r=301,L] # codeigniter direct RewriteCond $1 !^(index\.php|assets|robots\.txt|sitemap\.xml|favicon\.ico) RewriteRule ^(.*)$ /index.php/$1 [L] </IfModule> Any Help would be hugely appreciated!!
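
    Two small additions may be worth trying while comparing the old .htaccess with the new vhost; both are sketches, and exactly where they sit inside your existing <Directory /var/www> and <VirtualHost> blocks is an assumption. The first gives trailing-slash URLs such as /robots.txt/ a clean 301 to the slash-less form instead of leaving them to Apache's own error handling; the second lets requests containing %2F reach CodeIgniter at all, since Apache refuses encoded slashes in the path by default (AllowEncodedSlashes; NoDecode is available on newer 2.2.x and on 2.4):

      # Inside the <Directory /var/www> rewrite section, before the CodeIgniter
      # catch-all: redirect trailing slashes on anything that is not a real directory.
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule ^(.+)/$ /$1 [R=301,L]

      # Inside the <VirtualHost> itself (not valid in <Directory>): allow %2F in
      # the path so searches like "1-1%2F2" are not rejected before PHP sees them.
      AllowEncodedSlashes NoDecode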

    Read the article

  • Inside Amazon’s Warehouses

    - by Jason Fitzpatrick
    If you’re expecting the inside of Amazon’s warehouses to be some sort of rigidly organized robot-filled warehouse of tomorrow, you’ll be quite surprised to find that storage technique they employ is called “chaotic storage”. International Business Times paid a visit to a major Amazon warehouse and took a tour. Rather than finding robots they found: Amazon must rely on barcodes and human hands to find the ordered items and drop them into the proper bins — without robots, Amazon utilizes a system known as “chaotic storage,” where products are essentially shelved at random. By storing items randomly instead of categorically, the warehouse has a much better flow of material. Even without robots or automation, Amazon can compile a “picking list” where each item needs to be taken off the shelf and scanned again before it can be shipped. The real advantage to chaotic storage is that it’s significantly more flexible than conventional storage systems. If there are big changes in a product range, the company doesn’t need to plan for more space, because the products or their sales volumes don’t need to be known or planned in advance if they’re simply being stored at random.

    Read the article

  • Google Analytics and Whos.amung.us in realtime visitors, why such an enormous discrepancy?

    - by jacouh
    Since years I use in a site both Google Analytics and Whos.amung.us, both Google analytics and whos.amung.us javascripts are inserted in the same pages in the tracked part of the site. In real-time visitors, why such an enormous discrepancy ? for example at the moment, Google analytics gives me 9 visitors, whos.amung.us indicates 59, a ratio of 6 times? Why whos.amung.us is 6 times optimistic than Google Analytics in terms of the realtime visitors? Google whos.amung.us My question is: whos.amung.us does not detect robots while Google does? GA ignores visitors from some countries, not whos.amung.us? Some robots/bots execute whos.amung.us javascript for tracking? While no robots/bots can execute the tracking javascript provided by Google Analytics? To facilitate your analysis, I copy JS code used below: Google analytics: <script type="text/javascript"> var _gaq = _gaq || []; _gaq.push(['_setAccount', 'MyGaAccountNo']); _gaq.push(['_trackPageview']); (function() { var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true; ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s); })(); </script> Whos.amung.us: <script>var _wau = _wau || []; _wau.push(["tab", "MyWAUAccountNo", "c6x", "right-upper"]);(function() { var s=document.createElement("script"); s.async=true; s.src="http://widgets.amung.us/tab.js";document.getElementsByTagName("head")[0].appendChild(s);})();</script> I've aleady signaled this to WAU staff some time ago, NR, I've not done this to Google as they don't handle this kind of feedback. Thank you for your explanations.

    Read the article

  • De-index URL parameters by value

    - by Doug Firr
    Upon reading over this question is lengthy so allow me to provide a one sentence summary: I need to get Google to de-index URLs that have parameters with certain values appended I have a website example.com with language translations. There used to be many translations but I deleted them all so that only English (Default) and French options remain. When one selects a language option a parameter is aded to the URL. For example, the home page: https://example.com (default) https://example.com/main?l=fr_FR (French) I added a robots.txt to stop Google from crawling any of the language translations: # robots.txt generated at http://www.mcanerin.com User-agent: * Disallow: Disallow: /cgi-bin/ Disallow: /*?l= So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool. It works. But under html improvements the previously crawled language translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so the Googles knows to de-index it. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (One of the reasons I want to scrub up my URLS) https://example.com/reports/view/884?l=vi_VN&l=hy_AM This URL should not exist - I removed the language translations. The page loads when it should not! I played around. I typed example.com?whatever123 It seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLS with parameters how do I remove them? I cannot check if a 404 is being generated because the page always loads because it's a parameter that needs to be de-indexed.
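
    Worth noting: robots.txt only stops crawling; it does not remove URLs that are already in the index, and while /*?l= is disallowed Google can never revisit those URLs to see a 404 or a noindex. The usual route is to lift the robots.txt block for them and serve a noindex signal instead (or use the URL Parameters settings in GWT). A sketch that sends X-Robots-Tag: noindex for any request carrying an l= parameter, using mod_rewrite to flag the request and mod_headers to act on the flag (both modules assumed to be enabled):

      RewriteEngine On
      # Flag any request whose query string contains an l= parameter.
      RewriteCond %{QUERY_STRING} (^|&)l= [NC]
      RewriteRule .* - [E=LANG_PARAM:1]
      # Tell crawlers not to index the flagged responses.
      Header set X-Robots-Tag "noindex, follow" env=LANG_PARAM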

    Read the article

  • De-index URL parameters

    - by Doug Firr
    Upon reading over this question is lengthy so allow me to provide a one sentence summary: I need to get Google to de-index URLs that have certain parameters appended I have a website example.com with language translations. There used to be many translations but I deleted them all so that only English (Default) and French options remain. When one selects a language option a parameter is aded to the URL. For example, the home page: https://example.com (default) https://example.com/main?l=fr_FR (French) I added a robots.txt to stop Google from crawling any of the language translations: # robots.txt generated at http://www.mcanerin.com User-agent: * Disallow: Disallow: /cgi-bin/ Disallow: /*?l= So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool. It works. But under html improvements the previously crawled language translation URLs remain indexed. The internet says to add a 404 to the header of the removed URLs so the Googles knows to de-index it. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (One of the reasons I want to scrub up my URLS) https://example.com/reports/view/884?l=vi_VN&l=hy_AM This URL should not exist - I removed the language translations. The page loads when it should not! I played around. I typed example.com?whatever123 It seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLS with parameters how do I remove them? I cannot check if a 404 is being generated because the page always loads because it's a parameter that needs to be de-indexed.

    Read the article

  • How to avoid tons of `instanceof` in collision detection?

    - by Prog
    Consider a simple game with 4 kinds of entities: Robots, Dogs, Missiles, Walls. Here's a simple collision-detection mechanism in psuedocode: (I know, O(n^2). Irrelevant for this question). for(Entity entityA in entities){ for(Entity entityB in entities){ if(collision(entityA, entityB)){ if(entityA instanceof Robot && entityB instanceof Dog) entityB.die(); if(entityA instanceof Robot && entityB instanceof Missile){ entityA.die(); entityB.die(); } if(entityA instanceof Missile && entityB instanceof Wall) entityB.die(); // .. and so on } } } Obviously this is very ugly, and will get bigger and harder to maintain the more entities there are, and the more conditions there are. One option to make this better is to have separate lists for each kind of entity. For example a Robots list, a Dogs list etc. And than check for collisions of all Robots with Dogs, and all Dogs with Walls, etc. This is better, but I still don't think it's good. So my question is: The collision detection system spotted a collision. Now what? What is the common way to react to the collision? Should the system notify the entity itself that it collided with something, and have it decide for itself how to react? E.g. entityA.reactToCollision(entityB). Or is there some other solution?
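
    One common way out is to stop asking each object what it is and register a handler per pair of types instead, so a new entity or a new interaction means one new registration rather than another instanceof chain. A compact Java sketch along the lines of the pseudocode above; the registry class and its method names are made up:

      import java.util.HashMap;
      import java.util.Map;
      import java.util.function.BiConsumer;

      interface Entity { void die(); }
      class Robot   implements Entity { public void die() { System.out.println("robot down"); } }
      class Dog     implements Entity { public void die() { System.out.println("dog down"); } }
      class Missile implements Entity { public void die() { System.out.println("missile down"); } }
      class Wall    implements Entity { public void die() { System.out.println("wall down"); } }

      class CollisionRules {
          // Handlers keyed by the ordered pair of concrete entity classes.
          private final Map<String, BiConsumer<Entity, Entity>> handlers = new HashMap<>();

          <A extends Entity, B extends Entity>
          void on(Class<A> a, Class<B> b, BiConsumer<A, B> handler) {
              handlers.put(key(a, b), (x, y) -> handler.accept(a.cast(x), b.cast(y)));
          }

          // Looks up the handler for either ordering of the pair, if one was registered.
          void handle(Entity x, Entity y) {
              BiConsumer<Entity, Entity> h = handlers.get(key(x.getClass(), y.getClass()));
              if (h != null) { h.accept(x, y); return; }
              h = handlers.get(key(y.getClass(), x.getClass()));
              if (h != null) { h.accept(y, x); }
          }

          private static String key(Class<?> a, Class<?> b) {
              return a.getName() + "|" + b.getName();
          }
      }

      public class CollisionDemo {
          public static void main(String[] args) {
              CollisionRules rules = new CollisionRules();
              rules.on(Robot.class, Dog.class, (robot, dog) -> dog.die());
              rules.on(Robot.class, Missile.class, (robot, m) -> { robot.die(); m.die(); });
              rules.on(Missile.class, Wall.class, (m, wall) -> wall.die());

              rules.handle(new Missile(), new Robot());  // either ordering is handled
          }
      }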

    Read the article

  • Protecting Apache with Fail2Ban

    - by NetStudent
    Having checked my Apache logs for the last two days I have noticed several attempts to access URLs such as /phpmyadmin, /phpldapadmin: 121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /w00tw00t.at.blackhats.romanian.anti-sec:) HTTP/1.1" 404 415 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /phpMyAdmin/scripts/setup.php HTTP/1.1" 404 405 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:35 +0100] "GET /phpmyadmin/scripts/setup.php HTTP/1.1" 404 404 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:36 +0100] "GET /pma/scripts/setup.php HTTP/1.1" 404 399 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:36 +0100] "GET /myadmin/scripts/setup.php HTTP/1.1" 404 403 "-" "ZmEu" 121.14.241.135 - - [09/Jun/2012:04:37:37 +0100] "GET /MyAdmin/scripts/setup.php HTTP/1.1" 404 403 "-" "ZmEu" 66.249.72.235 - - [09/Jun/2012:07:11:06 +0100] "GET /robots.txt HTTP/1.1" 404 430 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 66.249.72.235 - - [09/Jun/2012:07:11:06 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 188.132.178.34 - - [09/Jun/2012:08:39:05 +0100] "HEAD /manager/html HTTP/1.0" 404 166 "-" "-" 95.108.150.235 - - [09/Jun/2012:09:42:09 +0100] "GET /robots.txt HTTP/1.1" 404 432 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:09 +0100] "GET /robots.txt HTTP/1.1" 404 432 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:10 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:10 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:11 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 95.108.150.235 - - [09/Jun/2012:09:42:11 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)" 194.128.132.2 - - [09/Jun/2012:16:04:41 +0100] "HEAD / HTTP/1.0" 200 260 "-" "-" 66.249.68.176 - - [09/Jun/2012:18:08:12 +0100] "GET /robots.txt HTTP/1.1" 404 430 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 66.249.68.176 - - [09/Jun/2012:18:08:13 +0100] "GET / HTTP/1.1" 200 424 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 212.3.106.249 - - [09/Jun/2012:18:12:33 +0100] "GET / HTTP/1.1" 200 388 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:34 +0100] "GET /phpldapadmin/ HTTP/1.1" 404 379 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:34 +0100] "GET /phpldapadmin/htdocs/ HTTP/1.1" 404 386 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:35 +0100] "GET /phpldap/ HTTP/1.1" 404 374 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:36 +0100] "GET /phpldap/htdocs/ HTTP/1.1" 404 381 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:36 +0100] "GET /admin/ HTTP/1.1" 404 372 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/ldap/ HTTP/1.1" 404 377 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/ldap/htdocs/ HTTP/1.1" 404 384 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:38 +0100] "GET /admin/phpldap/ HTTP/1.1" 404 380 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:39 +0100] "GET /admin/phpldap/htdocs/ HTTP/1.1" 404 387 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:39 +0100] "GET /admin/phpldapadmin/htdocs/ HTTP/1.1" 404 392 "-" "-" 212.3.106.249 - - 
[09/Jun/2012:18:12:40 +0100] "GET /admin/phpldapadmin/ HTTP/1.1" 404 385 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:40 +0100] "GET /openldap HTTP/1.1" 404 374 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:41 +0100] "GET /openldap/htdocs HTTP/1.1" 404 381 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:42 +0100] "GET /openldap/htdocs/ HTTP/1.1" 404 382 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:44 +0100] "GET /ldap/ HTTP/1.1" 404 371 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:44 +0100] "GET /ldap/htdocs/ HTTP/1.1" 404 378 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:45 +0100] "GET /ldap/phpldapadmin/ HTTP/1.1" 404 384 "-" "-" 212.3.106.249 - - [09/Jun/2012:18:12:46 +0100] "GET /ldap/phpldapadmin/htdocs/ HTTP/1.1" 404 391 "-" "-" Is there any way I can use Fail2Ban or any other similar software to ban these IPs in situations when my server is being abused this way (by trying several "common" URLs)?
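
    Yes, fail2ban can watch the Apache access log directly; it even ships apache-noscript and apache-badbots filters that cover part of this. A small custom filter aimed at exactly these probe URLs might look like the sketch below. The filter name, path list and thresholds are made up; run fail2ban-regex against your own log to validate the expression before enabling the jail.

      # /etc/fail2ban/filter.d/apache-probes.conf
      [Definition]
      failregex = ^<HOST> .* "(GET|HEAD|POST) /(w00tw00t|phpMyAdmin|phpmyadmin|pma|myadmin|MyAdmin|phpldapadmin|phpldap|manager/html)[^"]*" 40[134]
      ignoreregex =

      # /etc/fail2ban/jail.local
      [apache-probes]
      enabled  = true
      port     = http,https
      filter   = apache-probes
      logpath  = /var/log/apache2/*access.log
      maxretry = 2
      findtime = 600
      bantime  = 86400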

    Read the article

  • Single page not appearing in Google Search

    - by Dan
    Description I have a static franchise website which has various sub pages each dedicated to an individual franchisee. For each franchisee the page, the only thing slightly similar between all of them are the page titles, they follow this structure: <title> Welcome to THE_COMPANY - PRODUCT_DESCRIPTION Services, THE_LOCATION </title> THE_COMPANY and PRODUCT_DESCRIPTION are the same across all franchisees, however THE_LOCATION changes depeding on where they are located in the UK. Each franchisee page has the following <meta /> tags: <meta name="DC.creator" content="user"/> <meta name="DC.format" content="text/html"/> <meta name="DC.language" content="en"/> <meta name="DC.date.modified" content="2014-01-23T11:22:31+00:00"/> <meta name="DC.date.created" content="2014-01-23T11:22:09+00:00"/> <meta name="DC.type" content="Page"/> <meta name="DC.distribution" content="Global"/> <meta name="robots" content="ALL"/> <meta name="distribution" content="Global"/> The main content on each franchisee page is completely different. The Problem There is one particular franchisee page, located in Area A.. Which will not appear in Google Search results at all. However every single other franchisee (if you Google Search for "THE_COMPANY, THE_LOCATION" is number 1). And if I do the same search on Bing, Yahoo or DuckDuckGo, the Area A franchisee is the first result on all of them. Has Google for some reason black listed one page on the site? What I Have Tried Ensuring the page is referenced in my sitemap.xml file 'Fetching as Google Bot' the link www.the_company.co.uk/areaa When that came back as OK I would submit to index Resubmitting the sitemap.xml file in Webmaster Tools Linking to the Area A page from another pages content For this I also waited about 3 weeks before checking again to give Google time to re-index Making a change to the page content and waiting another 2 / 3 weeks Removing the page completely and recreating it with an alternative URL The closest thing I have found to this issue is this StackOverflow question but this particular franchisee has existed for almost a year, it used to appear on Google searches however no longer does. I'm guessing the Panda update wasn't too happy with something on the page, but it hasn't effected anything else on the site and I am at a loss for things to try. I would greatly appreciate any information or thoughts as to what could have caused this Thanks. Update In line with Daniel Fukudas answer below, I have followed some of his steps but everything seems to check out alright: HTTP Headers HTTP/1.1 200 OK => Date => Tue, 25 Feb 2014 16:31:29 GMT Server => Zope/(2.12.16, python 2.6.6, linux2) ZServer/1.1 Content-Length => 40078 Expires => Sat, 01 Jan 2000 00:00:00 GMT Content-Type => text/html;charset=utf-8 Content-Language => en Vary => Accept-Encoding Connection => close Robots <meta /> tag: <meta name="robots" content="ALL"/> I have updated this <meta /> tag to read content="INDEX" instead now. robots.txt: User-agent: * Disallow: User-Agent: Googlebot Disallow: /*sendto_form$ Disallow: /*folder_factories$ Using site:THE_COMPANY.co.uk: Searching for 'AREA A site:THE_COMPANY.co.uk' does not return the page, but regardless of that searching just for site:THE_COMPANY.co.uk will not necessarily return every indexed page, or so I understand... Update It appears Google likes to drop pages every now and then from the index, despite my steps above, I left the site alone and the page appeared back in the SERPs by itself.

    Read the article

  • Help with create action in a different show page

    - by Andrew
    Hi, I'm a Rails newbie and want to do something but keep messing something up. I'd love some help. I have a simple app that has three tables. Users, Robots, and Recipients. Robots belong_to users and Recipients belong_to robots. On the robot show page, I want to be able to create recipients for that robot right within the robot show page. I have the following code in the robot show page which lists current recipients: <table> <% @robot.recipients.each do |recipient| %> <tr> <td><b><%=h recipient.chat_screen_name %></b> via <%=h recipient.protocol_name</td> <td><%= link_to 'Edit', edit_recipient_path(recipient) %>&nbsp;</td> <td><%= link_to 'Delete', recipient, :confirm => 'Are you sure?', :method => :delete %></td> </tr> <% end %> </table> What I'd like to do is have an empty field in which the user can add a new recipient, and have tried the following: I added this to the Robots Show view: <% form_for(@robot.recipient) do |f| %> Enter the screen name<br> <%= f.text_field :chat_screen_name %> <p> <%= f.submit 'Update' %> </p> <% end %> and then this to the Robot controller in the show action: @recipient = Recipient.new @recipients = Recipient.all Alas, I'm still getting a NoMethod error that says: "undefined method `recipient' for #" I'm not sure what I'm missing. Any help would be greatly appreciated. Thank you.
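
    The NoMethodError comes from @robot.recipient: with a has_many association there is no singular recipient method, only the recipients collection. The usual arrangement is a nested resource, so the form builds a new recipient off the collection and posts to a create action in a RecipientsController. A sketch in Rails 2.x style to match the ERB above; the route, controller and redirect names are assumptions:

      # config/routes.rb: nest recipients under robots
      map.resources :robots, :has_many => :recipients

      # app/controllers/robots_controller.rb
      def show
        @robot = Robot.find(params[:id])
        @recipient = @robot.recipients.build   # blank recipient for the form
      end

      # app/views/robots/show.html.erb
      <% form_for [@robot, @recipient] do |f| %>
        Enter the screen name<br>
        <%= f.text_field :chat_screen_name %>
        <p><%= f.submit 'Create' %></p>
      <% end %>

      # app/controllers/recipients_controller.rb
      def create
        @robot = Robot.find(params[:robot_id])
        @recipient = @robot.recipients.build(params[:recipient])
        if @recipient.save
          redirect_to robot_path(@robot)
        else
          render :template => 'robots/show'
        end
      end

    With the nested route the form posts to /robots/:robot_id/recipients, which is why the create action lives in RecipientsController rather than in the robot's own controller.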

    Read the article

< Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >