Search Results

Search found 13262 results on 531 pages for 'tools utilities'.


  • Major increase in Google "not able to follow" errors since introducing a 301 on the site

    - by jakob
    Recently we implemented Varnish in front of our web nodes so that the backend would get some rest from time to time. Since Varnish is case sensitive and our app was not, we added a 301 in Varnish to redirect to the lower-case form of the URL. Example: if you search for "PlumBer StockHOLM" you get a 301 redirect to "plumber stockholm", and then "plumber stockholm" is cached. This worked like a charm, but when checking Google Webmaster Tools we suddenly got a crazy amount of "Status - Not able to follow" errors, as you can see in the image below. This of course stirred up some panic and I started to read up on the documentation once again. If I clicked one of the links I got to the help section, where I found this:

    Well, this is strange, but as the day progressed more and more errors were thrown by Google. We took the decision to make Varnish return 200 instead of the 301. Now when testing the links that appear in the "Not able to follow" section I get a 200 back. I have tested with Chrome, curl and the Lynx reader and everything looks OK, but the number of errors is still increasing. What is a little bit comforting is that the links that appear in the "Not able to follow" section are dated before the 200 change in Varnish. Why do I get these errors and why do they keep increasing? Did Google release something new on October 31? Maybe I do not understand the docs correctly?
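
    For context, the lower-casing 301 described above usually amounts to only a few lines of VCL. The following is a minimal sketch for Varnish 3.x, assuming vmod_std is available; it is not the poster's actual configuration:

        import std;

        sub vcl_recv {
            # Redirect any URL containing upper-case characters to its lower-case form
            if (req.url ~ "[A-Z]") {
                error 750 "Moved Permanently";
            }
        }

        sub vcl_error {
            if (obj.status == 750) {
                set obj.status = 301;
                set obj.http.Location = "http://" + req.http.host + std.tolower(req.url);
                return (deliver);
            }
        }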

    Read the article

  • Database Connectivity Test with UDL File

    - by Ben Griswold
    I bounced around between projects a lot last week. What each project had in common was the need to validate at least one SQL connection. Whether you have SQL tools like SSMS installed or not, this is a very easy task if you are aware of UDL (Universal Data Link) files. Create a new file and name it anything, as long as it has the .udl extension. Open the file and choose a provider, then click Next >> or navigate to the Connection tab to provide connection information. Once you provide the server and login credentials, the database list will populate; at this point you know the connection is valid, but go ahead and click the Test Connection button anyway. On the final tab, you can provide extra connection information like Application Name, which can come in handy. The All tab is helpful if you want to build a valid connection string to include in your own applications. If you save the file and then open it in Notepad, you'll find said connection string:

        Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=(local);Application Name=TestApp

    I hope this tip helps save you some time. How do you test if you don't have SSMS installed?
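
    For reference, the saved .udl file is just a small text file (typically saved as Unicode) holding that connection string under an [oledb] section. A roughly representative example, using the connection string above:

        [oledb]
        ; Everything after this line is an OLE DB initstring
        Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=(local);Application Name=TestApp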

    Read the article

  • GWT reporting crawl errors for non-existent links

    - by pixeline
    Google Webmaster Tools is reporting crawl errors for links that never existed, and if I check the "Linked from" tab for a given error link, it shows another link that never existed. They all mention joomla/, which is not the CMS used on this domain (it's WordPress, FYI). Example: http://example.com/joomla/index.php/component/user/register, linked from http://example.com/joomla/component/user/login?return=L2###### What is going on?

    UPDATE 1: I tried something. I provided one of the faulty URLs to the "Fetch as Google" functionality. Instead of returning a 404, it returns a 301 to another Joomla page:

        HTTP/1.1 301 Moved Permanently
        Server: Apache/2.4.3
        X-Powered-By: PHP/5.4.4-10
        X-Pingback: http://example.com/xmlrpc.php
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Cache-Control: no-cache, must-revalidate, max-age=0
        Pragma: no-cache
        Set-Cookie: PHPSESSID=1fgr5v2oip39miibuptd51s8h0; path=/
        Set-Cookie: woocommerce_items_in_cart=0; expires=Sat, 12-Jan-2013 11:44:01 GMT; path=/
        Location: http://example.com/joomla/component/user/register
        Content-Type: text/html; charset=iso-8859-1
        Content-Length: 387
        Date: Sat, 12 Jan 2013 12:44:01 GMT
        Via: 1.1 varnish
        Connection: keep-alive
        Accept-Ranges: bytes
        Age: 0

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="http://example.com/joomla/component/user/register">here</a>.</p>
        <p>Additionally, a 301 Moved Permanently error was encountered while trying to use an ErrorDocument to handle the request.</p>
        </body></html>

    Read the article

  • Recommended: IntelliCommand for Visual Studio 2010/2012

    - by WeigeltRo
    The Morning Brew has been a great news source for developers for many years now. In its most recent post it mentioned an extension for Visual Studio 2010 and 2012 called IntelliCommand that implements something I had wanted for quite some time: some kind of dynamic help for hotkeys. IntelliCommand shows a popup when you press and hold Ctrl, Shift or Alt (or combinations thereof) for a configurable amount of time, or after you press the first key combination of a chord shortcut (e.g. Ctrl-E) and wait for an (independently configurable) amount of time. In the following screenshot I pressed and released Ctrl-E, and after a short delay the popup appeared. The extension is available in the Visual Studio Gallery, so finding, downloading and installing it via the Extension Manager is extremely simple. The default delays (2000 / 1600 milliseconds) are a bit long for my liking, but this can be changed in Tools – Options. So far things are working great on my machine. Some known issues do seem to exist, though (e.g. that the extension doesn't work on non-EN versions of Visual Studio). See the author's comments in the announcement blog post and in the Visual Studio Gallery for more information.

    Read the article

  • Reset / Remove - Google Keywords

    - by Herr Kaleun
    Summary: my site is ranking for filthy keywords and I would like to remove them from Google's rankings/keywords.

    Background: my server was hacked via the TimThumb exploit/security vulnerability; apparently I was the last person on earth to read the news about the exploit, several months after it appeared. Anyway, the "hacker" was friendly enough to modify the index.php file in such a fashion that it generated random sexually oriented keywords whenever the website was fetched as Googlebot. So if you fetched it as Googlebot (or it got indexed), you would get randomly generated keywords like: sex videos, teenager teen sex, adult sex, preteen, A LINK TO RANDOM CONTENT OF MY WEB PAGE, anime sex videos - a rough list, something similar to that, about 180-200 per page. I discovered it far too late, so Google had me indexed for the word "sex" and certain adult-oriented keywords, roughly 2000 of them. I removed all the content, took the site down, replaced the index.php with a static HTML page and added an "ERROR 410" title to the website, so that the content is no longer here and removed permanently. I also applied for a manual review of my website about 1.5 months ago, but the keywords are still there and, very strangely, some of the keyword rankings actually "improve" over time. Here are some screenshots from Webmaster Tools:

    Question: how can I remove these filthy keywords and get my website re-ranked as a "normal" website as fast as possible? I want to "REMOVE" the keywords if possible. Please help me or point me in a direction. Thank you.
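
    As an aside, an "ERROR 410" in the page title does not by itself change the HTTP status code; the 410 has to be sent by the server. A minimal, purely illustrative .htaccess sketch (hypothetical paths, assuming Apache with mod_rewrite) that returns 410 Gone for removed URLs:

        RewriteEngine On
        # Hypothetical example: answer requests for the removed, spam-injected URLs with 410 Gone
        RewriteRule ^removed-content/ - [G,L]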

    Read the article

  • Duplicate content appearing for multilingual sites

    - by Rocky Singh
    I have a site with a default URL, say http://www.blahblah.com/ (which defaults to the English language). The site supports multiple languages: I have a few links on my home page, say "English", "French", "Spanish", etc., and clicking these links redirects the user to:

        http://www.blahblah.com/en-us/ (English)
        http://www.blahblah.com/fr-ca/ (French)
        http://www.blahblah.com/spanish-culture/ (Spanish)

    Based on the culture in the URL I show end users the content in their desired language. Now, this is how my site is set up. The issue I am having is with SEO. I noticed (via Google Webmaster Tools) that Google considers my site's pages duplicates, for example:

        1. http://www.blahblah.com/documents/ and http://www.blahblah.com/en-us/documents/
        2. http://www.blahblah.com/news/ and http://www.blahblah.com/en-us/news/

    and similarly all the pages are considered duplicate content in Google Webmaster Tools. I am worried about this, since I think my site is being penalized in the rankings because of it. Could you suggest some ideas on how to overcome this situation?
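
    Not something the post itself proposes, but a widely used remedy for exactly this pattern is a rel="canonical" link from each unprefixed page to its language-prefixed equivalent; a purely illustrative sketch using the example URLs above:

        <!-- In the <head> of http://www.blahblah.com/documents/ (hypothetical page from the post) -->
        <link rel="canonical" href="http://www.blahblah.com/en-us/documents/" />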

    Read the article

  • What is a good support knowledge base tool?

    - by Guillaume
    I have been searching for a tool to help my team organize its knowledge for resolving recurring support cases. I know this question will probably be closed, but I'll try my luck anyway because I know I can get some good answers about it.

    Context: our team develops and supports a huge application (lots of different screens and workflow processes). We already have a good tool for managing our documentation, but we are struggling with support cases. Support actions often involve quite a lot of manual steps to fix things, and the knowledge behind those actions is passed on more by "oral transmission" than through modern tools. We need an efficient way to store it in a knowledge base so we can retrieve similar cases based on patterns (a stack trace, an error message, a component name, a workflow step, ...) ranked by similarity. Our wiki search is not very powerful when it comes to this kind of search, and the team members don't want to "waste" time writing a report that will never be found...

    Do you know of an efficient knowledge base tool for this kind of use case?

    Read the article

  • https:// search results appearing on Google for a purely http:// site

    - by hydrurga
    I started weeding through my site's search results from Google today, using a site: search, to determine whether there are any links that cause 404s and thus need redirecting. To my amazement I noticed numerous https:// results relating to various pages. My site doesn't have an SSL certificate, doesn't serve such pages, doesn't internally link to https:// pages, doesn't include any such files in its sitemap.xml and, for all of these, never has. I did a Google search for https://<my site> and found one site that incorrectly refers to the root of my site with an https:// prefix - I will try to contact them to get them to correct this. I'm not sure, however, how Googlebot managed to index the non-root files as https://. I can't find any external links to them, and surely, without certification, Googlebot should have stalled at the first request? I've just added the following lines to the site's .htaccess (although the surfer still has to navigate through the browser's "This site is a security risk. Abandon hope all ye who enter here!" message(s) first to get there):

        RewriteEngine On
        RewriteCond %{HTTPS} on
        RewriteRule ^(.*)$ http://www.<my site>.org/$1 [R=301,L]

    replacing <my site> with my domain name. My big question is this, though: I would like to use the Google Webmaster Tools Remove URLs feature to remove the https:// pages from the index. Can I be guaranteed that this will only remove the https:// versions of each relevant page and not the valid http:// versions? My thanks to anyone who can help me out with this particular question and the issue in general.

    Read the article

  • Sudden drop in Total Indexed pages and increase in 'Not Selected' number

    - by Pravin
    My blog is around 1 year old and has PR2. Average daily pageviews up to last week were 1,800. The total number of posts is 180. Though I have only 180 posts in total, the total number of indexed URLs kept increasing and got as high as 510. But in September 2012 the total number of indexed pages dropped from 510 to 214. The drop was sudden, and the count is now increasing very slowly. The other main concern is the huge increase in the 'Not Selected' number, which is currently 814. I have never published the same post twice and never copied any idea from another blog, but I do use internal linking to some older posts that are related to the new ones. The questions are:

    1. Why was there a sudden drop in the 'Total Indexed' pages?
    2. Why did the total indexed pages rise to 500 even though there are only 180 posts?
    3. The drop in 'Total Indexed' was in September 2012, yet I was getting the same organic traffic, and it was steadily increasing until last week, when there was a drop of 50 in the total pageviews. Why? (Now the traffic is getting back to normal, but there is still a problem.)
    4. Is the increase in the 'Not Selected' number a cause of the drop in 'Total Indexed'?
    5. How can I prevent or reduce the 'Not Selected' number even though I do not have any duplicate posts within the blog?
    6. Is the internal linking to older posts creating the 'Not Selected' problem?
    7. Should I edit my robots.txt to avoid crawling of labels that may be creating duplicate posts or something like that, and if so, what is the correct robots.txt? (An illustrative sketch follows at the end of this question.)

    I have uploaded a screenshot of the graph from Webmaster Tools. Please take a look and give suggestions. Please help. Thank you in advance.
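
    Purely as an illustration of the kind of robots.txt rule question 7 asks about - assuming a Blogger-style blog where label and search archives live under /search (an assumption; the post does not name the platform) - a minimal sketch might be:

        User-agent: *
        # Hypothetical: keep label/search archive pages, which duplicate post content, out of the crawl
        Disallow: /search
        Allow: /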

    Read the article

  • Why does Google return a soft 404 when I redirect to the signup page?

    - by Hettomei
    For a month now, Google Webmaster Tools has been reporting an increasing number of "soft 404" errors, although the pages work well for users. I have made some fixes but can't figure out how to solve it.

    Configuration (maybe useless): the website is built with Rails 3.1, and authentication is handled by the Devise gem.

    Problem: on the page http://en.bemyboat.com/yacht-charter/9965-sailboat-beneteau-oceanis-43, when you click on "Ask a Boat request" (a simple GET form, pointing to http://en.bemyboat.com/boat_requests/new/9965) you are redirected with HTTP status 302 to sign in, and then sent back to the "new" page after successfully signing in. Google tells me that the link on "Ask a Boat request" returns a soft 404. I can't make this form use POST (which would solve the problem) because we need to automatically redirect the user to the right page after sign-in (the Devise gem memorizes the GET link).

    To simplify, the question is: how do you protect a private page with authentication, reached with a simple GET, without being penalized by Google with a "soft 404"? Thank you. PS: this website suffers a lot from poor English translation... please bear with it.

    Read the article

  • How do you communicate improvements in tools and process to the development team?

    - by birryree
    Hi everyone,

    My team does a lot of internal tooling and infrastructure work - you can think of us as a small-scale version of the teams at Facebook, Etsy, Netflix, etc. who build all the infrastructure for scaling their services up to thousands or tens of thousands of servers and supporting millions of users. Lately we've been running full steam ahead improving many of the tools we use internally, like tools for automatically creating new servers, setting up new application instances, etc. An end result of this has been decreased developer frustration, but increased "ignorance" on the part of most of the developer team about how to use our tools correctly and effectively. More often than not, my team will be asked by other teams to help them use the tools. Solutions we've thought up, or things already in place:

    - All our code is relatively simple and self-explanatory, with good comments where necessary, so developers could read the scripts. Counterargument: you can guess this isn't a particularly good idea, having people read our tools' code to figure out how to use them.
    - All our code is committed to Subversion with very detailed commit messages about changes, so developers could read the commit emails. Counterargument: expect the developers to read all our commits? Ludicrous.
    - Wiki: we have an internal company wiki that we try to maintain with up-to-date information, but as we are moving so fast, the wiki has to keep pace as well. Counterargument: as mentioned, we move fast in my team, with more improvements to our tools added daily; this again relies on people reading something that might change constantly.
    - Email the team? We could email the team when we have a glut of improvements to communicate.

    So as you can all see, we are trying to find new ideas and explore options we haven't thought of yet. Has anyone else been in a similar situation and have some guidance?

    Read the article

  • Poor backlink profile - search rankings not updated for 2+ months

    - by fistameeny
    I am carrying out some work on a website that is a PR2 with a few good quality, relevant backlinks (PR4-6). It has a presence on Twitter that is updated regularly, a Google Places listing, and listings in some decent directories (Qype etc.). The site was rebuilt in Drupal 7 two months ago, with all the basics done: URL rewriting, an XML sitemap submitted to Google, and most importantly, good quality, structured content. I've noticed that Google is still showing "old" URLs from the previous version of the site that was ditched 8 weeks ago. I think the site may be penalised under the Penguin update, as a previous SEO company created many low-quality links from link farms/directories. My question is: what is the correct way to deal with this? Bing Webmaster Tools can "disavow" links, and I guess I can attempt to contact the link farms to have the links removed. I've already submitted a request to Google asking for the penalty to be removed, as we're trying to tidy up a bad history. We submit updated sitemaps to Google and Bing daily, and have built some further decent quality, relevant links. Is there anything further I can do?

    Read the article

  • Should I, and how do I, incorporate microdata into my ASP.NET website with 47 pages?

    - by Jason Weber
    I have an ASP.NET (VB) site with 47 pages. The problem is that it's in 10 different languages, although 98% of visitors just use English. I have 5 master pages. I've read the Google Webmaster Tools documentation, but I'm still confounded. I'm reading that microdata is the way to go. Does this mean I should put itemtype and itemprop span and div tags in my master pages, or should I do all of my 47 pages (.resx resource files) separately? The main key phrase I want throughout search results is "machine vision". For instance, the first couple of sentences on my about.aspx page are:

        <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company with headquarters in <span itemprop="locality">Detroit, Michigan, USA</span>. We design, engineer, produce, and integrate special machine vision error-proofing products and <a href="http://www.ussvision.com/services/" target="_self" itemprop="url">services</a> that create lean factories by improving the quality of manufactured products, and by significantly reducing manufacturing costs through advanced automation.

    Am I doing this right, and if not, how would I do it? Should I use itemprop="url" or other rich snippets for every link on my website? I mean, do I need to add an itemprop to just about everything, or can I just alter my master pages? Any guidance in this regard to help improve my SEO and SERPs would be greatly appreciated!
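
    One thing worth noting: itemprop attributes only carry meaning inside an element marked with itemscope and an itemtype, which the quoted snippet does not show. A hedged sketch of how that snippet might be wrapped using schema.org's Organization type (the choice of vocabulary and properties is an assumption, not something the post specifies):

        <!-- Sketch only: give the itemprops a parent item so they attach to something -->
        <div itemscope itemtype="http://schema.org/Organization">
          <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company with headquarters in
          <span itemprop="address">Detroit, Michigan, USA</span>. We design, engineer, produce, and integrate
          special machine vision error-proofing products and
          <a href="http://www.ussvision.com/services/" itemprop="url">services</a>.
        </div>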

    Read the article

  • Project Management Software / 1 maybe 2 developers

    - by Ominus
    I am looking for software that I can use to "manage" multiple projects (5-10). Here are the features I would like, but any recommendation is welcome:

    - Bug/feature tracking on a per-project basis.
    - Some way to keep all documents, diagrams, specs and requirements in one place with the project - better yet, a tool where all these things, or most of them, could be authored.
    - Task management during the development phase, with milestones and estimates/actuals.
    - Git integration.

    I have been doing contract work and I have been doing really well for myself as far as getting projects goes, but it's becoming VERY hard to manage everything efficiently. I am trying to learn about best practices when it comes to software development methodologies, and the more I read the more I realize that I am just managing these projects poorly. I am getting things done, but the more I take on the less "solid" everything is. I am afraid that if I don't get some good solid tools/practices in place I am going to do my customers and myself a disservice. The problem is that there are SO many options that it's hard to weed through them all. I was at a point today where I had decided I would just code my own (there is some irony here)! Obviously everyone has their likes and dislikes, so I would love to hear from some of you lone programmers how you manage everything, since our needs aren't exactly the same as what a large team might need. I also want a solution that can scale to 2, maybe 3 developers if I end up hiring some people to help with my workload. Thanks again for your usual insights!

    Read the article

  • GUI based backup utility [closed]

    - by Chethan S.
    Possible Duplicate: Comparison of backup tools

    I have read favorable reviews of 'Back In Time' for the purpose stated above. Still, I am posting this question because I have some requirements in mind. A few years back I was using ThinkVantage Rescue and Recovery by IBM on my Lenovo PC under Windows. It provided nice features like compressed backups and boot-time options: OS repair, restore the entire OS, restore the entire system to an older date, restore individual files, etc. Of these, the feature I liked the most was compressed backups. Similar features are available in software like Norton Ghost too. In Back In Time I was surprised to see that a snapshot takes up the same amount of space as the original contents - no compression at all. Furthermore, I was not able to find options to change the compression ratio etc. under Settings. To me, compression of backups is a must-have feature. Can anyone therefore suggest another utility which can serve the purpose? I insist on a GUI-based tool since I don't want to mess up my backups!

    Read the article

  • What does SVN do better than git?

    - by doug
    No question that the majority of debates over programmer tools come down to either personal choice (by the user) or design emphasis, i.e., optimizing the design for particular use cases (by the tool builder). Text editors are probably the most prominent example - a coder who works on Windows at work and codes in Haskell on the Mac at home values cross-platform support and compiler integration, and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the existing options. I wonder if this is in fact the case with version-control systems, in particular centralized VCS (CVS, SVN) versus distributed VCS (git, hg)? I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one counter-example - some task, relevant to a programmer's usual workflow, that Subversion does better than git. The only conclusion I have drawn from this is that I don't have any data - not that git is better, etc. My guess is that such counter-examples exist, hence this question.

    Read the article

  • Request Removal of naked domain from Google Index

    - by Pedr
    I have a site which was temporarily available at both example.com and www.example.com. All traffic to example.com is now redirected to www.example.com; however, during the brief period that the site was available at the naked domain, Google indexed it. So Google now has two versions of every page indexed:

        www.example.com
        www.example.com/about_us
        www.example.com/products/something
        ...

    and

        example.com
        example.com/about_us
        example.com/products/something
        ...

    For obvious reasons, this is a bad situation, so how can I best resolve it? Should I request removal of these pages from the index? There is still content at these URLs, but they now redirect to the www subdomain equivalent. The site has many hundreds of pages, but the only way I can see to request removal is via the "Remove outdated content" screen in Webmaster Tools, one URL at a time. How can I request removal of an entire domain (i.e. the naked domain) without it affecting the true site located at the www subdomain? Is this the correct strategy, given that all the naked-domain URLs now redirect to their www equivalents?
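
    For context, the naked-to-www redirect the post describes is typically a single pair of rewrite rules; a minimal sketch assuming Apache with mod_rewrite (not necessarily the site's actual setup):

        RewriteEngine On
        # Permanently redirect the naked domain to the www host, preserving the path
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]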

    Read the article

  • Is it worth replacing the mouse with a standalone trackpad for heavy code editing? [on hold]

    - by heltonbiker
    I recently got more interested in improving my tools, workspace and workflow. The first sting came with a sore finger due to a crappy keyboard; after some research I fell in love with the "a mechanical keyboard is what you need" doctrine, bought one (Cherry MX Brown, if you're curious), and am very happy with the results. Currently I am replacing my previous text editor (Geany) with Sublime Text 3, and am also very happy, feeling much more powerful and professional :)

    Well, while re-reading all the ancient debates about VIM vs whatever-else, the following excerpt from a blog post got me thinking again about mouse vs keyboard, and about "moving around from the very home row" (in VIM) versus gesturing away with the tiny and unstable mouse cursor:

        Reaching for a mouse may indeed slow you down, but developers are commonly on machines where the trackpad is a micro-hand movement away. Most novice programmers can click on a character on screen faster than an expert Vimmer can type 20jFp; or LkEEE or /word or any other nasty way Vimmers have to use. The point of a mouse is to make arbitrary on screen jumps efficient, and it's very good at doing that. Don't you ever think you can beat a mouse.

    Well, although there is some bitterness in this statement, it makes a lot of sense, and EVEN MORE if you consider your direct input to be a TRACKPAD conveniently placed in front of your spacebar (which, oddly, is where I like to put my mouse, rotated 90° ccw, due to a serious tendonitis in my right shoulder - already healed, but you know...).

    So, the question is: has anyone replaced the mouse with a standalone trackpad, for code editing on a desktop machine (that is, with a standalone keyboard)? Was it worth the change?

    Read the article

  • 301 redirect from HTTP to HTTPS - how to be sure Google is fetching the correct information?

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from HTTP to HTTPS. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our Webmaster Tools account I notice that we are not being provided with any webmaster information (e.g., search queries, backlinks, etc.) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried by the fact that the Google fetch is not getting the correct title tags and meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and able to flow from the HTTPS to the HTTP without issues. Does anybody have any advice on how we can correctly set this up or be sure that Google is fetching the correct information?
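
    For reference, the homepage-only HTTP-to-HTTPS 301 described here is often implemented with a rule along these lines; a minimal sketch assuming Apache with mod_rewrite, not the site's actual configuration:

        RewriteEngine On
        # Send plain-HTTP requests for the homepage to the HTTPS version with a permanent redirect
        RewriteCond %{HTTPS} off
        RewriteRule ^$ https://mysite.com/ [R=301,L]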

    Read the article

  • Massive 404 attack with non-existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster Tools, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal and casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think it is, i.e. some sort of attack? Is there a way to fix it, or better, prevent this useless waste of resources?

    EDIT: I've never used the question itself to thank people for the answers, and I hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following:

    - a honeypot
    - a script that listens for suspect URLs in the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header
    - a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking on one of those URLs.

    In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
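
    As an aside, the "standard referrer blocker" mentioned above is usually just a short mod_rewrite rule; a purely illustrative .htaccess sketch (the referrer pattern is an example, assuming Apache):

        RewriteEngine On
        # Illustrative only: refuse requests whose Referer header matches a known-bad source
        RewriteCond %{HTTP_REFERER} totally\.me [NC]
        RewriteRule .* - [F,L]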

    Read the article

  • Somehow Google considers a properly 301'd URL as 200 and is still indexing the new content under the old page?

    - by user2178914
    We redirected all the old URLs to the new ones properly using .htaccess. The problem is that Google somehow is still finding content at the old page (which it shouldn't) and stores it in the cache under the old URL rather than the new one. For example:

        Old page: http://www.natures-energies.com/iching.htm
        New page: http://www.natures-energies.com/index.php?option=com_content&view=article&id=760

    Observations:

    - If you type the old URL into the browser, it redirects.
    - If you fetch the old URL as Googlebot in Webmaster Tools, the header says 301/permanently redirected.
    - If I try to crawl as any other bot, it still says 301 redirected.
    - Even if you click the old link in Google, it redirects to the new URL.

    Only in its cache does Google show the old URL, and moreover it shows the new content in it! I am stumped as to how Google manages to grab the new content and put it under the old URL instead of the new one! One more interesting thing: if I request a cache of the new page, it shows the cache of the new content with the old URL! Any help would be appreciated; I am at my wits' end. I think I have tried almost everything. Is there anything that I'm missing? You can use this search to find the old URLs; maybe you'll spot some patterns that I missed:

        site:www.natures-energies.com inurl:htm -inurl:https|index
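
    For context, the kind of per-URL 301 described in the first sentence is commonly a one-line mod_alias rule; a minimal sketch using the example URLs from the post (assuming Apache, not necessarily the site's actual .htaccess):

        # Permanently redirect the old static page to the new Joomla article URL
        Redirect 301 /iching.htm http://www.natures-energies.com/index.php?option=com_content&view=article&id=760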

    Read the article

  • What is the most effective approach to learn an unfamiliar complex program? [closed]

    - by bdroc
    Possible Duplicate: How do you dive into large code bases?

    I have quite a bit of experience with different programming languages and with writing small, functional programs for a variety of purposes. My coding skills aren't what I have a problem with - in fact, I've written a decent web application from scratch for my startup. However, I have trouble jumping into unfamiliar applications. What's the most effective way to approach learning a new program's structure and/or architecture so that I can start attacking the code effectively? Are there useful tools for the respective languages (Python and Java are my two primary languages)? Should I start by just looking at function names, or at documentation? How do you veterans approach this problem? I find this has to be done with minimal help from coworkers or contributors who are already familiar with the application and have better things to do than help me. I'd love to practice this skill on an open source project, so any suggestions for starting points (maybe mildly complex) would be great too!

    Read the article

  • Daemon Tools Lite error: Unable to add adapter

    - by Lirik
    I just got a ThinkPad T510 with Windows 7 Professional and I installed Daemon Tools Lite, but I keep getting an error when I run it: Unable to add adapter. I tried starting it as an administrator, but I got the same problem. Does anybody know how to fix this?

    Read the article

  • Reverse-engineer SharePoint fields, content types and list instance—Part2

    - by ybbest
    Reverse-engineer SharePoint fields, content types and list instance—Part1
    Reverse-engineer SharePoint fields, content types and list instance—Part2

    In part 1 of this series, I demonstrated how to use VS2010 to reverse-engineer SharePoint fields, content types and list instances. In part 2, I will demonstrate how to do the same using CKS:Dev. CKS:Dev extends the Visual Studio 2010 SharePoint project system with advanced templates and tools. Using these extensions you will be able to find relevant information from your SharePoint environments without leaving Visual Studio, you will be more productive while developing SharePoint components, and you will have greater deployment capabilities on your local SharePoint installation. You can download the complete solution here.

    1. First, download and install the appropriate CKS:Dev release from CodePlex: the SharePoint Foundation 2010 version if you are using SharePoint Foundation 2010, or the SharePoint Server 2010 version if you are using SharePoint Server 2010.
    2. After installation, restart Visual Studio and create an empty SharePoint project.
    3. Go to View → Server Explorer.
    4. Add a SharePoint web application connection to the Server Explorer.
    5. After adding the connection, you can browse the contents of the web application.
    6. Go to Site Columns → YBBEST (a custom group of your own choice), right-click the YBBEST folder and click Import Site Columns.
    7. Go to Content Types → YBBEST (a custom group of your own choice), right-click the YBBEST folder and click Import Content Types.
    8. After the import completes, you can find the fields and content types in the SharePoint project below. Of course you need to make some modifications to your current project to make it work.
    9. Next, create list instances using the list instance item template in Visual Studio.
    10. Finally, create the lookup columns using feature receivers, and the final project will look like this.

    You can download the complete solution here.

    Read the article

  • iPhone 3GS lens data for Hugin / Panorama Tools

    - by david-ocallaghan
    I'm using the Hugin frontend to Panorama Tools, and it requires camera and lens data for the images it handles (details here: http://wiki.panotools.org/Hugin_Camera_and_Lens_tab). Can anyone tell me the appropriate values for the iPhone 3GS camera?

        Lens: rectilinear
        Horizontal field of view: ?
        Focal length: 3.85mm
        Crop factor / focal length multiplier: ?
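
    For reference, both unknowns follow from the sensor geometry; the sensor dimensions themselves are not given in the post and would have to be looked up:

        crop factor          = 43.27 mm / sensor diagonal     (43.27 mm is the diagonal of a 35 mm full frame)
        horizontal FoV       = 2 * arctan( sensor width / (2 * focal length) )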

    Read the article
