Search Results

Search found 15972 results on 639 pages for 'debugging tools'.

  • Facebook Like JavaScript related to 'Time spent downloading a page' increase in GWT?

    - by donaldthe
    I installed the Facebook Like button (the JavaScript/XFBML version) on my website on December 15th. Take a look at this report from Google Webmaster Central: crawl stats showing Googlebot activity over the last 90 days. The crawl stats come from Googlebot, which as far as I know doesn't execute JavaScript. Could the Facebook Like JavaScript code, "the XFBML version", be related to the large spike in "Time spent downloading a page"? (By the way, the huge spike in November was caused by a mistake where every image request was getting a 301.) I'm not sure what caused the spike to drop by half somewhere in December; it may have been related to a faulty setting in web.config. I'm at a loss as to what I can do about this, or even how to tell whether this is my problem or Googlebot's crawl problem. Here is the Facebook code I am using to create the Like button. It is right after the opening body tag:

        <div id="fb-root"></div>
        <script>
          window.fbAsyncInit = function() {
            FB.init({appId: 'xxxxx', status: true, cookie: true, xfbml: true});
          };
          (function() {
            var e = document.createElement('script');
            e.async = true;
            e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
            document.getElementById('fb-root').appendChild(e);
          }());
        </script>

    and this creates the Like button:

        <fb:like show_faces="false"></fb:like>

    If the JavaScript can't be the problem, any ideas on where to start looking would be appreciated.

    Read the article

  • My Xmap generated sitemap is not being submitted

    - by user2014989
    I'm using the Joomla Xmap component to create my sitemap. Here is the URL of my Xmap-generated sitemap: http://www.acethehimalaya.com/index.php?option=com_xmap&sitemap=1&view=xml. I tried to submit the sitemap to Google, but it doesn't get accepted; Google reports that the sitemap is empty. Can Xmap-generated sitemaps not be submitted, or am I doing something wrong?
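
    For comparison, a minimal sitemap file that Google will accept as non-empty looks roughly like this (only the home page URL is shown here; the real Xmap output should list every page it knows about):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.acethehimalaya.com/</loc>
          </url>
        </urlset>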

    Read the article

  • Should SQL Server tools target wide screen formats instead of portrait formats?

    - by Greg Low
    There was a short discussion on the SQL Down Under mailing list this morning about screen resolutions for working with the SQL Server tools. In particular, the issue was about how unusable the tools are on the 1366x768 resolution notebooks that now seem to be the most common. While finding a notebook with an appropriate resolution is obviously the answer at this time, I started thinking that the product itself needs to address this. SQL Server tools currently target a portrait 4:3 shape for minimum...(read more)

    Read the article

  • Google is not treating two Australian schools as separate sites when both are subdomains of qld.edu.au

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College", but in two totally different locations and entirely unrelated to each other (except by name, and thus domain). All schools in the state are issued a <school-name>.qld.edu.au subdomain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. What's interesting is that these domains are crossing each other in sitelinks for searches such as "calvary christian college townsville". The green data here is for one school (the Townsville school, as per the search term), and the red data is for the other school. I put a sitelink demotion in for this six months ago (we control calvary.qld.edu.au), but we're seeing no change on the results page. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime in the next few days. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We can possibly open channels of communication with the administrators of qld.edu.au, but we would need to tell them what to change, and at this point I'm out of ideas.

    Read the article

  • Feature Usage Reporting in Early Access Programs

    After doing Web development, you can get very used to the luxury of having basic information about your users' machines and browsers. With their permission, you can also get the same information from an application, and can even get more targeted anonymous information that will tell you how the features are used. Kevin explains how this can be used with early access builds to improve the reliability and usability of applications.

    Read the article

  • How can I determine the trending pages on my site?

    - by Dogweather
    I'm looking to find out what the "hot" pages are on one of my sites. I want to see, for various timeframes, what the top 50 pages are. I'm going to create a data feed with this info which will be input to another app. I have Apache logs and complete control of the machine, so I can install what I want. I'm mostly wondering if there's something out there already that I can use, or, if I have to implement it myself, what good algorithms or strategies might be. Thanks.
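
    Nothing below comes from the post; it is only a rough sketch of the roll-your-own route, counting GET hits per path in the Apache access log over a sliding window. The log path and the common log format layout are assumptions:

        #!/usr/bin/env python
        # Rough sketch: count hits per path in an Apache access log over a recent
        # time window and print the top 50. Log path and format are assumptions.
        import re
        from collections import Counter
        from datetime import datetime, timedelta

        LOG_FILE = "/var/log/apache2/access.log"   # hypothetical path
        WINDOW = timedelta(hours=24)                # "trending" timeframe
        TOP_N = 50

        # Common Log Format: host ident user [time] "METHOD path HTTP/x.x" status size
        LINE_RE = re.compile(r'\[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+)')

        def top_pages(log_file=LOG_FILE, window=WINDOW, top_n=TOP_N):
            cutoff = datetime.now() - window
            counts = Counter()
            with open(log_file) as f:
                for line in f:
                    m = LINE_RE.search(line)
                    if not m:
                        continue
                    # e.g. 10/Oct/2012:13:55:36 -0700 -- ignore the timezone offset
                    ts = datetime.strptime(m.group("time").split()[0],
                                           "%d/%b/%Y:%H:%M:%S")
                    if ts >= cutoff and m.group("method") == "GET":
                        counts[m.group("path")] += 1
            return counts.most_common(top_n)

        if __name__ == "__main__":
            for path, hits in top_pages():
                print("%6d  %s" % (hits, path))

    Run for several window sizes (hour, day, week) and the output becomes the top-50 feed the post describes.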

    Read the article

  • Two "subdomains" crossing search results

    - by LuckySpoon
    My question relates to two websites, each of which is a "Calvary Christian College", but in two totally different locations and entirely unrelated to each other (except by name, and domain). All schools in the state are issued a .qld.edu.au subdomain, in this case calvary.qld.edu.au and calvarycc.qld.edu.au. What's interesting is that these domains are crossing each other in sitelinks for searches such as "calvary christian college townsville" (if you check the sitelinks, 2 of the 6 point to a different domain). I put a demotion in for this ages ago (we control calvary.qld.edu.au), but we're seeing no change on the results page. I have been able to get the owners of calvarycc.qld.edu.au to submit demotions for our domain, which should go in sometime this week. What can we do to tell Google that these websites are not interchangeable, despite both appearing as "subdomains" of qld.edu.au? We can possibly open channels of communication with the administrators of qld.edu.au, but we would need to tell them what to change, and at this point I'm out of ideas.

    Read the article

  • Automatically keep your local git repos clean

    - by kerry
    Most developers using git are probably aware of a command, 'git gc', that has to be run from time to time when you notice your git commands are running a little slow. This command cleans up your git repo and makes sure everything is nice and tidy. If you have not run it lately, you will notice a huge performance increase in your git commands afterwards. It's a bit annoying to have to run this command only once you notice that your git performance is suffering, and it also takes a while if you have not run it recently. With this in mind, I decided to create a way to run it automatically from time to time, so I decided to overload cd, similar to how rvm does. All you have to do is paste the method into your .profile file and it will run the command every time you enter a directory containing a git repo. You'll notice a little pause when entering the directory; it's not insufferable, but if you prefer you can add an & to the end of the command to have it run in the background. I chose the pause over the pid output of the background command. Here it is in all its glory. View the code on Gist.

    Read the article

  • Backup and the evil RETAINDAYS option

    - by TiborKaraszi
    "So what bad has this option done?", you probably as yourself. Well, not much, but I find it evil because it confuses people, especially those new to SQL Server. I have many times seen people specifying something like 3, and expect SQL Server to keep the three most recent backups in the backup file and overwrite everything which is older than that. Well, that is not what the option does. But before we go into details, let's look at an example backup command which is using this option: BACKUP DATABASE...(read more)

    Read the article

  • I've changed my URL schema. How do I tell Google to index the new schema and forget the old one?

    - by growse
    I had a site where the URLs were constructed like this:

        /index.php/Topic
        /index.php/AnotherTopic

    These were indexed in Google, and search results were returned that pointed to them. However, I've recently replatformed the site and reconfigured it so the above URLs became:

        /index.php?title=Topic
        /index.php?title=AnotherTopic

    The original URLs now return 404s. The site links to the correct URL schema internally, but Google is retaining the original schema in its search results. I've updated and resubmitted the sitemap, which only contains the new schema. Also, Google Webmaster Tools is going slightly bananas at the fact that there's now a spike in 404 errors in its crawl results. What would be the best approach to get Google to 'forget' about the old schema and instead index the new one? Should I try blocking /index.php/ in robots.txt? Should I be returning 301 codes instead of 404 for the original URLs?
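
    The post itself doesn't include a fix, but the usual approach is to 301 the old form to the new one rather than serve 404s. A minimal mod_rewrite sketch of that mapping, assuming Apache with mod_rewrite enabled and that the old URLs reach index.php as path info (untested against this exact setup):

        RewriteEngine On
        # Permanently redirect /index.php/Topic to /index.php?title=Topic
        RewriteRule ^index\.php/(.+)$ /index.php?title=$1 [R=301,L]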

    Read the article

  • How can I track hits to areas of my web application?

    - by Tyson
    We have a growing web application, and we currently use Google Analytics and Chartbeat to track usage and engagement (although we're open to alternatives). Unfortunately, both are geared towards content-based sites where everything is about the URL. Our URLs contain object IDs, making them less useful independently, and causing us to grow beyond Google Analytics' 50,000 unique URLs per day. How can we track hits to areas of our web application, essentially ignoring parts of the URLs?
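
    Not part of the question, but one common workaround is to normalise paths before reporting them, so that object IDs collapse into a placeholder and /orders/829/edit and /orders/311/edit count as the same page. A small sketch of such a normaliser (the example paths are invented):

        import re

        # Collapse purely numeric path segments into a placeholder so that
        # /orders/829/edit and /orders/311/edit report as the same "area".
        ID_SEGMENT = re.compile(r"/\d+(?=/|$)")

        def normalize_path(path):
            """Return a tracking-friendly version of a URL path."""
            return ID_SEGMENT.sub("/:id", path)

        # e.g. report normalize_path(request.path) to the analytics tool
        # instead of the raw URL.
        assert normalize_path("/orders/829/edit") == "/orders/:id/edit"
        assert normalize_path("/users/17") == "/users/:id"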

    Read the article

  • SEO - 2 websites in the same domain

    - by user6448
    I have my domain (http://www.foobar.com, for example) and my website talks about technology. I want to have another website (with other content, not about technology) inside it (http://www.foobar.com/loremipsum). I can find http://www.foobar.com in Google search, but not http://www.foobar.com/loremipsum. What should I do to get this website indexed? Thank you.

    Update: my main site (http://www.foobar.com/, a WordPress installation) has an .htaccess:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

    Does it affect the indexing of http://www.foobar.com/loremipsum? Thank you.

    Read the article

  • Creating an online community - use templates or self-develop?

    - by ican ican
    PHPMotion, Joomla or develop my own? I'm thinking of developing a common-interest online community. It will have UGC, stats and similar functionality, and perhaps an online store. Though cost is an issue at this time, I want to be professional and effective. Should I use an existing free platform such as PHPMotion or Joomla, or should I develop my own? What are the advantages and disadvantages of either option? As a rough estimate, how much will it cost me to develop and manage my own, and how long will it take? In general, what should I be careful about on this journey?

    Read the article

  • Redirect subdomain (weblog) to new domain without access to .htaccess

    - by fafa
    I have a problem that I can't find the solution for on the web. I have a blog with PR 1 at the subdomain "aaaa.domain.com", where "domain.com" is a blog hosting service. Now I want to buy a domain, "newdomain.com", and I want to tell Google Webmaster Tools to redirect the old subdomain to this new domain and send its traffic there. I can't access .htaccess to set up a 301 redirect; the only thing I can do is put HTML code in the pages. How can I do this? When I use "Change of Address" in Google Webmaster Tools it says: "Restricted to root level domains only".
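
    Not from the original question, but if HTML in the pages is the only lever available, a rough sketch of what each old page on the subdomain could carry in its head section (newdomain.com is just the placeholder from the question; point each page at its own equivalent on the new domain). This hints at the move but is weaker than a real 301:

        <!-- In the <head> of each old page on aaaa.domain.com -->
        <link rel="canonical" href="http://newdomain.com/" />
        <meta http-equiv="refresh" content="0; url=http://newdomain.com/" />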

    Read the article

  • Microsoft Small Basic for .NET

    Microsoft Small Basic is intended to be fun to use. It is that, and more besides. It has a great potential as a way of flinging together quick and cheerful applications, just like those happy days of childhood. Tetris anyone?

    Read the article

  • Should I submit my RSS feed as Google Sitemap?

    - by Svish
    I currently have no sitemap for a website I'm creating. I do have an RSS feed which includes the N latest updated posts on the site. It doesn't include everything on the site though, just blog posts. Creating a full sitemap would be a bit of a hassle I think. Should I submit the feed instead? Is there a difference between using a regular sitemap and a feed? Is it important to have a sitemap? What happens when you do/don't?

    Read the article

  • Moving from root to www

    - by chris
    I've read tons of questions but I can't find the answer to this seemingly obvious issue. I'm moving my WordPress site from domain.com to www.domain.com so that I can use CloudFlare. I made the change in WP and everything works fine, with domain.com redirecting correctly. Do I need to add the www site in GWT (and remove domain.com), or will it keep tracking the website correctly thanks to the redirects?

    Read the article

  • June 2013 release of SSDT contains a minor bug that you should be aware of

    - by jamiet
    I have discovered what seems, to me, like a bug in the June 2013 release of SSDT, and given the problems that it created yesterday on my current gig I thought it prudent to write this blog post to inform people of it. I've built a very simple SSDT project to reproduce the problem; it has just two tables, [Table1] and [Table2], and also a procedure [Procedure1]. The two tables have exactly the same definition; both have a single column called [Id] of type integer:

        CREATE TABLE [dbo].[Table1]
        (
            [Id] INT NOT NULL PRIMARY KEY
        )

    My stored procedure simply joins the two together, orders them by the column used in the join predicate, and returns the results:

        CREATE PROCEDURE [dbo].[Procedure1]
        AS
            SELECT t1.*
            FROM   Table1 t1
            INNER JOIN Table2 t2
                ON t1.Id = t2.Id
            ORDER BY Id

    Now if I create those three objects manually and then execute the stored procedure, it works fine, so we know that the code works. Unfortunately, SSDT thinks that there is an error here. The text of that error is:

        Procedure: [dbo].[Procedure1] contains an unresolved reference to an object. Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [dbo].[Table1].[Id] or [dbo].[Table2].[Id].

    It's complaining that the [Id] field in the ORDER BY clause is ambiguous. Now you may well be thinking at this point, "OK, just stick a table alias into the ORDER BY predicate and everything will be fine!" That's true, but there's a bigger problem here. One of the developers at my current client installed this drop of SSDT and all of a sudden all the builds started failing on his machine; he had errors left, right and centre because, as it transpires, we have a fair bit of code that exhibits this scenario. Worse, previous installations of SSDT do not flag this code as erroneous, and therein lies the rub. We immediately had a mass panic where we had to run around the department to our developers (of which there are many), making sure that none of them upgraded their SSDT installation if they wanted to carry on being productive for the rest of the day. Also bear in mind that as soon as a new drop of SSDT comes out, the previous version is instantly unavailable, so rolling back is going to be impossible unless you have created an administrative install of SSDT for that previous version. Just thought you should know! In the grand schema of things this isn't a big deal, as the bug can be worked around with a simple code modification, but forewarned is forearmed, so they say! Last thing to say: if you want to know which version of SSDT you are running, check my blog post Which version of SSDT Database Projects do I have installed? @Jamiet

    Read the article

  • NetBeans IDE 7.2 Release Candidate Available

    - by TinuA
    The first release candidate build of NetBeans IDE 7.2 is available for download. Download the release candidate build, try out the new features and give your feedback in the NetBeans 7.2 Community Acceptance Survey. Let the NetBeans team know if 7.2 is ready for full release! You can give additional feedback on the NetBeans mailing lists and forums, file reports, and contact the NetBeans team via Twitter. The final release of NetBeans IDE 7.2 is planned for July.

    Read the article

  • Re-indexing website with clean URLs

    - by artsi
    So I have a website with URLs like this: http://www.domain.com/profile.php?id=151. I've now cleaned them up with mod_rewrite into this: http://www.domain.com/profile/firstname-lastname/151. I've fetched and re-indexed my website after the change. What is the best way to make the old, dirty URLs disappear from search results and keep the clean ones? Is blocking profile.php with robots.txt enough?
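
    For reference, the robots.txt rule the question describes would be just the following; whether it is "enough" is exactly what is being asked, and a 301 from each old URL to its clean equivalent is the more usual route, which this does not show:

        User-agent: *
        Disallow: /profile.php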

    Read the article

  • How can I ease the work of getting pixel coordinates from a spritesheet?

    - by ThePlan
    Sprite sheets are usually easy to work with and very memory-efficient, but the problem I always have is getting the actual position of a sprite within the sheet. Usually I have to throw in some approximate values and modify them several times until I get it right. My question: is there a tool which can basically show you the coordinates of the mouse relative to the image you have opened? Or is there a simpler method of getting the exact rectangle that a sprite is contained in?
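
    Not from the question, but in the absence of a ready-made tool a throwaway script gets close to what is described: open the sheet, click a sprite's corners, and read the coordinates off stdout. A rough sketch using Tkinter and Pillow, both assumed to be available:

        import sys
        import tkinter as tk
        from PIL import Image, ImageTk   # Pillow

        # Usage: python pick_coords.py spritesheet.png
        # Click anywhere on the image; the pixel coordinates are printed to stdout.
        def main(path):
            root = tk.Tk()
            root.title(path)
            img = Image.open(path)
            photo = ImageTk.PhotoImage(img)
            canvas = tk.Canvas(root, width=img.width, height=img.height)
            canvas.pack()
            canvas.create_image(0, 0, image=photo, anchor="nw")
            canvas.bind("<Button-1>", lambda e: print(e.x, e.y))
            root.mainloop()

        if __name__ == "__main__":
            main(sys.argv[1])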

    Read the article

  • Dependency Management tool for REST endpoints

    - by ShaggyInjun
    I work in a REST-oriented environment. The number of endpoints is quite large and spans multiple applications. The dependencies between the endpoints are large in number as well and not very well planned; applications have cyclic dependencies amongst each other. Unfortunately, there is no central location where all the endpoints are documented and declare their dependencies (the endpoints that they in turn call). Is there a tool that will help with such dependency management? I tried searching for a tool online, but not knowing what such a thing would be called, I am unable to find anything. P.S. Google only helps those who know what they need help with. :(
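
    This is not from the post, but even without a dedicated tool, a hand-maintained registry plus a few lines of code will at least surface the cycles. A rough sketch; the endpoint names and call graph are invented:

        # Rough sketch: declare which endpoints each endpoint calls, then detect
        # cyclic dependencies with a depth-first search. Endpoint names are invented.
        CALLS = {
            "/orders":    ["/customers", "/inventory"],
            "/customers": ["/billing"],
            "/billing":   ["/orders"],          # <- closes a cycle
            "/inventory": [],
        }

        def find_cycles(graph):
            cycles, path, visited = [], [], set()

            def visit(node):
                if node in path:                       # back-edge: record the cycle
                    cycles.append(path[path.index(node):] + [node])
                    return
                if node in visited:
                    return
                visited.add(node)
                path.append(node)
                for dep in graph.get(node, []):
                    visit(dep)
                path.pop()

            for node in graph:
                visit(node)
            return cycles

        for cycle in find_cycles(CALLS):
            print(" -> ".join(cycle))
        # prints: /orders -> /customers -> /billing -> /orders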

    Read the article

  • How to delete all your old website data from the internet?

    - by Akky Awesøme
    I had my website on rohbits.com, but for various reasons I had to delete it and recreate it at this URL: www.rohbits.com/blog. My problem is that the old links are still visible in Google search, and when people click on those links they land on a 404 error page of the hosting company. I want to either remove all the previous data from the search engines or have a 404 error page of my own, so that I can tell my visitors where the actual website is. I have already redirected all the traffic that comes to rohbits.com to www.rohbits.com/blog, but when visitors click on the expired links they still get this error page. One sample expired link is this one: http://rohbits.com/wordpress-tricks.
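
    Not from the question, but for the "404 error page of my own" option, and assuming the host honours .htaccess, a single directive is enough (the page path here is hypothetical):

        ErrorDocument 404 /blog/404.html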

    Read the article

  • How to get rid of crawling errors due to the URL Encoded Slashes (%2F) problem in Apache

    - by user14198
    The Google web crawler has indexed a whole set of URLs with encoded slashes (%2F) for our site. I assume it has picked up the pages from our XML sitemap file. The problem is that the live pages actually fail because of the URL-encoded-slashes problem in Apache. Some solutions are mentioned here. We are implementing a 301 redirect scheme for all the error pages, which should make Googlebot drop the pages from the crawl errors (no more crashing pages). Does implementing the 301s require the pages to be "live"? In that case we may be forced to implement solution 1 from the article. The problem is that solution 1 would pose a security vulnerability.

    Read the article

  • Frequency to submit sitemap to search engines

    - by user577691
    I have gone live with my site, and being new to search engines and SEO, I'm not sure of the best way to handle sitemap.xml. I have created sitemap.xml and submitted it to Google (using Webmaster Tools), Yahoo/Bing (using Bing Webmaster Tools) and Ask.com. Now, since the site will be updated two to three times per week, I am not sure of the best approach. Do I need to submit sitemap.xml again? If I need to submit it again and again, how often should I do so? Please suggest the best approach.
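
    Not from the question, but rather than re-submitting by hand after every update, the sitemap can be re-pinged automatically whenever it changes. A rough sketch using only the Python standard library; the ping endpoints are the ones Google and Bing have documented for sitemap notifications, but treat both the URLs and the approach as assumptions:

        import urllib.parse
        import urllib.request

        SITEMAP = "http://www.example.com/sitemap.xml"   # hypothetical sitemap URL

        # Ping endpoints documented by Google and Bing for sitemap notifications.
        PING_URLS = [
            "http://www.google.com/ping?sitemap=",
            "http://www.bing.com/ping?sitemap=",
        ]

        def ping_sitemap(sitemap_url=SITEMAP):
            for base in PING_URLS:
                url = base + urllib.parse.quote_plus(sitemap_url)
                with urllib.request.urlopen(url) as resp:
                    print(base, resp.status)

        if __name__ == "__main__":
            ping_sitemap()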

    Read the article
