Search Results

Search found 30345 results on 1214 pages for 'website analytics tools'.


  • Rack layout tools

    - by Luke
    I'm wondering if there are any tools (preferably offline) that would allow me to lay out all of the new equipment that will be going into several standard racks. Currently I'm using Excel to map out all of the slots and columns for the data, but I suspect there is a better method of doing this. Suggestions? Edit: Dell has an online tool, but it doesn't seem very good at actually saving the data you're working on (and it's obviously geared towards Dell hardware).

    Read the article

  • How to Grab More Visitors to Your Website

    Attracting more visitors is not an achievement that comes overnight; it is one that must be cultivated for quite some time before the benefits begin to surface. The most important step is increasing web visibility: install a free version of Web SEO, use it to find the keywords that can optimize your web pages, edit the pages around those keywords, and then submit the site to the search engines for long-term listings.

    Read the article

  • Updating Pages after migration of website

    - by DLackey
    My web site was coded in ColdFusion and over the years has earned a good ranking. I recently migrated the front end to a WordPress site and want to know the ideal way of notifying Google and the various search engines of the change. For example, the home page of index.cfm is no longer valid, since it is now index.php. I have submitted an updated sitemap.xml file to Google. I'm sure my site will slip some while the search engines re-index it, but I'd like to minimize this as much as possible with the holidays coming up (my site is a service-oriented site that caters to people who travel during the holidays). Right now the old .cfm pages are still online but are rerouted to the appropriate WordPress page (for example, about.cfm is now routed to /about/ using a cflocation tag). I'm not sure whether I should pull the .cfm pages down altogether or leave them in place until the new pages are picked up by the search engines. Any advice would be helpful.
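
    One detail worth checking in a setup like this: cflocation sends a temporary (302) redirect by default, while search engines generally transfer ranking signals only on a permanent (301) redirect (newer ColdFusion releases can send one via cflocation's statuscode attribute). Below is a minimal Python sketch that verifies each legacy .cfm URL answers with a 301 pointing at its new WordPress path; the URL pairs are hypothetical stand-ins for the real site's mapping.

      # Minimal sketch: confirm each old .cfm URL issues a permanent (301)
      # redirect to its new WordPress path. URL pairs are placeholders.
      import urllib.error
      import urllib.request

      REDIRECTS = {
          "https://example.com/index.cfm": "https://example.com/",
          "https://example.com/about.cfm": "https://example.com/about/",
      }

      class NoRedirect(urllib.request.HTTPRedirectHandler):
          # Returning None stops urllib from following the redirect,
          # so the 3xx response surfaces as an HTTPError we can inspect.
          def redirect_request(self, req, fp, code, msg, headers, newurl):
              return None

      opener = urllib.request.build_opener(NoRedirect)

      for old, new in REDIRECTS.items():
          try:
              opener.open(old, timeout=10)
              print(f"{old}: served directly, no redirect at all")
          except urllib.error.HTTPError as err:
              location = err.headers.get("Location", "")
              ok = err.code == 301 and location == new
              print(f"{old} -> {location} ({err.code}) {'OK' if ok else 'CHECK'}")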

    Read the article

  • Direct Sales Consultant? Why You Need a Website

    Picture this: You meet a prospective customer named Ellen in a line-up at your local supermarket. It turns out that not only is Ellen interested in your product, she likes buying from people she knows. She asks for your card and promises to call you.

    Read the article

  • Why does Bing Webmaster Tools' SEO analyzer complain about multiple <h1> tags?

    - by Mathew Foscarini
    I used the Bing Webmaster Tools SEO analyzer on my website, and it reported: "There are multiple <h1> tags on the page." It recommends that there be only one <h1> tag on the page. The page is a listing of blog posts for a category, so each blog entry is structured like this:

      <article><header><h1><a>...</a></h1></header><p>summary...</p></article>
      <article><header><h1><a>...</a></h1></header><p>summary...</p></article>
      <article><header><h1><a>...</a></h1></header><p>summary...</p></article>
      <article><header><h1><a>...</a></h1></header><p>summary...</p></article>

    How is this invalid? I thought this was the correct way to describe a post in HTML5.
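
    For context, a hedged note: HTML5's outline algorithm was meant to scope an <h1> to its enclosing <article>, so the markup above is valid HTML5, but most SEO tools (Bing's included, apparently) never adopted the outline model and simply count <h1> tags per page. If silencing the warning matters, the conventional alternative is one page-level <h1> with an <h2> per post, a sketch of which follows:

      <h1>Category title</h1>
      <article>
        <header><h2><a href="...">Post title</a></h2></header>
        <p>summary...</p>
      </article>
      <!-- one <article> per post, each headed by an <h2> -->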

    Read the article

  • ASP.NET website deployment [on hold]

    - by Rei Brazilva
    I am getting my feet wet with ASP.NET and I have been following the tutorials. I deployed the site to Azure and it worked great. Today I started actually designing the site, and when I published, it looks as if Azure doesn't read any of the files I just updated, added, or modified. It works on my localhost, but not on Azure. I thought that when you publish, everything goes up, including the new files. I don't have enough reputation to add a picture, so you'll forgive me. So, basically, how do I get my entire site uploaded?

    In case anyone does stop by, I was able to pull this out just recently:

    CA0058 Error Running Code Analysis CA0058 : The referenced assembly 'DotNetOpenAuth.AspNet, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246' could not be found. This assembly is required for analysis and was referenced by: C:\Users\lotusms\Desktop\LOTUS MARKETING\ASP.NET\WebsiteManager\WebsiteManager\bin\WebsiteManager.dll, C:\Users\lotusms\Desktop\LOTUS MARKETING\ASP.NET\WebsiteManager\packages\Microsoft.AspNet.WebPages.OAuth.2.0.20710.0\lib\net40\Microsoft.Web.WebPages.OAuth.dll. [Errors and Warnings] (Global)

    CA0001 Error Running Code Analysis CA0001 : The following error was encountered while reading module 'Microsoft.Web.WebPages.OAuth': Assembly reference cannot be resolved: DotNetOpenAuth.AspNet, Version=4.0.0.0, Culture=neutral, PublicKeyToken=2780ccd10d57b246. [Errors and Warnings] (Global)

    Could this have something to do with the problem?

    Read the article

  • jQuery website is not opening in Ubuntu, but in XP everything is fine

    - by Raman Sethi
    I know it is weird, but I just discovered this: jquery.com is not opening in my Ubuntu Firefox or other KDE browsers, and hence many sites that pull code from code.jquery.com also hang. Is there any solution to this problem? I have found the cause: it is the DNS servers I am using, Google DNS, 8.8.8.8 and 8.8.4.4. Whenever I use these DNS servers in Ubuntu, my system stops responding to some sites; the connections open fine, but the requests end up waiting. I don't understand why. I checked my DNS with cat /etc/resolv.conf, and even after entering the Google DNS servers, it shows the DNS servers I received automatically after connecting to the service provider. I am connecting using Network Manager, which is not using the DNS servers I provided but the default ones. Any solution?
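
    A quick way to narrow this down is to query each resolver directly and compare, for example:

      nslookup code.jquery.com            # whatever /etc/resolv.conf points at now
      nslookup code.jquery.com 8.8.8.8    # ask Google's resolver explicitly
      nslookup code.jquery.com 8.8.4.4

    If the explicit 8.8.8.8 query resolves but normal lookups still go through the ISP's servers, NetworkManager is overriding the manual entries; switching the connection's IPv4 method to "Automatic (DHCP) addresses only" and listing the DNS servers there usually makes them stick (the exact wording of the option varies by NetworkManager version).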

    Read the article

  • Cannot install PC Tools firewall

    - by Philip
    I am unable to install PC Tools Firewall. It is fine up until after rebooting; then the PC Tools icon in the system tray hangs and indicates it is "initializing." I have waited 5 minutes with no change, and I have tried multiple times. The Vista firewall was turned off prior to the attempted installation. Help! Thank you.

    Read the article

  • Website misclassified by Websense

    - by Jeff Atwood
    I received the following email from a user of one of our websites:

      This morning I tried to log into example.com and I was blocked by Websense
      at work because it is considered a "social networking" site or something.
      I assume the Websense filter is maintained by a central location, so I'm
      hoping that by letting you guys know, you can get it unblocked.

    Per Wikipedia, Websense is web filtering or Internet content-control software. This means one (or more) of our sites is being miscategorized by Websense as "social networking" and thus disallowed for access at any workplace that uses Websense to control what websites their users can and cannot access during work hours. (I know, they are monsters!) How do we dispute this Websense classification error, as our websites should generally be considered "information technology" and never "social networking"? And how do we find out what category Websense has put our sites in, so we can proactively make sure they're not wrong?

    Read the article

  • Trouble with my website and IE7

    - by Hamish Hagaheygui
    Hi there, sorry, I'm new here, but I have a serious problem. I changed the CSS stylesheet for IE7 on my site http://bumblebbids.com, but now when using IE7 the site has no graphics and all of the script code is printed. I have tried restoring the original stylesheet, but it made no difference, so please suggest ways I can fix this. If not, would it be possible to show an "incompatible browser" message that only appears for IE7 users? Thanks
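
    On the fallback idea: IE7 honors conditional comments, so an IE7-only notice needs no scripting. A minimal sketch (the class name and wording are placeholders):

      <!--[if IE 7]>
        <p class="browser-warning">
          This site no longer supports Internet Explorer 7.
          Please upgrade your browser.
        </p>
      <![endif]-->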

    Read the article

  • GWT: Generate more complete crawl error report

    - by Mike
    I'm a developer in charge of managing Webmaster Tools and related issues (including correcting crawl errors) for dozens (hundreds, maybe?) of active sites, and as part of my duties I create a report of every discrepancy, including all pages generating a 404 and all pages that link to those pages. Currently, within Webmaster Tools, I'm able to download a CSV file of all pages with a 404 response, but I'm then having to manually click on every single one of those links and copy the "linked from" field to paste into my spreadsheet. This is extremely tedious and seems unnecessary; I would expect the ability to download all that data at once. I'm ultimately looking for one CSV file that has every URL with a 404, together with every URL that links to each of them. Am I overlooking this functionality somewhere, or does anyone have a good solution?

    Edit 1 (2/11/2013): Example of what the CSV output looks like now:

      URL,Response Code,News Error,Detected,Category
      http://www.abcdef.com/123.php,404,,11/12/13,Not found
      http://www.abcdef.com/456.php,404,,11/12/13,Not found

    Which is great, but let's say 123.php has 5 pages that link to it. Now I have to duplicate that row in my spreadsheet 4 more times, then go into Webmaster Tools, get all the URLs that link to the page, and add that data to my spreadsheet. The output I would prefer:

      URL,Response Code,Linked From,News Error,Detected,Category
      http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
      http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
      http://www.abcdef.com/123.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found
      http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage1.php,,11/12/13,Not found
      http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage2.php,,11/12/13,Not found
      http://www.abcdef.com/456.php,404,http://www.ghijkl.com/naughtypage3.php,,11/12/13,Not found

    Note the (hypothetical) addition of a "Linked From" column, as well as the fact that there are only 2 unique URLs now (as before), but all of the "linked to" pages are shown in one report.

    Edit 2 (2/12/2013): To clarify, my question is less about detecting and correcting 404s and more about generating a report of what Google has listed as errors. Oftentimes these errors aren't even valid anymore, but I still need documentation to show that Google detected a problem and that the problem is now fixed. Many of the "linked from" URLs I find are actually outdated, cached resources. For example, I'll frequently see that the linked-from URL is the sitemap, which is actually an old sitemap cached by Google that points to an old page. Neither the sitemap nor the old page exists, but they still appear in my crawl error reports because they are cached resources.
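
    Until the export improves, one workaround is to rebuild the "Linked From" column yourself: crawl your own pages (the sitemap makes a convenient seed list) and record which ones link to each 404 URL. A rough Python sketch follows; the input and output file names are hypothetical, and URL matching is by exact string, so treat it as a starting point rather than a polished tool.

      # Rough sketch: emit one CSV row per (404 URL, linking page) pair
      # by scanning known site pages for anchors. File names are placeholders.
      import csv
      import urllib.request
      from html.parser import HTMLParser
      from urllib.parse import urljoin

      class LinkCollector(HTMLParser):
          def __init__(self, base_url):
              super().__init__()
              self.base_url = base_url
              self.links = set()

          def handle_starttag(self, tag, attrs):
              if tag == "a":
                  for name, value in attrs:
                      if name == "href" and value:
                          self.links.add(urljoin(self.base_url, value))

      with open("crawl_errors.csv") as f:       # the Webmaster Tools export
          dead = {row["URL"] for row in csv.DictReader(f)}

      with open("site_pages.txt") as f:         # one live page URL per line
          pages = [line.strip() for line in f if line.strip()]

      with open("errors_with_sources.csv", "w", newline="") as out:
          writer = csv.writer(out)
          writer.writerow(["URL", "Response Code", "Linked From"])
          for page in pages:
              try:
                  body = urllib.request.urlopen(page, timeout=10).read()
              except OSError:
                  continue                      # skip pages that fail to load
              collector = LinkCollector(page)
              collector.feed(body.decode("utf-8", errors="replace"))
              for url in collector.links & dead:
                  writer.writerow([url, 404, page])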

    Read the article

  • The Advantages of Using SEO For Your Website

    If you've engaged in online marketing, then you will know that there are numerous ways to go about it. Some of these methods include pay-per-click, email marketing, lead generation, social media, and plenty more. There is one method, however, which should be part of any online marketing strategy: search engine optimisation (SEO).

    Read the article

  • How to Optimize Your Website Using the Title Meta Tag

    When I browse the internet, I am shocked to see how many big websites fail to utilize the TITLE tag correctly to extract as much weight as possible for the keyword they are optimizing for. From my experience, the TITLE tag is one of the most powerful on-site SEO factors. We are all aware that backlinks with anchor text using the targeted page's keyword are the most important factor, but it never fails to amaze me how much people ignore this powerful tag.

    Read the article

  • Website Optimization For Maximum Traffic

    Advertising of all kinds has grown into a major venue across the entire internet. Nearly all companies have placed ads on the internet. But with tons of web sites being viewed and s... [Author: Frank Breinling - Web Design and Development - June 08, 2010]

    Read the article

  • Meaning of Crawl errors

    - by com
    My question is about the definition of the crawl errors sections in Google Webmaster Tools. Crawl errors are divided into a few sections.

    Let's first consider the HTTP section. I assume all the broken links in this section were somehow found by the crawler, not taken from the sitemap. If these links were found by scanning the sitemap pages for links, why doesn't it mention the source page, like the Sitemap section does with its Linked From column? Please correct me if I am wrong.

    The Sitemap section. It looks like all of these links came from my sitemap, yet there is a Linked From column. Since I already know the broken links are from the sitemap, in order to fix the errors I should revise my sitemap. Am I wrong?

    The Not followed section. I don't know what this means. It looks like it accumulates all links that caused a redirect, but for some reason Google considers those redirects wrong. Do you know if there is any set of rules for how a wrong redirect is determined? I actually found my mistake: I tried to normalize URLs and redirect them to the right URL, but I did the normalization in the wrong way.

    The Not found section. This is like the HTTP section but with 404 errors, and it has a Linked From column. But very often Linked From shows "unavailable". What does that mean? Can Google really not tell me how it found a non-existent page? And how is this section related to the Sitemap section; does it contain all the 404 links from the sitemap too? There are far more 404 links here than in the sitemap. I looked at one entry's Linked From and saw that the link came from a sitemap two months ago. But why does Google keep it indexed? The link is already dead, and the new sitemap doesn't have it. Is there an expiry date for old links?

    The Unreachable section. This looks like the section for 500 errors, and it has no Linked From column. There are too many completely meaningless links; I really don't know where this stuff came from, and without Linked From I am not able to figure out how to deal with it.

    Sorry for such a big topic, but I just want to make clear what every section stands for, because that is crucial in order to deal with all these problems. Hopefully it will be useful not just for me. Thanks!

    Read the article

  • Is having your own website important?

    - by Josh K
    How necessary or important is it? I try to keep a running list of blogs or sites to follow, but a lot of the time I pull up someone's profile and notice there isn't anything there. Is it really important? I understand there are different levels of programming (from C/C++ systems programmers to Rails, and even Haskell and J), and not everyone works in a language that lends itself to web-based applications. Not everything is web-centric; however, with the advent of many popular and sometimes free services, I don't think it's unreasonable to expect a majority of programmers to have a personal site.

    Read the article

  • Top 10 Tips For Better SEO Website Design

    What makes the difference between mediocre SEO and stellar SEO? The answer is good onsite optimization that tells the search engines what you're all about. Read these tips to find out how to best communicate your site's topic to the search engines.

    Read the article

  • Money in from website

    - by oshirowanen
    EDIT 1: It seems that PayPal's micropayment system is currently my best option for retaining as much of the $1 as possible. Does anyone know of a way to retain even more of it?

    ORIGINAL QUESTION: I need to receive money from users on my webpage. They will only pay very small amounts, $1 max, but the total will probably go up to $10,000.00. What is the best way to receive this money from a webpage? By "the best way" I mean a method where I lose as little of it as possible in fees for receiving the money.
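
    To make the fee difference concrete: assuming PayPal's published rates at the time (2.9% plus $0.30 per standard transaction, versus 5% plus $0.05 per micropayment transaction), the arithmetic on 10,000 payments of $1 looks like this:

      # Back-of-envelope fee comparison. The rates are assumptions based on
      # PayPal's published pricing at the time, not guaranteed current figures.
      def kept(amount, pct, fixed):
          return amount - (amount * pct + fixed)

      payments, amount = 10_000, 1.00
      standard = kept(amount, 0.029, 0.30) * payments   # 2.9% + $0.30
      micro = kept(amount, 0.05, 0.05) * payments       # 5% + $0.05
      print(f"standard: ${standard:,.2f} of ${payments * amount:,.2f}")
      print(f"micro:    ${micro:,.2f}")
      # standard: $6,710.00 of $10,000.00
      # micro:    $9,000.00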

    Read the article

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended.

    I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only the English (default) and French options remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

      https://example.com (default)
      https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

      # robots.txt generated at http://www.mcanerin.com
      User-agent: *
      Disallow:
      Disallow: /cgi-bin/
      Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked it in GWT using the robots.txt testing tool, and it works. But under HTML Improvements, the previously crawled language-translation URLs remain indexed. The internet says the removed URLs should return a 404 so Google knows to de-index them. I checked what my CMS would serve if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

      https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist (I removed the language translations), yet the page loads when it should not! I played around and typed example.com?whatever123; it seems that parameters always load as long as everything before the question mark is a real URL. So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads; it's the parameter that needs to be de-indexed.
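
    Two hedged notes on this: robots.txt only stops crawling; it does not remove URLs that are already indexed, and blocking the crawl can even prevent Google from re-fetching a page to see a 404 or a noindex. Since the CMS serves the same content no matter what parameters are appended, the usual fix is a canonical link in the page head, which collapses every parameterized variant onto one URL, for example:

      <link rel="canonical" href="https://example.com/reports/view/884">

    Webmaster Tools also has a URL Parameters setting (under Crawl) where you can tell Google that the l parameter does not change page content, plus a Remove URLs tool for individual cleanups.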

    Read the article

  • How to discredit hacked links pointing at my company's website

    - by Dan Gayle
    The competition for one of my company's websites has started a really dirty campaign of acquiring hacked links. One of their ingenious tactics has been to seed links to OUR site within their hack bot, making US look like we might be responsible for it, or using us to cover their tail. These are .gov and .edu sites. Is there any way to discredit these links? To disavow them at all?

    EDIT: Penguin has really affected this question, IMO. Does anyone know if there is a revised opinion on disavowing backlinks to your site?
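
    If you do end up disavowing, Google's disavow tool accepts a plain text file with one entry per line: comments start with #, whole domains use a domain: prefix, and individual pages are listed as full URLs. A minimal sketch with placeholder domains:

      # links seeded by the competitor's bot, documented before submission
      domain:spammy-link-network.example
      http://compromised-university-site.example/page-with-injected-link.html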

    Read the article
