Search Results

Search found 9721 results on 389 pages for 'quicktest pro'.

Page 173 of 389

  • Why do different browsers return different search results at Google and how can I prevent it?

    - by Sei
    I run several websites and constantly check keyword rankings on google.com. It is really important for us to see the organic search results without logging in or setting a specific location. Since this morning, my colleague and I have been checking the same ranking in both IE and Firefox, and the results are surprisingly different (it almost feels as if IE were logged in, because the ranking is much higher, while in reality it is not). I have switched computers and the same problem occurred. It did not happen before. Can anyone tell me why this is?

    Read the article

  • Marketing for Scheduled Online Events

    - by JT703
    Last year I started working with a team on our first major web project (We, the Pixels). I believe the idea is very solid, but it has a hard requirement: a group of people must be on the site at the same time for randomly scheduled events. We are having problems getting people to come and stay for these events. What is the proper marketing approach for bringing people to the site for these events? We have recently done the following in an attempt to fix the problem:
    - Added email notification when new events are created
    - Added privileges based on rank
    - Added text throughout the site encouraging users to set up events further in the future, so other users have time to see that they exist
    - Gotten involved with other communities that would find the site interesting, in order to promote the site
    - Advertised using Google AdWords
    Is there a standard marketing approach for a case like this?

    Read the article

  • Off-site Cardholder Data Storage

    - by LinuxGnut
    Is there a service or site out there that will store cardholder data for me? I don't need any kind of transaction processing or recurring billing... I just need somewhere I can store the data until someone in my company is able to look at it. The specific need is allowing customers to input data that will be used for credit checks: name, address, credit card(s), and so on. Google Checkout, PayPal, NetSuite, and Authorize.net seem to be what everyone suggests to me, but they don't offer what I need -- they're just payment gateways.

    Read the article

  • Web Safe Area (optimal resolution) for web app design?

    - by M.A.X
    I'm in the process of designing a new web app, and I'm wondering what 'Web Safe Area' I should optimize the app layout and design for. By Web Safe Area I mean the actual area available to display the website in the browser, which is influenced by monitor resolution as well as the space taken up by the browser and OS. I did some investigation and thinking on my own, but wanted to share it to see what the general opinion is. Here is what I found.
    Optimal Display Resolution:
    - w3schools web stats seems to be the most referenced source (however, they state that these are results from their own site and are biased towards tech-savvy users)
    - http://www.w3counter.com/globalstats.php (aggregate data from something like 15,000 different sites that use their tracking services)
    - StatCounter Global Stats Display Resolution (stats are based on aggregate data collected by StatCounter on a sample exceeding 15 billion pageviews per month, collected across the StatCounter network of more than 3 million websites)
    - NetMarketShare Screen Resolutions (marketshare.hitslink.com) (a web analytics consulting firm; they get data from the browsers of site visitors to their on-demand network of live-stats customers. The data is compiled from approximately 160 million visitors per month)
    Display Resolution Summary: There is a bit of variation between the above sources, but in general, as of Jan 2011 it looks like 1024x768 is about 20%, while ~85% have a higher resolution of at least 1280x768 (1280x800 is the most common of these, with 15-20% of the total web depending on the source; 1280x1024 and 1366x768 follow behind with 9-14% of the share). My guess would be that the higher resolution values will be even more common if we filter on North America, and even higher if we filter on N. American corporate users (unfortunately I couldn't find any free geographically filtered statistics). Another point to note is that the 1024x768 desktop user population is likely lower than the aforementioned 20%, since the iPad (1024x768 native display) is likely propping up those numbers (the app I'm designing is Flash-based, and Apple mobile devices don't support Flash, so iPad support isn't a concern). My recommendation would be to optimize around the 1280x768 constraint (note: 1280x768 is actually a relatively rare resolution, but I think it's a valid constraint range considering that 1366x768 is relatively common and 1280 is the most common horizontal resolution).
    Browser + OS Constraints: To further add to the constraints, we have to subtract the space taken up by the browser (assuming IE, which is the most space-consuming) and the OS (assuming WinXP-Win7):
    - Win7 has the biggest taskbar footprint, at a height of 40px (XP's and Vista's is 30px).
    - The default IE8 view uses up 25px at the bottom of the screen with the status bar, and a further 120px at the top of the screen with the window title bar and the browser UI (assuming the default 'favorites' toolbar is present; it would instead be 91px without the favorites toolbar).
    - Assuming no scrollbar, we also lose a total of 4px horizontally for the window outline.
    This means that we are left with 583px of vertical space and 1276px of horizontal. In other words, a Web Safe Area of 1276 x 583. Is this a correct line of thinking? I'm really surprised that I couldn't find this type of investigation anywhere on the web. Lots of websites talk about designing for 1024x768, but that's only half the equation! There is no mention of browser/OS influences on the actual area you have to display the site/app.
    Any help on this would be greatly appreciated! Thanks.
    EDIT: Another caveat to my line of thinking above is that different browsers take up different amounts of pixels depending on the OS they're running on. For example, under WinXP, IE8 takes up 142px at the top of the screen (instead of the aforementioned 120px for Win7), because the file menu shows up by default on XP while in Win7 it is hidden by default. So it looks like on WinXP + IE8 the Web Safe Area would be a mere 572px high (768px-142-30-24=572).
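
    The arithmetic above is easy to script so it can be re-run for other resolutions. Below is a minimal Python sketch; the taskbar, browser-chrome and border pixel values are taken straight from the estimates in the question, so the output is only as reliable as those estimates.

        # Rough "web safe area" calculator based on the figures in the question.
        # Chrome sizes are the asker's measurements, not authoritative values.
        BROWSER_CHROME = {
            # (top_px, bottom_px, horizontal_px)
            "IE8 on Win7":  (120, 25, 4),
            "IE8 on WinXP": (142, 25, 4),
        }
        TASKBAR = {"Win7": 40, "WinXP": 30}

        def safe_area(width, height, browser="IE8 on Win7", os_name="Win7"):
            top, bottom, horiz = BROWSER_CHROME[browser]
            return width - horiz, height - TASKBAR[os_name] - top - bottom

        for res in [(1024, 768), (1280, 768), (1280, 800), (1366, 768)]:
            w, h = safe_area(*res)
            print(f"{res[0]}x{res[1]} -> safe area {w}x{h}")   # 1280x768 -> 1276x583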

    Read the article

  • A lot of 408 errors in apache logs - how to prevent them?

    - by Robert Grezan
    I see a lot of 408 errors in my apache2 logs. I increased RequestReadTimeout and KeepAliveTimeout, but the errors are still there. They look like this:

        xx.xx.xx.xx - - [05/Dec/2012:19:33:56 +0100] "-" 408 4561 "-" "-"
        xx.xx.xx.xx - - [05/Dec/2012:19:33:56 +0100] "-" 408 4561 "-" "-"

    I have heard that these errors are related to a Chrome optimization, and some users have reported our site returning a 408 error. Interestingly, we get two 408 errors from the same IP in sequence, and then that IP starts working.
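
    Before tuning timeouts further, it can help to confirm that these entries are idle connections (no request line, no user agent) rather than real failed requests. A small Python sketch that tallies 408s per client IP, assuming the standard combined log format and a log file named access.log (both assumptions):

        # Count 408 responses per client IP in an Apache combined-format log.
        # A "-" request line means the connection was opened but no request was
        # ever sent before the timeout, which is typical of browser pre-connects.
        import re
        from collections import Counter

        LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "([^"]*)" (\d{3}) ')

        per_ip = Counter()
        empty_requests = 0
        with open("access.log") as log:            # path is an assumption
            for line in log:
                m = LINE.match(line)
                if not m:
                    continue
                ip, request, status = m.groups()
                if status == "408":
                    per_ip[ip] += 1
                    if request == "-":
                        empty_requests += 1

        print(f"408s with an empty request line: {empty_requests}")
        for ip, count in per_ip.most_common(10):
            print(ip, count)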

    Read the article

  • Strategy for hosting 700+ domains, each with static HTML site

    - by jonschlinkert
    I have a portfolio of more than 700 domain names, and ideally I'd like to put up a single-page HTML/CSS/JavaScript site for each domain. Is there a system/strategy/workflow that will allow me to do the following?
    - Automate the deployment of new websites, quickly and easily, without having to manually initiate each new website in an admin panel. For instance, I've seen Dropbox-based solutions that claim to make it simple to set up new websites on your Dropbox account, but you still have to set each one up in an admin interface first. It would be so much easier to have a folder naming convention that allowed the user to easily clone/copy/duplicate sites inside their Dropbox App folder (https://www.dropbox.com/developers/blog/23) to create new ones. Sounds interesting, however...
    - Managing CNAMEs on the registrar side is easy, but is there a way to quickly associate CNAMEs with new websites, maybe gh-pages-style (https://help.github.com/articles/setting-up-a-custom-domain-with-pages)? With GitHub's gh-pages, all you have to do is drop a file called CNAME into your repo, with the domain name you want associated with the repo inside the file. Unfortunately, gh-pages isn't a good solution for what I'm doing.
    I'm also a front-end developer specializing in rapid web development and front-end build systems, so building and maintaining static assets for hundreds of sites is no problem. It's the hosting side that I really struggle with. Any suggestions?
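
    One workflow that fits a folder-per-domain convention is to generate the web server configuration straight from the directory listing. The Python sketch below writes one nginx server block per folder; the paths, the exact directives and the idea of running it as a deploy step are illustrative assumptions rather than a finished tool, and the same approach works for Apache vhosts or a static host's provisioning API.

        # Generate one nginx server block per domain folder under SITES_ROOT.
        # Convention (assumed): /var/www/sites/<domain>/ holds that domain's files.
        from pathlib import Path

        SITES_ROOT = Path("/var/www/sites")        # assumption
        CONF_DIR = Path("/etc/nginx/conf.d")       # assumption

        TEMPLATE = """server {{
            listen 80;
            server_name {domain} www.{domain};
            root {root};
            index index.html;
        }}
        """

        def generate_configs():
            for site in sorted(SITES_ROOT.iterdir()):
                if not site.is_dir():
                    continue
                domain = site.name                 # e.g. "example.com"
                conf = TEMPLATE.format(domain=domain, root=site)
                (CONF_DIR / f"{domain}.conf").write_text(conf)
                print(f"wrote config for {domain}")

        if __name__ == "__main__":
            generate_configs()                     # then reload nginx from your deploy tooling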

    Read the article

  • Google Keyword Competition rating

    - by Eric
    Google offers a Keyword application that allows me to see the number of times a particular query has been made on Google. There is a column in the results named "Competition" (actually it's "Concurrence" in French; I'm just translating). It's a rating from 0 to 1, like a percentage. What does that indicator measure? EDIT: Is this something useful I should rely on? I'm not sure how to interpret this data. Should I go for less competitive keywords with a lower number of searches, or not worry about it and go for the highly searched keywords anyway? Is 50% considered high? What about 75%? I have a very niche market that sells expensive offline services, so the very long tail is my goal (I assume). If you hadn't already figured it out, I'm very new to SEO =)

    Read the article

  • Should I give preferential treatment to proxy users on my ecommerce site?

    - by Question Overflow
    I am setting up an ecommerce site that caters to a worldwide audience. I would imagine that visitors will come from everywhere, and that, for whatever reason, some will be connecting through proxy servers. My site uses a server that is configured to rate-limit connections from the same IP address to protect itself from a DoS attack. So, if a proxy server is heavily used by my visitors, its traffic would appear to be a DoS. This is problematic in the sense that it is hard to tell whether the users are genuinely browsing my site or whether a DoS is taking place. So my question is: should I give preferential treatment to proxy users on my ecommerce site? If yes, how should this be done? If not, why not?
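
    If the answer ends up being "loosen the limit for known shared proxies", one way to do it is to rate-limit on a composite key: the connecting IP alone, except for whitelisted proxy addresses, where the first X-Forwarded-For value is appended. The Python sketch below is a toy illustration of that idea, not a recommendation either way; note that X-Forwarded-For is trivially spoofed, so it should only be trusted for proxies you have explicitly listed.

        # Toy sliding-window rate limiter with a per-client key. For known proxy
        # exits, each end user behind the proxy is treated as a separate client.
        import time
        from collections import defaultdict, deque

        KNOWN_PROXIES = {"203.0.113.10", "203.0.113.11"}   # example addresses
        WINDOW_SECONDS = 60
        MAX_REQUESTS = 120

        _hits = defaultdict(deque)

        def client_key(remote_ip, x_forwarded_for=None):
            if remote_ip in KNOWN_PROXIES and x_forwarded_for:
                return f"{remote_ip}|{x_forwarded_for.split(',')[0].strip()}"
            return remote_ip

        def allow(remote_ip, x_forwarded_for=None):
            key = client_key(remote_ip, x_forwarded_for)
            now = time.monotonic()
            window = _hits[key]
            while window and now - window[0] > WINDOW_SECONDS:
                window.popleft()
            if len(window) >= MAX_REQUESTS:
                return False                       # over the limit for this client
            window.append(now)
            return True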

    Read the article

  • URL rewriting via forward proxy

    - by Biggroover
    I have an app that runs inside my firewall and talks out to multiple endpoints via HTTP/HTTPS on a non-standard port, e.g. http://endpoint1.domain.com:7171, http://endpoint2.domain.com:7171. What I want to do is route these requests through a forward proxy that rewrites the URL to something like http://allendpoints.domain.com/endpoint1 (port 80 or 443), and then, on the other end, have a reverse proxy that unwinds what the forward proxy did in order to reach the specific endpoints. The result would be that I can route existing app requests to specific endpoints across the internet without having to change my app software. My questions are: Is this even possible? Is it a good idea, or are there better ways to do this? Can this be done with IIS and Apache as the proxies?
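
    Whichever proxy software ends up doing the work (IIS ARR, Apache mod_proxy/mod_rewrite), the mapping itself is simple to state, and writing it down as a pair of functions is a useful sanity check before encoding it in proxy rules. A Python sketch using the host names from the question; the choice of HTTPS on the public leg is an assumption:

        # Forward rewrite:  http://endpointN.domain.com:7171/path
        #               ->  https://allendpoints.domain.com/endpointN/path
        # Reverse rewrite undoes it at the far end.
        from urllib.parse import urlsplit, urlunsplit

        GATEWAY_HOST = "allendpoints.domain.com"
        ENDPOINT_PORT = 7171

        def to_gateway(url):
            parts = urlsplit(url)
            endpoint = parts.hostname.split(".")[0]          # "endpoint1"
            return urlunsplit(("https", GATEWAY_HOST,
                               f"/{endpoint}{parts.path or '/'}", parts.query, ""))

        def from_gateway(url):
            parts = urlsplit(url)
            endpoint, _, rest = parts.path.lstrip("/").partition("/")
            return urlunsplit(("http", f"{endpoint}.domain.com:{ENDPOINT_PORT}",
                               "/" + rest, parts.query, ""))

        assert to_gateway("http://endpoint1.domain.com:7171/api/v1?x=1") == \
               "https://allendpoints.domain.com/endpoint1/api/v1?x=1"
        assert from_gateway("https://allendpoints.domain.com/endpoint1/api/v1?x=1") == \
               "http://endpoint1.domain.com:7171/api/v1?x=1"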

    Read the article

  • Moving large amounts of data between shared hosts

    - by Bryan M.
    I recently acquired a client who is a photographer and was interested in moving web hosts, since his current host had threatened to throw him off due to CPU spiking. The migration went fairly easily, with about 350 MB of website and media files. Then I discovered about 60 GB of client galleries he had failed to mention. I am unable to move this much data myself, since I'm capping out at about 20 kb/s on the FTP connection. Has anyone encountered a situation where they needed to migrate this much data between cheap hosts? Should we contact the hosting companies about this (he is moving from Westhost to MediaTemple)?

    Read the article

  • Robots meta tag with "noimageindex"

    - by jimy
    I have a doubt regarding the noimageindex value in the robots meta tag. If I add this tag to my page, say http://www.example.com/somepage/someaction.php, and the images on that page are served from another domain, say http://www.exampleimg.com, will the tag still have any effect? In other words, will those images be ignored by bots, or is exampleimg.com unaffected by the tag, so all the images will be indexed anyway? Note: we want to stop indexing of the images on that particular page.

    Read the article

  • What is the best way to deal with 404s from an external site that all point to the same page?

    - by Lee
    I started getting 404s showing up in Google Webmaster Tools from a site that links to a specific category, but with odd characters appended to the end of the URL. Something like this:

        http://example.com/category/puppies%EF%BC%9A.textwidget%E8%A6%81%E7%B4%A0%E7%B7%A8%E9%9B%86

    Google Webmaster Tools says that there are about 120 of these links, and I can imagine there will be more to come. What is the best way to handle these links from an SEO point of view? I have heard that 301-redirecting too many links at one time can cause Google to ding the site, but I don't want this site to continue posting broken links. Any help on this would be appreciated.
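
    If the eventual answer is to redirect these back to the clean category URL, the rewrite itself is mechanical: cut the path at the first stray percent-escape (or at the ".textwidget" fragment). A rough Python sketch of that rule; whether to 301 them at all is exactly the open question above, and the pattern is an assumption based on the single example URL:

        # Map a mangled path like /category/puppies%EF%BC%9A.textwidget%E8%A6%81...
        # back to its clean prefix /category/puppies, for use as a 301 target.
        import re

        JUNK = re.compile(r"(%[0-9A-Fa-f]{2}|\.textwidget).*$")

        def clean_target(path):
            cleaned = JUNK.sub("", path)
            return cleaned.rstrip("/") or "/"

        assert clean_target(
            "/category/puppies%EF%BC%9A.textwidget%E8%A6%81%E7%B4%A0"
        ) == "/category/puppies"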

    Read the article

  • Is there a way I can filter traffic by page-type based upon URL structure in Google Analytics or Google Webmaster Tools?

    - by Felix
    I have a local business directory site. I'm trying to segment my incoming traffic by page type so that I can find out what percentage of traffic goes exclusively to zip code pages and what percentage goes to city/state-level pages. I basically want to filter by URL structure to find out what percentage of total traffic the zip code pages account for. Could Google Tag Manager help with this? Here are the two URL patterns:

        http://www.example.com/ny/new-york/10011/
        http://www.example.com/ny/new-york
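
    Whichever tool ends up doing the segmentation (Analytics content groupings, view filters, or Tag Manager variables), the classification rule itself is just a regular expression on the path: a trailing five-digit segment marks a zip code page. A small Python sketch of that rule, handy for validating it against a URL export before pasting the pattern into Analytics; the path shapes are assumed from the two examples above:

        # Classify directory URLs into "zip" vs "city/state" pages by path shape.
        import re
        from collections import Counter

        ZIP_PAGE = re.compile(r"^/[a-z]{2}/[^/]+/\d{5}/?$")
        CITY_PAGE = re.compile(r"^/[a-z]{2}/[^/]+/?$")

        def classify(path):
            if ZIP_PAGE.match(path):
                return "zip"
            if CITY_PAGE.match(path):
                return "city/state"
            return "other"

        counts = Counter(classify(p) for p in [
            "/ny/new-york/10011/",
            "/ny/new-york",
            "/about",
        ])
        print(counts)    # Counter({'zip': 1, 'city/state': 1, 'other': 1})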

    Read the article

  • Updating Google sitemap for mobile

    - by dimo414
    I have a series of utilities that generate Google sitemaps for my whole site. These files are massive and slow to build. We want to start telling Google that these pages are mobile-crawlable too, by adding them to mobile sitemaps, but the documentation is unclear about whether I need physically different files for my mobile URLs than for my normal ones. If this is my current sitemap:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
          </url>
        </urlset>

    can I simply change it to:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
            <mobile:mobile/>
          </url>
        </urlset>

    or do I need to create new files with the additional markup, alongside my existing files?
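
    If the existing files can simply be annotated rather than regenerated, that annotation is mechanical. A hedged Python sketch using the standard library's ElementTree; the file names are placeholders, and whether annotating the same files (versus producing separate mobile sitemaps) is acceptable is exactly the question being asked:

        # Add the mobile annotation to every <url> entry in an existing sitemap.
        import xml.etree.ElementTree as ET

        SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
        MOBILE_NS = "http://www.google.com/schemas/sitemap-mobile/1.0"

        ET.register_namespace("", SITEMAP_NS)
        ET.register_namespace("mobile", MOBILE_NS)

        def annotate(in_path, out_path):
            tree = ET.parse(in_path)
            for url in tree.getroot().findall(f"{{{SITEMAP_NS}}}url"):
                if url.find(f"{{{MOBILE_NS}}}mobile") is None:
                    ET.SubElement(url, f"{{{MOBILE_NS}}}mobile")
            tree.write(out_path, encoding="UTF-8", xml_declaration=True)

        annotate("sitemap1.xml", "sitemap1-mobile.xml")   # placeholder file names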

    Read the article

  • Is WordPress a good CMS for an Event Site? [closed]

    - by Roland
    I plan on building a gallery/exhibition event site. Locations are usually the same and fall into 3 categories (gallery, offspace, institution). Then there is the exhibition title, the date, and the participating artists. I was wondering whether WordPress could handle such a site. It should be very data-driven, so all the information appears in a list view on one page and can be ordered and queried (which artists took part in which exhibitions, and so on). Please tell me the pros and cons of using WordPress for such a site, and the problems I could run into if I plan to broaden the scope later on. Thanks!

    Read the article

  • Firefox and Chrome Display "top: -5px" Differently

    - by Kevin
    Using Google Web Toolkit, I have a DIV parent with a DIV and an anchor as children:

        <div class="unseen activity">
          <div class="unseen-label"/>
          <a href .../>
        </div>

    With the following CSS, Chrome shows the "unseen-label" slightly below the anchor, while Firefox shows the label in line with the anchor (the anchor itself is positioned correctly in both Chrome and Firefox):

        .unseen-activity div.unseen-label {
          display: inline-block;
          position: relative;
          top: -5px;
        }

    and

        .unseen-activity a {
          background: url('style/images/refreshActivity.png') no-repeat;
          background-position: 0 2px;
          height: 20px;
          overflow: hidden;
          margin-left: 10px;
          display: inline-block;
          margin-top: 2px;
          padding-left: 20px;
          padding-right: 10px;
          position: relative;
          top: 2px;
        }

    Please tell me how to change my CSS so that Chrome renders the label centered on the anchor, while keeping Firefox rendering correctly.

    Read the article

  • I have a large number of links on every page; for design reasons I want to keep them, but are they hurting my SEO?

    - by Callum Rexter
    The site is http://www.centralsaddlery.co.uk. We have other issues which we are tackling in terms of content etc., but the question I have is: "Is my main navigation hurting us in SEO?" It's a lot of links, and it's on a lot of pages. If so, is there a way to get Google to ignore the links below the top level? I had thought Google would see that the links are hidden by default and only shown on hover, but I can't verify this at all. We absolutely want to keep the menu; our customers like it and so do we, and we think it is pretty usable, as we have a lot of products to look at. Any advice is appreciated (and any tips for any other part of the SEO are welcome too).

    Read the article

  • Alternative to GoDaddy's ConsoliDate feature (change domain expiration date)

    - by Jim
    I've been using GoDaddy to manage about 50 domain names for a few years, but recently decided to move (probably to Namecheap) because of the elephant-killing incident. One GoDaddy feature I like a lot is ConsoliDate, which allows you to change the expiration date of domain names for a small fee. I've searched for a while but haven't found any other registrar that provides this feature. Does anyone know of a registrar that allows you to change the expiration date of domains? Thanks!

    Read the article

  • Gzip compress offline?

    - by shoosh
    I've configured my site to serve compressed content by putting this line in .htaccess:

        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css application/javascript application/json

    This works perfectly for almost all files, except a few large JSON files that are above 200 KB. For some reason they are not being compressed; I can see that they aren't by using the Net tab in Firebug and the Network panel in Chrome. So, as a workaround, I thought I could compress these files offline and have Apache serve them pre-compressed. What tool should I use to compress them? Is the Linux gzip command the right one? Are there any special flags I should use? And what should I put in .htaccess so that the server knows to serve these files with Content-Encoding: gzip?
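
    For the offline step, any gzip implementation will do; the sketch below uses Python's standard gzip module to write a .gz file next to each large JSON file (the directory and size threshold are placeholders). Serving the pre-compressed variant with the right Content-Encoding header is then a separate Apache configuration step.

        # Pre-compress large JSON files so the server can hand out the .gz variant.
        import gzip
        import shutil
        from pathlib import Path

        ROOT = Path("/var/www/site/data")      # assumption
        MIN_SIZE = 200 * 1024                  # only bother with files over ~200 KB

        for src in ROOT.rglob("*.json"):
            if src.stat().st_size < MIN_SIZE:
                continue
            dst = src.parent / (src.name + ".gz")
            with open(src, "rb") as f_in, gzip.open(dst, "wb", compresslevel=9) as f_out:
                shutil.copyfileobj(f_in, f_out)
            print(f"compressed {src} -> {dst}")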

    Read the article

  • How do I deal with content scrapers? [closed]

    - by aem
    Possible Duplicate: How to protect SHTML pages from crawlers/spiders/scrapers?
    My Heroku (Bamboo) app has been getting a bunch of hits from a scraper identifying itself as GSLFBot. Googling that name turns up various people who've concluded that it doesn't respect robots.txt (e.g., http://www.0sw.com/archives/96). I'm considering updating my app to keep a list of banned user agents, serving all requests from those user agents a 400 or similar, and adding GSLFBot to that list. Is that an effective technique, and if not, what should I do instead? (As a side note, it seems weird for an abusive scraper to use a distinctive user agent.)
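
    If blocking has to happen in the application itself, a user-agent blocklist is a few lines of middleware. A minimal WSGI-style sketch in Python; the actual app on Heroku's Bamboo stack may well be Ruby, in which case the same idea translates directly to a Rack middleware, and returning 403 rather than 400 is simply the choice made for this sketch:

        # Minimal WSGI middleware that rejects requests from blocklisted user agents.
        BLOCKED_AGENTS = ("gslfbot",)          # matched case-insensitively, as a substring

        class BlockScrapers:
            def __init__(self, app, blocked=BLOCKED_AGENTS):
                self.app = app
                self.blocked = tuple(b.lower() for b in blocked)

            def __call__(self, environ, start_response):
                agent = environ.get("HTTP_USER_AGENT", "").lower()
                if any(b in agent for b in self.blocked):
                    body = b"Forbidden"
                    start_response("403 Forbidden", [
                        ("Content-Type", "text/plain"),
                        ("Content-Length", str(len(body))),
                    ])
                    return [body]
                return self.app(environ, start_response)

        # usage: application = BlockScrapers(application)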

    Read the article

  • Port numbers appended to anchor tags

    - by glifchits
    I've built a static site. Locally, when I serve the content with python -m SimpleHTTPServer, everything behaves normally, but when I copy the HTML onto the server and browse the site at the server's URL, some links have a port number appended to the domain: for example, url.com:84/path where the correct URL is url.com/path. The port number varies, but is always between 81 and 85. It is an Apache server. I'm not experienced with web server configuration, and I'm not the admin of the server. Let me know if there is more information that would help solve my problem.

        ~> cat /etc/*release*
        SuSE SLES-8 (i386)
        VERSION = 8.1
        UnitedLinux 1.0 (i586)
        VERSION = 1.0
        LSB_VERSION="1.2"
        DISTRIB_ID="UnitedLinux"
        DISTRIB_RELEASE="1.0"
        DISTRIB_DESCRIPTION="UnitedLinux 1.0 (i586)"

    Read the article

  • Google nofollow, Disavow and Link Removal Requests

    - by PsychoDad
    I am the owner of http://www.YouReview.net, and I am constantly getting requests from people asking me to remove links to their sites, or else they will disavow the links, and they threaten me with Google penalties. All of this is a bit frustrating because, first, I use nofollow on any link outside the YouReview.net domain, and second, I've never heard of Google penalizing a site for linking to other websites. My question is twofold: do disavowed links penalize the site that was disavowed, and does the "nofollow" attribute absolutely guarantee that the link is not followed and not counted for search engine ranking? Why don't more people know about nofollow?

    Read the article

  • 301 Redirects for regional variants of a homepage

    - by Adam Jenkin
    I am planning on implementing a website which has regional homepage variants, for example mycompany.com/europe and mycompany.com/us. The rest of the site is region-agnostic, and content will continue as usual, such as mycompany.com/news, mycompany.com/about-us, etc. For homepage (.com) requests, I plan on redirecting users to the correct homepage variant (via 301). If I cannot determine the correct one, I will fall back to redirecting them to the US homepage (/us). From an SEO point of view, firstly, is this OK, or should I be doing anything in addition to make search engines aware of the regional differences? As crawlers are region-agnostic, I plan on directing them to the US page with a 301, or should I have something on the .com page which they use? Given that the regional homepages will likely be the most visited pages, they should show up in result sitelinks when searching for mycompany (which I think is a good thing). Apologies for the slightly open question - I know anything SEO-related is more opinion/best practice than fact, but I am purely looking for advice.
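
    The redirect logic being described is small enough to sketch. The Python example below treats the country lookup as a stub (in practice it would come from a GeoIP database or a CDN-supplied header), and the country list is partial and illustrative; whether a 301 is the right status for crawlers is part of the SEO question itself.

        # Pick a regional homepage for a "/" request, falling back to /us.
        EUROPEAN_COUNTRIES = {"GB", "FR", "DE", "ES", "IT", "NL", "SE", "PL", "IE", "PT"}

        def homepage_for(country_code):
            if country_code in EUROPEAN_COUNTRIES:
                return "/europe"
            return "/us"                           # unknown region falls back to /us

        def redirect_for(country_code):
            return "301 Moved Permanently", [("Location", homepage_for(country_code))]

        assert homepage_for("FR") == "/europe"
        assert homepage_for(None) == "/us"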

    Read the article

  • My site's meta description changed and points to some casino site [on hold]

    - by prateekrc3
    My meta description contains a link to "http://infoexgraphics.com/spices/?p=no-deposit-casino-bonus", some casino or gaming site. I have no idea where this came from. Here is a screenshot: http://tinypic.com/r/2yzdi7s/8. I have already searched the code but found nothing. When I tried the search operator "site:jobdaddy.in casino", I found that all of my pages suffer from this behavior. Has my site been hacked? Please help. My site is jobdaddy.in. Thanks :)

    Read the article

  • Google Analytics/AdWords account and leaking of private data

    - by Satellite
    I am frequently asked to log into clients' Google Analytics and AdWords accounts. If I forget to log out before visiting other Google properties (Google Search, YouTube, etc.), this leaves traces of my views and searches, exposing my activities to the client. Summary:
    1. Client gives me access to their Google Analytics / AdWords account
    2. I log into the client's Analytics account and do some stuff
    3. Then, in another tab, I perform some related Google searches to solve related issues
    4. Issues solved, I then close the Analytics tab
    5. I then visit google.com and perform some unrelated searches
    6. I then visit YouTube and view some unrelated videos
    All web and YouTube searches are recorded in the client's Google account, thus leaking potentially sensitive data. Even assuming that I remember to log out correctly at step 4 (as I do 95% of the time), anything I do at step 3 is exposed to the client. I would be surprised if this is not a very common issue. I'm looking for a technical solution to ensure that this can never happen. Any ideas?

    Read the article
