Search Results

Search found 9721 results on 389 pages for 'quicktest pro'.

  • Optimizing data downloaded via 'link' media queries and asynchronous loading

    - by adam-asdf
    I have a website that tries to make sensible use of media queries and avoid 'expensive' CSS for users of mobile devices. My eventual goal is to make it 'mobile-first', but for now, since it is based on Twitter Bootstrap, it isn't. I included some background images (Base64 encoded) and styles that only apply to "full-size" browsers in a separate stylesheet loaded asynchronously via modernizr.load. In Firefox (but not WebKit browsers), if you navigate away from the homepage and then return, the content (specifically, all those extras) 'blinks' when it finishes loading, or rather reloading. If, instead of using modernizr.load, I include that stylesheet via a link element in the head with a media query attribute, will that prevent the data from being downloaded by non-matching browsers (mobile, based on screen size) to which it is inapplicable?
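
    A minimal sketch of the link element being asked about (the filename and breakpoint are placeholders, not from the original setup):

        <link rel="stylesheet" href="desktop-extras.css" media="only screen and (min-width: 768px)">

    Worth noting: most browsers still download stylesheets whose media query does not match (they simply don't apply them), so this approach avoids the re-apply flicker but not necessarily the transfer.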

    Read the article

  • HTML5 media loading sometimes suspends or aborts: misconfigured Apache?

    - by Joan Botella
    Recently, some code that had been working fine for months started to behave unexpectedly. That code is just a JavaScript function that loads media files, using jQuery. It's pretty long, but in essence it is like this: var $audio=$('<audio>'); $audio.on('canplaythrough',function(e){ $audio[0].play(); }); $audio.attr('src','song.ogg'); Basically, the file only loads sometimes, and sometimes stops loading with a suspend or even an abort event. I have uploaded a little testing HTML to http://www.joanbotella.com/tests/loading , where you can see what's happening. You can download the test files from http://www.joanbotella.com/tests/loading/loadingTest.zip for local testing. I have just checked that opening the test index.html file directly in Firefox, rather than through my localhost Apache server, makes the audio files perfectly playable. So, I assume, my hosting provider and I have the Apache server misconfigured for serving media files. My software versions are: Apache 2.2.22-1ubuntu1.7, Mozilla Firefox 31.0, Chromium 36.0.1985.125 and jQuery 1.11.0. Can you help me? Thanks in advance!
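
    For what it's worth, one common culprit for stalled media loads is a missing or wrong MIME type. A hedged .htaccess sketch, assuming the server lacks a mapping for .ogg (AddType is a standard mod_mime directive):

        # Serve Ogg audio with the correct MIME type
        AddType audio/ogg .ogg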

    Read the article

  • Best method to do A/B testing across two subdomains

    - by Lior
    I want to do an A/B test of an entire site for a new design and UX with only slight changes in content (it's a big-brand site that has good Google rankings for many generic keywords). My idea of implementation is doing a 302 redirect to the new version (placing it on the www1 subdomain) and allowing only user agents of known browsers to pass. The test version will have a disallow-all robots.txt, as sketched below. Will Google treat this favorably, or do I have to use Google Website Optimizer (which will give me tracking headaches)?
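
    The disallow-all robots.txt mentioned for the test subdomain would look like this:

        User-agent: *
        Disallow: /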

    Read the article

  • Cropping images & SEO

    - by user1181950
    So I have a page with a bunch of images with largely varying sizes. Also, the layout of the page is such that the images are all in the shape of square tiles, so just resizing will cause distorted images. What I've been doing previously is, when users upload images, I resize and crop them appropriately, display the new image as the thumbnail, and load the full image when the user clicks on it. However, I just realized this is an issue for SEO, as Google will crawl the thumbnails and stick the thumbnails on Google Images instead of the full images. Is there any way to show a cropped/resized image but have Google Images show the full image? I can do something with CSS using an enclosing div and overflow:hidden, but I'd imagine the performance on that would be pretty bad. Any suggestions? Thanks! PS. I saw this (Make google index the actual image not the thumbnail), but in my case I have users continuously uploading images, and the database of images is always changing and pretty big (thousands), so a sitemap would be pretty unwieldy.
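
    For reference, a sketch of the overflow:hidden approach described above (tile dimensions and paths are placeholders); the full image is what the browser and crawlers receive, but only a square region is shown:

        <div style="width: 150px; height: 150px; overflow: hidden;">
            <img src="/uploads/full-image.jpg" alt="user photo">
        </div>

    The performance cost is that the full-size file is transferred for every tile, which is exactly the concern raised.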

    Read the article

  • How do I fix the Google Webmaster Tools warning "URL not followed"?

    - by user3611500
    A few days after submitting my sitemap to Google, I received this warning: When we tested a sample of URLs from your Sitemap, we found that some URLs redirect to other locations. We recommend that your Sitemap contain URLs that point to the final destination (the redirect target) instead of redirecting to another URL. The example URL Google gave me is http://iketqua.net/?_escaped_fragment_=CIDTKT/mien-trung/xo-so-kon-tum I checked everything I could think of, but still can't figure out what the warning is about. My sitemap: http://iketqua.net/sitemap.xml
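
    For context, Google's advice amounts to listing the redirect target directly. A hedged sketch of a sitemap entry (the path is hypothetical, standing in for wherever the example URL ultimately redirects):

        <url>
            <loc>http://iketqua.net/final-destination-page</loc>
        </url>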

    Read the article

  • Google crawling the site but refusing to index dynamic content

    - by Omeoe
    I am trying to get Google to index an AJAX site (davidelifestyle.com). It's crawlable with JavaScript turned off and I have also recently implemented the _escaped_fragment_ snapshot mechanism, but all that gets indexed is the home page and PDF files that are not directly available from the home page. Also, when I use Fetch as Google in Webmaster Tools, it downloads the dynamic page but does not index it ("Submit to Index" just reloads the page). Any ideas what might be wrong?
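
    A sketch of the (since-deprecated) AJAX crawling scheme the question refers to: for pages without #! in the URL, this opt-in meta tag is what tells Googlebot to request the ?_escaped_fragment_= snapshot:

        <meta name="fragment" content="!">
        <!-- Googlebot then fetches http://davidelifestyle.com/?_escaped_fragment_= and indexes that snapshot -->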

    Read the article

  • Tracking URL Goals to an external site from a landing page

    - by Arel
    I have a landing page promoting an iOS app. The page is at vitogo.com. I've set up a goal for when a user clicks on the link to go to iTunes and download the app. I set up a URL destination goal in the property for the site, and can see the goal set up in the reports section. The problem is it isn't tracking any clicks. I've had the goal set up for a while now, and it hasn't tracked anything. Thanks for the help!
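
    Clicks out to iTunes never generate a pageview on vitogo.com, so a URL destination goal has nothing to match. A hedged sketch of reporting the click as a virtual pageview, assuming the page runs the classic async snippet (_gaq); the virtual path is hypothetical and would become the goal's destination URL:

        <a href="http://itunes.apple.com/..." onclick="_gaq.push(['_trackPageview', '/virtual/itunes-download']);">Download the app</a>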

    Read the article

  • How do I use IIS7 rewrite to redirect requests for (HTTP or HTTPS):// (www or no-www) .domainaliases.ext to HTTPS://maindomain.ext

    - by costax
    I have multiple domain names assigned to the same site and I want all possible access combinations redirected to one domain. In other words, whether the visitor uses http://domainalias.ext or http://www.domainalias.ext or https://www.domainalias3.ext or https://domainalias4.ext or any other combination, including http://maindomain.ext, http://www.maindomain.ext, and https://www.maindomain.ext, they are all redirected to https://maindomain.ext. I currently use the following code to partially achieve my objectives:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <rule name="CanonicalHostNameRule" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTP_HOST}" pattern="^MAINDOMAIN\.EXT$" negate="true" />
                  </conditions>
                  <action type="Redirect" redirectType="Permanent" url="https://MAINDOMAIN.EXT/{R:1}" />
                </rule>
                <rule name="HTTP2HTTPS" stopProcessing="true">
                  <match url="(.*)" />
                  <conditions>
                    <add input="{HTTPS}" pattern="off" ignoreCase="true" />
                  </conditions>
                  <action type="Redirect" redirectType="Permanent" url="https://MAINDOMAIN.EXT/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>

    ...but it fails to work in all instances. It does not redirect to https://maindomain.ext when the user inputs https://(www.)domainalias.ext. So my question is: are there any programmers here familiar with IIS7 rewrite who can help me modify my existing code to cover all possibilities and reroute all my domain aliases, loaded by themselves or with www in front, in HTTP or HTTPS mode, to my main domain in HTTPS format? The logic would be: if the entire URL does NOT start with https://maindomain.ext, then REDIRECT to https://maindomain.ext/(plus_whatever_else_that_followed). Thank you very much for your attention; any help would be appreciated. NOTE TO MODS: If my question is not in the correct format, please edit or advise. Thanks in advance.
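
    One way to express "if the request is not already https://MAINDOMAIN.EXT, redirect there" in a single rule is a MatchAny condition group. A sketch using standard IIS URL Rewrite syntax, untested against this exact setup:

        <rule name="CanonicalHttpsHost" stopProcessing="true">
          <match url="(.*)" />
          <conditions logicalGrouping="MatchAny">
            <add input="{HTTP_HOST}" pattern="^MAINDOMAIN\.EXT$" negate="true" />
            <add input="{HTTPS}" pattern="off" ignoreCase="true" />
          </conditions>
          <action type="Redirect" redirectType="Permanent" url="https://MAINDOMAIN.EXT/{R:1}" />
        </rule>

    Note that a request for https://domainalias.ext must first present a certificate the browser accepts before any rewrite rule can run, which may explain the HTTPS alias cases failing.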

    Read the article

  • How can a domain use its own nameservers?

    - by Thomas Clayson
    I have to change the MX DNS records for our company domain name and I've come across this odd situation: a whois search shows that the nameservers for ourcompany.com are ns1.ourcompany.com and ns2.ourcompany.com. In the DNS settings at the registrar there are no A/CNAME records at all. However, the nameservers are defined in the DNS settings for the domain on our dedicated server. (Registrar and host are two different companies.) Using the DNS lookup on http://www.mxtoolbox.com/ shows that ns2.ourcompany.com is reporting the correct IP for our dedicated server. It's all very odd: the DNS on the dedicated server doesn't seem to have much effect, yet the DNS at the registrar's end doesn't have any records. Thanks for your help.
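
    For background, this arrangement normally works through glue records held at the registry rather than ordinary records in the registrar's zone editor, which would explain the "empty" settings there. A sketch of the equivalent zone data (IPs are placeholders from the documentation range):

        ourcompany.com.        IN  NS  ns1.ourcompany.com.
        ourcompany.com.        IN  NS  ns2.ourcompany.com.
        ns1.ourcompany.com.    IN  A   192.0.2.10   ; glue
        ns2.ourcompany.com.    IN  A   192.0.2.11   ; glue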

    Read the article

  • How to properly forward URLs/domains

    - by NRGdallas
    No clue on a title for this; someone feel free to suggest an edit. I have a client that has a website. He owns around 200 domains and wants each domain to contain content from the main website. The header, footer, and navigation bars will remain the same for each domain, but the actual page content will vary (obviously duplicate content issues; open to suggestions). He wants each individual page to be its own separate domain, rather than a URL within the main domain (page1.com, page2.com, etc. - NOT site.com/page1.html - however the file is actually hosted at site.com/page1.html, and all links will direct to site.com/whatever accordingly). What would be the best place to start reading/learning on how to do this, and what concerns/considerations should be taken into account?
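
    A hedged sketch of one way to map a whole domain to a single page with mod_rewrite, assuming all 200 domains point at the same Apache document root (the domain and file names are the hypothetical ones from the question):

        RewriteEngine On
        # Serve site.com/page1.html when the visitor arrives via page1.com
        RewriteCond %{HTTP_HOST} ^(www\.)?page1\.com$ [NC]
        RewriteRule ^/?$ /page1.html [L]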

    Read the article

  • Does the Google crawler really guess URL patterns and index pages that were never linked to?

    - by Dominik
    I'm experiencing problems with indexed pages which were (probably) never linked to. Here's the setup: (1) a data server: an application with a RESTful interface which provides the data; (2) website A, which provides the data of (1) at http://website-a.example.com/?id=RESOURCE_ID; (3) website B, which provides the data of (1) at http://website-b.example.com/?id=OTHER_RESOURCE_ID. So all the non-private data is stored on (1), and the websites (2) and (3) can fetch and display this data, each a representation of the data with additional cross-linking between those pages. In fact, the URL /?id=1 of website-a points to the same resource as /?id=1 of website-b. However, the resource id:1 is useless at website-b. Unfortunately, the Google index for website-b now contains several links to resources belonging to website-a, and vice versa. I "heard" that the Google crawler tries to determine the URL pattern (which makes sense for deciding which pages should go into the index and which not) and furthermore guesses other URLs by trying different values (like "I know that id 1 exists, let's try 2, 3, 4, ..."). Is there any evidence that the Google crawler really behaves that way (which I doubt)? My guess is that the Google crawler submitted an HTML form and somehow got links to those unwanted resources. I found some similar posted questions about that, including "Google webmaster central: indexing and posting false pages" [link removed]; however, none of those pages gives any evidence.

    Read the article

  • How to use different php.ini files for different VirtualHosts?

    - by gsingh2011
    I have my site and its staging subdomain running on the same CentOS machine running Apache. The subdomain is created using a VirtualHost, and I use it to find any bugs before I push to production. I want the php.ini file for the staging VirtualHost to be a development one, while the production site will use a production php.ini. How can I configure Apache to use different php.ini files? I don't want to use php_value/php_flag for everything; I'd rather just use the php.ini files I already have available. I've tried creating an .htaccess file that looks like this: SetEnv PHPRC /path/to/php.ini/directory This has no effect, as phpinfo() tells me it's still using /etc/php.ini. I've also tried setting PHPIniDir for both virtual hosts (www and staging) and it complains about seeing the directive twice.
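
    For what it's worth, PHPRC is read by CGI/CLI builds of PHP rather than mod_php, and PHPIniDir is server-wide, which matches the behaviour described. A hedged sketch of the usual per-vhost workaround with php_admin_value/php_admin_flag (names and values are placeholders; a truly separate php.ini generally means running PHP via CGI/FastCGI, where PHPRC applies):

        <VirtualHost *:80>
            ServerName staging.example.com
            DocumentRoot /var/www/staging
            php_admin_flag  display_errors on
            # numeric value because PHP constants aren't available in Apache config
            php_admin_value error_reporting 32767
        </VirtualHost>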

    Read the article

  • Tracking referrals between profiles on the same domain in Google Analytics

    - by doctororange
    I have a website at mydomain.com that uses Analytics. I have a blog that resides at mydomain.com/blog/, which also uses Analytics. They are on different profiles. The main site uses something like: _gaq.push(['_setAccount', 'UA-XXXXXXXX-6']); While the blog uses: _gaq.push(['_setAccount', 'UA-XXXXXXXX-7']); _gaq.push(['_setCookiePath', '/blog/']); My issue is that this seems not to track referrals from the blog through to the main site when, for instance, the logo which links to the main site is clicked. Ideally, I would like the clicks of this logo to report that the source was mydomain.com/blog/, but because they are on the same domain they seem to register as direct traffic. Have I missed a step in my configuration, or will I have to resort to linking to something like mydomain.com?ref=blog? Thank you.
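
    Short of merging the two profiles, the ?ref=blog idea can be expressed in GA's own campaign vocabulary so the source is reported cleanly. A sketch (parameter values are illustrative):

        <a href="http://mydomain.com/?utm_source=blog&utm_medium=referral&utm_campaign=logo">Home</a>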

    Read the article

  • What is the advantage to hosting static resources on a separate domain?

    - by Michael Ekstrand
    I notice a lot of sites host their resources on a separate domain from the main site, e.g. Stack Exchange using sstatic.net, Barnes & Noble using imagesbn.com, etc. I understand that there are benefits to putting your static resources on a separate host, possibly with an efficient static-file web server like nginx, freeing up the main server to focus on serving dynamic content. Similarly, outsourcing to a shared CDN like CloudFront or Akamai is logical. What is the benefit to using a separate domain otherwise, though? Why sstatic.net instead of static.stackexchange.com? Update: Several answers miss the core question. I understand that there is benefit to splitting between multiple hosts: parallel downloads, a slimmer web server, etc. But what is more elusive is why multiple domains. Why sstatic.net rather than static.stackexchange.com as the host for shared resources? So far, only one answer has addressed that.

    Read the article

  • How much time does Google Webmaster Tools need to generate content keywords if URL masking is enabled? [closed]

    - by user1439968
    Possible Duplicate: What is domain "masking" or "cloaking"? Why should it be avoided for a new web site? My real domain is domain.in, but URL masking has been enabled and the masked URL is domain2.in. In that case, I added the URL bputdoubts.21backlogs.in to Google Webmaster Tools a week ago but the content keywords haven't been generated. When can I expect the content keywords to be generated? And is there a problem with getting visitors from Google search if URL masking is enabled?

    Read the article

  • List of freely available SEO tools (software) for keyword rank checking? [closed]

    - by Craig
    Possible Duplicate: can anyone recommend a Google SERP tracker? Requirements: analysis of site positions for a list of keywords in different search engines; tracking keyword positions on search engines (I want to see if my keyword rankings have moved up or down); creating reports. I use Excel + the Rank Checker addon for Firefox to analyze the position of the site in search engines for my keyword list. Are there any tools that are tested and work properly? Thanks.

    Read the article

  • Problem downloading .exe file from Amazon S3 with a signed URL in IE

    - by Joe Corkery
    I have a large collection of Windows .exe files which are being stored/distributed using Amazon S3. We use signed URLs to control access to the files, and this works great except in one case: when trying to download a .exe file using Internet Explorer (version 8). It works just fine in Firefox. It also works fine if you don't use a signed URL (but that is not an option). What happens is that the IE downloader changes the name from 'myfile.exe' to 'myfile[1]' and Windows no longer recognizes it as an executable. Any advice would be greatly appreciated. Thanks.
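
    One thing worth checking is the metadata stored with the S3 object, since IE leans heavily on response headers when naming downloads. A hedged sketch of headers to set on the object at upload time (the filename is the question's example):

        Content-Type: application/octet-stream
        Content-Disposition: attachment; filename="myfile.exe"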

    Read the article

  • Consolidating multiple domain names

    - by Mike
    I have a client that has three separately hosted copies of their website, each on a separate domain name. The websites are all essentially the same, bar a few discrepancies caused by badly managed updates in the past. I will soon be launching a completely new website for them, at which point all three domain names are to resolve to the same web server. One domain name will become the default domain name that they refer to in all their literature, and the other two will simply be used as catch-alls for old links, bookmarks, and so on. I would like to know what people consider the best route to achieve this. My plan so far is: 1. Get the new site up and running on the new webserver. 2. Change the relevant A record of the default domain name to point to the new webserver. 3. a) Keep the existing hosting accounts in operation and create a list of 301 redirects from old page names on the old sites to new page names on the new site, or b) configure CNAME records for the non-default domain names, each pointing to the new webserver, and create a list of 301 redirects on the new site from old page names to new page names. If my understanding is correct, 3a will help to maintain whatever search engine rankings the sites already have (I know it's not going to be perfect), while at the same time informing search engines that the old domain names are no longer in use. What's a good approach to take here?
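
    If the redirects end up living on the new server (option 3b), a hedged Apache sketch of the catch-all (domain names are placeholders); per-page 301 mappings sit above the blanket rule so they match first:

        <VirtualHost *:80>
            ServerName olddomain1.com
            ServerAlias www.olddomain1.com olddomain2.com www.olddomain2.com
            # Old page names that moved get explicit mappings first
            RedirectMatch 301 ^/old-page\.html$ http://defaultdomain.com/new-page
            # Everything else falls through to the same path on the default domain
            Redirect permanent / http://defaultdomain.com/
        </VirtualHost>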

    Read the article

  • Has Microsoft stopped offering the free Internet Explorer Application Compatibility VPC Image for IE 6 testing?

    - by Paul D. Waite
    For some time now, Microsoft has made available free, stripped-down, time-limited Virtual PC images for testing web apps in older versions of IE. The most recent version is here: http://www.microsoft.com/download/en/details.aspx?id=11575 But the XP VPC image has now expired (14th Aug 2011), meaning one can no longer test IE 6 using this method. Has Microsoft made updated XP VPC images available? If not, have they commented on the situation? Do they provide any alternative method to test web apps in IE 6? Update: As noted by @PleaseStand, as of 16th Aug 2011, Microsoft has made updated images available that expire on 17th November 2011.

    Read the article

  • Download monitoring for a movie/music portal

    - by VenomVipes
    Our portal is targeted at mobile users. We have music (mp3) and video (3gp) files for download, and I expect 300 parallel downloads. I want a way to control my downloads: kicking/banning an IP or a download, statistics of downloads, bandwidth consumed, and so on. I have root/admin access to my server. My question is: is there a way I can monitor and control the ongoing downloads that visitors are doing from my site?
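
    One common pattern is to route every download through a small script so each transfer can be logged, counted, or refused per IP. A minimal hedged PHP sketch (paths, helper functions, and file names are placeholders, not an existing API):

        <?php
        // download.php?file=song.mp3 -- hypothetical gatekeeper for the media directory
        $file = basename($_GET['file']);          // basename() strips ../ path tricks
        $path = '/var/media/' . $file;            // assumed location of the mp3/3gp files
        $ip   = $_SERVER['REMOTE_ADDR'];
        if (!is_file($path) || is_banned($ip)) {  // is_banned() is a placeholder lookup
            header('HTTP/1.1 403 Forbidden');
            exit;
        }
        log_download($ip, $file);                 // placeholder: record stats/bandwidth here
        header('Content-Type: application/octet-stream');
        header('Content-Length: ' . filesize($path));
        header('Content-Disposition: attachment; filename="' . $file . '"');
        readfile($path);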

    Read the article

  • Where can I get resources to design a website like this? [closed]

    - by Jhon Andrew
    I have a project: to make a CMS for an online game. Can anyone suggest where I can get resources that I can use, like vintage borders, seamless old papers, or any vintage-like patterns, etc.? I would like to come up with something like this website, for example: http://www.gamezaion.com/ I hope someone can give me ideas, inspiration, and examples of how I can come up with something like it. P.S.: I am having a hard time designing, because I define myself as a developer, not a designer.

    Read the article

  • Making multilingual J! 1.5 + Joomfish + VM 1.17 more workable

    - by rhand
    I have been working with a multilingual Joomla! 1.5.23 e-commerce website for a client for quite a while and have made several customizations. But the client is still not happy that he has to adjust content in at least three locations: Joomfish, Virtuemart, and the Article Manager. Joomfish is nice in that it allows you to create multilingual content and copy and paste the source language on the same page, which makes translation work easier, but it is annoying in that you have to edit several custom fields at different locations/content types. As Joomla! source-language content still needs to be created in the Article Manager first, this is the second location the client has to work at. The third location is Virtuemart. Here all the products and product categories are created, and here we added some custom fields as well. Now I was considering upgrading the website to Joomla 1.7, or later on to 1.8. These J! versions have better multilingual support. But I wonder if we can really make the client's life easier. We will still have to copy the source language to a new article and create content in another language. We will still have the issue of content in custom fields that needs to be translated, and we will still have to create content. Should I go for another CMS such as Magento, or do you think there is a way in a more recent Joomla! version to work with all content in one or at most two locations?

    Read the article

  • Wordpress blog penalized by Google search - what's wrong?

    - by pawelbrodzinski
    I have a blog (http://blog.brodzinski.com), which is a wordpress.org blog with the pretty popular Thesis theme and almost no other customizations. Some time ago it was penalized by Google search: it simply stopped appearing in search results, even for search terms for which it used to be the top result, like my name, Pawel Brodzinski, which isn't anything close to a popular search term. To be exact, the site was penalized on Nov 18. It started popping up in search results on Dec 23, but only for a few days. Since Dec 27 it has been out again. I know the Google guidelines and I'm not aware of breaking any of them. I submitted a reconsideration request after I noticed the penalty. It was processed and there was no change whatsoever (no surprise, as it seems the site was penalized again). I checked diagnostics in Webmaster Tools: no malware was detected and no strange search terms popped up. I read related threads on the Google webmasters forum but found none of the solutions working for me. I posted a thread on the Google webmasters forum (http://www.google.com/support/forum/p/Webmasters/thread?tid=546339f49d4a03bc&hl=en) and the only answer I got was to check for duplicate content. Well, there is some duplicate content published on the web, but that is true for the vast majority of blogs and it doesn't seem to be a reason for a penalty. Also, before Dec 27 I was able to remove duplicate content from a couple of sites which were republishing my feed, but this doesn't change the situation: the site was penalized again. The problem is I have no idea what can be wrong with the website or how to find out. To make the problem worse, I'm no webmaster; I just run a wordpress blog, which is supposed to be easy.

    Read the article

  • The danger of changing the domain of your portfolio

    - by Mervin
    So I have an online portfolio that is available at mervin-ux-portfolio.com, but I am planning to change hosts since the current host is hitting me with a very high yearly renewal rate. When I was inquiring about domain transfers, they told me that since I had not initiated the transfer within 14 days of the expiry of the domain, they cannot do it immediately and it would take about two weeks to release the domain name. Since I don't like the idea of my site being down for two weeks, I was wondering if I should start afresh with a new domain on a new host and what the potential dangers of that are (I have the entire site backed up, so creating a replica of the site on the new host won't be hard). I also won't be losing any business or work since I work full time currently, but I was wondering about the challenges in terms of getting my domain name back to the top of search results and basically getting it out there, assuming I go the new-domain-name approach. I know this is strictly not a UX question, but I was hoping people could give some suggestions on what I should do.

    Read the article

  • Multilingual website without language component in the URL

    - by user359650
    I'm working on a website for Canada which will have French and English versions. For SEO purposes, I would like to avoid using any language tag in URLs because I believe it will have more impact (e.g. example.ca/products is better than en.example.ca/products or example.ca/en/products). I believe this is technically possible because the two languages are sufficiently different that the URLs won't conflict with one another (e.g. if you want a "product" page, it will be /products in English and /produits in French, so you know which language the URL is about). Since Google (and most likely others) doesn't rely on the URL (nor on HTML tags) to determine the content language, I don't see any problems with search engines. To make this possible I've thought about using a cookie distinct from the session cookie (e.g. example.org_language) with a long-term expiry (e.g. N years) that will memorize the language chosen by the user, as sketched below. That way, when people visit the website with a new browser session, they get served the proper language. I have already given up on users being able to switch a single page from English to French: when people choose English or French from the menu they will be redirected to the corresponding version of the home page. Do you foresee any problems with not using a language component in the URL (whether domain or path), as long as one makes sure URLs don't conflict?
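
    A hedged sketch of the cookie mechanics described (the cookie name is the question's own example; the five-year expiry and redirect targets are placeholders):

        // Remember the visitor's choice, then send them to the matching home page
        function setLanguage(lang) {
            var expires = new Date();
            expires.setFullYear(expires.getFullYear() + 5);   // long-term expiry
            document.cookie = 'example.org_language=' + lang +
                              '; expires=' + expires.toUTCString() + '; path=/';
            window.location.href = (lang === 'fr') ? '/accueil' : '/home';
        }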

    Read the article
