Search Results

Search found 9721 results on 389 pages for 'quicktest pro'.


  • Does the SPDY protocol eliminate the need for cookieless domains?

    - by Clint Pachl
    With plain HTTP, cookieless domains were an optimization to avoid unnecessarily sending cookie headers for page resources. However, the SPDY protocol compresses HTTP headers and in some cases eliminates unnecessary headers. My question then is, does SPDY make cookieless domains irrelevant? Furthermore, should the page source and all of its resources be hosted at the same domain in order to optimize a SPDY implementation?

    Read the article

  • Image CDN with API?

    - by Dan Gayle
    My company uses Flickr and Picasa Web Albums as poor man's content delivery networks (CDNs) for image hosting, but I'm curious if anyone has any recommendations on other services that might be worth looking into, paid or free? Preferably something that has an API, so that it can be integrated discreetly on the backend as a WordPress plugin or for other development frameworks. A CDN such as Amazon's is cheap, and it works, but the lack of a photo-centric API is what prevents me from using it for general usage.

    Read the article

  • What will be the impact on SEO if we remove our SSL certificate (URL becomes http instead of https)?

    - by pixeline
    For some weird reason, our domain's content is returned for any https request sent to any of our server's hosted domain names. https://domain.com leads to our website, with a proper SSL certificate (so, no warning). https://domain2.com, also hosted on our server but without an SSL certificate, leads to a warning, and if accepted, to our website's content! The problem is that any search for our keywords in Google shows the "fake websites" on top of ours, warning and all. It seems unsolvable, so we are thinking about switching back to nonsecure http. I'm just afraid of losing whatever indexing we have. How can I avoid that? Thanks, a.
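
    The symptom described (one site answering https for every hosted domain) matches Apache's name-based virtual-host behaviour, where the first *:443 vhost is the default for every host name. A minimal illustration, assuming Apache; host names and paths are placeholders:

        # Default *:443 vhost: catches https requests for any other hosted
        # domain and redirects instead of serving the site's content.
        # (Browsers will still warn about the certificate name mismatch
        # before following the redirect; that is unavoidable without a
        # certificate for each domain.)
        <VirtualHost *:443>
            ServerName default.invalid
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/domain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/domain.com.key
            Redirect permanent / https://domain.com/
        </VirtualHost>

        # The real site keeps its certificate and its content.
        <VirtualHost *:443>
            ServerName domain.com
            DocumentRoot /var/www/domain.com
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/domain.com.crt
            SSLCertificateKeyFile /etc/ssl/private/domain.com.key
        </VirtualHost>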

    Read the article

  • What are some good services for brainstorming domain name ideas? [closed]

    - by Clay Nichols
    Possible Duplicate: Is there a domain search tool on the web that works well? I've run across a few of these but can't remember them right now (and I've probably missed a few good ones). The idea is that you provide some input (a word or words) and it comes up with synonyms, rhyming words, etc. Ideally, I'd want to have some confidence that they aren't just registering all the domains I come up with.

    Read the article

  • Is PhotoBucket a viable solution to host a website's photo galleries?

    - by Evan Plaice
    I'm currently working with a lot of photographers and will probably be picking up development on a professional photography site soon. With that in mind, I can't stop thinking about a way I could implement a user-friendly photo gallery hosting solution where the site owner can upload images themselves without any webmaster intervention. Kind of like a CMS for image hosting. The idea is:
      - The user logs in to PhotoBucket
      - Uploads their gallery
      - Visits an admin section of the site
      - Enters the new gallery name to the listing
    And... voila, the gallery automagically gets displayed on the website in a clean lightbox-style presentation format (i.e., no iframe nonsense). I took a brief look at the API and it looks promising. Is this a viable solution? Bonus points if you have implemented something like this with Photobucket and/or another 3rd-party image hosting site. Note: Purchasing a premium account is expected if necessary. The limitations on free accounts at most image hosting sites are just too restrictive to be useful.

    Read the article

  • Similar domains using my business' content, and stealing SEO results

    - by Murciano
    I've been hired to create a website for a restaurant in my city, let's call it "Flying Dragon" Chinese restaurant. The restaurant has never had a website, though the business itself is about ten years old. However, if you Google the restaurant's name, the first site that comes up seems to be affiliated with the restaurant itself, even though it is not. This site - let's say, flyingdragonchinese.com - is also the one that Google has apparently selected, in its results, to be the official website of the restaurant - in essence, the first Google result is flyingdragonchinese.com, and directly beneath it, within the same entry, are the Google reviews and contact information. Upon visiting flyingdragonchinese.com (again, not the actual name), I see that the website has taken the menu content from the restaurant, in the same manner that Yelp does, but it also seems (to the untrained eye) to be the restaurant's official site. Basically, someone has created a fake website for the business (I am not sure why) using its actual menu and contact information, and is hogging the search results. The concept is similar to a "scraping site" except that the information seems to have been stolen manually. The main problem is that visitors to this site will have an inaccurate impression of the restaurant. I feel like the obvious solution is to register a new domain for my site, and simply beat out this competitor (or whatever it is) with smarter SEO and business verification with Google. However, the Conan-the-Barbarian-web-designer part of me wants to somehow bash this other site (deservedly?) into oblivion. But I don't know what I can really do, besides maybe issuing a cease-and-desist letter, or trying to contact the web host for the site, although there is no contact information available on this "fake" site for the site owner. Has anyone ever experienced something like this? Is there any solution?

    Read the article

  • Filtering content from response body HTML (mod_security or other WAFs)

    - by Bingo Star
    We have Apache on Linux with mod_security as the Web App Firewall (WAF) layer. To prevent content injections, we have some rules that basically stop a page containing certain text patterns from showing up at all. For example, if an HTML page on the webserver has slur words (because some webmaster may have copied/pasted text without proofreading), the Apache server throws a 406 error. Our requirement now is a little different: we would like to serve the page as a regular 200, but if such a pattern is matched, we want to strip out the offending content, not block the entire page. If we had a server-side technology we could easily code for this, but sadly this is a website with thousands of static HTML pages. Another solution might have been a cronjob doing find/replace on the folders en masse, but we don't have access to the file system in this case (different department). We do have control over WAF or Apache rules, if any. Any pointers or creative ideas?
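
    One possibility worth sketching: Apache's bundled mod_substitute filter can rewrite response bodies on the way out, with no application code and no changes to the static files. A minimal sketch; the pattern and replacement are placeholders:

        # Minimal sketch using Apache's stock mod_substitute (Apache 2.2.7+),
        # which edits outgoing response bodies instead of blocking the page.
        LoadModule substitute_module modules/mod_substitute.so
        <Location "/">
            AddOutputFilterByType SUBSTITUTE text/html
            # s/pattern/replacement/ with flags: n = treat the pattern as
            # plain text (no regex), i = case-insensitive match
            Substitute "s/offending phrase/[removed]/ni"
        </Location>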

    Read the article

  • Does having multiple URIs mapping to the same resource help SEO?

    - by Brian Wheeler
    Let's say I have a site with products that have tags, where each resource is available at
      GET '/products/tagged/:tag_list/:product_permalink'
    Could that be better for SEO than just one permalink? For example, a product tagged "tea" and "coffee" would be available at
      GET '/products/tagged/tea/:product_permalink'
      GET '/products/tagged/coffee/:product_permalink'
      GET '/products/tagged/tea/coffee/:product_permalink'
      GET '/products/tagged/coffee/tea/:product_permalink'
    I would imagine that Google would appreciate this because it gives multiple URIs with different levels of detail about the product, but I can't really be certain. Anyone have any direct knowledge on the topic? --EDIT-- As John Conde points out, this is a horrible idea. What about having the links on my site point to a route such as GET '/products/tagged/:full_tag_list/:product_permalink', and then, any time a user changes tags, just return an HTTP 301 Moved Permanently to the new URL? Therefore duplicate URLs would be highly unlikely and mitigated by the proper response. Would this be better?
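
    For reference, the standard way to keep such variants crawlable while collapsing them for ranking purposes is a canonical link element on every tagged variant. A sketch, with a hypothetical preferred URL:

        <!-- In the <head> of every /products/tagged/... variant: point
             search engines at the single preferred URL for the product. -->
        <link rel="canonical" href="http://example.com/products/my-product-permalink" />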

    Read the article

  • Are copyright notices really required?

    - by Alasdair
    Ever since I made my first web page 13 years ago I have followed the pattern of showing a copyright notice in the footer of each page. Over the years the format of this notice has changed in the following way:
      Copyright © <NAME> yyyy. All rights are reserved.
      Copyright © <NAME> yyyy
      © yyyy <NAME>
      © <NAME>
    This has generally mirrored the format used by Google. However, I recently noticed that they no longer display a copyright notice on their home page, nor have one in their source code/meta tags. I see they still display it on most (if not all) other pages. I understand that Google is very keen to keep the word count down on its homepage, which could be the reason for this sacrifice, but my question is more general and relates to all websites. Since I've always just done it out of habit, I'm hoping someone can explain if/when a copyright notice is actually required to protect your content and rights. Also, when it is required, is there a format to which the notice must adhere in order to be valid?

    Read the article

  • IIS cache control header settings

    - by a_m0d
    I'm currently working on a website that is accessed over https. We have recently come across a problem where we are unable to view .pdf files or any other type of file that is sent as an attachment (Content-Disposition:attachment). According to a Microsoft Knowledge Base article, this is due to the fact that Cache-Control is set to no-cache. However, we have a requirement that all pages be fully reloaded every time they are visited, so we have disabled caching on all pages (through our ASP code, not through IIS settings). However, I have made a special case of this one page that shows the attachment, and it now returns a header with Cache-Control:private and the expiry set to 1 minute in the future. This works fine when I test it on my local machine, using https. However, when I deploy it to our test server and try it, the response headers still return Cache-Control:no-cache. There is no firewall or anything between me and the server, so IIS itself must be adding these headers and replacing mine. I have no idea why it would do this, and it doesn't really make any sense, but it seems to be the only option at the moment (I haven't yet found any other place in the code that will change the cache headers). Can anyone point me to a possible place where IIS might be setting these header values?
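
    For comparison, this is roughly what the page-level override described above looks like, assuming the site is ASP.NET (classic ASP exposes a similar Response.CacheControl property; the calls below are from the System.Web API):

        // Sketch of the per-page override: mark the response privately
        // cacheable and expire it one minute in the future.
        Response.Cache.SetCacheability(HttpCacheability.Private);
        Response.Cache.SetExpires(DateTime.Now.AddMinutes(1));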

    Read the article

  • PCI compliance when using third-party processing

    - by Moses
    My company is outsourcing the development of our new e-commerce site to a third-party web development company. The way they set up our site to handle transactions is by having the user enter the necessary payment info, then passing that data to a third-party merchant that processes the payment, then completing the transaction if everything is good. When the issue of PCI/DSS compliance was raised, they said:
      You won't need PCI certification because the client's browser will send the sensitive information directly to the third-party merchant when the transaction is processed. However, the process will be transparent to the user because all interface and displays are controlled by us. The only server required to be compliant is the third-party merchant's, because no sensitive card data ever touches your server or web app.
    Even though I very much trust and respect the knowledge of our web developers, what they are saying is raising some serious red flags for me. The way the site is described, I am sure we will not be using a hosted payment page like PayPal or Google Checkout offers (how could we maintain control over the UI if we were?). And while my knowledge of e-commerce is laughable at best, it seems like the only other option for us would be to use XML Direct to communicate with our third-party merchant for processing. My two questions are as follows: Based off everything you've read, is XML Direct the only option they could conceivably be using, or is there another method I don't know of which they could be implementing? Most importantly, is it true our site does not need PCI certification? As I understand it, using the XML Direct method means that we do have to be PCI/DSS certified, and the only way around getting certified is through a hosted payment page (i.e. PayPal).

    Read the article

  • Why is "www.mysite.com" different from "mysite.com"?

    - by sapeish
    In any browser, if I use www.mysite.com or just mysite.com the web page is correctly retrieved, but I am having trouble with Google Analytics and a Facebook App. Facebook: to be able to get Likes, I created the Facebook App needed and set the site URL to http://mysite.com/. Using their tool http://developers.facebook.com/tools/debug/ when I test my page using http://mysite.com it works, but using http://www.mysite.com fails with the message: Object at URL 'http://www.mysite.com/' of type 'website' is invalid because the domain 'www.mysite.com' is not allowed for the specified application id. Google Analytics: to be able to get traffic statistics, I created a Property and a Profile, both with the URL http://www.mysite.com, and no statistics were gathered in a week; when I changed the configured URL to http://mysite.com, statistics were available a few hours later. What should I do to have statistics and likes for both www.mysite.com and mysite.com?
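
    A common first step, whatever the two services' quirks, is to pick one canonical host and permanently redirect the other to it, so Facebook and Google Analytics only ever see one form of the URL. A minimal .htaccess sketch, assuming Apache with mod_rewrite:

        # Sketch: 301 the www host to the bare domain (or the reverse, if
        # www is preferred) so both tools see one canonical URL.
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.mysite\.com$ [NC]
        RewriteRule ^(.*)$ http://mysite.com/$1 [R=301,L]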

    Read the article

  • How to remove duplicate content, which is still indexed, but not linked to anymore?

    - by David
    A bug in the tool which we use to create search-engine-friendly URLs changed our whole URL structure overnight, and we only noticed after Google had already indexed the pages. Now we have a massive duplicate content issue, causing a harsh drop in rankings. Webmaster Tools shows over 1,000 duplicate title tags, so I don't think Google understands what is going on.
      Right URL: abc.com/price/sharp-ah-l13-12000-btu.html
      Wrong URL: abc.com/item/sharp-l-series-ahl13-12000-btu.html (created by mistake)
    After that, we:
      - Changed back all URLs to the "Right URLs"
      - Set up a 301 redirect for all "Wrong URLs" a few days later
    Now a massive number of pages is still in the index twice. As we do not link internally to the "Wrong URLs" anymore, I am not sure if Google will re-crawl them very soon. What can we do to solve this issue and tell Google that all the "Wrong URLs" now redirect to the "Right URLs"? Best, David
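
    For what it's worth, a per-URL 301 under Apache looks like the sketch below, using the example URLs from the question; submitting an XML sitemap listing only the "Right URLs" is another commonly suggested way to invite a re-crawl:

        # Sketch (Apache mod_alias): one permanent redirect per wrong URL.
        Redirect permanent /item/sharp-l-series-ahl13-12000-btu.html http://abc.com/price/sharp-ah-l13-12000-btu.html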

    Read the article

  • How to Map a CSV or Tab Delimited File to MySQL Multi-Table Database [migrated]

    - by Keefer
    I've got a pretty substantial XLS file a client provided, with 830 total tabs/sheets. I've designed a multi-table database with phpMyAdmin (MySQL, obviously) to house the information that's in there, and have populated about 5 of those sheets by hand to ensure the data will fit into the designed database. Is there a piece of software or some sort of tool that will help me format this XLS document and map it to the right places in the database?
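
    If each sheet is first exported to CSV (the format named in the title), MySQL's built-in loader can map file columns to table columns directly, one sheet and one table at a time. A sketch with hypothetical table and column names:

        -- Sketch: load one exported sheet into one table; @dummy discards
        -- a CSV column that has no home in the designed schema.
        LOAD DATA LOCAL INFILE 'sheet_001.csv'
        INTO TABLE products
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES
        (sku, name, @dummy, price);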

    Read the article

  • How do websites with no content rank so high?

    - by Akito
    I was just searching for how to close a SIM, and I got this website on the first page of Google. The website does not have the solution to the problem; there is just a line written about it, but the SEO (or some trick) is played so well that it has got so many real comments. The comments are users asking him to help them close their SIMs. Now I don't get this. If there is no content, then how does the site rank so high? Thank you.

    Read the article

  • What does it mean that a hosting ToS DOESN'T allow HOTLINKS, and WHY is it like that?

    - by Michal P.
    Free hosting services often don't allow hotlinks. I'm not sure if I understand this well, and what the reason for such a disapproval is. I would like to have pictures for my webpages on Photobucket, for example, which allows hotlinks, and use those pictures on my sites via hotlinks. Is that what is not allowed? What is the problem for free host server owners in accepting such links? As I understand it, only Photobucket's bandwidth is used, and it is completely legal. I've read quite a lot about hotlinks, but I can't understand this simple issue.

    Read the article

  • Tracking state of a one time event on a big website

    - by Mattis
    Assume a website with 250 million active users. I add a new feature to the website. Once a user visits, I want to use a short tutorial to teach them how to use said feature. I only want them to complete the tutorial once (or actively click it away). What is the smart way to code the verification check for this? How do I track the progress in the database? Having a separate table with something like NewTutorial_completed = 1 for user_id = 21312315 would just snowball. It also feels intuitively bad to check for every one-time event for every user on every page view. While writing the question I got one idea: to have a separate event log that is checked periodically for any new action the user needs to see or perform. I push events to this log, and once they are completed they are removed from the log. No need to store NewTutorial_completed = 1-type variables this way. I am sure this is a common problem. I would appreciate any input on what best practice is.
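
    A sketch of that event-log idea in SQL, with hypothetical names; because a row exists only while the user still owes the action, the table holds pending work rather than one flag per user per feature:

        -- Pending-event log: one row per (user, outstanding event).
        CREATE TABLE pending_user_events (
            user_id    BIGINT UNSIGNED NOT NULL,
            event_name VARCHAR(64)     NOT NULL,
            created_at DATETIME        NOT NULL,
            PRIMARY KEY (user_id, event_name)
        );

        -- On completion (or dismissal) the row is deleted, keeping the
        -- table small and the per-page-view check a single indexed lookup:
        DELETE FROM pending_user_events
         WHERE user_id = 21312315 AND event_name = 'new_tutorial';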

    Read the article

  • Multiple values for a specific custom variable in Google Analytics

    - by Nicola Pacini
    We're trying to get to the bottom of this question: would it be possible to set up more than one value in a custom variable in Google Analytics, at page level? E.g.:
      _gaq.push(['_setCustomVar',3,'Tag','Custom Variables',3]);
    We'd like to track the most popular tags on a web site that publishes news, articles and stuff. Contents are categorized (each piece of content belongs to one category) and tagged (1 or more tags for each article). So, we'd like to apply this code:
      _gaq.push(['_setCustomVar',3,'Tag','Custom Variables',3]);
      _gaq.push(['_setCustomVar',3,'Tag','Google Analytics',3]);
    in a page that shows an article with these two tags assigned. What do you think? Honestly, I didn't find anything in Google's documentation or on some other example sites. Many thanks! Nicola
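
    One caveat, and a common workaround, sketched below: in classic Google Analytics each custom-variable slot holds a single value, so the second _setCustomVar call on slot 3 overwrites the first. Packing the tags into one delimited value preserves them all (mind the length cap GA places on a name/value pair):

        // Sketch: pack all of the article's tags into one slot-3 value,
        // then split on the delimiter (or filter with regex) in reports.
        var tags = ['Custom Variables', 'Google Analytics'];
        _gaq.push(['_setCustomVar', 3, 'Tag', tags.join('|'), 3]);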

    Read the article

  • Avoid Ajax-loaded content for search engine bots

    - by Majiy
    A website I run has a lot of content that is loaded using Ajax. The reason for using Ajax is that the content generation takes some time (a few seconds), because it loads data from other websites using their respective APIs. My concern is that search engine bots will not see any useful content. The solution I've been thinking about would be to serve search engine bots differently, so that the content is displayed directly for them. Technically, this would not be a big problem. My question is: will search engines (read: Google) consider this behaviour to be cloaking? Are there other concerns I might not have considered?

    Read the article

  • Why not AJAX'ify entire websites?

    - by Anonymous -
    Is there any solid reasoning as to why sites shouldn't be developed with Ajax functionality that loads the major parts of each page (assuming there are elements like the header, navigation, etc. that remain the same)? Surely it would be less resource-intensive, since the server wouldn't have to serve content that appears on every page, benefiting both the host and the end-user. Answer the question taking into consideration that the site's JavaScript behaviour degrades gracefully in every instance. For my question, I'm talking about new sites where this behaviour could be implemented from the off, so it doesn't technically cost any money; we're not returning to a finished product to implement it.

    Read the article

  • Can't log in properly, but no error shows, in Joomla

    - by saeha
    This is what I did. I added variables in \libraries\joomla\database\table\user.php:

        var $img_content = null; // contains the blob type data
        var $img_name    = null;
        var $img_type    = null;

    Then I added this code in \components\com_user\controller.php:

        $file = JRequest::getVar('pic', '', 'files', 'array');
        if (isset($file['name'])) {
            jimport('joomla.filesystem.file');
            $fileName = $file['name'];
            $tmpName  = $file['tmp_name'];
            $fileSize = $file['size'];
            $fileType = $file['type'];
            $fp       = fopen($tmpName, 'r');
            $content  = fread($fp, filesize($tmpName));
            //$content = addslashes($content);
            fclose($fp);
            $user->set('img_name', $fileName);
            $user->set('img_type', $fileType);
            $user->set('img_content', $content);
        }

    That works fine, but I found a problem logging in with a new user with an uploaded photo; other users with an empty img_content field can log in properly. What happens is that when I log in using the user with the uploaded photo, it doesn't redirect properly, it just returns to the log-in page. But when I log in through the backend using another user, which is a super admin, I can see that the first user appears as logged in. I started saving the images in the database because I was having problems with the image directory when I uploaded the site. I think the log-in is affected by the blob type data in the database. Could that be the problem? What could be the solution? -saeha

    Read the article

  • Software for Managing Subscriptions to Website Content?

    - by an00b
    Can you recommend a package that allows me to manage subscriptions to certain content on my website (not necessarily displayable) based on payment levels? Ideally, the software would allow logging in using both site-specific registration and PayPal/Facebook/Twitter/MyOpenId, etc. Preferably, it would also be open source and LAMP-based. One idea that I have in mind is hacking shopping-cart software like Zen Cart, but this may be overkill if a non-shopping, lighter-weight package exists.

    Read the article

  • Is there any negative impact with similar page titles and descriptions on similar sites?

    - by ElHaix
    Currently we have Canadian versions of some websites. We are going to create some American versions, which essentially have everything the same, except the search results are geo-specific to the USA. The format for the results page title and descriptions will remain the same, i.e. {0} in {1} | Find more {0} etc etc etc... {1}. The search term will most likely be the same between both sites. Will the relative similarity in the page titles and descriptions between the Canadian and American sites have any negative SEO impact, where the geo location would be the most significant difference?

    Read the article

  • How to assign a JavaScript variable value to the Google Analytics script? [migrated]

    - by Vinoth Prakash
    I have assigned two values to two hidden fields on the server side and accessed those values on the client side using script. I have written the Google Analytics code and set up custom variables. I need to pass the two values stored in the JavaScript variables to the "value" of the custom variables. I have assigned the variables, but the values are not displaying. Please tell me what error I made in the script. My aspx code:

        <html xmlns="http://www.w3.org/1999/xhtml">
        <head runat="server">
            <title></title>
        </head>
        <body>
            <form id="form1" runat="server">
            <div>
                <br />
                Total Pirce&nbsp; &nbsp;: <asp:Label ID="Label1" runat="server" Text="10"></asp:Label><br />
                &nbsp;Ship Price&nbsp; &nbsp; : <asp:Label ID="Label2" runat="server" Text="5"></asp:Label><br />
                ------------------<br />
                Grand Total : <asp:Label ID="Label3" runat="server" Text="15"></asp:Label><br />
                ------------------
            </div>
            <asp:HiddenField ID="HiddenField1" runat="server" />
            <asp:HiddenField ID="HiddenField2" runat="server" />
            </form>
            <script type="text/javascript">
                var serverhid1 = document.getElementById('HiddenField1').value;
                var serverhid2 = document.getElementById('HiddenField2').value;
                alert(serverhid1);
                alert(serverhid2);

                var _gaq = _gaq || [];
                _gaq.push(['_setAccount', 'UA-35156990-1']);
                // Set custom variables
                _gaq.push(['_setCustomVar', 1, 'TotalPirce', serverhid1, 3]);
                _gaq.push(['_setCustomVar', 2, 'Shipping', 'yes', 3]);
                _gaq.push(['_setCustomVar', 3, 'GrandTotal', check(), 3]);
                _gaq.push(['_setDomainName', 'none']);
                _gaq.push(['_trackPageview']);

                (function() {
                    var ga = document.createElement('script');
                    ga.type = 'text/javascript';
                    ga.async = true;
                    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
                    var s = document.getElementsByTagName('script')[0];
                    s.parentNode.insertBefore(ga, s);
                })();
            </script>
        </body>
        </html>

    My code-behind (C#):

        protected void Page_Load(object sender, EventArgs e)
        {
            HiddenField1.Value = Label1.Text;
            HiddenField2.Value = Label2.Text;
        }

    Read the article

  • Is there an 'off the shelf' platform for making a new website similar to Elance?

    - by user17747
    I am interested in developing a website that is similar to Elance, but for a particular vertical. Is there an 'off the shelf' platform you can recommend for getting started with this, or would I need to develop this web service from scratch? When I write 'platform' I am referring to things such as Shopify for e-commerce sites, or Ning for social websites. I want to create a multi-merchant professional services site. The site would need to support functionality such as:
      - allowing merchants to open and manage their own profile
      - merchants accepting payments from customers through the site
      - file transfers between merchants and customers
      - merchant ratings by customers

    Read the article
