Search Results

Search found 9717 results on 389 pages for 'pro metedor'.


  • Bizarre image loading problem from apache2

    - by NateDSaint
    Users have complained a few times about seeing a bizarre set of pink or green stripes on our webpage. At first I thought there was a rash of video card outages, but then someone sent me a screenshot from their browser (IE8). I later saw the same thing, but with slightly different colors, in Chrome. Users have experienced this on their iPads and iPhones (iOS Safari) as well. Because I've optimized the site to cache images, the bad image stays around until you clear your cache, so once you do, the problem resolves itself. My assumption is that the transmission of the image is being cut off mid-stream and then cached that way, but I can't for the life of me figure out why. Here's what I've checked:

    The header length is being sent, and the transfer looks okay (wget sample below):

        wget http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
        --2012-04-05 08:46:00--  http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
        Resolving www.superiorlivestock.com (www.superiorlivestock.com)... [ip redacted]
        Connecting to www.superiorlivestock.com (www.superiorlivestock.com)|[ip redacted]|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 45926 (45K) [image/jpeg]
        Saving to: `wallbg2.jpg'

    Images are not being served gzipped (Apache conf below):

        SetOutputFilter DEFLATE
        SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary

    The site is www.superiorlivestock.com, and here's a sample of the bad page load (screenshot not reproduced here). Is there something obvious I'm missing? Am I saving my images in the wrong format somehow?
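    For comparison, a minimal sketch of a mod_deflate setup that compresses only text types and leaves images untouched, using standard Apache 2.2 directives (MIME types listed are illustrative):

        <IfModule mod_deflate.c>
            # Compress text responses only; GIF/JPEG/PNG are already
            # compressed formats, so exclude them from the DEFLATE filter.
            AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript
            SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
        </IfModule>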


  • Finding the right back-end developer

    - by John Watson
    I am creating a website for mobile phone tests. Users can post their own tests, which are combined with an existing rating of each product. I only do front-end development and have no idea about the back end - PHP, SQL, etc. I am not sure I should operate the website without this knowledge, but my first thought is to find a professional to whom I would hand over my website so that he can do the rest. The only thing is that I need to update it regularly and post my own tests, and I don't know how that works or how I should approach it. My understanding is that, after I have finished the website (written in HTML, CSS, and JavaScript/jQuery), I would find a PHP programmer and ask him to put it online, make it secure, make sure the user-submission facility works (users posting their own tests), and ensure it runs smoothly on the host/server I've chosen. Could you tell me if my approach makes sense (is that the way to do it)? And what should I consider when searching for the right back-end developer in terms of price/performance, trust, etc.?


  • CSS Intelligent Merger

    - by BHare
    I am looking for a tool very similar to http://www.tothepc.com/archives/combine-merge-multiple-css-files/ - however, given this example:

    test1.css:

        #admin {
          background: #c9d2dc;
          border-color: #ccc;
        }

    test2.css:

        #admin {
          background: #222;
          border-bottom: 1px solid #444;
          border-left: 1px solid #444;
          padding: 2px;
          position: fixed;
          right: 0px;
          top: 0px;
          width: 120px;
          z-index: 2;
        }

    it will only let you select one rule or the other. I want to merge them, producing:

        #admin {
          background: #c9d2dc;
          border-color: #ccc;
          border-bottom: 1px solid #444;
          border-left: 1px solid #444;
          padding: 2px;
          position: fixed;
          right: 0px;
          top: 0px;
          width: 120px;
          z-index: 2;
        }
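    For what it's worth, a minimal sketch of the merge itself in plain Node.js - a deliberately naive parser (no media queries, no nested blocks) that merges declarations for identical selectors, with the earlier file winning on conflicts, matching the desired output above:

        function parseRules(css) {
          const rules = {};
          for (const block of css.split('}')) {
            const [selector, body] = block.split('{');
            if (!selector || !body) continue;
            const key = selector.trim();
            const decls = rules[key] || (rules[key] = {});
            for (const decl of body.split(';')) {
              const i = decl.indexOf(':');
              if (i === -1) continue;
              decls[decl.slice(0, i).trim()] = decl.slice(i + 1).trim();
            }
          }
          return rules;
        }

        function mergeCss(cssA, cssB) {
          const merged = parseRules(cssA);
          for (const [selector, decls] of Object.entries(parseRules(cssB))) {
            // Spread B's declarations first so A's overwrite on conflict
            // (e.g. test1's background is the one that survives).
            merged[selector] = { ...decls, ...(merged[selector] || {}) };
          }
          return Object.entries(merged).map(([selector, decls]) =>
            selector + ' {\n' +
            Object.entries(decls).map(([p, v]) => '  ' + p + ': ' + v + ';').join('\n') +
            '\n}').join('\n\n');
        }

        console.log(mergeCss(
          '#admin { background: #c9d2dc; border-color: #ccc }',
          '#admin { background: #222; border-bottom: 1px solid #444; }'
        ));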


  • Is content in option tags indexed?

    - by Silfverstrom
    Is data inside an <option> tag indexed? For example, would the following markup allow "Volvo", "Saab", "Opel" and "Audi" to be indexed by a crawler?

        <select>
          <option value="volvo">Volvo</option>
          <option value="saab">Saab</option>
          <option value="opel">Opel</option>
          <option value="audi">Audi</option>
        </select>

    Will search engines put any weight on data in an option form element?


  • SEO for a list of products with filters

    - by dana
    I am wondering if there is a recommended best practice for SEO on a product search. I know to create a dynamic sitemap file that lists links to all products on the site. However, I want to implement a bookmarkable "advanced search". Should I let search engines index any of the results? Take the following parameters for a search on a make-believe used car website:

        minprice   (minimum price in dollars)
        maxprice   (maximum price in dollars)
        make       (honda, audi, volvo)
        model      (accord, A4, S40)
        minyear    (minimum model year)
        maxyear    (maximum model year)
        minmileage (minimum mileage)
        maxmileage (maximum mileage)

    Given these parameters, there could be an infinite number of search combinations:

        Price between $10,000 and $20,000:
        /search?minprice=10000&maxprice=20000

        Audis with less than 50k miles:
        /search?make=audi&maxmileage=50000

        More than 100,000 miles and less than $5,000:
        /search?minmileage=100000&maxprice=5000

    and so on. Over time, there may be inbound links to a variety of these searches, yet they are all slices of the same data. Should I allow all of these searches to be indexed?
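    One common pattern for filtered searches like these - sketched here as one option, not the definitive answer - is to keep the result pages out of the index while still letting the crawler follow links through to the products:

        <!-- On pages generated by /search with filter parameters:
             not indexed, but links to product pages are still followed. -->
        <meta name="robots" content="noindex,follow" />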


  • Aligning text to the bottom of a div: am I confused about CSS or about blueprint? [closed]

    - by larsks
    I've used Blueprint to prototype a very simple page layout... but after reading up on absolute vs. relative positioning and a number of online tutorials on vertical positioning, I'm not able to get things working the way I think they should. Here's my HTML:

        <div class="container" id="header">
          <div class="span-4" id="logo">
            <img src="logo.png" width="150" height="194" />
          </div>
          <div class="span-20 last" id="title">
            <h1 class="big">TITLE</h1>
          </div>
        </div>

    The document does include the Blueprint screen.css file. I want TITLE aligned with the bottom of the logo, which in practical terms means the bottom of #header. This was my first try:

        #header { position: relative; }
        #title  { font-size: 36pt; position: absolute; bottom: 0; }

    Not unexpectedly, in retrospect, this put TITLE flush with the left edge of #header... but it failed to affect the vertical positioning of the title. So I got exactly the opposite of what I was looking for. So I tried this:

        #title    { position: relative; }
        #title h1 { font-size: 36pt; position: absolute; bottom: 0; }

    My theory was that this would align the h1 element with the bottom of the containing div element... but instead it made TITLE disappear completely. I guess this means it's rendering off the visible screen somewhere. At this point I'm baffled. I'm hoping someone here can point me in the right direction. Thanks!
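    For reference, a minimal sketch of the usual fix: make the container the positioning context and give it an explicit height, since an absolutely positioned child is taken out of flow and no longer props the container open (the 194px value is an assumption based on the logo's height):

        #header   { position: relative; height: 194px; /* match the logo */ }
        #title    { position: absolute; bottom: 0; }
        #title h1 { font-size: 36pt; margin: 0; /* default h1 margin would offset the bottom edge */ }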


  • Tomcat behind Apache

    - by dannynjust
    I am trying to use mod_proxy_ajp to forward all requests from tomcat.example.com to example.com:8080. Here is what the Tomcat server.xml looks like:

        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

    And here is the Apache config:

        <VirtualHost *:80>
            ServerName tomcat.example.com
            ServerAdmin [email protected]
            ErrorLog logs/tomcat.example.com-error_log
            CustomLog logs/tomcat.example.com-access_log common
            <Proxy *>
                AddDefaultCharset Off
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / ajp://example:8009/
            ProxyPassReverse / ajp://example:8009/
        </VirtualHost>

    But it is not working. Any idea?
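    For comparison, a minimal working AJP proxy sketch, assuming Tomcat's AJP connector listens on port 8009 of the same machine - note the backend in ProxyPass must be a resolvable hostname or IP, which the bare `example` above may not be:

        <VirtualHost *:80>
            ServerName tomcat.example.com
            # Hand everything to the local Tomcat over AJP.
            ProxyPass        / ajp://localhost:8009/
            ProxyPassReverse / ajp://localhost:8009/
        </VirtualHost>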


  • Unindexing my Tumblr blog's content and moving it to another Tumblr blog

    - by sam
    I've been writing a Tumblr blog for the past year or so and have written about 300 articles, but now I need to move the blog to another site (before, it was running under blog.mysite.com, and I now want it to run under blog.mynewsite.com). I want to keep the archived articles and have them on the new site, so what I was hoping to do was: export the blog from Tumblr, go into Google Webmaster Tools and remove all the blog's indexed URLs, then make a new Tumblr blog and import the posts. Would Google see this as new content, since I've deleted its indexed copy? Could I instead just re-map the Tumblr blog to the new subdomain? But in doing this I would lose all the PageRank, and it would still look like duplicate content. What's the best way to approach this?
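    If the old subdomain stays under your control and can serve redirects (which depends on the host - a stock Tumblr subdomain can't run .htaccess, so treat this purely as a sketch), the textbook approach is a per-URL 301; in mod_rewrite, with placeholder domains:

        RewriteEngine On
        # Redirect every old URL to the same path on the new subdomain,
        # so search engines transfer the indexed pages instead of
        # treating the moved posts as duplicates.
        RewriteCond %{HTTP_HOST} ^blog\.mysite\.com$ [NC]
        RewriteRule ^(.*)$ http://blog.mynewsite.com/$1 [R=301,L]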


  • Facebook Connect and Yahoo: what exactly happened? Is there a way to import Facebook friends' email addresses?

    - by Forte
    Hello, I have seen that Yahoo now enables its users to import Facebook friends' email addresses into their Yahoo address book. As far as I know, Facebook doesn't provide any API for external websites to fetch a user's email address. I have also seen that Yahoo imports email addresses only when the friends have not restricted their contact email to "only me" visibility. Many people have tried to implement applications using Facebook's API to import friends' email addresses (only those visible on a user's Facebook profile), but the API calls always return NULL for those fields. So I would like to know: what exactly happened between Facebook and Yahoo? Has Facebook granted Yahoo's address book importer a special concession to import Facebook users' email addresses? Is there any working API/method/way to fetch the email addresses of Facebook friends who have chosen to display their contact email IDs on their profile with (1) "visible to friends" or (2) "visible to everyone" privacy settings? The Facebook API documentation clearly lists the email/contact_email fields as fetchable via FQL; nevertheless, there is no official explanation for why NULL is returned when email/contact_email is requested from any API call. Regards
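    For context, a sketch of the kind of FQL call being discussed, assuming the old REST/FQL API and whatever extended permission the email fields required; field and table names follow the FQL documentation the question cites, and in practice these fields came back NULL for friends as described:

        SELECT uid, name, email, contact_email
        FROM user
        WHERE uid IN (SELECT uid2 FROM friend WHERE uid1 = me())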


  • How to dynamically insert a keyword in an Amazon Search Widget

    - by ElHaix
    Through Amazon Associates, you can create search widgets that have a place for a search term. In the admin, you can set the default search term, but that seems to be tied to the widget ID. I would like to be able to set the search term for the widget dynamically when it is displayed. How can I accomplish this? Note: I am referring to the following banner script:

        <script charset="utf-8" type="text/javascript"
                src="http://ws-na.amazon-adsystem.com/widgets/q?rt=tf_sw&ServiceVersion=20070822&MarketPlace=CA&ID=V20070822%2FCA%2F[PARTNER-ID]%2F8002%2F84cb1754-d9ab-48de-b96b-574927fa9599">
        </script>
        <noscript>
          <a href="http://ws-na.amazon-adsystem.com/widgets/q?rt=tf_sw&ServiceVersion=20070822&MarketPlace=CA&ID=V20070822%2FCA%2F[PARTNER-ID]%2F8002%2F84cb1754-d9ab-48de-b96b-574927fa9599&Operation=NoScript">Amazon.ca Widgets</a>
        </noscript>
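    One possible workaround, sketched below, is to build the script tag at render time and append the term as an extra query parameter. The "SearchTerm" parameter name is hypothetical - the widget may not accept a term in the URL at all, so the real parameter (if any) needs to be confirmed against the widget documentation:

        <script type="text/javascript">
          // Search term chosen at runtime (e.g. from the page's own content).
          var term = encodeURIComponent("mobile phones");
          // "SearchTerm" is a hypothetical parameter name used only to
          // illustrate the injection; "</script>" is split so the HTML
          // parser doesn't close this block early.
          document.write('<scr' + 'ipt charset="utf-8" type="text/javascript" src="'
              + 'http://ws-na.amazon-adsystem.com/widgets/q?rt=tf_sw'
              + '&ServiceVersion=20070822&MarketPlace=CA'
              + '&ID=V20070822%2FCA%2F[PARTNER-ID]%2F8002%2F84cb1754-d9ab-48de-b96b-574927fa9599'
              + '&SearchTerm=' + term + '"></scr' + 'ipt>');
        </script>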


  • Yahoo and Bing support for index, image and mobile sitemaps

    - by kishore
    I know Google Webmaster Tools supports submitting image, mobile, video and other types of sitemaps, and Yahoo also mentions mobile sitemaps here. But does Yahoo support image and video sitemaps? I could not find whether Bing supports any of these types other than plain XML sitemaps. Can someone please point me to documentation on submitting index, image and mobile sitemaps? Also, do Yahoo and Bing support sitemap index files?
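    For reference, a sitemap index file follows the sitemaps.org protocol, which was a joint Google/Yahoo/Microsoft effort; a minimal sketch with placeholder URLs:

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <!-- Each <sitemap> entry points at one child sitemap file. -->
          <sitemap>
            <loc>http://www.example.com/sitemap-pages.xml</loc>
          </sitemap>
          <sitemap>
            <loc>http://www.example.com/sitemap-images.xml</loc>
          </sitemap>
        </sitemapindex>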


  • iOS Touch Icon through XLSX file?

    - by Joe Turner
    I'm setting up some iPads, pointing Safari to www.mywebsite.com/spreadsheet.xlsx, and it displays the document fine. That part is OK. I'm just wondering if there is a way to add an iOS icon to the document so I can save it to the springboard of the iPad. Maybe by embedding the document in HTML? PHP could possibly be used too, but I'm really not sure how I would go about doing this. Has anyone managed anything like this before?
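    One sketch of the embedding idea: serve a thin HTML wrapper page that carries an apple-touch-icon link (a real Safari convention for home-screen icons) and frames the spreadsheet. File names are placeholders, and whether Safari renders the .xlsx inside a frame would need testing:

        <!DOCTYPE html>
        <html>
        <head>
          <title>Spreadsheet</title>
          <!-- Safari uses this for the home-screen icon. -->
          <link rel="apple-touch-icon" href="/touch-icon.png" />
        </head>
        <body style="margin:0">
          <!-- Whether iOS Safari renders the spreadsheet in a frame
               needs testing; this is only a sketch. -->
          <iframe src="/spreadsheet.xlsx"
                  style="width:100%; height:100%; border:0"></iframe>
        </body>
        </html>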


  • Serve most of a domain with Apache, but use mod_proxy to serve some URLs from Lighttpd

    - by Alex Pineda
    We wish to host some pages on a new server with Apache 2 and embed some of our old content and functionality from another server running lighttpd in an iframe. I'm looking at this configuration from the Apache docs (http://httpd.apache.org/docs/2.2/vhosts/examples.html#page-header) under "Using Virtual_host and mod_proxy" together:

        <VirtualHost *:*>
            ProxyPreserveHost On
            ProxyPass / http://192.168.111.2/
            ProxyPassReverse / http://192.168.111.2/
            ServerName hostname.example.com
        </VirtualHost>

    The only issue is that I want to proxy only a subdomain - or, even better, keep the top domain and proxy only if the URL contains a particular path, e.g. "/myprocess.php". So in essence the DNS will point to Apache 2 as the "master router".
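    A sketch of the path-based variant, assuming the lighttpd box is still at 192.168.111.2 - a ProxyPass with a path prefix forwards only matching URLs and leaves everything else to Apache:

        <VirtualHost *:80>
            ServerName hostname.example.com
            ProxyPreserveHost On
            # Only this path is handed to the lighttpd backend;
            # all other URLs are served by Apache itself.
            ProxyPass        /myprocess.php http://192.168.111.2/myprocess.php
            ProxyPassReverse /myprocess.php http://192.168.111.2/myprocess.php
        </VirtualHost>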


  • Singular or plural nouns as file names for better search & SEO friendliness? [closed]

    - by Sam
    Possible Duplicate: Should I use singular or plural nouns in a domain name, and why?

    Dear folks, here are two scenarios where file names should best represent the search volume of the audiences searching for them.

    Scenario 1:

        website.org/en/logo.php
        website.org/en/brochure.php
        website.org/en/poster.php
        website.org/en/design.php

    Scenario 2:

        website.org/en/logos.php
        website.org/en/brochures.php
        website.org/en/posters.php
        website.org/en/designs.php

    Q1. What do you intuitively think would be best?
    Q2. What do the facts show in general - do people search for the singular or the plural?
    Q3. Do search engines have a common rule of thumb for this?
    Q4. Should I pick one scenario and apply it consistently, or does it depend on the word?

    Thanks very much for your ideas and suggestions. I really don't know which one to go for.


  • Project planning and customer tracking system

    - by Daniel Hollands
    First off, sorry if this is the wrong 'stack' site, but it seemed like a good place to start. I'm happy to report that my services as a web developer are starting to be in quite a lot of demand, and I have a few existing and potentially new customers all lining up - but I'm finding it very hard to keep track of everything. What I'm hoping for is some (preferably web-based) system which I can use to keep track of who my customers are, the various projects that I've got going on for them, and (if possible) the individual sub-tasks that make up each project. What would be even better is if the relevant customer were able to log into the site and see the progress of their projects. I hope you know what I'm talking about, and that you'll be able to suggest either web-based services that offer something along these lines, or some open-source solution. Thank you


  • Quoting people for website dev. work

    - by Jason
    Hi all, I have recently given some quotes to a few people, and I need some advice about how things should be done.

    Q1: I've seen, heard of, and read about a lot of developers using free online resource sites to obtain privacy policies, disclaimers, etc. for their customers' websites. A customer I quoted the other day expected me to write or source a disclaimer for their site. Who in their right mind would expect a document like that from a web developer? I just told them that they need to sort that out themselves with a lawyer and then send it to me so I can put it on a webpage for them.

    Q2: If you're charging per hour and you estimate that the project will take one week to finish (including testing and release), but you soon realise that you'll need more time, do you re-quote, or do you finish the site at the original quoted price?

    Q3: How do you figure out how much to charge your customers? Do you charge per feature, per hour, per day, or all of the above?

    Thanks :)


  • Inserting multiple links to one image in Confluence

    - by Simon
    I am setting up a wiki in Confluence v3.5.1. I have added a Visio diagram (exported as a JPG) to a page; this diagram will take up most of the page and depicts the workflow between developers, support, and clients. I envisage users being able to click on different parts of the diagram to open child pages with more details about that particular process (with 'how-to' videos for specific tasks, like logging issues in Jira). However, from what I can see, there is no way from the Confluence editor to add multiple links to one image, right? I looked at anchors, but they do not look like they will do the job. So, what is the best option? I remember Dreamweaver having these sorts of tools built in, and there appear to be other utilities that can help build image map HTML tags, but I cannot see a way of easily editing the HTML in the Confluence editor. I'm also worried about the headache this could cause when managing future changes to the page.
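    For reference, the image map markup such utilities generate is plain HTML like the sketch below; whether it can be embedded depends on the Confluence setup (e.g. an HTML macro being enabled). Coordinates and link targets here are placeholders:

        <img src="workflow.jpg" usemap="#workflow" alt="Workflow diagram" />
        <map name="workflow">
          <!-- One clickable region per process box in the diagram. -->
          <area shape="rect" coords="0,0,150,100"
                href="/display/DOCS/Logging+issues+in+Jira" alt="Log issues in Jira" />
          <area shape="rect" coords="160,0,310,100"
                href="/display/DOCS/Support+handoff" alt="Support handoff" />
        </map>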


  • Best SEO practices for mobile URLs: 301, rel=canonical, or something else?

    - by Chris
    I am developing a site with a mobile version and am trying to figure out the appropriate way to manage the URLs for search engines. So far I've considered:

    1. Having a separate mobile site (m.example.com) with rel="canonical" links to the regular site.
    2. Putting both the mobile site and the full site on one URL (example.com) and doing user-agent sniffing.
    3. Another opinion, from Spencer: "If you have a mobile site at a separate location or URL, you should 301 redirect each and every mobile page to its corresponding page on your main website. Employ user agent detection so that the mobile optimized version is served up if someone's coming in from a hand-held." - http://developer.practicalecommerce.com/articles/1722-Mobile-site-Development-Best-Practices-for-SEO-Usability

    Both 2 and 3 make it hard for a user who wants to switch to the full site or mobile site manually, but I'm not sure 1 is the best alternative either. What's the best way to structure URLs for a mobile site?
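    For what it's worth, a sketch of the bidirectional annotations Google documents for separate mobile URLs (domains and the breakpoint are placeholders) - the desktop page declares its mobile alternate, and the mobile page canonicalizes back to the desktop version:

        <!-- On the desktop page (www.example.com/page): -->
        <link rel="alternate" media="only screen and (max-width: 640px)"
              href="http://m.example.com/page" />

        <!-- On the corresponding mobile page (m.example.com/page): -->
        <link rel="canonical" href="http://www.example.com/page" />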


  • How can I cap cloud-costs?

    - by Joe Simpson
    I am looking into launching a cloud-based product for consumers where, with a prepaid account, they can start a server with a simple click, load up the software, and access it remotely. The technical side of that I can manage, but I am worried about the costs escalating ridiculously high, for both me and my customers. Is there a way I can:

    - limit how much each server can cost me before it is deactivated, and
    - see how much a server is currently costing me (so I can deduct it from the customer's account),

    with it being extremely reliable? I don't want to end up with a giant bill under any circumstances.


  • How do I get Google to crawl my content when it's only displayed when you fill in a form?

    - by Sarang Patil
    I have a webpage with a form, and its "results" section starts out blank. When the user searches for items, a list pops up; the user chooses one option from the list, and the corresponding results are displayed in the results section. I once decided to log the IP, URL, and time of every visit to my page. One IP was 66.249.73.26, and a lookup told me it is a Googlebot IP. When I looked at the links this IP visited, they were like this:

        search?id=100
        search?id=110
        ...
        search?id=200

    and afterwards it incremented in steps of 1, like 400, 401, ... But people search for strings, not numbers. Because Googlebot requests numbers like this, I think the corresponding content is never displayed, and so my page content is never indexed, even though it is rich content. So my question: in order to show Googlebot all the content that the webpage has, should I list all the results on an index page and ask users to enter a string to filter the results?
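    One common approach, sketched here, is progressive enhancement: render every result as a plain link that a crawler can follow without submitting the form, and let the form merely filter that list for human visitors (URLs and titles are placeholders):

        <ul id="all-results">
          <!-- Crawlable without running the form or any script. -->
          <li><a href="/search?id=100">Title of result 100</a></li>
          <li><a href="/search?id=101">Title of result 101</a></li>
          <li><a href="/search?id=102">Title of result 102</a></li>
        </ul>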


  • How important are SEO Friendly URLs [closed]

    - by nute
    Possible Duplicate: Is a URL with a query string better or worse for SEO than one without?

    Currently, my URLs look something like http://mydomain.ext/question/5, where "question" is the controller and 5 is the ID of the object or article retrieved. In theory I could spend some development time and some server resources to produce URLs that contain more information about the page loaded. However, seeing how websites like YouTube and many others keep simple URLs with just an ID, I am asking: does it matter? Is it worth it?


  • Robots.txt never downloaded but some blocked URLs in GWT

    - by Zistoloen
    There is something I don't understand in Google Webmaster Tools (GWT) for my WordPress site. In the "Blocked URLs" menu, it says that my robots.txt has never been downloaded, yet there are some blocked URLs. That seems weird and illogical. Am I missing something?

        User-agent: *
        Disallow: /*?
        Disallow: /wp-login.php
        Disallow: /wp-admin
        Disallow: /wp-includes
        Disallow: /wp-content
        Allow: /wp-content/uploads
        Disallow: */trackback
        Disallow: /*/feed
        Disallow: /*/comments
        Disallow: /cgi-bin
        Disallow: /*.php$
        Disallow: /*.inc$
        Disallow: /*.gz$
        Disallow: /*.cgi$
        Disallow: /author/*

    I'm afraid my robots.txt doesn't block several URLs I want to block.


  • Canonicalization of single, small pages like reviews or product categories

    - by Valorized
    In general I quite like the idea of canonicalization, and in most cases Google explains the possible procedures clearly. For example: if I have duplicates because of parameters (e.g. &sort=desc), it's clear to put a canonical for the page within the head tag. However, I'm wondering how to handle small - not to say thin-content - pages. What's my definition of a small page? An example: on one of my main sites, we use a directory-based URL structure:

        example.com/ (root)
        example.com/category-abc/
        example.com/category-abc/produkt-xy/

    Moreover, we provide one page that includes all products:

        example.com/all-categories/ (lists all products the same way as the categories do)

    In the case of reviews, we use a similar structure:

        example.com/reviews/product-xy/ (shows all reviews for one product)
        example.com/reviews/product-xy/abc-your-product-is-great/ (shows one single review)
        example.com/reviews/ (shows all reviews for all products, latest first)

    Let's make it even more complicated: on every product page, the latest 2 reviews appear at the end. So you see, there are a lot of potential duplicates.

    Q1: Should I create canonicals for: (a) example.com/category-abc/ to example.com/all-categories/; (b) example.com/reviews/product-xy/abc-your-product-is-great/ to example.com/reviews/product-xy/ or to example.com/reviews/; or neither of them?

    Q2: Can I link the collection of categories (all-categories/) and the collections of reviews (reviews/ and reviews/product-xy/) to the single category and the single review, respectively? Example: example.com/reviews/ includes, let's say, 100 reviews. Can I somehow use markup that tells search engines: "Hey, wait, you are now looking at a collection of 100 reviews - do not index this collection; you should rather prefer indexing every single review as a single page!"? In HTML it might be something like this (which, of course, does not work; it's only to show what I mean):

        <div class="review" rel="canonical"
             href="http://example.com/reviews/product-xz/abc-your-product-is-great/">
          HERE GOES THE REVIEW
        </div>

    Reason: I don't think it is a great user experience if the user searches for "your product is great" and lands on example.com/reviews/ instead of example.com/reviews/product-xy/abc-your-product-is-great/. On the first page, he will have to search and might give up out of frustration; the second result, however, might lead to a conversion. The same applies to categories: if the user is searching for category Z, he might land on the all-categories page and have to scroll down to the (last) category to find what he searched for (Z). So what's best practice? What should I do?
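    There is no per-element canonical like the imagined markup above, but a sketch of the closest real mechanisms, assuming the goal is "index the single review, not the collection": a robots meta tag on the collection page, plus a self-referencing canonical on each single review page:

        <!-- On example.com/reviews/ (the collection page):
             keep it out of the index but let the crawler follow
             the links through to the individual reviews. -->
        <meta name="robots" content="noindex,follow" />

        <!-- On each single review page: -->
        <link rel="canonical"
              href="http://example.com/reviews/product-xy/abc-your-product-is-great/" />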


  • Google Site Search: how can I use it as an API?

    - by John Isaacks
    I am trying to find an API that I can use to run searches on my own site. Google has something called Site Search and something called Custom Search - what is the difference? When I make a new Site Search, it is then listed on a page with "custom search" in the heading, which is really confusing. I just want an API that I can use to search my site, preferably returning JSON rather than XML. If this service is offered by someone other than Google, that is fine too. The ones I create at Google want me to embed a premade search box into my site; I do not want that. I want an API that I can call from PHP or JavaScript. How can I get this?
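    For what it's worth, Google's custom search engines can also be queried programmatically through the Custom Search JSON API. A jQuery sketch, assuming the API key and engine ID (cx) obtained when creating the engine; depending on browser cross-domain restrictions you may need JSONP (a callback parameter) or a server-side proxy instead:

        var url = 'https://www.googleapis.com/customsearch/v1'
                + '?key=YOUR_API_KEY'
                + '&cx=YOUR_ENGINE_ID'
                + '&q=' + encodeURIComponent('search terms');

        $.getJSON(url, function (data) {
            // Each result item carries the page title and URL, among
            // other fields, in the JSON response.
            $.each(data.items, function (i, item) {
                console.log(item.title, item.link);
            });
        });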


  • How to interpret number of URL errors in Google webmaster tools

    - by user359650
    Recently Google made some changes to Webmaster Tools, which are explained in this post: http://googlewebmastercentral.blogspot.com/2012/03/crawl-errors-next-generation.html

    One thing I could not find out is how to interpret the number of errors over time. At the end of February we migrated our website and didn't implement redirect rules for some pages (quite a few, actually). The Crawl errors report shows a rising error count (screenshot not reproduced here). What I don't know is whether the number of errors is cumulative over time or not (i.e. if Google bots crawl your website on 2 different days and find 1 separate issue on each day, whether they will report 1 error for each day, or 1 for the first day and 2 for the second). Based on the Crawl stats, the number of requests made by Google bots doesn't increase (screenshot not reproduced here). Therefore I believe the number of errors reported is cumulative, and that an error detected on one day is taken into account and reported on subsequent days until the underlying problem is fixed and the page is crawled again (or you manually mark the error as fixed) - because if Google doesn't make more requests to a website, there is no way it can check new pages and old pages at the same time. Q: Am I interpreting the number of errors correctly?

