Search Results

Search found 14789 results on 592 pages for 'pro backup'.

  • Why do some video posts from the same blog appear in Google with thumbnails, while others do not?

    - by jayarjo
    We own a media blog, which is basically a big collection of videos streamed through our branded player. The interesting thing is that some of our posts show up in Google search results with a thumbnail denoting that the post in question is in fact a video, but more often they do not. We basically wonder why. What affects this, and can we control it somehow? All posts (their single pages) have Facebook og meta tags in place.
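    One way sites expose video metadata to crawlers is through Open Graph video tags or schema.org VideoObject markup; the snippet below is only a rough sketch with placeholder URLs, not the blog's actual markup, and Google may still choose not to show a thumbnail.

      <!-- Open Graph video tags (placeholder URLs) -->
      <meta property="og:type" content="video.other">
      <meta property="og:video" content="https://example.com/player?video=123">
      <meta property="og:image" content="https://example.com/thumbs/123.jpg">

      <!-- schema.org VideoObject markup that video search features can read -->
      <div itemscope itemtype="http://schema.org/VideoObject">
        <meta itemprop="name" content="Example video title">
        <meta itemprop="thumbnailUrl" content="https://example.com/thumbs/123.jpg">
        <meta itemprop="contentUrl" content="https://example.com/videos/123.mp4">
      </div>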

  • How to fix HTTP 400-499 error codes with 301 redirects in the .htaccess file

    - by user2131844
    Google previously indexed my website's pages (sitemap.xml) in the format below:

      www.domain.com/2013/04/18/hot?test-gadgets-of-2013-to-include-in-?your-list
      www.domain.com/2013/02/09/rin?gdroid

    I have resubmitted the sitemap, but there are still 404 errors in the Google/Bing engines. Could you please help me write a 301 redirect rule in the .htaccess file so that when someone clicks the URL www.domain.com/2013/02/09/rin?gdroid they are redirected to www.domain.com/rin?gdroid? How can we write a rule in the .htaccess file to remove the date part 2013/02/09/?
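    A minimal .htaccess sketch for stripping a /YYYY/MM/DD/ prefix, assuming the site runs on Apache with mod_rewrite enabled and that the slug after the date should be kept unchanged:

      RewriteEngine On
      # Redirect /2013/02/09/some-slug permanently to /some-slug
      RewriteRule ^([0-9]{4})/([0-9]{2})/([0-9]{2})/(.+)$ /$4 [R=301,L]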

  • Will having a website duplicated on multiple top-level domains be penalised by search engines? [duplicate]

    - by user1020317
    This question already has an answer here: "Will having multiple domains improve my SEO?" (7 answers) I'm running a website for a global company, and although we rank first in search engine results here in Ireland, a search done from other countries doesn't rank us as highly. If I register the domain under other top-level domains (e.g. example.co.uk, example.nor, etc.) and then just mirror the .com site to those other domains, will I be penalised by search engines for having duplicate content? Has anyone else faced a similar problem and found a way to capture the global search engines? Thanks.
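    Rather than serving identical copies, one common approach is to keep the mirrors but tell search engines which version is canonical, or to mark the country targeting with hreflang links. A minimal sketch with placeholder domains (not a guarantee against duplicate-content filtering):

      <!-- On each mirrored page, point at the preferred version... -->
      <link rel="canonical" href="http://www.example.com/page.html">

      <!-- ...or, if each TLD targets a different country, declare the alternates -->
      <link rel="alternate" hreflang="en-ie" href="http://www.example.com/page.html">
      <link rel="alternate" hreflang="en-gb" href="http://www.example.co.uk/page.html">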

  • Is box-shadow (CSS3) really not ready to use? (according to Can I Use)

    - by mechdeveloper
    I have a problem that I'd like your help with. I am currently building a website on HTML5 and CSS3, and every feature I'd like to use I check first on Can I Use. The feature I use most is box-shadow, and I have already made some great things with it, but I have doubts about the share of browsers that don't support it, which is around 17.12%. If you look at the conclusions (show options > other options > show conclusions), they say the feature isn't ready yet because they are "Waiting for Opera Mini 5.0-6.0 to expire". I personally think that the best we can do to make people update their browsers is not to support older browsers, but am I right to think like this? Will there be bad consequences if I don't support older browsers? Is it worth doing twice the work just to support older browsers? Should I keep working with box-shadow?
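    Because box-shadow is simply ignored by browsers that do not recognise it, a common compromise is to treat it as progressive enhancement and give older browsers a plain border instead; a minimal sketch:

      /* Fallback first: older browsers render only the border */
      .card {
        border: 1px solid #ccc;
        /* Ignored by browsers without box-shadow support */
        box-shadow: 0 2px 6px rgba(0, 0, 0, 0.3);
      }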

  • Why isn't my website ranked by Alexa? [closed]

    - by arshen
    I created a website with WordPress and posted 10+ articles over a period of two months, but Alexa doesn't rank my website. I tried changing my theme, URL and other related things and submitted my website URL manually to the Alexa dashboard, and while I get around 200 page views a day, it is still not ranked. My website URL: http://daskaht.ir Robots file: http://daskhat.ir/robots.txt Alexa page: www.alexa.com/siteinfo/daskhat.ir Domain whois: whois.domaintools.com/daskhat.ir Website SEO rank: www.woorank.com/en/www/daskhat.ir

  • Meta Description or Title For Post Contents

    - by Raj
    I have a site whose posts have no titles - you can think of them as being a lot like Twitter tweets. Should I put the post contents in the meta title tag or in the description tag? If I put the post contents in one of the tags, what should I put in the other? My challenge is that we have very short pieces of content with no titles, and I want to avoid having too many duplicate titles or descriptions. We do have things like user name, full name, date, etc.
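    One way to keep titles and descriptions from colliding is to build the title from the metadata you do have (user name, date) and put the post text itself in the description; a sketch with purely hypothetical values:

      <title>Post by @janedoe - 12 Mar 2013 - ExampleSite</title>
      <meta name="description" content="First 150 or so characters of the post text go here...">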

  • How to properly forward a URL/domain

    - by NRGdallas
    No clue on a title for this; someone feel free to suggest an edit. I have a client who has a website. He owns around 200 domains and wants each domain to contain content from the main website. The header, footer, and navigation bars will remain the same for each domain, but the actual page content will vary (obviously a duplicate content issue; open to suggestions). He wants each individual page to be its own separate domain rather than a URL within the main domain (page1.com, page2.com, etc. - NOT site.com/page1.html - even though the file is actually hosted at site.com/page1.html, and all links will point to site.com/whatever accordingly). What would be the best place to start reading and learning how to do this, and what concerns/considerations should be kept in mind?
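    One starting point is name-based virtual hosting: every domain points at the same server, and each domain's front page is mapped to the corresponding file of the main site. A minimal Apache sketch with placeholder names (it does not address the duplicate-content question; a rel=canonical tag pointing at the preferred URL is one common answer to that part):

      <VirtualHost *:80>
          ServerName page1.com
          DocumentRoot /var/www/site
          # Serve the main site's page1.html as this domain's front page
          DirectoryIndex page1.html
      </VirtualHost>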

  • Which image sharing websites support file uploading dynamically via an API?

    - by KoolKabin
    I have been searching for an image hosting website that displays a user's images in a nice, managed way. I want to upload files from a page on my website to my own account on that hosting site. That is, if I have a website abc.com, a user browses abc.com and uploads a file to it; I then want to transfer the uploaded file to the image hosting website so that it can be viewed by other users of that hosting site and gets better visibility to the world.
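    Several image hosts expose upload APIs; as one illustration, Imgur accepts uploads over HTTP using a registered client ID. A rough Python sketch (the client ID and file name are placeholders, and error handling is omitted):

      import requests  # third-party HTTP library

      # Register an application with Imgur to obtain a real client ID
      headers = {"Authorization": "Client-ID YOUR_CLIENT_ID"}
      with open("photo.jpg", "rb") as f:
          resp = requests.post("https://api.imgur.com/3/image",
                               headers=headers,
                               files={"image": f})
      print(resp.json())  # the response contains the hosted image's URL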

  • Why doesn't Firefox cache my images and CSS

    - by Richard A
    I am using IIS7 and have already set up the following, but when I run Firefox it seems not to cache any of my images, even with "remember history" set.

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.webServer>
          <staticContent>
            <clientCache cacheControlCustom="public" cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
          </staticContent>
        </system.webServer>
      </configuration>

    However, Firebug still suggests that Firefox is not caching images and CSS. Response headers:

      Cache-Control: public,max-age=604800
      Content-Type: text/css
      Content-Encoding: gzip
      Last-Modified: Mon, 27 Jun 2011 03:53:22 GMT
      Accept-Ranges: bytes
      Etag: "507968c27d34cc1:0"
      Vary: Accept-Encoding
      Server: Microsoft-IIS/7.5
      X-Powered-By: ASP.NET
      Date: Mon, 27 Jun 2011 13:06:41 GMT
      Content-Length: 5067

    Request headers:

      Host: www.xx.com
      User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
      Accept: text/css,*/*;q=0.1
      Accept-Language: en-us,en;q=0.5
      Accept-Encoding: gzip, deflate
      Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
      Keep-Alive: 115
      Connection: keep-alive
      Referer: http://www.xx.com/
      Cookie: __utma=62996397.135679654.1309106351.1309159743.1309164158.8; __utmz=62996397.1309106351.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmc=62996397

  • Website hosting and deployment

    - by squixy
    I'm relatively new to web development, especially when it comes to infrastructure. I have an AngularJS application built and served by brunch.io locally, and it uses JSON data from a rails-api backend. I'd like to deploy my Angular application separately from the Rails server. For now, the JS app is placed inside the public directory of the backend server and deployed together with it, which is neither elegant nor effective, so I want to use some other hosting service. I was thinking about a VPS where I could place both the Angular and Ruby applications. I have read that Node.js or Nginx can serve static files, but I don't have any knowledge of or experience with these technologies. What is the best way to run separate frontend and backend applications that communicate with each other?
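    A common layout is to let Nginx serve the compiled Angular files as static content and proxy API calls through to the Rails process; a minimal sketch with placeholder paths and ports:

      server {
          listen 80;
          server_name example.com;

          # Compiled Angular app served as static files
          root /var/www/frontend/public;
          index index.html;

          # Forward API requests to the Rails backend
          location /api/ {
              proxy_pass http://127.0.0.1:3000;
              proxy_set_header Host $host;
          }
      }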

  • Google Authorship: can I use display:none on the link to my profile?

    - by RubenGeert
    I'd like to have my 'mugshot' in Google's SERPs, but I couldn't care less about Google+, and I don't really want to link my website to Google+ either. Can I use CSS display:none; on the link leading to my profile and still have authorship? The link looks like <a href='https://plus.google.com/111823012258578917399?rel=author' rel='nofollow'>Google</a>. Will the nofollow attribute here spoil things? I don't want to lose 'link juice' to Google+ if I don't have to. Now, Google should crawl only the HTML, but I'm sure they'll figure out the link is not visible (perhaps it's technically even cloaking). Does anybody have experience with this situation? And do I really have to become (reasonably) active on Google+ in order for authorship to show? This answer suggests I do, but I didn't read anything about that in Google's guidelines.

  • Error running Phusion Passenger in standalone mode

    - by msidell
    I'm trying to run standalone Phusion Passenger so that I can run different Ruby RVM configurations on the same host. I already have Ruby and Passenger running fine on this host, and I am following the instructions here. When I run standalone Passenger the first time, it appears to install nginx successfully, but then when it tries to run, I get this error:

      [root@clark directra]# passenger start -a 127.0.0.1 -p 3001 -d --user dweb
      *** ERROR ***
      Could not start Passenger Nginx core:
      nginx: [alert] could not open error log file: open() "/tmp/passenger-standalone.16757/logs/error.log" failed (2: No such file or directory)
      nginx: [alert] Unable to start the Phusion Passenger watchdog (/var/lib/passenger-standalone/3.0.11-x86-ruby1.9.3-linux-gcc4.1.2-1002/support/agents/PassengerWatchdog): Permission denied (13) (13: Permission denied)
      Stopping web server... done

    FWIW, /tmp is writeable. Any idea what's wrong?

  • Is it possible to make CSS-added text searchable by a browser?

    - by Andrew Stacey
    I run a website that uses CSS pseudo-classes to insert text here and there. One of them inserts the value of a CSS counter (it would require considerable re-engineering of the system to do this without CSS text injection). The specific CSS rule is:

      .num_defn .theorem_label:after {
        content: " " counter(definition, decimal);
        counter-increment: definition;
      }

    and this converts "Definition" to "Definition 1" (say). However, the injected text is not searchable by the browser. It doesn't see the 1: if I search for "Definition 1" then it doesn't find it, and if I search for "Definition. Whatever the definition text was" then the browser happily highlights the line except for the inserted 1. So if you imagine the highlighting, it covers "Definition" and "Whatever the definition text was" but skips the 1 in between. This is not ideal! People like to refer to definitions by their number and to say "Look at Definition 1 on page XYZ" (and in contexts where hyperlinks are not available - strange, I know, but it does happen). Thus: Is there any way that I, on the server end, can designate the injected text as "searchable"? If not, is there a simple way at the browser end that this can be enabled?
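    Find-in-page only searches real DOM text, so there is no standard way to flag CSS-generated content as searchable. The usual workaround is to emit the number as an actual text node on the server (or via a small script) instead of a counter; a minimal sketch:

      <!-- Number rendered as real text instead of a CSS counter -->
      <span class="theorem_label">Definition <span class="theorem_number">1</span>.</span>
      Whatever the definition text was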

  • How to fix Google 404 Not Found crawl errors?

    - by Freeme
    I was checking Google Webmaster Tools for my blog to see if there was any indication of why my blog traffic dropped by half in one day, and I saw 43 Not Found crawl errors and 5 Not Found errors in the sitemap. The 5 sitemap Not Found errors were links to categories; I guess I renamed the categories, which is why Google can't find those links. As for the 43 other Not Found errors, I see blog post titles that contain (' .), e.g. McDonald's, O.N.E.; they weren't found by the Google crawler. Blog posts with /CachedYou appended at the end and blog posts with /www.example.com attached at the end weren't found by the Google crawlers either. My question is: how do I correct those Not Found errors? Thanks
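    The usual fix is a 301 redirect from each dead URL to its replacement, or a 410 for URLs that never had a real page; a minimal .htaccess sketch with made-up category names, assuming the blog runs on Apache:

      # Renamed category: old URL redirected to its new counterpart
      Redirect 301 /category/old-name/ http://www.example.com/category/new-name/
      # URLs with no replacement can be marked as permanently gone
      RedirectMatch 410 /CachedYou$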

  • WordPress blog penalized by Google search - what's wrong?

    - by pawelbrodzinski
    I have a blog (http://blog.brodzinski.com), which is a wordpress.org blog with the popular Thesis theme and almost no other customizations. Some time ago it was penalized by Google search - it simply stopped appearing in search results, even for terms for which it used to be the top result, like my name - Pawel Brodzinski - which isn't anything close to a popular search term. To be exact, the site was penalized on Nov 18. It started popping up in search results on Dec 23, but only for a few days; since Dec 27 it has been out again. I know Google's guidelines, and I'm not aware of breaking any of them. I submitted a reconsideration request after I noticed the penalty. It was processed and there was no change whatsoever (no surprise, as it seems the site was penalized again). I checked the diagnostics in Webmaster Tools, and no malware was detected, nor did any strange search terms pop up. I read related threads on the Google Webmasters forum but found none of the solutions working for me. I posted a thread on the Google Webmasters forum (http://www.google.com/support/forum/p/Webmasters/thread?tid=546339f49d4a03bc&hl=en) and the only answer I got was to check for duplicate content. Well, there is some duplicate content published on the web, but that is true for the vast majority of blogs and it doesn't seem to be a reason for a penalty. Also, before Dec 27 I was able to get duplicate content removed from a couple of sites which were republishing my feed, but this didn't change the situation - the site was penalized again. The problem is I have no idea what could be wrong with the website or how to find out. To make the problem worse, I'm no webmaster; I just run a WordPress blog, which is supposed to be easy.

  • Best way to set up multiple sites' emails in my Gmail

    - by John
    I have a dozen sites and I want all of their emails to come to my one Gmail ID, and I want to reply centrally from Gmail only. I've also added all of those addresses to the "Send mail as:" list in Gmail. I could add email forwarders in my cPanel, but in that case I won't be able to send email from addresses whose inboxes haven't been created (for example [email protected]). If I create the email accounts, then I'd receive mail both in those inboxes and, via the forwarders, in my Gmail ID. Otherwise I could set up Gmail for my domains, but for a dozen addresses I'm not sure that would be fine; I see at http://www.google.com/enterprise/apps/business/pricing.html that it is free for up to 10 users. But then, to send email from the web hosting, the PHP code will need SMTP login details, and leaving my important Gmail account details in my web hosting account is very risky, given that my sites have been compromised twice. What is the best way to centralize all my emails so that I can read/reply/search from a single place?

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: the sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server followed up with recommendations from the Page Speed and YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private or public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests something is wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies, and cargo-cult configurations. The official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy?

    EDIT: A link through Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:

    - If the file is compressed using gzip, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that it is compressible.
    - Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators, and since ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte-blanche approach against ETags for some reason.
    - If you are using Apache on more than one server to host the same content, then remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size") unless you are genuinely using the same live filesystem.
    - Use "Cache-Control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
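    A minimal Apache sketch of those rules (the file pattern is only an example; assumes mod_headers is enabled):

      # Long-lived public caching for static assets
      <FilesMatch "\.(css|js|png|jpg|gif)$">
          Header set Cache-Control "public, max-age=604800"
      </FilesMatch>

      # Tell proxies that compressed responses vary by encoding
      Header append Vary Accept-Encoding

      # Drop the inode from ETags so they match across servers
      FileETag MTime Size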

  • Is a 301 redirect sufficient to solve WWW and HTTP/S duplication?

    - by Thomas Ojo
    I was reading this article - "SEO preference for WWW or HTTP:// protocol redirection? Do www websites rank better than non-www?" I have the same problem, but I need some further help on it. What about https://? How will that be treated? Is a 301 redirect sufficient to solve the problem? I have an SEO company that says that, if possible, I should not have a redirect, but I don't think that is viable. Does a permanent redirect, if properly done, have any effect on SEO?
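    A minimal .htaccess sketch that 301-redirects every non-canonical combination to a single https://www. host (placeholder domain; assumes Apache with mod_rewrite and a valid SSL certificate):

      RewriteEngine On
      # Force HTTPS and the www host in one permanent redirect
      RewriteCond %{HTTPS} !=on [OR]
      RewriteCond %{HTTP_HOST} !^www\.example\.com$ [NC]
      RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]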

  • On-Site Factors that Affect CPC

    - by ashes999
    I have a few websites on various niche topics, all running AdSense. The most promising one currently has a CPC that hovers around $1; the rest have CPCs of $0.25-$0.50. I'm curious to know what on-site factors affect CPC - that is to say, what I can do legally (in white-hat compliance) to increase my CPC. Some factors that affect CPC but are not within my control (and are therefore beyond the scope of my question - they're just examples) include: what advertisers are paying for keywords on my site, what pages people are landing on, etc.

  • SharePoint 2010 as a framework for a website?

    - by Kenny Bones
    So I'm looking at several solutions for our new website, and we've looked at ExpressionEngine first and foremost. Now, during brainstorming today, one person said, "Why don't we use SharePoint 2010 to build the site on?", and it doesn't seem like a horrible idea - I mean, we're based around Office anyway; we use Lync and have an intranet based on SharePoint 2010. Does anyone have any thoughts on this? Would it cost more to develop a public website on SharePoint 2010 as opposed to using ExpressionEngine?

  • Large product image structured data and visibility

    - by Mark Resølved
    On an eCommerce site we have two images for each product: a medium-sized one shown at the top of the page and a large photo shown, on click, in an overlay. We use http://schema.org/Product microdata on the page. We'd like the large, initially hidden photo to be the main image for the product, as it's the better-looking one, so it is also referenced in the XML sitemap as <image:image>, and we also put the itemprop="image" attribute on the hidden large image. But I'm wondering: is it a bad idea to use a microdata attribute on an element hidden with style="display:none;"? Is there a better way to embed the main image in terms of SEO without showing it initially?
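    Microdata has link and meta elements precisely for values that should be machine-readable without being rendered, which avoids putting itemprop on a display:none element; a minimal sketch with placeholder URLs:

      <div itemscope itemtype="http://schema.org/Product">
        <h1 itemprop="name">Example product</h1>
        <!-- Visible medium-sized image -->
        <img src="/images/product-medium.jpg" alt="Example product">
        <!-- Large image exposed to crawlers without being displayed -->
        <link itemprop="image" href="http://www.example.com/images/product-large.jpg">
      </div>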

  • How can I set up friendly URLs on Nginx?

    - by MKK
    I'm trying to use DokuWiki with its friendly URLs on Nginx. The problem I'm facing is that it doesn't show the correct path for any link (even stylesheets and images) on any page; the paths appear to be missing the wiki/ part. If I click on an image and look at its destination, it shows this URL:

      http://foo-sample.com/lib/tpl/dokuwiki/images/logo.png

    but it should be this:

      http://foo-sample.com/wiki/lib/tpl/dokuwiki/images/logo.png

    The login URL is not working either. If I click on the login link, it takes me to http://foo-sample.com/wiki/start?do=login&sectok=ff7d4a68936033ed398a8b82ac9 and it says 404 Not Found. I took a look at https://www.dokuwiki.org/rewrite#nginx and tried as much as possible, but it still doesn't work. Here are my conf files. How can I fix this problem? DokuWiki is installed in /usr/share/wiki.

    /etc/nginx/conf.d/rails.conf:

      upstream sample {
        ip_hash;
        server unix:/var/run/unicorn/unicorn_foo-sample.sock fail_timeout=0;
      }
      server {
        listen 80;
        server_name foo-sample.com;
        root /var/www/html/foo-sample/public;
        location /wiki {
          alias /usr/share/wiki;
          index doku.php;
        }
        location ~ ^/wiki.+\.php$ {
          fastcgi_pass 127.0.0.1:9000;
          fastcgi_index doku.php;
          fastcgi_split_path_info ^/wiki(.+\.php)(.*)$;
          fastcgi_param SCRIPT_FILENAME /usr/share/wiki$fastcgi_script_name;
          include /etc/nginx/fastcgi_params;
        }
      }

    /usr/share/wiki/.htaccess:

      ## Enable this to restrict editing to logged in users only
      ## You should disable Indexes and MultiViews either here or in the
      ## global config. Symlinks maybe needed for URL rewriting.
      #Options -Indexes -MultiViews +FollowSymLinks

      ## make sure nobody gets the htaccess files
      <Files ~ "^[\._]ht">
        Order allow,deny
        Deny from all
        Satisfy All
      </Files>

      # Uncomment these rules if you want to have nice URLs using
      # $conf['userewrite'] = 1 - not needed for rewrite mode 2
      # Not all installations will require the following line. If you do,
      # change "/dokuwiki" to the path to your dokuwiki directory relative
      # to your document root.
      # If you enable DokuWikis XML-RPC interface, you should consider to
      # restrict access to it over HTTPS only! Uncomment the following two
      # rules if your server setup allows HTTPS.
      RewriteCond %{HTTPS} !=on
      RewriteRule ^lib/exe/xmlrpc.php$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]

      <IfModule mod_geoip.c>
        GeoIPEnable On
        Order deny,allow
        deny from all
        SetEnvIf GEOIP_COUNTRY_CODE JP AllowCountry
        Allow from .googlebot.com
        Allow from .yahoo.net
        Allow from .msn.com
        Allow from env=AllowCountry
      </IfModule>

  • Deny access to a folder on hosting server but serve the pages

    - by Sourav
    My hosting server allows me to host multiple websites. The directory structure is like this:

      root
      |_ www.a.com
      |_ www.b.com
      |_ www.c.com
      |_ www.d.com

    I want to put some PHP files in the www.d.com folder so that someone who browses the site from a web browser can use them, but no one can get their source code (even by logging in to the root folder). Is there any way to do so? There is a feature called "Password protect folder" or something similar; can it help in this case?
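    PHP source is normally executed by the server rather than sent to the browser, so visitors only ever see its output; what can be locked down is directory listing and direct access to include files. A minimal .htaccess sketch for the www.d.com folder (the .inc.php pattern is only an example):

      # Disable directory listings
      Options -Indexes

      # Block direct requests to include files
      <FilesMatch "\.inc\.php$">
          Order allow,deny
          Deny from all
      </FilesMatch>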

  • .htaccess / 301 redirection question

    - by John K
    All my WordPress post URLs generate subdirectories with duplicate content, and I do not know what regular expression to use to consistently 301 redirect domain.com/category/post/random-number/ to domain.com/category/post/, and domain.com/category/post/random-number/another-random-number/ also to domain.com/category/post/. Here is an example of my problem:

      http://www.example.com/features/harb-constitution-not-to-allow-kr-provinces-to-receive-foreign-officials/
      http://www.example.com/features/harb-constitution-not-to-allow-kr-provinces-to-receive-foreign-officials/1345257927000/
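    A minimal .htaccess sketch that strips one or more trailing all-digit path segments and 301-redirects to the bare post URL (assumes Apache mod_rewrite and that genuine post slugs never consist of digits only):

      RewriteEngine On
      # /category/post/1345257927000/ or /category/post/123/456/ -> /category/post/
      RewriteRule ^(.+?/)([0-9]+/)+$ /$1 [R=301,L]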

  • Download monitoring for a movie/music portal

    - by VenomVipes
    Our portal is targeted at mobile users. We have music (mp3) and video (3gp) files for download, and I expect 300 parallel downloads. I want a way to control my downloads: kicking or banning an IP or a download, statistics of downloads, bandwidth consumed, and so on. I have root/admin access to my server. My question is: is there a way I can monitor and control the ongoing downloads that visitors are doing from my site?
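    On the control side, most web servers can throttle and ban at the configuration level; a rough Nginx sketch (placeholder paths and addresses) that limits simultaneous downloads per IP, caps the speed of each download, and blocks one address. Monitoring itself would still come from the access logs or a tool layered on top of them:

      http {
          # Count connections per client IP
          limit_conn_zone $binary_remote_addr zone=perip:10m;

          server {
              listen 80;

              location /downloads/ {
                  limit_conn perip 2;   # at most 2 simultaneous downloads per IP
                  limit_rate 100k;      # throttle each download to ~100 KB/s
                  deny 203.0.113.7;     # example of banning a single address
                  allow all;
              }
          }
      }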
