Search Results

  • Value of the HTML5 lang attribute

    - by user359650
    I'm working on a website which will offer localized content following the language+region approach described on this W3.org page (e.g. fr-CA for Canadian French content, and fr-FR for "French French" content). As we consider the content for each language+region combination to be unique, it is crucial to us that search engines properly identify and serve the content accordingly. From looking around the Internet (e.g. this question), it appears that most people recommend using an ISO 639 language code in the HTML lang attribute to describe the content language. Following this recommendation, we would end up using <html lang="fr">, which wouldn't allow differentiation between the aforementioned language+region combinations. Reviewing the HTML4 specification, it seems that using language+region as a language code would be perfectly OK, as the en-US example is given as one possible value. However, I couldn't find any confirmation of this in the HTML5 specification, which doesn't seem to give any examples of allowed values. From there I tried to get a de facto answer by looking at what the web giants are doing. Facebook offer Canadian French and French French versions of their website with (slightly) different content, whilst the HTML lang value remains the same: the fr-CA URL http://fr-ca.facebook.com uses <html lang="fr"> and translates 'email' as "courriel"; the fr-FR URL http://fr-fr.facebook.com/ also uses <html lang="fr"> but translates 'email' as "Adresse électronique". Q: What is the recommended/standard way of describing content that was localized using the language+region approach in HTML5?
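
    For what it's worth, HTML5 defines the lang attribute's value as "a valid BCP 47 language tag", and BCP 47 explicitly allows language-region subtags, so markup like the following sketch should be valid HTML5 (the page titles are made up for illustration):

        <!-- Canadian French page -->
        <html lang="fr-CA">
          <head><title>Adresse courriel</title></head>
          ...

        <!-- Metropolitan French page -->
        <html lang="fr-FR">
          <head><title>Adresse électronique</title></head>
          ...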

  • Creating a subdomain in Webmin [duplicate]

    - by Vijay
    This question is an exact duplicate of: Webmin - Setting up multiple virtual hosts - Subdomains (1 answer). Can anybody help me create a subdomain through Webmin? I want to create a subdomain like test.xxxxx.com. I tried several reference sites but had no luck, e.g. http://www.trickylinux.net/add-domain-virtualminwebmin.html and http://codeboxlabs.com/add-subdomain-webmin-linux/ My current httpd.conf looks like: <VirtualHost *:80> SSLEngine off DocumentRoot /var/www/html/******/web DirectoryIndex index.php <Directory "/var/www/html/*****/web"> AllowOverride All Allow from All </Directory> ServerName www.******.com ServerAlias ftp.*****.com SSLEngine off SSLVerifyClient optional </VirtualHost> Please help me solve this issue.
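
    Whatever Webmin writes ultimately has to end up as a second VirtualHost block for the subdomain. A minimal sketch (names and paths are placeholders, since mine are masked above):

        <VirtualHost *:80>
            ServerName test.example.com
            DocumentRoot /var/www/html/test/web
            DirectoryIndex index.php
            <Directory "/var/www/html/test/web">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

    The subdomain also needs a DNS A record (or a wildcard record) pointing at the same server before it will resolve.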

  • How do I ask Google not to index certain parts of my page?

    - by Gavin Mannion
    I was searching for an old review on my site today and I noticed that Google is indexing the headline text from my latest-articles list on every page it appears on. The problem is that if I search for my Dragon's Lair review restricted to my site, like this: http://www.google.co.za/search?sugexp=chrome,mod=9&sourceid=chrome&ie=UTF-8&q=site%3Alazygamer.net+dragons+lair+review then it returns a ton of pages that aren't appropriate, as they aren't related to the review at all. The reason I care is that I have a second Dragon's Lair review that was posted years ago and now I can't find it. Is there a way to hint to Google that certain text isn't relevant to the actual content of the page? Or is that a terrible idea?
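
    One narrowly-scoped option (an assumption worth verifying against Google's current documentation): the data-nosnippet attribute keeps marked-up text out of result snippets, though as I understand it the text is still indexed, so it may not affect which pages match. A sketch:

        <section data-nosnippet>
          <!-- latest-articles headline list; excluded from Google snippets -->
          ...
        </section>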

  • 301 redirect www to non-www [duplicate]

    - by Claudiu
    This question already has an answer here: Removing non-www support (4 answers). I understand that it is better to use a 301 redirect to make sure that all your links are seen the same way by Google. Until now I have always used erasmus-plus.ro, without the www, for my website. Is it OK to redirect from www to non-www? From my searching on Google, most people discuss it the other way around. And somewhere I read that redirects are not good for SEO. Is a 301 an exception?
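
    If it helps, a minimal .htaccess sketch of the www-to-non-www 301 (assuming Apache with mod_rewrite enabled):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.erasmus-plus\.ro$ [NC]
        RewriteRule ^(.*)$ http://erasmus-plus.ro/$1 [R=301,L]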

  • How to send mass email and not get treated as spam

    - by MonsterMMORPG
    I am the owner of http://www.monstermmorpg.com, a free-to-play browser-based monster role playing game. I have a very important announcement to send to my more than 300k members. I already have email-sending software, but the messages land in spam folders, so I want to improve my chances of reaching the inbox. Here are all the details. My domain registrar is http://www.godaddy.com/ and my hosting company is http://www.leaseweb.com/en (my LeaseWeb settings and my GoDaddy DNS settings were shown in screenshots that are not included here). I did not make any special settings on the SMTP side. This is how I send emails:

        MailMessage mail = new MailMessage();
        mail.To.Add(EmailAdress);
        mail.From = new MailAddress("MonsterMMORPG NoReplay <[email protected]>");
        mail.Subject = "Title Of Mail";
        string Body = "Body Of Mail";
        mail.Body = Body;
        mail.IsBodyHtml = true;
        SmtpClient smtp = new SmtpClient();
        smtp.DeliveryMethod = SmtpDeliveryMethod.Network;
        smtp.UseDefaultCredentials = true;
        smtp.Host = "85.17.154.139";
        smtp.Port = 25;
        smtp.Send(mail);

    Thanks for every kind of answer.
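
    Deliverability for bulk mail usually comes down to DNS-level authentication rather than the sending code. A minimal sketch of the records to check first (the IP is the sending host above; the DKIM selector and key are hypothetical placeholders):

        ; SPF: authorize the sending IP for the domain
        monstermmorpg.com.                  IN TXT "v=spf1 ip4:85.17.154.139 -all"
        ; DKIM: public key published under a selector (hypothetical)
        mail._domainkey.monstermmorpg.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key>"

    Reverse DNS for 85.17.154.139 should also resolve to a hostname on the domain (e.g. mail.monstermmorpg.com); most large receivers check it.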

  • Friendly URLs: is there a max length for search engines?

    - by Olivier Pons
    The Stack Overflow team has reportedly worked closely with the Google team to help make the Panda algorithm more efficient, so I guess they've learned a lot from Google, and they may have designed very clever friendly URLs to maximize PageRank. I've seen very long URLs on Stack Overflow from time to time (I can't find an example right now), but after a certain number of characters there were only numbers, as if to say "past this length SEOs will ignore the rest, so let's put only numbers there". I've done a huge amount of work on my framework to produce very friendly URLs, and my website can come up with URLs like: http://www.mysite.fr/recherche/region/provence-alpes-cote-d-azur/departement/bouches-du-rhone/categorie-de-metiers/paramedical/ It's very long, and I'm wondering whether that URL will end up being conflated with, say, this one: http://www.mysite.fr/recherche/region/provence-alpes-cote-d-azur/departement/bouches-du-rhone/categorie-de-metiers/art/
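
    For comparison, Stack Overflow's own pattern routes on a numeric ID and treats the trailing slug as decoration; as far as I can tell, both of these hypothetical URLs resolve to the same question:

        http://stackoverflow.com/questions/123456/real-slug-of-the-question
        http://stackoverflow.com/questions/123456/anything-at-all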

  • Webmaster Tools, Duplicate Meta Descriptions, and Short Meta Descriptions [closed]

    - by Watsy91
    Possible Duplicate: Do meta keywords have any impact on ranking algorithms? I am fairly new to the whole Webmaster Tools concept. I have been looking at all the different options, such as crawl errors, HTML improvements, etc. I have been looking at the Duplicate Meta Descriptions and Short Meta Descriptions reports and was wondering if anyone could suggest ideas on how to go about improving them. It seems that all the duplicates come from the URL title and the short description, and it would seem to me that most people describe a page using the same keywords as its title. Here's an example of one, flagged for both /food-hampers/food-hampers-over-100.html and /thank-you-gifts/large-gifts-over-100.html: "These are the ultimate hampers in taste, quality and value. Amongst this range of luxury hampers ar" To get to the point: do these things really matter? Would they have a real consequence for my sites' rankings? My sites have been falling down the rankings since early this year and I have started using Google Analytics and Webmaster Tools to try to pinpoint problems. I have researched the Internet and it seems some people don't bother and others do! I know Stack Overflow has hundreds of people who have gone through the above, and I would really appreciate some tips. Or, in the end, does it really matter? :D
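
    Whatever the ranking impact, the fix itself is mechanical: give each page its own description tag. A sketch (the wording is made up):

        <!-- /food-hampers/food-hampers-over-100.html -->
        <meta name="description" content="Luxury food hampers over £100, ...">
        <!-- /thank-you-gifts/large-gifts-over-100.html -->
        <meta name="description" content="Large thank-you gift hampers over £100, ...">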

  • Is there such a thing as a Google Result Set simulator?

    - by Dave Adams
    I am always making tweaks to my site, be it in the .htaccess file, a new SEO plugin, different types of content, or whatever. For all these changes I would really like to be able to test immediately and see whether the change had a positive or negative effect. I am wondering if there is some way of doing immediate testing with a simulator, instead of having to wait for Google to discover and index the change, which can take a long time.

  • Debugging "Premature end of script headers" - WSGI/Django [migrated]

    - by Marcin
    I have recently deployed an app to a shared host (WebFaction), and for no apparent reason my site will not load at all (it worked until today). It is a Django app, but django.log is not even created; the only clue is that in one of the logs I get the error message "Premature end of script headers", identifying my WSGI file as the source. I've tried to add logging to my WSGI file, but I can't find any log created for it. Is there any recommended way to debug this error? I am on the point of tearing my hair out. My WSGI file:

        import os
        import sys
        import logging

        from django.core.handlers.wsgi import WSGIHandler

        logger = logging.getLogger(__name__)

        os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
        os.environ['CELERY_LOADER'] = 'django'

        # activate the virtualenv this app was deployed into
        virtenv = os.path.expanduser("~/webapps/django/oneclickcosvirt/")
        activate_this = virtenv + "bin/activate_this.py"
        execfile(activate_this, dict(__file__=activate_this))
        # if 'VIRTUAL_ENV' not in os.environ:
        #     os.environ['VIRTUAL_ENV'] = virtenv

        sys.path.append(os.path.dirname(virtenv + 'oneclickcos/'))

        logger.debug('About to run WSGIHandler')
        try:
            application = WSGIHandler()
        except Exception, e:  # Python 2 syntax
            logger.debug('Exception starting wsgihandler: %s' % e)
            raise
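
    One thing worth checking (a general observation, not WebFaction-specific advice): a bare getLogger() has no handler configured in this process, so the logger.debug() calls go nowhere. Configuring a file handler explicitly at the top of the WSGI file should at least capture the log lines (the path is hypothetical):

        import logging
        logging.basicConfig(
            filename='/home/myuser/logs/wsgi_debug.log',  # hypothetical path
            level=logging.DEBUG,
            format='%(asctime)s %(levelname)s %(message)s',
        )

    "Premature end of script headers" generally means the process died or wrote to stdout before emitting headers, so whatever the file handler captures should narrow that down.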

  • Why Google skips my page title

    - by Bob
    I have no idea why this is happening. An example: http://www.londonofficespace.com/ofdj17062004934429t.htm The title tag is: Unfurnished Office Space Wimbledon – Serviced Office on Lombard Road SW19 But it is indexed as: Lombard Road – SW19 - London Office Space If you look in the source code and search for the fragment 'Lombard Road – SW19', you find that it appears as an office image's alt text: alt='Lombard Road – SW19'. The only thing I can think of is that the spider somehow 'skips' our title tag, grabs this bit instead, and then appends the name of the site (but why?). Is there anything I can do about this, or is it just Google behaviour?

  • Is there plugin directory software like Firefox Add-ons or the WordPress Plugin Directory?

    - by lulalala
    Nowadays browsers and content management systems all have plugin/module/addon/theme systems so users can extend them at will, and developers can submit plugins to share with others. The most famous examples are Firefox Add-ons and the WordPress Plugin Directory. Is there an open source web system designed specifically for this need, i.e. to host plugins? Not just one plugin, but all plugins for one piece of software? Ideally it should allow developers to upload a plugin, keep a public version for each update, and let client software check whether new plugins are available through the directory's API. As an example, a CMS could have one such directory to host and display theme files. Also, is there a better keyword to describe this kind of system?

  • Disable default error pages/error messages in IIS

    - by Antoine
    I have a web application (ASP.NET MVC 3) that, under certain conditions, returns a custom JSON string along with an HTTP error status code (403, 415, 500). It is deployed on a Windows 2008 R2 server with IIS 7.5. Initially I was getting the standard HTML pages for the errors instead of the JSON data, so I removed the error pages for these codes in the app settings. But now the queries that should return JSON data return a single error line. When the server gives me a 403, I get the message "You do not have permission to view this directory or page." (a bare line, no HTML around it). What can I do to deactivate this and finally get what the app is returning, rather than what the server wants to return?
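
    A sketch of the setting to try first (not verified against this exact app): tell IIS to pass the application's response through instead of substituting its own error body, via web.config:

        <system.webServer>
          <httpErrors existingResponse="PassThrough" />
        </system.webServer>

    The ASP.NET-level equivalent is setting Response.TrySkipIisCustomErrors = true on the responses in question.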

  • My website directories download instead of opening in the browser

    - by numerical25
    I added some screencasts to show what I am having issues with: http://screencast.com/t/212t3ANINqk http://screencast.com/t/bR44U1wkvNZl http://screencast.com/t/iDS7APYYsa The browser downloads my subdirectories instead of opening them and displaying each one's index file. Here is the situation: I am trying to get my web service up using MacPorts, and I am just trying to configure all the files. I am using PHP, Apache, etc. The localhost root loads, but anything beyond that cannot be found. Edit: I've tried adding the following to httpd.conf within the <IfModule mime_module> block, but no luck:

        AddType application/x-httpd-php .php
        AddType application/x-httpd-php .phtml
        AddType application/x-httpd-php .php3
        AddType application/x-httpd-php .php4
        AddType application/x-httpd-php .html
        AddType application/x-httpd-php-source .phps
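
    A guess at the missing piece (the module path is the MacPorts convention as best I recall, so treat it as an assumption): the AddType lines only matter if the PHP module is actually loaded, and directories need index.php listed as an index, e.g. in httpd.conf:

        LoadModule php5_module modules/mod_php5.so
        <IfModule dir_module>
            DirectoryIndex index.php index.html
        </IfModule>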

  • How to prevent Google Website Optimizer from making Google Analytics spike Direct Traffic and lower Bounce Rate?

    - by Scott
    I am using Google Website Optimizer (GWO) and Google Analytics. Whenever GWO performs a JavaScript redirect, Google Analytics changes the referrer; and when the referrer becomes your own site, the traffic source is recorded as Direct Traffic. For example: a visitor goes to Google, searches for my great service, and clicks the link to the website page /home/. At this point, Google Analytics tracks the referrer as Google. However, /home/ has a GWO JavaScript redirect to a battery of A/B tests: /home-1/ or /home-2/ or /home-3/. When the redirect from /home/ to /home-1/ occurs, Google Analytics on the /home-1/ page thinks the referrer is my own site and converts the visit to Direct Traffic, since Direct Traffic is the bucket for unknowns. I'm really surprised that GWO and GA interact this way when they both come from Google. So, how does one prevent GWO from overwriting the referrer?
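
    If I recall the classic ga.js API correctly (worth verifying against the GA docs, so treat this as an assumption), there is a _setReferrerOverride method intended for exactly this redirect case; called on the test pages before _trackPageview, it restores the referrer the visitor originally arrived with:

        // on /home-1/, /home-2/, /home-3/ -- originalReferrer is hypothetical:
        // the referrer captured on /home/ and passed along, e.g. in a
        // query parameter or cookie
        _gaq.push(['_setReferrerOverride', originalReferrer]);
        _gaq.push(['_trackPageview']);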

  • Help with Apache mod_rewrite rules

    - by Brian Neal
    I want to change some legacy URLs like this: /modules.php?name=News&file=article&sid=600 to this: /news/story/600/ This is what I have tried:

        RewriteEngine on
        RewriteCond %{QUERY_STRING} ^name=News&file=([a-z_]+)&sid=([0-9]+)
        RewriteRule ^modules\.php /news/story/%2/ [R=301,L]

    However, I still get 404s on the old URLs. I do have some other rewrite rules working, so I am pretty sure mod_rewrite is enabled and functioning. Any ideas? Thanks.
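
    Two details that commonly bite with rules like this (guesses, since the rule itself looks plausible): the pattern's leading slash depends on context, and by default the original query string is re-appended to the redirect target unless the substitution ends in "?":

        # in .htaccess context (no leading slash in the pattern);
        # the trailing "?" discards the original query string
        RewriteCond %{QUERY_STRING} ^name=News&file=([a-z_]+)&sid=([0-9]+)
        RewriteRule ^modules\.php$ /news/story/%2/? [R=301,L]

        # in httpd.conf/VirtualHost context the pattern needs the leading slash:
        # RewriteRule ^/modules\.php$ /news/story/%2/? [R=301,L]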

  • Resolving a CNAME using a different DNS

    - by Sandeep Singh Rawat
    I have a domain name (e.g. abc.com) registered at GoDaddy, and I have a few subdomains (mail, blog) correctly set up to point at different hosts. Now I want to park my domain with a parking host (seohosting.com), which asked me to change my nameservers to their DNS. What I want is to redirect DNS queries only for the www and @ names to seohosting.com, while still being able to use my other CNAMEs for my own purposes. Is there a way to do this? I don't have the host IP address for the parking host.
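
    As I understand it, the usual approach is to keep the GoDaddy nameservers and change only the individual records; a zone sketch (the parking hostname is hypothetical, and the mail/blog records stay untouched):

        www   IN  CNAME  park.seohosting.com.

    The catch is the bare domain (@): DNS doesn't allow a CNAME at the zone apex alongside other records, so @ normally needs an A record, which is why not having the parking host's IP is a real obstacle.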

  • Set up IIS 7.5 to deny connections from outside the LAN for a certain folder [migrated]

    - by Darkcat Studios
    I'm setting up a combined website and extranet currently; they both read from the same database on the same server the site is hosted on, the reason being that the website is fed from the data the staff enter through the extranet interface. It also links into AD for authorising access to the extranet. I have the extranet in a folder within the website folder. What I want to do is allow the extranet to be accessed only from computers within our LAN, while the main website remains freely accessible to internet users. It is currently set up as a generic web server, so anyone can view anything (well, up to the point where the user is asked to log into the extranet, of course!). I have read a lot on this but nothing I have read applies to, or works in, IIS 7.5.
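
    The closest built-in fit I'm aware of is IIS's IP and Domain Restrictions feature (a separate role service that has to be installed first). A web.config sketch scoped to the extranet folder (the folder name and LAN range are assumptions):

        <location path="extranet">
          <system.webServer>
            <security>
              <ipSecurity allowUnlisted="false">
                <!-- allow only the LAN -->
                <add ipAddress="192.168.0.0" subnetMask="255.255.255.0" allowed="true" />
              </ipSecurity>
            </security>
          </system.webServer>
        </location>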

  • PHP CodeSniffer: indentation of 2 is ignored; it only checks for 4

    - by Olivier Pons
    # phpcs --version
    PHP_CodeSniffer version 1.3.3 (stable) by Squiz Pty Ltd. (http://www.squiz.net)

    Trying to do this doesn't work:

        phpcs --tab-width=2 includes/json/item/categorie.php
        FOUND 29 ERROR(S) AND 3 WARNING(S) AFFECTING 24 LINE(S)

    This doesn't work either:

        phpcs includes/json/item/categorie.php --tab-width=2
        FOUND 29 ERROR(S) AND 3 WARNING(S) AFFECTING 24 LINE(S)

    If I indent the file with 4 spaces (which I don't want), the tab width no longer matters:

        phpcs --tab-width=2 includes/json/item/categorie.php
        FOUND 4 ERROR(S) AND 3 WARNING(S) AFFECTING 17 LINE(S)
        phpcs --tab-width=4 includes/json/item/categorie.php
        FOUND 4 ERROR(S) AND 3 WARNING(S) AFFECTING 17 LINE(S)
        phpcs --tab-width=50 includes/json/item/categorie.php
        FOUND 4 ERROR(S) AND 3 WARNING(S) AFFECTING 17 LINE(S)
        phpcs includes/json/item/categorie.php --tab-width=50
        FOUND 4 ERROR(S) AND 3 WARNING(S) AFFECTING 17 LINE(S)

    So the option is totally ignored. Is this a bug?
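
    A possible explanation rather than a bug (an assumption about how phpcs works, worth checking): --tab-width only controls how hard tab characters are expanded, so it never applies to a file indented with spaces. Changing the expected indent itself seems to require a custom ruleset, something like:

        <?xml version="1.0"?>
        <ruleset name="TwoSpaceIndent">
          <rule ref="PEAR"/>
          <rule ref="Generic.WhiteSpace.ScopeIndent">
            <properties>
              <property name="indent" value="2"/>
            </properties>
          </rule>
        </ruleset>

    used via: phpcs --standard=/path/to/ruleset.xml includes/json/item/categorie.php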

  • Website address points to localhost

    - by munir ahmad
    Today, while surfing in Firefox, I opened a website and got the warning "This site may harm your computer", with an option to add the site to an exception list if I wanted to open it anyway. I added it and trusted the site. Since then, whenever I open this website it points to localhost, as long as I'm connected to the internet. I have localhost set up through Apache (XAMPP). If the internet is not connected, the website doesn't open anything at all, but localhost still works as usual. How can I fix this so the website no longer points to localhost? I am using WinXP SP3. The same problem now appears in all browsers too.
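
    Since the problem affects all browsers, the first suspect would be the Windows hosts file rather than Firefox itself (an assumption; the domain below is a placeholder). Check C:\WINDOWS\system32\drivers\etc\hosts for a line like:

        127.0.0.1    www.the-affected-site.com

    Delete it if present, then flush the resolver cache:

        ipconfig /flushdns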

  • How to suppress PHPSESSID in URL for Googlebot?

    - by Roque Santa Cruz
    I use cookie-based sessions, and they work for normal interaction with our site. However, when Googlebot comes crawling, our PHP framework, Yii, appends ?PHPSESSID to each URL, which doesn't look good in the SERPs. Are there any ways to suppress this behavior? PS. I tried ini_set('session.use_only_cookies', '1');, but it did not work. PPS. To get an impression of the SERPs, they look like this: http://www.google.com/search?q=site:wwwdup.uni-leipzig.de+inurl:jobportal
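
    For what it's worth, these are the settings to try together (standard PHP ini directives; whether Yii overrides them at runtime is an assumption to verify):

        ini_set('session.use_only_cookies', '1');
        ini_set('session.use_trans_sid', '0');   // stop transparent SID propagation
        ini_set('url_rewriter.tags', '');        // stop URL rewriting entirely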

  • Does placing Google Analytics code in an external file affect statistics?

    - by Jacob Hume
    I'm working with an outside software vendor to add Google Analytics code to their web app so that we can track its usage. Their developer suggested that we place the code in an external .js file, which he could include in the layout of his application. The Stack Overflow question "Google Analytics: External .js file" covers the technical aspect, so apparently tracking via an external file is possible. However, I'm not quite satisfied that this won't have negative implications. Does including the tracking code as an external file affect the statistics collected by Google?
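
    For reference, the external file would just contain the standard classic async snippet; nothing about it is inline-specific (the account ID below is a placeholder):

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);  // hypothetical account ID
        _gaq.push(['_trackPageview']);
        (function() {
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol
                    ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
        })();

    The one plausible effect is timing: if the external file loads late (or is blocked), pageviews abandoned early may go uncounted.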

  • Strategies for very fast delivery of webpages

    - by Cherian
    I run a website, Cucumbertown, with an initial payload of nearly 9KB zipped. All my JS is delay-loaded with RequireJS; Modernizr is the only exception. All my webpages are cached by Nginx, and only 10-15% of hits go to the backend proxy; the cache is bypassed for logged-in users via proxy_cache_bypass, so for an anonymous user it's nearly always a cache hit. I have some basic OS tuning in place: initcwnd 15 on the default route and net.ipv4.tcp_slow_start_after_idle = 0. Despite everything being cached and a large initcwnd, my pages still take 2.5-3 seconds. (My YSlow score and PageSpeed results were shown in screenshots that are not included here.) Are there strategies that can help deliver webpages even faster than this, say at just over a second for a 10KB payload? Note: my servers run out of a fairly good Linode data center in Fremont.
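
    For concreteness, the tuning mentioned above corresponds to commands like these (the gateway IP is a placeholder):

        # raise the initial congestion window on the default route
        ip route change default via <gateway-ip> dev eth0 initcwnd 15
        # don't fall back to slow start after an idle connection
        sysctl -w net.ipv4.tcp_slow_start_after_idle=0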

  • Would using a self-signed SSL certificate be appropriate in this scenario?

    - by Kevin Y
    Now I realize this topic has been discussed in a few questions before (specifically this one), but I'm still a little confused about the implications of using a self-signed certificate, and how I would be affected by doing so in this case. After reading various sources, I'm still a little confused about the exact details of using one. "The biggest problem with a self-signed certificate is a man-in-the-middle attack. Even if you are 100% sure that you are on the correct website and you completely trust the site (your email server for example), you could have someone intercept the connection and present you with their own self-signed certificate. You would think that you are using a secure connection with your email server but you are really using a secure connection to an attacker's email server." – SSL Shopper So somebody could switch out my self-signed certificate with their own, and I wouldn't be able to detect it? The way this site phrases it makes it sound worse to install a self-signed certificate than to leave your site without a certificate at all. "Self-signed certificates cannot (by nature) be revoked, which may allow an attacker who has already gained access to monitor and inject data into a connection to spoof an identity if a private key has been compromised. CAs on the other hand have the ability to revoke a compromised certificate if alerted, which prevents its further use." – Wikipedia Does this mean that the only way someone could switch out their own certificate for mine is for them to find out the private key? I suppose this is more secure, but I'm still slightly confused about what exactly results from using a self-signed certificate. Is the only issue the obnoxious security warning that pops up in your browser when directed to the site, or is there more to it? Now, in my case, I want to add an SSL certificate to a minuscule WordPress blog I run that I don't expect anyone else to read anytime soon; I mainly started it to get into the habit of blogging, and to learn more about administrating a site (e.g. what to do in situations like this one). Whenever I go to the login page and see http:// instead of https://, I cringe a little. Submitting my password feels like shouting it out loud with hundreds of people listening. I don't plan on adding any other authors to the site, so I am the only person who would ever need to log in. This isn't a site I'm trying to get page views from, or one that handles e-commerce or any sensitive info like that; it's simply my username and password to log in with. One of the concerns of a self-signed certificate (from what I've gathered so far) is that non-technical users might be scared off by the security warning, but this would not be an issue in my case. TL;DR: If scaring visitors away isn't a concern (which it isn't in my case), is it acceptable to use a self-signed certificate for the purpose of encrypting my WordPress blog's password, or are there added security issues I should be aware of? Essentially, I'm wondering whether adding a self-signed certificate will be safer than leaving my login page the way it is now, or whether it adds the potential for more security breaches than leaving it sans SSL.

  • Am I correctly handling duplicate URLs for my homepage?

    - by Rob Goldstein
    I own a job search site, www.conservationjobboard.com, and have a concern about how the domain is viewed by search engines. When the site was first designed, the default page was left as default.php, but the homepage was actually JobBoard.php. To handle this, default.php performed a redirect to JobBoard.php whenever www.conservationjobboard.com/ was requested. The main problem arose because the redirect was a temporary one, causing search engines to index conservationjobboard.com/ and conservationjobboard.com/JobBoard.php as two separate pages. This has since been corrected via the .htaccess file, so JobBoard.php is now the default file for the root directory, eliminating the need for the redirect. The problem is that search engines still show both URLs in search results (one ending in JobBoard.php and one ending in /). Another potential problem is that some of my early backlinks point to conservationjobboard.com/JobBoard.php while the rest point to conservationjobboard.com. The two outstanding questions: 1. Is my domain still being penalized by search engines like Google for having duplicate homepage URLs? 2. Are all of the backlinks to my homepage now being counted together, or is the total number of backlinks split between the two URLs? If you think there are still issues with this setup, I would appreciate advice on what to do differently. Thanks.
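
    The standard consolidation step, if it applies here, is a permanent redirect from the old URL to the root. A sketch for .htaccess (the THE_REQUEST condition keeps the rule from looping now that JobBoard.php is also the directory index):

        RewriteEngine On
        RewriteCond %{THE_REQUEST} \s/JobBoard\.php [NC]
        RewriteRule ^JobBoard\.php$ / [R=301,L]

    A rel="canonical" link on the homepage pointing at http://www.conservationjobboard.com/ would serve as a belt-and-braces hint to search engines.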
