Search Results

  • Can search engine robots read a file with permission 640?

    - by dkjain
    I am on a shared Linux web hosting server. I want search engine robots/spiders to be able to read robots.txt, but not anyone who types www.mysite.com/robots.txt into a browser. According to the following Google Groups post, setting the file permission to 640 denies the world access to the robots.txt file while still letting search engine robots read it. Is that true? If not, how is it possible to deny the general public access to robots.txt while still allowing search engine robots to read it?
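
    Note that file permissions alone can't draw this distinction: crawlers fetch robots.txt over HTTP exactly as browsers do, so the web server reads the file in both cases. A user-agent filter is the usual approximation; a minimal .htaccess sketch assuming an Apache 2.2 host, with the caveat that User-Agent headers are trivially spoofed:

        <Files "robots.txt">
            # Let requests whose User-Agent looks like a known crawler
            # through; deny everything else. Obscurity, not security.
            SetEnvIfNoCase User-Agent "Googlebot|Bingbot|Slurp" allowed_bot
            Order Deny,Allow
            Deny from all
            Allow from env=allowed_bot
        </Files>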

    Read the article

  • How do you exclude yourself from Google Analytics on your website using cookies?

    - by Keoki Zee
    I'm trying to set up an exclusion filter with a browser cookie, so that my own visits to my site don't show up in my Google Analytics. I tried 3 different methods and none of them have worked so far. I would like help understanding what I am doing wrong and how I can fix it.

    Method 1: First, I tried following Google's instructions for excluding traffic by cookie content (http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55481): create a new page on your domain containing the following code:

        <body onLoad="javascript:pageTracker._setVar('test_value');">

    Method 2: When that didn't work, I googled around and found this Google thread (http://www.google.com/support/forum/p/Google%20Analytics/thread?tid=4741f1499823fcd5&hl=en), where the most popular answer says to use slightly different code. SHS Analytics wrote:

        <body onLoad="javascript:_gaq.push(['_setVar','test_value']);">

        Thank you! This has now set a __utmv cookie containing "test_value",
        whereas the original pageTracker._setVar('test_value') (which Google
        is still recommending) did not manage to do that for me (in Mac
        Safari 5 and Firefox 3.6.8).

    So I tried this code, but it didn't work for me either.

    Method 3: Finally, I searched Stack Overflow and came across this thread (http://stackoverflow.com/questions/3495270/exclude-my-traffic-from-google-analytics-using-cookie-with-subdomain), which suggests that the following code might work:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setVar', 'exclude_me']);
          _gaq.push(['_setAccount', 'UA-xxxxxxxx-x']);
          _gaq.push(['_trackPageview']);
          // etc...
        </script>

    This script appeared in the head element in the example, instead of in the onload event of the body like in the previous 2 examples. I tried this too, but still had no luck excluding myself from Google Analytics.

    To reiterate: I tried all 3 methods above with no success. Am I doing something wrong? How can I exclude myself from my Google Analytics using an exclusion cookie for my browser?
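
    For reference, a minimal self-contained version of the exclusion page under the async ga.js syntax, combining methods 2 and 3 (the property ID and cookie value are placeholders; the page must be served from the same domain the tracker runs on):

        <!DOCTYPE html>
        <html>
        <head>
          <script type="text/javascript">
            // Hypothetical exclusion page: queue the exclusion value before
            // the first pageview so the __utmv cookie gets written.
            var _gaq = _gaq || [];
            _gaq.push(['_setAccount', 'UA-XXXXXXXX-X']); // placeholder ID
            _gaq.push(['_setVar', 'exclude_me']);
            _gaq.push(['_trackPageview']);
            (function () {
              var ga = document.createElement('script');
              ga.type = 'text/javascript';
              ga.async = true;
              ga.src = ('https:' == document.location.protocol
                        ? 'https://ssl' : 'http://www')
                       + '.google-analytics.com/ga.js';
              var s = document.getElementsByTagName('script')[0];
              s.parentNode.insertBefore(ga, s);
            })();
          </script>
        </head>
        <body>
          <p>Exclusion cookie set for this browser.</p>
        </body>
        </html>

    The Analytics side then needs a custom filter on the profile: Exclude, filter field "User Defined", pattern exclude_me.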

    Read the article

  • Meta tags again. Good or bad to use them as page content?

    - by Guandalino
    From an SEO point of view, is it wise to use exactly the same page title value and keyword/description meta tag values not only as meta information, but also as page content? The example below illustrates what I mean. Thanks for any answer, best regards.

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
          "http://www.w3.org/TR/html4/strict.dtd">
        <html>
        <head>
          <title>Meta tags again. Good or bad to use them as page content?</title>
          <meta name="DESCRIPTION" content="Why it is wise to use (or not) page title, meta tags description and keyword values as page content.">
          <meta name="KEYWORDS" content="seo,meta,tags,cms,content">
        </head>
        <body>
          <h1>Meta tags again. Good or bad to use them as page content?</h1>
          <h2>Why it is wise to use (or not) page title, meta tags description and keyword values as page content.</h2>
          <ul>
            <li><a href="http://webmasters.stackexchange.com/questions/tagged/seo">seo</a>
            <li><a href="http://webmasters.stackexchange.com/questions/tagged/meta">meta</a>
            <li><a href="http://webmasters.stackexchange.com/questions/tagged/tags">tags</a>
            <li><a href="http://webmasters.stackexchange.com/questions/tagged/cms">cms</a>
            <li><a href="http://webmasters.stackexchange.com/questions/tagged/content">content</a>
          </ul>
          <p>Read the discussion on <a href="#">webmasters.stackexchange.com</a>.
        </body>
        </html>

    Read the article

  • Strategy for managing lots of pictures for a website

    - by Nate
    I'm starting a new website that will (hopefully) have a lot of user-generated pictures. I'm trying to figure out the best way to store and serve these pictures. The CMS I'm using (Umbraco) has a media library that puts a folder on the server for each image. Inside of there you can have different sizes of that same image. That folder has an ID on it, and the database has additional information for that image along with the ID of the folder. This works great for small sites, but what if the pictures get up to 10,000, 100,000 or 1,000,000? It seems like the lookup on the directory would take a long time to find the correct folder. I'm on Windows 2008, if that makes a difference. I'm not so worried about load: I can load-balance my server pretty easily and replicate the images across the servers. The nature of the site won't have a lot of users on it either, but it could have a lot of pics. Thanks. -Nate

    EDIT: After some thought, I think I'm going to create a directory for each user under a root image folder, then keep each user's pictures under that. I would be pretty stoked if I had even 5,000 users, so that shouldn't be too bad of a linear lookup. If it does get slow I will break it down into folders like /media/a/adam/image123.png. If it ever gets really big I will expand the above method to build a bigger tree. That would take a LOT of content though.
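
    A minimal sketch of the sharding scheme described in the edit, in C# since Umbraco runs on .NET (names are hypothetical):

        using System;
        using System.IO;

        static class MediaPaths
        {
            // Map a user + file to /media/<first letter>/<user>/<file>, so
            // no single directory grows without bound.
            public static string For(string userName, string fileName)
            {
                string shard = char.ToLowerInvariant(userName[0]).ToString();
                return Path.Combine(
                    Path.Combine("media", shard),
                    Path.Combine(userName.ToLowerInvariant(), fileName));
            }
        }

        // MediaPaths.For("Adam", "image123.png") -> media\a\adam\image123.png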

    Read the article

  • How do I prevent tampering with an AJAX process page? [closed]

    - by whamsicore
    I am using Ajax for processing with jQuery. The data string is sent to my process.php page, where it is saved. Issue: right now anyone can directly type example.com/process.php to access my process page, or type example.com/process.php?var1=foo1&var2=foo2 to emulate a form submission. How do I prevent this from happening? Also, in the Ajax code I specified POST. What is the difference here between POST and GET?
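
    On the last point: GET carries parameters in the URL and POST carries them in the request body, but neither stops someone from crafting a request by hand. The usual mitigation is a server-side check; a hedged sketch for process.php (PHP 5.6+ for hash_equals; the token field name is an assumption):

        <?php
        // Accept only POSTs that carry the CSRF token previously stored
        // in this visitor's session.
        session_start();

        if ($_SERVER['REQUEST_METHOD'] !== 'POST') {
            http_response_code(405);   // typing the URL issues a GET
            exit('Method not allowed');
        }

        if (!isset($_POST['token'], $_SESSION['token'])
            || !hash_equals($_SESSION['token'], $_POST['token'])) {
            http_response_code(403);   // hand-crafted or cross-site request
            exit('Invalid request');
        }

        // ... safe to save the submitted data here ...

    The page that issues the Ajax call would generate the token (e.g. bin2hex(openssl_random_pseudo_bytes(16))), store it in $_SESSION['token'], and include it in the POST data.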

    Read the article

  • When is meta description still relevant?

    - by Jeff Atwood
    I received this bit of advice about the meta description tag recently:

        Meta descriptions are used by Google probably 80% of the time for
        the snippet. They don't help with rankings but you should probably
        use them. You could just auto generate them from the first part of
        the question.

    The description tag exists in the header, like so:

        <meta name="Description" content="A brief summary of the content on the page.">

    I'm not sure why we would need this field, as Google seems perfectly capable of showing the relevant search terms in context in the search result pages, like so (I searched for "c# list performance"; the screenshot of the resulting snippet is omitted here). In other words, where would a meta description summary improve these results? We want the page to show context around the actual search hits, not a random summary we inserted! Google Webmaster Central has this advice:

        For some sites, like news media sources, generating an accurate and
        unique description for each page is easy: since each article is
        hand-written, it takes minimal effort to also add a one-sentence
        description. For larger database-driven sites, like product
        aggregators, hand-written descriptions are more difficult. In the
        latter case, though, programmatic generation of the descriptions can
        be appropriate and is encouraged -- just make sure that your
        descriptions are not "spammy." Good descriptions are human-readable
        and diverse, as we talked about in the first point above. The
        page-specific data we mentioned in the second point is a good
        candidate for programmatic generation.

    I'm struggling to think of any scenario where I would want the Google-generated summary, that is, actual context from the page for the search terms, to be replaced by a hard-coded meta description summary of the question itself.
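
    Since the quoted advice suggests auto-generating descriptions from the first part of the question, a hedged C# sketch of that (the truncation length and helper name are assumptions):

        using System.Text.RegularExpressions;

        static class Seo
        {
            // Strip markup from the question body and truncate at a word
            // boundary near the ~155 characters a snippet displays.
            public static string MetaDescription(string html, int max = 155)
            {
                string text = Regex.Replace(html, "<[^>]+>", " ");
                text = Regex.Replace(text, @"\s+", " ").Trim();
                if (text.Length <= max) return text;
                int cut = text.LastIndexOf(' ', max);
                return text.Substring(0, cut > 0 ? cut : max) + "...";
            }
        }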

    Read the article

  • .htaccess rewrite www to non-www and remove .html

    - by lester8891
    I need rewrite rules to redirect the following:

        http://www.  to  http://
        /file.html   to  /file

    I've tried using a combination of these, but each time it results in a redirect loop in one of the situations:

        RewriteBase /
        RewriteRule ^(.*)\.html$ $1 [NC]
        RewriteCond %{HTTP_HOST} ^www.domain.co.uk [NC, L]
        RewriteRule ^(.*)$ http://domain.co.uk/$1 [R=301]

    I figure it's probably something to do with the flags, but I don't know how to fix it. Just to be clear, it needs to handle all of these situations:

        http://www.domain.co.uk           to  http://domain.co.uk
        http://www.domain.co.uk/file.html to  http://domain.co.uk/file
        http://domain.co.uk               to  http://domain.co.uk
        http://domain.co.uk/file.html     to  http://domain.co.uk/file

    Thanks!
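
    For reference, a sketch of one ordering that avoids the loop, assuming Apache mod_rewrite. Two details matter: the [NC, L] on the RewriteCond is itself invalid (conditions only accept NC/OR, and flag lists may not contain spaces), and the external ".html" redirect should key off %{THE_REQUEST}, which always holds the original request line, so the internal rewrite back to the .html file can't re-trigger it:

        RewriteEngine On
        RewriteBase /

        # 1) strip www (the redirected request is then processed again)
        RewriteCond %{HTTP_HOST} ^www\.domain\.co\.uk$ [NC]
        RewriteRule ^(.*)$ http://domain.co.uk/$1 [R=301,L]

        # 2) externally redirect /file.html to /file
        RewriteCond %{THE_REQUEST} \.html[\s?] [NC]
        RewriteRule ^(.*)\.html$ /$1 [R=301,L]

        # 3) internally map /file back to /file.html when that file exists
        RewriteCond %{REQUEST_FILENAME}.html -f
        RewriteRule ^(.*)$ $1.html [L]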

    Read the article

  • Easy user management on an HTML site?

    - by James Buldon
    I hope I'm not asking a question for which the answer is obvious... If I am, apologies. Within my HTML site (i.e. not WordPress, Joomla, etc.) I want to have a level of user management, meaning that some pages should be accessible only to certain people with the correct username and password. What's the best way to do this? Are there any available scripts out there? I guess I'm looking for a free/open-source version of something like this: http://www.webassist.com/php-scripts-and-solutions/user-registration/
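
    The lowest-effort option for a static site, assuming an Apache host, is HTTP basic authentication on the protected directory (paths below are placeholders):

        # .htaccess inside the members-only directory
        AuthType Basic
        AuthName "Members only"
        # password file created with: htpasswd -c /home/you/.htpasswd alice
        AuthUserFile /home/you/.htpasswd
        Require valid-user

    Per-page rules or self-registration would need a small PHP session-based login script instead, which is roughly what the linked product provides.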

    Read the article

  • http-equiv=content-language alternative - the way of specifying document language

    - by tugberk
    Lots of web sites use the following meta tag to specify the default language of the document:

        <meta http-equiv="content-language" content="es-ES">

    But when I go to the W3C page http://www.w3.org/TR/2011/WD-html-markup-20110113/meta.http-equiv.content-language.html#meta.http-equiv.content-language I get this:

        Using the meta element to specify the document-wide default language
        is obsolete. Consider specifying the language on the root element
        instead.

    So what is the current way of specifying the document language?
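
    The "root element" the W3C text points to is the html element itself, so the replacement looks like this:

        <!DOCTYPE html>
        <!-- lang on the root element replaces the http-equiv meta tag -->
        <html lang="es-ES">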

    Read the article

  • Will ranking be affected with a mobile XML sitemap for a mobile site with canonical URLs?

    - by Emil Rasmussen
    We have a site with both a desktop version and a mobile version. Most of the content is the same, and both versions share the same URLs, but the HTML generated is device-specific. Looking at Google's recommendations for smartphone-optimized sites, one could get the impression that the mobile XML sitemap is only for sites with different URLs. Will ranking be affected, negatively or positively, if we add a mobile XML sitemap that will effectively be a duplicate of the desktop sitemap?

    Read the article

  • Will rewriting your .htaccess to 404 to return search results from your site negatively affect your ranking in Google?

    - by leeand00
    Depending on the type of site you are running, it may or may not be advantageous to display search results instead of a 404 page when someone visits a non-existent page on your site. I believe the site I've been maintaining recently would benefit from this, as it is the site of a publication, and with a publication the more people you can get to read your site the better. But after reading up on how Google ranks the "quality" of your site (where you appear in SERPs is based in part on how well the meta text of a page relates to its content), I have to wonder if pointing the 404 page at the search results would harm the "quality" of the site in Google's eyes.
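
    One way to wire this up, sketched under the assumption of Apache plus PHP 5.4+ (search.php is a made-up name). The key detail is keeping the real 404 status: serving search results with a 200 creates "soft 404"s, which is what Google actually frowns on.

        # .htaccess
        ErrorDocument 404 /search.php

        <?php
        // search.php: when invoked via ErrorDocument, Apache exposes the
        // missing path in $_SERVER['REDIRECT_URL'].
        http_response_code(404);   // stay a genuine 404, not a soft 404
        $missing = isset($_SERVER['REDIRECT_URL']) ? $_SERVER['REDIRECT_URL'] : '/';
        $terms   = trim(preg_replace('/[^a-z0-9]+/i', ' ', $missing));
        // ... render site search results for $terms here ...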

    Read the article

  • Gracefully terminate a request-based service on the server

    - by Jatin
    In our web application, a lot of computation happens on the back end for each HTTP request. Output can take anywhere from 10 seconds to 1 hour, and while it is being computed, "Waiting..." is shown on the website for the respective user. But it can happen that the user abandons the request partway through. What can be done on the back end so that the computation can be stopped midway to save resources? What different tactics can be applied here? Rather than killing the thread directly, a graceful termination policy would work wonders.
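
    A common tactic for this is cooperative cancellation: the worker checks a flag at safe points and unwinds cleanly instead of being killed. A minimal sketch, in Java for concreteness, assuming the computation runs under an ExecutorService (the step granularity is an assumption):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class CancellableJob {
            public static void main(String[] args) {
                ExecutorService pool = Executors.newSingleThreadExecutor();
                Future<?> job = pool.submit(() -> {
                    for (int step = 0; step < 1_000_000; step++) {
                        if (Thread.currentThread().isInterrupted()) {
                            // flush partial state, release resources, stop
                            return;
                        }
                        // ... one bounded unit of the long computation ...
                    }
                });

                // When the HTTP client disconnects, cancel instead of kill:
                job.cancel(true);  // true => interrupt the worker thread
                pool.shutdown();
            }
        }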

    Read the article

  • Inserting multiple links to one image in Confluence

    - by Simon
    I am setting up a wiki in Confluence v3.5.1. I have added a Visio diagram (JPG) to a page; this diagram will take up most of the page and depicts the workflow between developers, support and clients. I envisage users being able to click on different parts of the diagram to open child pages with more details about that particular process (with 'how-to' videos for specific tasks, like logging issues in Jira). However, from what I can see, there is no way from the Confluence editor to add multiple links to the one image, right? I looked at anchors, but they do not look like they will do the job. So, what is the best option? I remember Dreamweaver having these sorts of tools built in, and there appear to be other utilities that can help produce image-map HTML tags, but I cannot see a way of easily editing the HTML in the Confluence editor. I'm also worried about the headache this could cause with managing future changes to the page.
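
    For reference, the markup those Dreamweaver-style tools generate is a plain HTML image map; whichever macro ends up hosting it in Confluence, it looks like this (file name, URLs and coordinates below are made up):

        <img src="workflow.jpg" usemap="#workflow" alt="Support workflow">
        <map name="workflow">
          <!-- each area is one clickable region of the diagram -->
          <area shape="rect" coords="10,10,150,90"
                href="/display/WIKI/Log+issues+in+Jira" alt="Log issues in Jira">
          <area shape="rect" coords="170,10,320,90"
                href="/display/WIKI/Developer+workflow" alt="Developer workflow">
        </map>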

    Read the article

  • % new visitor vs. % returning visitor

    - by Torben Gundtofte-Bruun
    I'm not sure how to interpret the results in Google Analytics. I understand that some metrics should be high, and some should be low. But this one I don't get: % new visitor vs. % returning visitor. It's good that users are returning, but surely it's also good to get new, fresh visitors. How do I evaluate this %-vs-% ratio?

    The higher the better:
    - visits
    - unique visitors
    - pageviews
    - pages per visit
    - avg. visit duration

    The lower the better:
    - bounce rate
    - drop-offs

    Read the article

  • How important are SEO Friendly URLs [closed]

    - by nute
    Possible Duplicate: Is a URL with a query string better or worse for SEO than one without?

    Currently, my URLs look something like http://mydomain.ext/question/5, where "question" is the controller and 5 is the ID of the object or article retrieved. In theory I could spend some development time and some server resources to have URLs that contain more information about the page loaded. However, seeing how websites like YouTube and many others keep simple URLs with just an ID, I have to ask: does it matter? Is it worth it?

    Read the article

  • case-specific mod rewrite on Wordpress subdomain multisite

    - by Steve
    I have split a WordPress blog into multiple category-specific blogs using subdomains, as the topics in the original blog were too broad to be lumped together effectively. Posts were exported from the parent www blog and imported into the subject-specific subdomain blogs. I believe .htaccess provides mod_rewrite for all subdomains (including the original www) in a single .htaccess file. I use .htaccess to perform a 301 redirect on post categories to the relevant post on the subdomain's blog, e.g.:

        RedirectMatch 301 ^/auto/(.*)$ http://auto.example.com/$1

    The problem I have is that the category has been retained in the permalink structure of the subdomain blog, so that www.example.com/auto/mercedes is now auto.example.com/auto/mercedes. The first URL is redirected to the second, but unfortunately the second URL is then redirected to auto.example.com/mercedes by the same rule, which is not found, as the permalink on the subdomain's blog retains the parent category of "auto". The solution would be to adjust the permalink structure in the subdomain's WP settings, so that the top-level category does not duplicate the subdomain. My question would then be: how do I strip a section of the original (www) blog's post URL from the subdomain's URL when redirecting? E.g., how do I redirect www.example.com/auto/mercedes to auto.example.com/mercedes? I'm assuming this would be a regular-expression trick, which I am not great at.

    Update: I might have to use:

        RewriteCond %{HTTP_HOST} !auto.example.com$

    in the default WordPress if-loop in .htaccess, and separate my custom subdomain redirections into a second section.
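
    The update is on the right track: RedirectMatch belongs to mod_alias and cannot test the host name, but mod_rewrite can. A hedged sketch, placed above the standard WordPress block:

        RewriteEngine On
        # Fire only on the www/bare host, never on the subdomain itself,
        # so auto.example.com/auto/... is left alone once permalinks change.
        RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
        RewriteRule ^auto/(.*)$ http://auto.example.com/$1 [R=301,L]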

    Read the article

  • Domain masking (and simple page links)

    - by Halik
    How do you set up the domain (I'm using GoDaddy) to mask the server URL but still append the sub-page link? I'm thinking of something like Wikipedia's en.wikipedia.org/wiki/something (or, if it would require httpd.conf access, setting it to append the default sub-page link, e.g. '?page_id=2'). Currently I can set up the domain either to be masked completely without showing any sub-page links, or to simply redirect my domain to my web server.
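
    If httpd.conf access does turn out to be available, one alternative to registrar masking (which is frame-based, and so hides sub-page paths) is a reverse proxy; a hedged sketch with a made-up backend host, assuming mod_proxy and mod_proxy_http are enabled:

        <VirtualHost *:80>
            ServerName www.example.com
            # Browser keeps showing this domain plus the full sub-page path
            ProxyPass        / http://backend.example.net/
            ProxyPassReverse / http://backend.example.net/
        </VirtualHost>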

    Read the article

  • Meta description of my blog post changes

    - by Aadarsh sojitra
    I have a problem with the meta description tags on my Blogger blog. When I update my pages in the search engine with the help of the Fetch as Google feature in GWT, all my blog's results come back with the correct meta description, like:

        Today I am back with a reason that "WHY IS ORIGINAL MEMORY IN
        HARD-DISK IS LESS THAN PRINTED" on a box. If we buy any hard-disk
        or a pen drive...

    But after approximately 5-6 days, it changes to my blog's default meta description. This also happens after changing the default meta description of my blog. My question is simply: why is this happening? After deleting my blog and creating a new blog with the same name, the problem went away; why did that solve it? I am asking so that I can solve similar problems in the future.

    Read the article

  • Web Hosting Advice for Project [duplicate]

    - by Lea Hayes
    Possible Duplicate: How to find web hosting that meets my requirements?

    I am working on a project that will be released as open source later in the year. I am starting to think about how the accompanying website will be hosted, and would greatly appreciate some advice.

    Requirements:

    Domain #1
    - Information about the project itself (just pages and pictures).
    - Documentation / wiki
    - Forums
    - Download of project source (approx. 3MB archive)
    - Download of various themes and community-contributed content (est. sizes 10KB ~ 512KB).

    Domain #2
    - Primary company website that offers products and services. This will be primarily pictures and pages.

    What kind of web hosting would be best for a project like this? I am working on a very tight budget and can only afford to spend up to £250 per year on hosting. I was considering some sort of VPS hosting and found the following companies, which seem to offer around this price range but have very mixed reviews:

    - http://www.webhosting.uk.com/
    - http://www.eukhost.com/
    - GoDaddy UK
    - uk2.net

    My company is based in the UK; how important is it for me to use UK-based hosting? There are plenty of overseas hosting companies that are considerably cheaper. And when it comes to bandwidth, how many downloads will 100GB of bandwidth get me? Any advice would be very greatly appreciated!
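
    On the bandwidth question, a back-of-envelope check using only the 3MB archive size given (real capacity will be lower once pages, pictures and themes are counted):

        # 100 GB monthly allowance vs. a ~3 MB download
        allowance_mb = 100 * 1024
        archive_mb = 3
        print(allowance_mb // archive_mb)  # ~34,000 full downloads/month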

    Read the article

  • Wordpress html email plugin that doesn't merge with your users?

    - by christian
    I just launched a WordPress site as a redesign of my strictly HTML site, and added my Blogger blog to WordPress to keep it all in one place. I am trying to send an HTML email from my WordPress site to all of my Blogger subscribers, encouraging them to subscribe to my new blog and join my newly created membership program. But every HTML email plugin (that I have found) imports your list as WordPress users before you can send them an email. Is there a plugin that allows me to send an HTML email to a list of email addresses without adding them as users? Otherwise I will have to import the list, send the email, and then go through and delete all of those users, all while trying to pick out my legitimate users who have already signed up and watching for new sign-ups so I don't delete them.
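
    If no such plugin turns up, a plugin-free sketch using WordPress's own wp_mail(), which sends to arbitrary addresses without creating users (addresses and copy are placeholders):

        <?php
        // Hypothetical one-off snippet run inside WordPress.
        $subscribers = array('reader1@example.com', 'reader2@example.com');
        $subject     = 'The blog has moved!';
        $body        = '<h1>We have a new home</h1><p>Come subscribe at the new site.</p>';
        $headers     = array('Content-Type: text/html; charset=UTF-8');

        foreach ($subscribers as $address) {
            wp_mail($address, $subject, $body, $headers);
        }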

    Read the article

  • Is eCPM dropping by about 50% in January usual behavior on Google AdSense?

    - by Andrew G. Johnson
    So I just got semi-serious about running some AdSense sites over the past 6 months, and the eCPMs have hovered between 1.38 and 1.42 (yes, it's that close) when I look at the eCPM for each month. Obviously there is some deviation day to day, but it is pretty damn close to a buck forty in aggregate. So far for January I am sitting at an eCPM of 0.80. I know it's not a huge sample size, but the daily pageviews are fairly consistent (actually a bit higher) compared to where they were in December. I am trying to justify this by thinking that a lot of ad buyers somehow buy inventory for the year and have to get set up to do another big buy now that it's a new calendar year, but that thought isn't close to comforting. Is this happening to anyone else?

    EDIT: I run a lot of websites, and the ratios of pageviews are about the same this month vs. last month; but just to be clear, the eCPM I posted is for 20 websites in a variety of niches, so it doesn't accurately depict any one domain.

    Read the article

  • Install Moodle to subdomain with Softaculous via cPanel

    - by Sean
    I installed Moodle to a directory with Softaculous. Since Softaculous doesn't allow installing to a subdomain, after installing I created a subdomain and pointed its destination to the previously created Moodle directory. Now when I go to subdomain.example.com it says:

        Incorrect access detected, this server may be accessed only through
        "http://example.com/moodle" address, sorry. Please notify server
        administrator.

    I must be doing something wrong; when installing, I followed something very similar to these instructions. Any suggestions would be much appreciated.
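
    That message is what Moodle prints when the requested URL doesn't match $CFG->wwwroot in config.php, so the likely fix (sketched with the placeholder domain) is to repoint it:

        <?php
        // In config.php inside the Moodle directory:
        $CFG->wwwroot = 'http://subdomain.example.com';
        // was: $CFG->wwwroot = 'http://example.com/moodle';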

    Read the article

  • Hide email address with JavaScript

    - by Martin Aleksander
    I read somewhere that hiding an email address behind JavaScript code could reduce spam bots harvesting the address:

        <script language="javascript" type="text/javascript">
          var a = "Red";
          var t = "no";
          var doc = document;
          var b = "ITpro";
          var ad = a;
          ad += "@";
          ad += b;
          ad += ".";
          ad += t;
          var mt = "ma";
          mt += "il";
          mt += "to";
          var text = "";
          if (text == null || text.length == 0)
            text = ad;
          doc.write("<"+"a hr"+"ef=\""+mt+":"+ad+"\">"+text+"</"+"a>");
        </script>

    This will not display the actual email address in the source code of the page, but it will display and work like a normal link for human users. Is there any point in doing this? Will it reduce spam bots, or is it just nonsense that might slow down page performance because of the JavaScript?

    Read the article

  • Setting up page goals in Analytics when using progressive enhancement to load content with jQuery .load

    - by sam
    I'm using jQuery .load to pull content from other pages into my homepage. So that Google can still see what's going on, I've made the <a> tags point to the real pages, but I override them in the JavaScript: instead of going to that page, it just loads the content from that page into the main page. Normally I would just make the page /contact.html a goal. Can I still get it to work as a goal if the content is being loaded in? Can I do something like: when the user clicks

        <a href="contact.html" id="load-contact">contact</a>

    it logs the clicking of the <a> tag as a goal, rather than the actual page being visited?
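
    The usual pattern for this is a "virtual pageview": push a made-up URL into the tracker from the click handler, then set the goal to match that URL. A hedged sketch assuming the async ga.js queue (_gaq); the container id and virtual path are invented:

        $('#load-contact').click(function (e) {
            e.preventDefault();                        // stay on the homepage
            $('#content').load($(this).attr('href'));  // progressive enhancement
            _gaq.push(['_trackPageview', '/virtual/contact']);
        });

    The goal then stays a normal "URL Destination" goal matching /virtual/contact.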

    Read the article

  • Email sent via Google via relayhost being marked as spam

    - by Mark H
    Company email is hosted by Google Apps, and the company's in-house PBX is Elastix. All voicemails received on Elastix extensions are supposed to be emailed by the CentOS server (Postfix) to the employee's email address. Using relayhost in Postfix, I am sending those emails through Google Apps (smtp.gmail.com), but some of these voicemail emails end up in spam. They are sent through Google, to an address hosted by Google, and yet there's still spam. Email sent from the Google Apps interface draws no complaints about going to spam; it's just mail from the Elastix server. I've just asked our DNS domain guys to add SPF records, but is that all that's needed? Some help please!
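
    For reference, a hedged example of the SPF TXT record for a domain that relays all mail through Google (placeholder domain); making sure the voicemail From: address is a real mailbox on the domain, and enabling DKIM signing in Google Apps, are the usual companion fixes:

        example.com.  3600  IN  TXT  "v=spf1 include:_spf.google.com ~all"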

    Read the article
