Search Results

Search found 5380 results on 216 pages for 'webmasters'.

Page 51/216 | < Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >

  • Disallow all user agents except one using .htaccess?

    - by Kian Mayne
    I've been struggling to get this .htaccess file working. The aim is to deny all user agents besides my app, which sends a GET request with a user agent of, let's say, 'AcmeUpdater'. Whenever I try to navigate to any file in the folder, I get a 500 Internal Server Error. Here are the rules I'm using:

        <IfModule mod_rewrite.c>
        Options +FollowSymLinks
        RewriteEngine on
        RewriteBase /
        RewriteCond %{HTTP_USER_AGENT} !^KMUpdaterClient*
        RewriteRule .* - [F,L]
        </IfModule>

    I have updated the .htaccess file as suggested in the answer by Nick and restarted Apache. After trying a couple of different things, it seems that just the presence of a .htaccess file is causing the 500 error, and I'm getting nothing in the error logs. The .htaccess file at the document root looks like the following:

        <IfModule mod_rewrite.c>
        Options +FollowSymLinks
        ErrorDocument 404 /index.php?error=404
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        </IfModule>

    Then I realised that the error logs were in chronological order rather than the reverse chronological order I expected (oops!). The error I'm getting is: </IfModule> without matching <IfModule> section. I removed the </IfModule> and I still get that error. Ideas?
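    A minimal sketch of this kind of user-agent allowlist, assuming the real agent string begins with 'KMUpdaterClient' (note that the trailing * in ^KMUpdaterClient* only makes the final 't' optional rather than acting as a wildcard):

        <IfModule mod_rewrite.c>
        RewriteEngine On
        # Return 403 Forbidden unless the User-Agent starts with KMUpdaterClient
        RewriteCond %{HTTP_USER_AGENT} !^KMUpdaterClient [NC]
        RewriteRule .* - [F,L]
        </IfModule>

    A .htaccess file that Apache cannot parse at all, whether from a stray BOM, odd line endings, or an AllowOverride setting that forbids the directives used, produces exactly this kind of blanket 500 error, often with the real cause recorded only in the main server error log.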

    Read the article

  • Password-free logins using your email address only?

    - by Mario
    The state of logins is horrendous. With each site having its own rules for passwords, it can be very hard to remember which variation you used on any given site. Logins are pure pain. One thing I love about Craigslist is that it did away with logins altogether. I know this design may not suit every site, but there's something to their design that begs to be repeated. OpenID is great on sites that have adopted it, but it's still not standard. Would it be feasible/wise to use an email address as a login and provide no password? The site would send a short-term key directly to your email address; you click on the link and you're in. When you're done, you "logout" and your key is terminated. I've toyed with this idea before. What concerns (i.e. spammers, bots, etc.) would make this impractical or unsafe, and could they be overcome?
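    A sketch of how such a passwordless "magic link" flow might look on the server side, in PHP; the table and column names here are hypothetical, and the key points are an unguessable token, a short expiry, and single use:

        <?php
        // Issue a one-time login link (a sketch; assumes a `login_tokens`
        // table with columns token_hash, email and expires_at).
        function send_login_link(PDO $db, $email)
        {
            $token = bin2hex(openssl_random_pseudo_bytes(32)); // unguessable
            $stmt = $db->prepare(
                'INSERT INTO login_tokens (token_hash, email, expires_at)
                 VALUES (?, ?, DATE_ADD(NOW(), INTERVAL 15 MINUTE))'
            );
            $stmt->execute(array(hash('sha256', $token), $email)); // store only a hash

            $link = 'https://example.com/login?token=' . $token;
            mail($email, 'Your login link', 'Click to log in: ' . $link);
        }

        // The /login endpoint then hashes the incoming token, looks up the row,
        // checks the expiry, deletes the row (single use) and starts a session.

    Rate-limiting how often a link can be requested per address is the usual defence against bots using the form to spam arbitrary inboxes, which is probably the biggest practical concern raised above.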

    Read the article

  • images within noscript

    - by Guilherme Nascimento
    Note: my question is not about JavaScript. Note: my question is about how to make the HTML accessible to search engines. Note: my question is not about hiding text; it is about blocking the loading of images in order to use LazyLoad.

    I tested various techniques for blocking the loading of images so I can apply a LazyLoad effect (I'm developing in JavaScript), and the only efficient one was <noscript>. The HTML structure would look as follows; with LazyLoad, the loading of images is triggered by the viewport (the visible area of the website in the browser):

        <p>Lorem ipsum dolor sit amet,
          <span class="lazyload">
            <noscript><img src="foto-m0101.jpg" alt="image description"></noscript>
          </span>
          consectetur adipiscing elit.
        </p>
        <p>Lorem ipsum dolor sit amet,
          <span class="lazyload">
            <noscript><img src="foto-m0201.jpg" alt="image description"></noscript>
          </span>
          consectetur adipiscing elit.
        </p>
        <p>Lorem ipsum dolor sit amet,
          <span class="lazyload">
            <noscript><img src="foto-m0301.jpg" alt="image description"></noscript>
          </span>
          consectetur adipiscing elit.
        </p>

    Is this bad practice for search engines? If it is, could you give an example of good practice? If there is any other known issue with images inside noscript, forgive me; I did not find any existing questions about noscript with images.
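    One common alternative pattern, sketched below (with no guarantee about how any given engine indexes it), keeps a real <img> inside <noscript> for crawlers and non-JS users, while the script-facing copy defers loading via a data-src attribute:

        <span class="lazyload">
          <!-- JS swaps data-src into src once this scrolls into the viewport -->
          <img data-src="foto-m0101.jpg" alt="image description">
          <!-- Crawlers and non-JS browsers read this copy -->
          <noscript><img src="foto-m0101.jpg" alt="image description"></noscript>
        </span>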

    Read the article

  • When is meta description still relevant?

    - by Jeff Atwood
    I received this bit of advice about the meta description tag recently:

        Meta descriptions are used by Google probably 80% of the time for the snippet. They don't help with rankings but you should probably use them. You could just auto generate them from the first part of the question.

    The description tag exists in the header, like so:

        <meta name="Description" content="A brief summary of the content on the page.">

    I'm not sure why we would need this field, as Google seems perfectly capable of showing the relevant search terms in context in the search result pages. (I searched for "c# list performance"; the result snippets showed text from the page surrounding the matched terms.) In other words, where would a meta description summary improve these results? We want the page to show context around the actual search hits, not a random summary we inserted! Google Webmaster Central has this advice:

        For some sites, like news media sources, generating an accurate and unique description for each page is easy: since each article is hand-written, it takes minimal effort to also add a one-sentence description. For larger database-driven sites, like product aggregators, hand-written descriptions are more difficult. In the latter case, though, programmatic generation of the descriptions can be appropriate and is encouraged -- just make sure that your descriptions are not "spammy." Good descriptions are human-readable and diverse, as we talked about in the first point above. The page-specific data we mentioned in the second point is a good candidate for programmatic generation.

    I'm struggling to think of any scenario in which I would want the Google-generated summary, that is, actual context from the page for the search terms, to be replaced by a hard-coded meta description summarising the question itself.
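    For what it's worth, the auto-generation suggested in that advice is cheap to do; a sketch in PHP, with hypothetical truncation rules:

        <?php
        // Build a meta description from the first ~160 characters of the
        // question body (a sketch; strips markup and cuts at a word boundary).
        function meta_description($body, $max = 160)
        {
            $text = trim(preg_replace('/\s+/', ' ', strip_tags($body)));
            if (strlen($text) > $max) {
                $cut = strrpos(substr($text, 0, $max), ' ');
                $text = substr($text, 0, $cut === false ? $max : $cut) . '...';
            }
            return htmlspecialchars($text, ENT_QUOTES);
        }

        echo '<meta name="description" content="' . meta_description($body) . '">';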

    Read the article

  • How To Make FileZilla Open All The Required Files With One Click

    - by Omar Tariq
    Is there any way of configuring FileZilla so that I can open, with just one click, all the files on a server that I regularly edit? For example, if the files are laid out like this:

        /home/abc/def/one.txt
        /home/abc/def/yet/another/directory/two.txt
        /home/abc/def/ghi/yet/another/directory/three.txt

    then it is very time-consuming to navigate through each directory and open the required files. These are only 3 files, but what if we have around 10 to 20? Yes, copying the path of each directory is one option. But something built in, a button like "open all the required files of this connection" that opens every file in the editor set in FileZilla's preferences, would be great!

    Read the article

  • Multi database link and mix and match email alert

    - by menardmam
    I have a site which is a large database of people with knowledge in different domains, such as teaching (maths, French, science, etc.). The site has a page where you can search for people based on different criteria, such as distance from home, grade, and sex. Now I would like to add a page where people looking for a mentor can file a request; when a tutor matching the request appears in their area of search, an email is sent to the requester. I know for sure that if in January you look for a maths teacher for your 10-year-old son and find none, you won't come back in February, March, and on and on just to check. If a suitable tutor appears later, you want to be informed automatically when the tutor enters the database (more or less like www.jobboom.com). So the question is: what CMS do I need to be able to do that? WordPress, Drupal, or something custom made?

    Read the article

  • Where can I get a cheap database, no web hosting needed

    - by PhilipK
    I'm building an application which requires a fairly small online MySQL database. I don't need any web hosting. What are some cheap options for an online database?

    Edit (a bit more about what I'll be using it for): the database itself is very small; it contains market statistics for 5 weeks of time. Once a week the data is updated, so that it always contains the most recent 5 weeks. I then use that data to create an XML file which is generated with PHP. The XML file will need to be accessed hundreds to thousands of times per month.
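    Given that access pattern, one way to keep the database load (and therefore the cost) tiny is to regenerate the XML once after each weekly update and serve it as a static file, so the thousands of reads never touch MySQL; a PHP sketch with hypothetical table and file names:

        <?php
        // Weekly cron job: dump the 5-week stats table to a static XML file.
        // (A sketch; assumes a `stats` table with `week` and `value` columns.)
        $db  = new PDO('mysql:host=localhost;dbname=market', 'user', 'pass');
        $xml = new SimpleXMLElement('<stats/>');

        foreach ($db->query('SELECT week, value FROM stats ORDER BY week') as $row) {
            $entry = $xml->addChild('entry');
            $entry->addChild('week', $row['week']);
            $entry->addChild('value', $row['value']);
        }

        // Write atomically so readers never see a half-written file.
        file_put_contents('/var/www/html/stats.xml.tmp', $xml->asXML());
        rename('/var/www/html/stats.xml.tmp', '/var/www/html/stats.xml');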

    Read the article

  • Google Site Search -- How to use as API?

    - by John Isaacks
    I am trying to get an API that I can use to do searches on my own site. Google has something called Site Search and something called Custom Search. What is the difference? When I make a new site search, it then gets listed on a page with "custom search" in the heading, which is really confusing. I just want an API that I can use to search my site, and I would prefer JSON to XML for the results. If this service is offered by someone other than Google, that is fine too. The ones I create at Google want me to embed a premade search box into my site. I do not want that; I want an API that I can call from PHP or JS. How can I get this?
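    For reference, Google's Custom Search JSON API is usually the answer to this; a minimal PHP sketch, where YOUR_API_KEY and YOUR_ENGINE_ID are placeholders for the credentials Google issues (quota limits apply, so check the current terms):

        <?php
        // Query the Custom Search JSON API for results from your own site.
        $url = 'https://www.googleapis.com/customsearch/v1'
             . '?key=YOUR_API_KEY'
             . '&cx=YOUR_ENGINE_ID'
             . '&q=' . urlencode('search terms');

        $results = json_decode(file_get_contents($url), true);

        foreach ($results['items'] as $item) {
            echo $item['title'], ' - ', $item['link'], "\n";
        }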

    Read the article

  • Retroactively applying a Piwik goal to visitors

    - by Andrew Aylett
    I started receiving a large (for me) amount of traffic on one of my pages yesterday. Today I thought it would be useful to track goals from that page; there's a link to my blog on it. I added the 'visited external link' goal to Piwik, and new visits are being recorded. However, it seems to me that there must be enough data in the database to apply the goal retroactively to past visitors. Is there a way to achieve that?

    Read the article

  • URL rewrite and domain frame

    - by Dennis
    I have registered the domain www.posti.sh at nic.sh. The website itself lives on the server at www.myskoob.com/postish. Unfortunately, nic.sh does not support frames, i.e. having the address bar stay at posti.sh while it forwards to www.myskoob.com/postish, so I thought about a URL rewrite on the server. Unfortunately I have no idea how rewriting works (explanations are welcome), but I would also like to ask whether this is generally possible. What I need is:

    - The server needs to recognize that the folder postish is being accessed.
    - Depending on the file that is opened, it needs to rewrite the URL to www.posti.sh/<the according filename here>.
    - The server also needs to understand that a link to www.posti.sh/about.php maps to www.myskoob.com/postish/about.php, and likewise for other files. At the moment, when I type in posti.sh/about.php it redirects to http://www.myskoob.com/postishabout.php, which does not exist.
    - All this should work irrespective of whether the URL contains a "www" at the beginning or not.
    - A plus, but not necessary, would be hiding the .php extensions.

    Would that generally be possible? If not, what would be the alternatives? If anyone knows how to do it, any code and/or pointers would be much appreciated!
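    Assuming the DNS for posti.sh can point directly at the myskoob.com server (rather than relying on the registrar's forwarding), a mod_rewrite sketch in the document root's .htaccess on that server could serve the postish folder transparently; untested against this exact setup:

        RewriteEngine On

        # Requests arriving for posti.sh (with or without www) are served
        # from the /postish folder without changing the visible URL.
        RewriteCond %{HTTP_HOST} ^(www\.)?posti\.sh$ [NC]
        RewriteCond %{REQUEST_URI} !^/postish/
        RewriteRule ^(.*)$ /postish/$1 [L]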

    Read the article

  • DNS records: make mydomain.com work without www

    - by brentonstrine
    I am trying to configure DNS on my domain; however, I can only get it to work with the www prefix. Right now I have the following DNS A records:

        @    A  123.456.789
        www  A  123.456.789

    http://www.mydomain.com works perfectly, but http://mydomain.com fails. I've tried all of the following:

        mydomain.com  A  123.456.789
        mydomain      A  123.456.789
        *             A  123.456.789

    But it always goes to a port 80 prompt that requires a username and password, except now it's somehow being redirected to a spammer's website.
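    For reference, a typical zone sketch (192.0.2.1 is a documentation placeholder address); the @ record is what serves the bare domain, so if that record exists and the bare name still misbehaves, the web server's virtual host configuration rather than DNS is the usual culprit:

        ; Minimal records for serving both mydomain.com and www.mydomain.com
        @      IN  A      192.0.2.1        ; the bare (naked) domain
        www    IN  CNAME  mydomain.com.    ; or a second A record to the same IP

    On the Apache side, the matching virtual host also needs to answer for the bare name, e.g. ServerName mydomain.com plus ServerAlias www.mydomain.com.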

    Read the article

  • Hide email address with JavaScript

    - by Martin Aleksander
    I read somewhere that hiding an email address behind JavaScript code can reduce spam bots harvesting the address.

        <script language="javascript" type="text/javascript">
        var a = "Red";
        var t = "no";
        var doc = document;
        var b = "ITpro";
        var ad = a;
        ad += "@";
        ad += b;
        ad += ".";
        ad += t;
        var mt = "ma";
        mt += "il";
        mt += "to";
        var text = "";
        if (text == null || text.length == 0)
            text = ad;
        doc.write("<"+"a hr"+"ef=\""+mt+":"+ad+"\">"+text+"</"+"a>");
        </script>

    This will not display the actual email address in the source code of the page, but it will display and work like a normal link for human users. Is there any point in doing this? Will it reduce spam bots, or is it just nonsense that might slow down the page because of the JavaScript?

    Read the article

  • What is a good basic/flexible cms for a small website? [closed]

    - by Samuel
    Possible Duplicate: Which Content Management System (CMS) should I use?

    I'm designing a very basic portfolio website for an artist. It features a blog, portfolio, CV and contact page. I've hand-coded the basics of this site in php/java, as it is a very small website (and I like coding by hand). But I need a simple CMS backend for the dynamic parts of the website (the blog and portfolio). The big systems (Ruby, Joomla, WordPress) are far too invasive for my liking (and frankly a bit beyond my capabilities). WordPress, for example, requires too much adaptation of the design to the WordPress structure, and Ruby is far too extensive for a simple site like this (in my opinion). So what I'm looking for is a (preferably open source) CMS that has a simple backend for the artist to use as a blogger, with a MySQL database for the content, that will allow me to insert content with simple tags (Smarty tags, for example), but is otherwise not too invasive or demanding in terms of the required page structure. Does anyone know of a good CMS that fits this description?

    P.S.: I have tried PHPNews and CMS Made Simple, but PHPNews was a little too basic (though very close to what I'm looking for) and CMS Made Simple was way too slow (but otherwise also pretty close to what I wanted, though a bit too extensive).

    Read the article

  • Is PhotoBucket a viable solution to host a website's photo galleries

    - by Evan Plaice
    I'm currently working with a lot of photographers and will probably soon pick up development on a professional photography site. With that in mind, I can't stop thinking about a user-friendly photo gallery hosting solution where the site owner can upload images themselves without any webmaster intervention; kind of like a CMS for image hosting. The idea is:

    - The user logs in to PhotoBucket
    - Uploads their gallery
    - Visits an admin section of the site
    - Adds the new gallery name to the listing

    And voila, the gallery automagically gets displayed on the website in a clean lightbox-style presentation format (i.e. no iframe nonsense). I took a brief look at the API and it looks promising. Is this a viable solution? Bonus points if you have implemented something like this with PhotoBucket and/or another third-party image hosting site. Note: purchasing a premium account is expected if necessary; the limitations on free accounts at most image hosting sites are just too restrictive to be useful.

    Read the article

  • Why is nobody talking about an alternative to HTML & CSS? [closed]

    - by Nic
    HTML is such an old and cumbersome language, which was intended just to mark up text. Today it's very rare to see a static HTML website, or a site with only text or a very simple layout. As a web developer I find HTML & CSS inconvenient to use: very repetitive and cumbersome. I think that for a lot of websites it could be simplified a great deal. Tim Berners-Lee (W3) wrote a document named "The World Wide Web: Past, Present and Future" in August 1996:

        ... though HTML will be considered part of the established infrastructure (rather than an exciting new toy), there will always be new formats coming along, and it may be that a more powerful and perhaps a more consistent set of formats will eventually displace HTML.

    So, more than 15 years later, HTML is still here and it's here to stay. Why? Why does searching for XML alternatives bring so many relevant results, while searching for HTML alternatives brings almost none? Answers like "it's too hard to change a standard" don't answer the question, since a lot of new standards have emerged since the initiation of the web. I'm also not looking for answers that suggest tools to simplify the process, or formats that in any way depend on HTML or CSS; technologies that currently require a plugin and aren't even trying to become open standards (like Flash) aren't an answer either.

    BTW, here are two articles written more than two years ago as food for thought; they might help with writing better answers: "HTML, CSS, and Web Development Practices: Past, Present, and Future" by Jens O. Meiert, describing a very related problem, and "A Brief History of HTML" by Scott Reynen. Here is a quote from the end of the latter:

        So now you can answer questions about HTML5 without even looking at the draft, which is handy, because the draft is 400+ pages long. Why is there a new tag in HTML5? Because some browser vendor (maybe the one that also owns a large video site) wanted it. Why are there so many scriptable interface elements in HTML5? Because some browser vendor (maybe the one selling phones without Flash support) wants them. Why is there no support for RDFa in HTML5? Apparently no browser vendor wanted it.

    Is that the future?

    Read the article

  • How important are SEO Friendly URLs [closed]

    - by nute
    Possible Duplicate: Is a URL with a query string better or worse for SEO than one without one?

    Currently, my URLs look something like http://mydomain.ext/question/5, where question is the controller and 5 is the ID of the object or article retrieved. In theory I could spend some development time and some server resources on URLs that contain more information about the page being loaded. However, seeing how websites like YouTube and many others keep simple URLs with just an ID, I am asking: does it matter? Is it worth it?
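    If it ever does seem worth it, the usual low-cost pattern is a slug appended after the ID that the application ignores; a mod_rewrite sketch with a hypothetical controller route:

        # /question/5 and /question/5/any-readable-slug route to the same page;
        # only the numeric ID is captured, the slug exists purely for readability.
        RewriteEngine On
        RewriteRule ^question/([0-9]+)(/.*)?$ index.php?controller=question&id=$1 [L,QSA]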

    Read the article

  • DNS hijack - prevention tips

    - by user578359
    Hi there. Over the weekend it looks like the DNS was hijacked on two of my domains. My setup is this: the sites are registered at 1and1.co.uk, with the DNS nameservers pointing to Hostgator in the US, where the sites are hosted. I also had the Cloudflare CDN running on the sites (via the Hostgator cPanel). My question is: any ideas as to how this happened, and how I could either monitor it so I know if it occurs again, or strengthen the setup to minimise the risk?

    History:

    - I received a ping from my site monitoring service that the sites were down. When I checked, the sites were up, so I assumed the problem was local to the monitoring service.
    - I received a ping last night that the sites were up.
    - When I checked, one site was redirecting to download-manual.com (and checking that URL now, the home page is not the same as the one I saw, so they too may have been hijacked/hacked).
    - The other site's URL remained the same but showed one of those standard site-search pages which bounce you off to either phishing or paid-for search sites.
    - I notified Hostgator, who told me Cloudflare or 1and1 were the issue. I removed Cloudflare, contacted both them and Hostgator, and am awaiting a response, but am not holding my breath.

    Is this common? I've never heard of or come across this before. It's pretty scary that this can happen so easily. Appreciate any input.

    Update: I've now spoken to support at 1and1, Hostgator, and Cloudflare, and each one claims it has nothing to do with them and must be one of the others. Larry, Curly, Moe.
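    On the monitoring side, a cron job that compares the domain's live nameserver records against known-good values makes a cheap early-warning system; a PHP sketch, with the expected values as placeholders:

        <?php
        // Cron job: alert if the domain's nameservers change unexpectedly.
        $domain     = 'example.com';
        $expectedNs = array('ns1.hostgator.com', 'ns2.hostgator.com');

        $ns = array();
        foreach (dns_get_record($domain, DNS_NS) as $record) {
            $ns[] = strtolower($record['target']);
        }
        sort($ns);
        sort($expectedNs);

        if ($ns !== $expectedNs) {
            mail('admin@example.com', "DNS alert for $domain",
                 'Nameservers changed to: ' . implode(', ', $ns));
        }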

    Read the article

  • tomcat behind apache

    - by dannynjust
    I am trying to use mod_proxy_ajp to forward all requests from tomcat.example.com to example.com:8080. Here is what the Tomcat server.xml looks like:

        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

    and here is the Apache config:

        <VirtualHost *:80>
          ServerName tomcat.example.com
          ServerAdmin [email protected]
          ErrorLog logs/tomcat.example.com-error_log
          CustomLog logs/tomcat.example.com-access_log common
          <Proxy *>
            AddDefaultCharset Off
            Order deny,allow
            Allow from all
          </Proxy>
          ProxyPass / ajp://example:8009/
          ProxyPassReverse / ajp://example:8009/
        </VirtualHost>

    But it is not working. Any idea?
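    Two things worth checking in a setup like this, sketched below under the assumption that Apache and Tomcat share a machine: the ProxyPass target must be a hostname that actually resolves to the Tomcat box ('example' alone typically will not), and both mod_proxy and mod_proxy_ajp must be loaded:

        # httpd.conf (or a conf.d snippet): modules required for AJP proxying
        LoadModule proxy_module modules/mod_proxy.so
        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

        # Inside the VirtualHost: point at the host actually running Tomcat;
        # localhost is typical when Apache and Tomcat share a machine.
        ProxyPass        / ajp://localhost:8009/
        ProxyPassReverse / ajp://localhost:8009/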

    Read the article

  • Project planning and customer tracking system

    - by Daniel Hollands
    First off, sorry if this is the wrong 'stack' site, but it seemed like a good place to start. I'm happy to report that my services as a web developer are starting to be in quite a lot of demand, and I have a few existing and potential new customers lining up, but I'm finding it very hard to keep track of everything. What I'm hoping for is a (preferably web-based) system I can use to keep track of who my customers are, the various projects I have going on for them, and (if possible) the individual sub-tasks that make up each project. Even better would be if the relevant customer could log into the site and see the progress of their projects. I hope you know what I'm talking about, and that you'll be able to offer suggestions, whether web-based services that offer something along these lines, an open source solution, or something like that. Thank you

    Read the article

  • Nginx or Apache for a VPS?

    - by James
    I consider myself an inexperienced user/administrator when it comes to running my VPS. I can get by with a few CLI commands, I can set up Webmin and I can set up Yum repos, but beyond the very basic stuff I'm out of my depth. So far I'm running Apache. I don't know it particularly well, but I can get by with editing httpd.conf if I'm told what to edit. I've heard good things about Nginx: that it's not as resource-hungry as Apache. I'd like to give it a go, but I can't find any information about its suitability for administrators like me, with little experience of sysadmin work or web server config. Webmin now has support for Nginx, so getting it installed and running probably won't be too much of a problem. What I'm wondering is, from a site administrator's perspective, is running Nginx as transparent as running Apache? That is, at the moment I can just throw up WordPress and Drupal sites without having much to worry about or having to make any config changes to Apache. Would Nginx be as transparent?
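    Nginx is not quite drop-in: it has no .htaccess support, so per-app rewrite rules (WordPress permalinks, Drupal clean URLs) move into the server config, and PHP runs through FastCGI rather than mod_php. A minimal WordPress-style server block as a sketch, with paths and the PHP-FPM socket as placeholders:

        server {
            listen 80;
            server_name example.com;
            root /var/www/example.com;
            index index.php;

            # Equivalent of WordPress's .htaccess rewrite rules
            location / {
                try_files $uri $uri/ /index.php?$args;
            }

            # Hand .php files to PHP-FPM (socket path is a placeholder)
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php-fpm.sock;
            }
        }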

    Read the article

  • A mechanism to include site title in every page, but not in <title> element

    - by Saeed Neamati
    Each site can have a name, for example site x. Each page can also have a name (or title) that should appear in the <title> tag in the header. However, many websites out there use the combination "site name - page name" as the value of the <title> tag, which I find a little short of semantic. On the other hand, if you only include the page title in the <title> tag, search engines won't find your site by its name. For example, if your site's name is Thought Results and you don't include it in page titles, then searching for Thought Results won't surface your site in the SERPs. Thus I'm looking for a mechanism that both includes the site title (not the page title) on every page, and also puts only the page title in the <title> tag, to get more semantic results. Is there a way to achieve this?
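    One markup-level sketch, with no guarantee about how any particular engine weighs these signals: keep <title> page-only, and carry the site name in dedicated metadata plus a visible, consistently linked site name in the header:

        <head>
          <title>Page title only</title>
          <!-- Open Graph site name; several crawlers read this -->
          <meta property="og:site_name" content="Thought Results">
        </head>
        <body>
          <!-- Visible, linked site name on every page -->
          <a href="/" rel="home">Thought Results</a>
          ...
        </body>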

    Read the article

  • Why do local HTML files take longer to load in IE compared to Firefox?

    - by jaslr
    I am creating a local HTML file that links to 2 external CSS files and 3 external JS files. When I refresh it in Internet Explorer 9, the page takes over a minute to load, compared to instantly in Firefox (latest stable build). When I remove the external CSS and JS references, IE9 loads the page instantly. Can anyone explain why IE9 takes so long to load local HTML files with references to external CSS and JS files?

    Read the article

  • What naughty ways are there of driving traffic?

    - by Tom Wright
    OK, so this is purely for my intellectual curiosity, and I'm not interested in illegal methods (no botnets, please). But say, for instance, that some organisation incentivised link sharing in a bid to drive publicity. How could I drive traffic to my link? Obviously I could spam all my friends on social networking sites, which is what they want me to do, but that doesn't sound as fun as trying to game the system. (Not that I necessarily dispute the merit of this particular campaign.) The ideas I've come up with so far (in order of increasing deviousness) include:

    - Link-dropping: too close to what they want me to do to be devious, but I've done it here (sorry) and on Twitter. I'm subverting it slightly by focusing on the game aspects rather than their desired message.
    - AdWords: not very devious at all, but effectively free with the vouchers I've accrued. That said, I must be pretty poor at choosing keywords, because I've seen very few hits (~5) so far.
    - Browser-testing websites: the target has a robots.txt which prevents Browsershots from processing it, but I got around this by including it in an iframe on a page that I hosted.

    But my creative juices have run dry, I'm afraid. Does anyone have any cheeky/devious/cunning/all-of-the-above ideas for driving traffic to my page?

    Read the article

  • "Email This" button with sideways counter

    - by aendrew
    I've been asked to build a design that has a "share this" area: a set of sharing buttons, each with a sideways counter beside it. I've built every aspect except the Email part. Any idea how best to do that? I've found http://getmailcounter.com/, but that displays a counter above the link. I'd personally just use a plain link, but it seems they want some sort of analytics built in. Failing that, does anyone know of a sharing system that looks like that and has all of those options? I'd just use AddThis, but its designs don't look very close to that. Thanks!

    Related: How to implement an email this link button

    Read the article

  • Will rewriting your .htaccess to 404 to return search results from your site negatively affect your ranking in Google?

    - by leeand00
    Depending on the type of site you are running, it may or may not be advantageous to display search results instead of a 404 page when someone visits a non-existent page on your site. I believe the site I've been maintaining recently would benefit from this, as it is the site of a publication, and with a publication, the more people you can get reading your site the better. But after reading up on how Google ranks the "quality" of your site, and where you will appear in SERPs based on how well a page's meta text relates to its content, I have to wonder if pointing the 404 page at search results would harm the site's "quality" in Google's eyes.
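    If you do try it, the main trap is the "soft 404": serving helpful content with a 200 status tells crawlers the dead URL is a real page, and that is what tends to hurt. A sketch that keeps the 404 status while showing search results, with search.php as a hypothetical handler:

        # .htaccess: route missing pages to the search script
        ErrorDocument 404 /search.php

        <?php
        // search.php (sketch): keep the 404 status so crawlers still treat
        // the URL as missing, then render search results for the request.
        header('HTTP/1.1 404 Not Found');
        $query = basename(parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH));
        // ... run the site search for $query and display the results ...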

    Read the article
