Search Results

Search found 20157 results on 807 pages for 'friendly url'.


  • How to avoid getting negative points from Google AdSense

    - by Napster
    I have a news-based website whose primary content consists of news, image albums and videos. Of these, I hold the copyright for the images, and the videos are just embedded YouTube videos. As for the news, my site is a kind of mashup: it gathers stories from various sites and presents them in a more user-friendly way for quick digestion and access. My problem is that, since the news part of the site can be found on other sites, my site could suffer in search rankings. Is there a solution to this? One thing I thought of is to put a Disallow rule on all the news article pages so Google does not crawl them. Will this help me? And when I apply for Google AdSense, does Google crawl these disallowed pages as well?
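
    A minimal sketch of what such a Disallow rule could look like, assuming the news articles live under a /news/ path (the path is an assumption about the site's layout, not taken from the question), checked with Python's standard urllib.robotparser:

        from urllib.robotparser import RobotFileParser

        # Hypothetical robots.txt rules: the /news/ prefix is an assumed URL
        # layout for the aggregated articles, not taken from the question.
        robots_lines = [
            "User-agent: *",
            "Disallow: /news/",
        ]

        parser = RobotFileParser()
        parser.parse(robots_lines)

        # Crawlers that honor robots.txt skip the news pages but still fetch the rest.
        print(parser.can_fetch("*", "https://example.com/news/some-story"))  # False
        print(parser.can_fetch("*", "https://example.com/albums/holiday"))   # True

    Note that a Disallow rule only keeps compliant crawlers from fetching the pages; it doesn't by itself remove URLs that are already indexed.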

    Read the article

  • Installer doesn't recognise Windows 7 install

    - by SirWaffleRaptor
    I'll start off by saying I searched a bit for this problem with little success. You see, I am an utter noob when it comes to Ubuntu and Linux-based OSes (hence why I want to install one and discover it), and the threads I found were too specific for me to do anything with. Therefore, if you find a thread that is noob-friendly and answers my question, please link it below, or please try to answer as simply as possible :) The problem: I want to install Ubuntu 12.04 LTS 64-bit alongside Windows 7 on my main hard drive (so I get the choice of which one to start when booting). I pop the CD in, reboot and get to the installer screen. I know I'm supposed to have 3 options, but I'm only given the option to erase my hard drive or "do something else". Therefore, my question is: how do I make Ubuntu recognise Windows 7, so it can install alongside it?

    Read the article

  • Apache configuration file visualization/testing

    - by Matt Holgate
    Is there a tool available (or a debug mode built into Apache) that will let me interactively test and explain an Apache configuration for a given request? In particular, I'd like to be able to see which directives will apply when a specific URL is requested. For example, the output for the URL http://myserver.com/foo/bar/bar.html might look something like:

        Allow from 192.168.0.3  <-- from <Location /foo/bar> in the myserver.com vhost
        Require valid-user      <-- from <Directory /var/www/foo> in the global configuration
        Satisfy any             <-- from <Files bar.html> in the global configuration

    [Background: why do I want this? Apache's rules for merging configuration directives are quite complex to get right. It would be great to have a tool that lets you check that your rules are doing exactly what you want, and it would be a good learning tool.] If there isn't such a tool, is there a debug option in Apache that will log this information for each incoming request?

    Read the article

  • Fully secured gateway web sites

    - by SeaShore
    Hello, are there any web sites that serve as gateways for fully encrypted communication? I mean sites with which I can open a secure session and then, through them, exchange both URLs and content with other sites in a secure way. Thanks in advance. UPDATE: Sorry for not being clear. I was wondering if there is a way to access any site on the Internet (HTTP or HTTPS) without letting an intranet proxy read the requested URL or the received content. My question is whether such a site exists, e.g.: I connect to that site via HTTPS, I send it a URL in a secure way, the site fetches the content from the target site (possibly in a non-secure way) and returns the requested content to me in a secure way.

    Read the article

  • DNS problems: correct nameserver, nameserver working, but not resolving

    - by user1719624
    My problem is as follows; any suggestions are welcome. [domain].org is not resolving. A whois lookup and the registry information show that the correct nameserver is set. The primary nameserver is also the server on which [domain].org is hosted, and it is used for a number of other domains, for which it is working fine. Logged into the server itself, I can ping [domain].org and it resolves correctly. If I set that nameserver as the DNS server on my laptop, the name also resolves correctly. So: the domain has the correct nameserver set, the nameserver resolves the name to the correct IP address, it resolves correctly if I use the nameserver as my DNS directly, AND the nameserver is used by other domains that resolve fine. Why isn't it working? NB: this is a new domain registration and has been set up for around 10 days now, so it's not simply slow propagation. Any ideas? Thanks.
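
    One quick way to compare what the authoritative server answers against what a normal resolver sees is a short script like the sketch below (it assumes the third-party dnspython package; the domain and nameserver IP are placeholders):

        import dns.resolver  # third-party package: dnspython

        DOMAIN = "example.org"             # placeholder for [domain].org
        AUTHORITATIVE_NS = "203.0.113.10"  # placeholder IP of the primary nameserver

        # Ask the authoritative nameserver directly, bypassing any local resolver.
        direct = dns.resolver.Resolver(configure=False)
        direct.nameservers = [AUTHORITATIVE_NS]
        for rdata in direct.resolve(DOMAIN, "A"):
            print("authoritative answer:", rdata.address)

        # Ask whatever resolver the system normally uses, for comparison.
        system = dns.resolver.Resolver()
        for rdata in system.resolve(DOMAIN, "A"):
            print("system resolver answer:", rdata.address)

    If the first query answers but the second fails, the problem usually lies between the registry delegation and the resolvers (glue records, NS records in the zone, DNSSEC) rather than on the server itself.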

    Read the article

  • Efficient finding of long-range spotting targets

    - by nihohit
    I'm creating a top-down 2D strategy game with a square grid map. So far, I've used Bresenham's line-drawing algorithm in a circle to determine what is in each unit's LOS, and then targeted one of the units inside that circle. Now I find that this limits my units to shooting only at targets they can see themselves. I want to extend my targeting algorithm so a unit can target any enemy within its weapon's range, even if that enemy is outside the unit's own sight range, as long as it has been "spotted" by another friendly unit. In other words, I want to enable weapons with ranges longer than sight range. Is there a better way than iterating over all sighted units and computing range and LOS to each of them?
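
    One common arrangement, sketched below under assumed unit objects with a position attribute (the data structures are not taken from the question), is to build a single shared set of enemies spotted by any friendly unit once per turn, and then let each shooter filter that set by its own weapon range:

        import math

        def spotted_enemies(friendly_units, enemy_units, in_los):
            # Union of all enemies seen by at least one friendly unit.
            # in_los(observer, target) is assumed to wrap the existing
            # Bresenham-based visibility check.
            spotted = set()
            for friend in friendly_units:
                for enemy in enemy_units:
                    if enemy not in spotted and in_los(friend, enemy):
                        spotted.add(enemy)
            return spotted

        def targets_in_weapon_range(shooter, spotted, weapon_range):
            # Enemies the shooter may fire at: spotted by anyone, within range.
            sx, sy = shooter.position
            return [enemy for enemy in spotted
                    if math.hypot(enemy.position[0] - sx,
                                  enemy.position[1] - sy) <= weapon_range]

    The shared spotted set is computed once per turn rather than once per shooter, so long-range weapons don't multiply the LOS work; if unit counts grow, a spatial grid over enemy positions can further cut the per-shooter range check.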

    Read the article

  • CMSs & ERPs for hospital management system

    - by Akshey
    Hi, what are the best free CMSs, CMS plugins, ERPs or other free tools available for developing a hospital management system? I want to develop one for a children's hospital run by my father. The hospital is small, with two doctors. Currently everything is done manually on paper. The main people who will be using the system are the receptionist, the two doctors, the chemist and the medical lab technician. They will use it mainly for keeping patient records; the patients will not interact with the system directly. The system needs to be user-friendly and easy to learn. I was thinking of building such a system with a CMS, an ERP or some other free tool. I have used WordPress/Drupal in the past but have never used an ERP. Can you please guide me on how to make such a system using free, and preferably open source, tools? Thanks, Akshey

    Read the article

  • How to emulate a domain name - Webmin Setup

    - by theonlylos
    I am currently working on a client project where they use a custom CMS that relies on specific domains being configured for it to work properly. In plain English, that means that when I try running the site in my test environment, the entire website fails because it isn't located on the primary domain (and I'm pretty sure the domain is hard-coded, since there's no control panel to adjust the file locations). Anyway, what I wanted to ask is whether it is possible to keep using my test environment's URL but have Apache and DNS emulate my client's website URL locally, rather than calling the actual name servers. Right now I have a virtual host set up in Apache, but I am not sure where to go from there. Any assistance is greatly appreciated.

    Read the article

  • Google Indexing Issue after htaccess changes

    - by Klement
    I have a site called www.FuneralCoverFinder.co.za. I have about 30 pages on the site and usually have 29 indexed (excluding 15 blog posts, which are new). I recently upgraded my entire site and made some redirection changes in my .htaccess file: I made my URLs more SEO-friendly (removing index.php/) and I redirect dead pages to working pages. I have tons of unique content, all checked with Grammarly and Plagium to ensure there are no duplicates. I have since resubmitted my sitemap to Google and now have only one page indexed. That happened within a couple of minutes; I usually see results almost immediately after submitting, but now it's stuck on 1 page indexed. I assume I may have made errors in the .htaccess file, as this was my first attempt, although the site runs perfectly and all the URLs redirect the way they should. I'm worried I have some kind of loop, even though the website runs fine. I still see many of my old indexed pages in the SERPs; I'm just worried that the issue with the new sitemap could harm my rankings. My website is fairly well SEO-optimized on-site. I have about 1500 indexed backlinks and have been building them steadily for about half a year. I would really appreciate some clarity on this matter.

    Read the article

  • Search for odt files without indexing

    - by josinalvo
    I am looking for:

    1. a way to search inside ODT files (i.e. search their contents, not their names)
    2. that does not require any kind of indexing
    3. that is graphical and very user-friendly (for a relatively old person who does not like computers much)

    I know that it is possible to have 1) and 2):

        for x in `find . -iname '*odt'`; do odt2txt $x | grep Query; done

    works well enough, and it's pretty fast. But I wonder if there is already a good solution that does this with a GUI (or can be adapted to do so easily).
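
    For reference, the same brute-force content search can be done without odt2txt, since an .odt file is just a ZIP archive whose text lives in content.xml. The sketch below uses only the Python standard library (the directory and search term are placeholders passed on the command line):

        import sys
        import zipfile
        from pathlib import Path
        from xml.etree import ElementTree

        def odt_text(path):
            # Concatenate all text nodes from the archive's content.xml.
            with zipfile.ZipFile(path) as archive:
                xml = archive.read("content.xml")
            return " ".join(ElementTree.fromstring(xml).itertext())

        def search(directory, needle):
            for path in Path(directory).rglob("*.odt"):
                try:
                    if needle.lower() in odt_text(path).lower():
                        print(path)
                except (zipfile.BadZipFile, KeyError):
                    pass  # skip files that are not valid ODF archives

        if __name__ == "__main__":
            # usage: python search_odt.py ~/Documents "Query"
            search(sys.argv[1], sys.argv[2])

    Wrapping something like this in a small GUI or file-manager integration would be needed to cover the third requirement.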

    Read the article

  • Redirecting to a default (or last visited) subdirectory. Does Google like this?

    - by andufo
    I have a site that hosts 3 web applications; let's use this example:

        example.com/nicy
        example.com/mash
        example.com/zoken

    The main application is nicy, so if a user comes for the first time (or when Google indexes my site) that will be the default choice. This is the code placed inside example.com/index.php:

        <?php
        header('HTTP/1.1 301 Moved Permanently');
        header('Location: http://example.com/nicy');
        die();
        ?>

    Is this solution SEO-friendly, i.e. will Google index the nicy subdirectory as the main entrance page for the domain because of the 301 redirect? Thanks!

    Read the article

  • Reverse Proxy (mod_rewrite) and Rails (absolute paths)

    - by SooDesuNe
    I have a front-end Rails app that reverse proxies to any of a number of back-end Rails apps depending on the URL. For example, http://www.my_host.com/app_one reverse proxies to http://www.remote_host_running_app_one.com, such that a URL like http://www.my_host.com/app_one/users displays the contents of http://www.remote_host_running_app_one.com/users. I have a large and ever-expanding number of back ends, so they cannot be explicitly listed anywhere other than in a database. This is no problem for mod_rewrite using a prg:/ rewrite map reverse proxy. The problem is that the URLs returned by Rails helpers have the form /controller/action, i.e. absolute to the root. On a page served through mod_rewrite this means the links appear absolute to the domain: http://www.my_host.com/app_one/controller/action ends up with links that look like /controller/action when they need to look like /app_one/controller/action. mod_proxy_html seems like the right idea, but it doesn't appear to be as dynamic as I need, since its rules have to be hard-coded into the config files. Is there a way to fix this server-side, so that the links are routed correctly?

    Read the article

  • Lock screen elements frozen on desktop

    - by user286968
    I have finally made the switch from Windows to Ubuntu and now have to ask my first question, because searching did not help me this time. (Please answer in a newb-friendly way.) The problem is easily described: the elements shown on the lock screen at boot (the Ubuntu logo in the middle and the watermark in the bottom-left corner) stay there even after I reach the desktop. How can I remove them? They weren't there originally, but after updating Ubuntu with Update Center the problem occurred. As I can't post an image here with my reputation, please see this link: http://s30.postimg.org/age8fraww/Screenshot_from_2014_05_30_23_27_58.jpg Thanks for your help and time. Kind regards

    Read the article

  • My wireless connection isn't working well

    - by Maikel Braas
    So this is my problem: I installed Ubuntu 12.04 with wubi.exe and everything works fine except my internet connection. When I try to install a program or use Firefox, it runs at 24 kb/s, with a little luck, for 1-2 seconds, then my connection drops and only comes back after 3-5 minutes. Then it manages another 24 kb/s and the cycle repeats. The thing is, my connection works perfectly under Windows. The laptop it's running on is an Acer Aspire V3-771G. I tried using a different network adapter, but that didn't help either. Please, someone help me with this. Friendly regards, Maikel

    Read the article

  • Joomla in Windows is catching my access to a Virtual Directory where I placed my MVC application

    - by Romias
    In our Windows hosting we use the root (wwwroot) folder to host a Joomla website as the public site. This is running on IIS 7. We then created a virtual directory called "App" to host an ASP.NET MVC 4 application. When I enter www.mydomain.com it shows the Joomla website correctly. When I enter www.mydomain.com/App/ it does somehow reach my MVC app, as I see the URL changing to www.mydomain.com/App/Account/LogOn?ReturnUrl=%2fApp%2f, BUT it shows a Joomla 404 error, as if Joomla were looking up that URL itself. By the way, the hosting has 2 ASP.NET IIS setup options: 4.0 Classic and 4.0 Integrated. Using the Integrated one displays a blank page; using the Classic one shows the Joomla 404 page. Any idea where to look for this?

    Read the article

  • htaccess rewrite domain/folder to domain/

    - by user1678259
    I've been checking all the questions previously asked here, but can't find a solution. We have a website that is nearly finished at www.example.com/web/ and we would like to hide the /web/ folder from the URL. So what is shown now is www.example.com/web/* and we would like www.example.com/*. If we try a redirect, the URL www.example.com goes to www.example.com/web/, and this is exactly what we don't want. Any help will be very much appreciated. Thank you all.

    Read the article

  • Quickest way to research a set of pages' backlinks

    - by JeremyB
    I have a list of 300+ pages (chosen because they rank for a keyword I'm interested in) and I want to compile a list of all the known inbound links to those pages. What's the fastest way to do this? It seems like the only tools out there (Yahoo Site Explorer, SEOmoz, Majestic) require you either to a) manually export each set of links by hand, or b) get data at the domain level (e.g. Majestic's Clique Hunter). Does anyone know of an efficient way to do this? I ask because I'm about to write a bunch of code and I don't want to waste my time if there's another tool that will do the job. I know SEOmoz and Majestic have APIs, but I'm wondering if there's a more user-friendly option.
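
    If it does come down to writing code, the script is mostly a loop over the page list with one backlink-API call per URL. A rough sketch is below; the endpoint, parameters and response shape are purely hypothetical placeholders, not any particular provider's real API:

        import csv
        import requests

        # Hypothetical backlink API -- substitute the real provider's endpoint,
        # authentication and response format.
        API_URL = "https://api.backlink-provider.example/links"
        API_KEY = "your-key-here"

        def backlinks_for(page_url):
            response = requests.get(API_URL, params={"target": page_url, "key": API_KEY})
            response.raise_for_status()
            return response.json()  # assumed: a JSON list of source URLs

        with open("pages.txt") as pages, open("backlinks.csv", "w", newline="") as out:
            writer = csv.writer(out)
            writer.writerow(["target", "source"])
            for page in (line.strip() for line in pages if line.strip()):
                for source in backlinks_for(page):
                    writer.writerow([page, source])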

    Read the article

  • Apache: Stealth 404 the admin area until authenticated via basic auth, then allow access

    - by Kzqai
    Given an administrative area with URLs like this:

        wp-admin/
        wp-admin/whatever
        wp-admin/another-page
        wp-adminsecretlogin/

    standard basic-auth coverage would present a username and password prompt on all of these URLs and return a 401 on every failed auth attempt. That is a pretty obvious signal that something exists there, and thus an invitation to script/brute-force access. Instead, I would like to require basic auth everywhere, but when the visitor is not authenticated, not prompt for a username and password and instead return a 404 Not Found for every URL except the wp-adminsecretlogin/ URL. At that URL, which is unique to the site, basic auth could go through and unlock the rest of the administrative functionality (though the standard application login would still be necessary). How would I do that via Apache .htaccess or .conf directives?

    Read the article

  • curl -I stops responding when the URL contains "="

    - by user1512778
    I ran this command:

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm&requestingHandler=WebNSORDetailHandler&ID=368343543'

    but it got stuck. However, if I run:

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi'

    I get:

        HTTP/1.1 200 OK
        Content-length: 207
        Content-type: text/html
        Server: Sun-ONE-Web-Server/6.1
        Date: Sat, 15 Dec 2012 08:49:14 GMT
        Via: 1.1 proxy-internet-revproxy
        Proxy-agent: Oracle-iPlanet-Proxy-Server/4.0

    Then I tried shortening it:

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceName=WebNSOR&templateName=detail.htm'

    and it got stuck too. I don't know why; it seems that if the URL contains "=" the server stops responding. So I tried the URL with the "=" removed (serviceName=WebNSOR changed to serviceNameWebNSOR):

        curl -I 'http://criminaljustice.state.ny.us/cgi/internet/nsor/fortecgi?serviceNameWebNSOR'

        HTTP/1.1 200 OK
        Content-length: 207
        Content-type: text/html
        Server: Sun-ONE-Web-Server/6.1
        Date: Sat, 15 Dec 2012 08:50:38 GMT
        Via: 1.1 proxy-internet-revproxy
        Proxy-agent: Oracle-iPlanet-Proxy-Server/4.0

    Why can't I use "="? Please assist me.

    Read the article

  • How do spambots work?

    - by rlb.usa
    I have a forum that's getting hit a lot by forum spambots, and of course the best way to defeat something is to know thy enemy. I'll worry about defeating the spambots later, but right now I'd like to know more about them. Reading around, I was surprised by the lack of thorough information on the subject (or perhaps by my inability to enter the right search terms for better Google results). I'm interested in learning all about spambots. I've asked on other forums and gotten brush-off answers like "Spambots are always users registering on your site." How do forum spambots work? How do they find the "new user registration" page? (I'm especially surprised because some forums don't have a dedicated URL for this, e.g. www.forum.com/register.html, but instead use query strings or even other methods invisible in the URL bar.) How do they know what to enter into each "new user registration" field? How do they determine which pages they can spam or enter data into, and which they cannot? Do they even "view" the page at all? If not, then I'd assume they're communicating with the server directly - how is this possible, and how do they do it? Can forum spambots break CAPTCHAs? Can they solve logic questions (and how)? Math questions? Do they reverse-engineer client-side anti-bot validation scripts? Server-side scripts? What techniques are still valid to prevent them? Where do spambots come from? Is someone sitting behind the computer snickering as they watch their bot destroy site after site? Or are they snickering as they simply "release" it onto the internet somehow? Are spambots run by infected computers somewhere? Do they replicate themselves? Etc.
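
    On the "communicating with the server directly" point: a bot does not need to render or even look at the page. A minimal sketch of the idea, with entirely hypothetical URL and form field names and using the common Python requests library, is simply the same POST request a browser would send:

        import requests

        # Hypothetical registration endpoint and field names; real bots usually
        # scrape them (including hidden tokens) from the form's HTML first.
        REGISTER_URL = "http://forum.example.com/register.php"

        session = requests.Session()
        session.get(REGISTER_URL)  # fetch the form first so session cookies are set

        payload = {
            "username": "shiny_new_user",
            "email": "bot@example.com",
            "password": "hunter2",
            "signature": "visit http://spam.example.com",
        }

        # No browser, no rendering: the form is submitted directly.
        response = session.post(REGISTER_URL, data=payload)
        print(response.status_code)

    This is also why defenses that only change how the page looks do little on their own; effective checks have to happen server-side.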

    Read the article

  • Is dual-booting Ubuntu and Fedora/openSUSE as easy as popping in a live CD?

    - by user25757
    I wish to dual-boot Ubuntu (currently installed) with Fedora and/or openSUSE (Fedora for vanilla GNOME and SUSE for a good KDE experience). I do not know much about dual booting and GRUB. Do I only have to put in the live CD and install side by side, or do I have to partition and so on? Also, is there an easy way to remove one of them if I decide I don't want it? Please note that I do not want them on the same /home partition; I want each to have its own. (And if there is a way that is mostly graphical and user-friendly, that would be most appreciated.)

    Read the article

  • htaccess rewriterule leading slash

    - by Tiddo
    I'm using .htaccess to rewrite my URLs so that I can have nice clean URLs. However, the same .htaccess file does different things on my local server and on my remote server: on my local server the URL of the website is of the form http://localhost/example/, while on my remote server the URL is http://example.com/. On my local server I can use the following rewrite rule:

        RewriteRule ^(.*)$ index.php?page=$1 [L,QSA]

    However, when I use this on my remote server I get an internal server error. There I have to use this instead (note the leading slash):

        RewriteRule ^(.*)$ /index.php?page=$1 [L,QSA]

    Unfortunately this doesn't work on my local server: there the rule requests http://localhost/index.php instead of http://localhost/example/index.php. How can I make this work on both my remote and local servers?

    Read the article

  • How to set up Apache 2 to serve only subdirectories

    - by Lynden Shields
    I have 3 sites that need to be hosted on a web server (Apache 2 from the repo, running on Ubuntu 12.04). Each lives in its own subdirectory within /var/www/. I would like Apache to serve files from the relevant directory only when that directory name is given in the URL, and not to serve the /var/www/ directory itself. E.g. http://1.2.3.4/site1/ should work and serve the index from /var/www/site1/index.html, but http://1.2.3.4/ should not serve anything. Currently I can't get the URLs to map to the directories: either http://1.2.3.4/ serves everything within /var/www/ (including /var/www/site2/secretstuff/), or the root http://1.2.3.4/ serves one of the subdirectories (/var/www/site1/). That is unacceptable, because site1 needs Indexes enabled but the others must not have it. I just want site1's configuration to respond only to requests of the form http://1.2.3.4/site1/* and not to handle requests of the form http://1.2.3.4/. I do not have a domain name set up, so I can't use subdomains.

    Read the article

  • Redirect non-www ssl traffic to www ssl (apache)

    - by The NinjaSysadmin
    Hello, I'm attempting to get a redirect working, and it's failing for a reason I can't see today. I have a vhost file within httpd that listens on standard port 80 and on port 443. I'm attempting to redirect https://domain.com/(.*) to https://www.domain.com/$1 so that the rest of the URL remains intact. My config is as follows:

        ServerName www.domain.com
        ServerAlias tempdomain.testdomain.co.uk
        ServerAlias domain.com

    The rewrite rule I'm using is:

        RewriteCond %{HTTP_HOST} ^domain.com$
        RewriteRule ^(.*)$ https://www.domain.com$1 [R=301,L]

    I've also tried removing the . and the $, but nothing changes. When I visit https://domain.com/secure.page?action=comp it doesn't redirect to https://www.domain.com/secure.page?action=comp. I do also have other SSL pages; the above is just an example. Can anyone point out my stupidity?

    Read the article

  • Password-protect an app in Jetty

    - by JohnW
    I am testing a webapp (.war) running in Jetty 7. For demo purposes I want to run it on a public URL, but I would rather the whole world (should they happen to come across the URL) not be able to see it. Is there a way to make Jetty require basic-auth style authentication when the webapp is accessed, without modifying anything inside the war (i.e. no edits to the web.xml file)? Or, if not just the webapp, then everything Jetty serves on port 8080?

    Read the article
