Search Results


  • Redirect Google crawler to different robots.txt via .htaccess

    - by user3474818
    I have googled for the answer all day and still couldn't find one. I have a virtual subdomain www.static.example.com which is a mirror of www.example.com, meaning the subdomain and the domain share a single root folder. I want to redirect crawlers to a different robots.txt file - robots_static.txt - when they see .static in the URL; in that file I will forbid indexing via a Disallow directive. I want to do this because I have duplicate content in Google search results: the subdomain shows exactly the same content as the main domain. Does anyone know how I can make crawlers see robots_static.txt instead of robots.txt? What I have managed to find so far is this:

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

    but when I check in Webmaster Tools, it still sees robots.txt as my robots file instead of robots_static.txt, so it crawls and indexes everything twice. What did I do wrong? Thanks

    EDIT: This is my .htaccess file:

        ##
        # @package    Joomla
        # @copyright  Copyright (C) 2005 - 2013 Open Source Matters. All rights reserved.
        # @license    GNU General Public License version 2 or later; see LICENSE.txt
        ##

        ##
        # READ THIS COMPLETELY IF YOU CHOOSE TO USE THIS FILE!
        #
        # The line just below this section: 'Options +FollowSymLinks' may cause problems
        # with some server configurations. It is required for use of mod_rewrite, but may already
        # be set by your server administrator in a way that disallows changing it in
        # your .htaccess file. If using it causes your server to error out, comment it out (add # to
        # beginning of line), reload your site in your browser and test your sef url's. If they work,
        # it has been set by your server administrator and you do not need it set here.
        ##

        ## Can be commented out if causes errors, see notes above.
        Options +FollowSymLinks

        ## Mod_rewrite in use.
        RewriteEngine On
        RewriteEngine On

        RewriteCond %{HTTP_HOST} !^www\.
        RewriteRule ^(.*)$ http://www.%{HTTP_HOST}/$1 [R=301,L]

        RewriteCond %{HTTP_HOST} ^www.static.*$ [NC]
        RewriteCond %{THE_REQUEST} ^[A-Z]{3,9}\ /.*robots\.txt.*\ HTTP/ [NC]
        RewriteRule ^robots\.txt /robots_static.txt [NC,L]

        ## Begin - Rewrite rules to block out some common exploits.
        # If you experience problems on your site block out the operations listed below
        # This attempts to block the most common type of exploit `attempts` to Joomla!
        #
        # Block out any script trying to base64_encode data within the URL.
        RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
        # Block out any script that includes a <script> tag in URL.
        RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
        # Block out any script trying to set a PHP GLOBALS variable via URL.
        RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
        # Block out any script trying to modify a _REQUEST variable via URL.
        RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
        # Return 403 Forbidden header and show the content of the root homepage
        RewriteRule .* index.php [F]
        #
        ## End - Rewrite rules to block out some common exploits.

        ## Begin - Custom redirects
        #
        # If you need to redirect some pages, or set a canonical non-www to
        # www redirect (or vice versa), place that code here. Ensure those
        # redirects use the correct RewriteRule syntax and the [R=301,L] flags.
        #
        ## End - Custom redirects

        ##
        # Uncomment following line if your webserver's URL
        # is not directly related to physical file paths.
        # Update Your Joomla! Directory (just / for root).
        ##
        # RewriteBase /

        RewriteCond %{THE_REQUEST} ^GET.*index\.php [NC]
        RewriteCond %{THE_REQUEST} !/system/.*
        RewriteRule (.*?)index\.php/*(.*) /$1$2 [R=301,L]
        RewriteCond %{THE_REQUEST} ^GET

        ## Begin - Joomla! core SEF Section.
        #
        RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
        #
        # If the requested path and file is not /index.php and the request
        # has not already been internally rewritten to the index.php script
        RewriteCond %{REQUEST_URI} !^/index\.php
        # and the request is for something within the component folder,
        # or for the site root, or for an extensionless URL, or the
        # requested URL ends with one of the listed extensions
        RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
        # and the requested path and file doesn't directly match a physical file
        RewriteCond %{REQUEST_FILENAME} !-f
        # and the requested path and file doesn't directly match a physical folder
        RewriteCond %{REQUEST_FILENAME} !-d
        # internally rewrite the request to the index.php script
        RewriteRule .* index.php [L]
        #
        ## End - Joomla! core SEF Section.

        <FilesMatch "\.(ico|pdf|flv|jpg|ttf|jpg|jpeg|png|gif|js|css|swf)$">
        Header set Expires "Wed, 15 Apr 2020 20:00:00 GMT"
        Header set Cache-Control "public"
        </FilesMatch>

        <ifModule mod_headers.c>
        Header set Connection keep-alive
        </ifModule>

        ########## Begin - Remove Etags
        #
        FileETag none
        #
        ########## End - Remove Etags
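
    One thing stands out in the file above: the robots rule sits after the blanket non-www 301, so a request for static.example.com/robots.txt is first redirected and only answered on a second round trip. A minimal sketch (hostnames assumed from the question, so test before relying on it) that answers robots.txt requests for the static host before any other rule can touch them:

        RewriteEngine On

        # Serve the alternate robots file for any host whose name starts
        # with "static." or contains ".static." - placed first so no
        # other rule can redirect or rewrite the request away.
        RewriteCond %{HTTP_HOST} (^|\.)static\. [NC]
        RewriteRule ^robots\.txt$ /robots_static.txt [L]

    Also bear in mind that Googlebot typically caches robots.txt for up to a day, so Webmaster Tools can lag behind a rule that is in fact working.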

  • How can I create multiple mini-sites with similar/duplicate content without hurting my search engine rank?

    - by ekpyrotic
    Essential background: I run a small company that lets members of the public post handwritten letters to their local politician (UK-based). Every week a number of early stage bills (called Early Day Motions) are submitted for debate in the House of Commons, and supporters of the motion will contact their local Members of Parliament, asking them to sign the motion. The crux: I want to target these EDMs with customised mini-sites, so when people search "EDM xxx", they find my customised mini-site, specifically targeting that EDM (i.e., "Send a handwritten letter to your MP asking them to sign EDM xxx"). At the moment, all these mini-sites (and my homepage) have duplicate content with only the relevant EDM name, number, and background image changed. (For example, http://mailmymp.com and http://mailmymp.com/edm/teaching-life-saving-skills-at-school-edm-550.php). The question: Firstly, will this hurt my potential search engine ranking? And, if so, what's the best way to target these political campaigns in an efficient manner without hurting my SEO prospects?
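
    If the mini-sites must stay near-identical, one standard mitigation - sketched here, not guaranteed, since Google may still fold very similar pages together - is a canonical tag so each mini-site page declares the one URL it wants indexed, combined with making the per-EDM copy genuinely different. The tag placement is the usual pattern; the URL is taken from the question:

        <!-- In the <head> of each mini-site page, pointing at that
             page's own preferred address: -->
        <link rel="canonical" href="http://mailmymp.com/edm/teaching-life-saving-skills-at-school-edm-550.php" />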

  • Some post-VS2010 Launch Resources

    Here are some useful links related to the Vermont .NET VS2010 launch meeting on Monday night, with our RECORD-breaking attendance! :)
    MSDN Visual Studio Developer Center: msdn.microsoft.com/vstudio
    VS2010 comparison of the various SKUs: http://www.microsoft.com/visualstudio/en-us/products
    VS2010 trial downloads: http://www.microsoft.com/visualstudio/en-us/download
    Great links from MicrosoftFeed.Com - VS2010 wallpapers for the hardcore: 10+ Beautiful Microsoft Visual Studio 2010 Wallpapers

  • Launchpad fails to build a package for my PPA

    - by AZorin
    I'm trying to build a package on Launchpad's Debian build system for PPAs, but I'm having some issues with a certain package. The package I'm trying to build (zorin-xwinwrap) contains a source C file which I'm trying to get to compile and build on Launchpad's servers so that it will install and work on 32-bit (i386) and 64-bit (amd64) systems. Unfortunately I keep getting an Error code 2 from the debian/rules file and I have no clue how to fix it. The following link is the source package of the software I'm trying to add to my PPA: http://goo.gl/GjZvd The following link is the build log for the failed package on Launchpad: http://goo.gl/6A2rQ I would greatly appreciate any suggestions. Thank you for your time.
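
    For orientation: "Error code 2" is just make reporting that a target in debian/rules exited non-zero, so the real failure will be earlier in the linked build log. As a point of comparison, a minimal debhelper-style debian/rules (a sketch; whether it suits this package depends on its debian/control and compat level) is only:

        #!/usr/bin/make -f
        # debhelper 7+: dh sequences the standard configure/build/install
        # steps for both i386 and amd64. The dh line must be indented
        # with a tab, as in any makefile.
        %:
        	dh $@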

  • Mobile My Oracle Support 6.3 Release is live!

    - by JanSyss
    We released Mobile My Oracle Support 6.3 last Saturday (13-Oct-2012), including 10 enhancements and almost 40 bug fixes. Mobile My Oracle Support is My Oracle Support's web application optimized for mobile devices, used to manage your Service Requests and your On Demand Requests for Change (RFCs), to search Support's Knowledge Base, Bug database, or Sun System Handbook, and to manage your pending user requests (CUA). You can find the application at http://support.oracle.mobi or get redirected from http://support.oracle.com when using a mobile device.

    Overall: several UI optimizations in different screens.

    Service Request area: shows the platinum icon for Platinum SRs and the restore status for Platinum Sev 1s; email sent with the Share functionality now contains links to Mobile MOS and the full site.

    Knowledge Management area: Advanced Search can now search the Sun System Handbook; better rendering of KB documents to avoid horizontal scrolling where possible.

    Don't hesitate to share your feedback, comments, or requests.

  • Is it possible to track redirects to external sites from our subdomains?

    - by ChaBuku
    I have a handful of subdomains set up as redirects because we are using them for QR codes. I want to be able to track the QR code redirects (which are already set up and printed, so no changing them at this point) and see the effectiveness of each. Here are two examples: http://qr.glorkianwarrior.com and http://ad.glorkianwarrior.com are set up to forward to our iTunes page (later this year they may forward to Google Play or a specific landing page). Is there any way on my server to track the redirect from the subdomain to iTunes and see where traffic is coming from first? I have the redirects set up through cPanel presently, using subdomains. Edit: From the research I've seen, I can't track a 301 directly. If I redirect to an internal page and then do a timed redirect to the iTunes link, how long will it take for the tracking script to register a hit?
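
    One common workaround, sketched below with an assumed log path and destination URL: point each QR subdomain at a small server-side script that records the hit and immediately issues the redirect. Because the logging happens server-side, there is no landing page, no timed redirect, and no waiting for a JavaScript tracker to fire.

        <?php
        // redirect.php - hypothetical logger for a QR-code subdomain.
        // Append one line per hit, then send the visitor on to iTunes.
        $line = sprintf(
            "%s\t%s\t%s\t%s\n",
            date('c'),
            $_SERVER['HTTP_HOST'],   // which QR subdomain was scanned
            isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '-',
            isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-'
        );
        file_put_contents('/home/account/qr-hits.log', $line, FILE_APPEND | LOCK_EX);

        header('Location: http://itunes.apple.com/app/example', true, 302);
        exit;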

  • .htaccess 301 Redirect for wildcard subdomains

    - by Steve
    I run WordPress in network mode, which means I can have multiple websites running off one installation of WordPress. Each website runs as a subdomain. WordPress handles this using .htaccess and a wildcard subdomain pointing to the location of WordPress, so there are no actual subdomains created in cPanel; just a wildcard subdomain in cPanel, and WordPress handles the rest. I want to 301 redirect http://one.example.com/portfolio to http://two.example.com/portfolio. If I only have one .htaccess file in the web root of example.com, how do I achieve this?
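
    A sketch of one way to do it from that single .htaccess (host and path taken from the question). The HTTP_HOST condition is what stops the rule from firing on every other subdomain, and it should sit above the WordPress rewrite block so WordPress never swallows the request:

        RewriteEngine On

        # Only requests arriving for one.example.com...
        RewriteCond %{HTTP_HOST} ^one\.example\.com$ [NC]
        # ...and only /portfolio (with or without a trailing path).
        RewriteRule ^portfolio(/.*)?$ http://two.example.com/portfolio$1 [R=301,L]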

  • Avoid penalties for duplicate (multilanguage) shared hosting

    - by Dave
    My concern is about SEO. Let me explain the scenario. I am making a three-language website. Development is going fine, but I am targeting local customers with one domain and international (English-version) visitors with another. E.g.: local, http://www.minhalojadesapatos.com.br (this is not the real website, just an example!); other, http://www.myshoesstore.com.br. Both domains point to exactly the same hosting and content, but when a user comes through the local domain the default language is set to Portuguese; otherwise the default is English. Language handling on the backend uses PHP sessions and cookies, so with just a click users can change the content language. How do I avoid being SEO-penalised in this context? (Yes, two domains was an ambitious choice, but the business really needs it; it is a travel agency.)
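
    Two details matter here. First, crawlers carry neither PHP sessions nor cookies, so Googlebot will only ever see each domain's default language; anything encoded purely in cookies is invisible to it. Second, the standard way to declare two domains as language variants of the same content is hreflang annotations; a sketch using the stand-in domains from the question:

        <!-- In the <head> of each page, on both domains: -->
        <link rel="alternate" hreflang="pt-BR" href="http://www.minhalojadesapatos.com.br/" />
        <link rel="alternate" hreflang="en" href="http://www.myshoesstore.com.br/" />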

  • Removing surrounding noises from voice recording

    - by Peak Reconstruction Wavelength
    I have a wave file whose frequency spectrum looks like this: http://i.stack.imgur.com/2rRaS.png It contains audio which I want to keep while removing the rest. The problem is that the surrounding noise changes over time, while the distinct voice patterns remain. I marked the voice patterns for clarity: http://i.stack.imgur.com/eLkBl.png What could an algorithm, or a workflow in Adobe Audition, look like that removes everything but the voice patterns? I think the main characteristic is the line-shaped form over time. Loudness alone is not enough, as the noise is loud as well.
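
    One family of approaches is spectral gating: estimate a noise profile, then mute time-frequency bins that don't rise clearly above it (roughly what Audition's Noise Reduction effect does from a captured noise print). A rough Python sketch; the file name, the "first second is noise only" assumption, and the threshold are all made up for illustration:

        import numpy as np
        from scipy.io import wavfile
        from scipy.signal import stft, istft

        rate, audio = wavfile.read('recording.wav')   # assumed mono 16-bit
        audio = audio.astype(np.float64)

        f, t, spec = stft(audio, fs=rate, nperseg=1024)

        # Assume the first second contains noise only; average its
        # magnitude per frequency bin to get a noise profile.
        noise_profile = np.mean(np.abs(spec[:, t < 1.0]), axis=1, keepdims=True)

        # Keep bins that exceed the noise floor by a margin; mute the rest.
        mask = np.abs(spec) > 3.0 * noise_profile
        _, cleaned = istft(spec * mask, fs=rate, nperseg=1024)

        wavfile.write('cleaned.wav', rate, cleaned.astype(np.int16))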

  • Starting Android development

    - by Tamim Ad Dari
    I am considering learning Android development. I have some basic knowledge of C++. I downloaded the ADT plugin and Eclipse. While starting from http://developers.android.com I saw that the code was in XML, so I googled how to learn XML. The best site I found was http://www.w3schools.org, but there I found that to learn XML I had to learn HTML and CSS first, so I learned the basics of those. But now I find that learning Java is a must. Can someone suggest the sequence of languages I should study? Should I learn PHP and MySQL too? BTW, I have a dream to work at Google :p

  • Problem with NVIDIA G86 on Kubuntu 12.04

    - by Stefan
    I got problems some weeks ago with my NVIDIA G86 (8500 GT), supposedly due to the infamous 295.40 version of the driver. I got error messages like NVRM: RmInitAdapterFailed. I tried various suggestions about setting kernel ACPI and memory options, but no luck. I pulled in x-swat and got 302.17, if I remember correctly. It did not help. People recommended xorg-edgers, so I pulled that in and got kernel 3.5.0.12 and nvidia 304.43, but the problem remained. Getting slightly panicked, I tried to get back to vanilla 12.04, so I purged nvidia* and located and removed anything on the system that smelled of NVIDIA. I installed nouveau, because people said it was great, but as it turns out my card does not seem to be supported. :-( Sigh... So now I fear that I have a messed-up system, and graphics is terrible. Any help would be appreciated. Xorg.0.log: http://paste.ubuntu.com/1189616/ kern.log: http://paste.ubuntu.com/1189634/

  • Problem showing my website correctly in search engines

    - by dinbrca
    Hello guys, I have a website which I indexed on Google about 15 days ago. Some of my pages pass arguments, like this: http://www.bla.com/products.php?pro=bla&page=view I then read that passing arguments like this isn't good for SEO purposes, so I started using htaccess rewrites and changed the URLs to this form: http://www.bla.com/products/bla/*view*/ Google still shows my site as in the first link. What should I do? I thought I should wait for the search engine to crawl my site again, but nothing has happened. Thanks in advance, Din
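
    A migration like this usually needs two pieces, sketched below with the parameter names from the question: an internal rewrite so the pretty URL serves the real script, and a 301 from the old parameterised URL so the entries already in Google's index transfer to the new form instead of lingering as duplicates.

        RewriteEngine On

        # 301 the old parameterised URLs (as the client actually requested
        # them) to the new pretty form. Matching THE_REQUEST rather than
        # the query string avoids a loop with the internal rewrite below;
        # the trailing "?" drops the old query string.
        RewriteCond %{THE_REQUEST} \ /products\.php\?pro=([^&]+)&page=([^&\ ]+)
        RewriteRule ^products\.php$ /products/%1/%2/? [R=301,L]

        # Internally map the pretty URL back onto the real script.
        RewriteRule ^products/([^/]+)/([^/]+)/$ products.php?pro=$1&page=$2 [L]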

  • Setting proxy from terminal

    - by baltusaj
    I have tried changing my proxy settings in a terminal as: export HTTP_PROXY=http://10.1.3.1:8080 and export http_proxy=http://10.1.3.1:8080 but when I try to install a new package or run apt-get update, apt-get displays messages showing it is trying to connect to a previously set proxy: sudo apt-get update 0% [Connecting to 10.1.2.2 (10.1.2.2)] [Connecting to 10.1.2.2 (10.1.2.2) I have tried setting the proxy via the bashrc file but that didn't work either. As far as I remember, 10.1.2.2 was set using the GNOME GUI, but I don't have access to the GUI right now, so I am trying to set it from the terminal.
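
    Two things are worth knowing here, with a sketch of each fix below. sudo strips most environment variables by default, so an http_proxy exported in the user's shell never reaches apt-get; and apt reads its own proxy configuration, which is the likeliest home of the stale 10.1.2.2 value (grep /etc/apt/apt.conf and /etc/apt/apt.conf.d/ for it). The file name below is conventional, not mandatory:

        # Option 1: pass the proxy through sudo for a one-off run.
        sudo http_proxy=http://10.1.3.1:8080 apt-get update

        # Option 2: set it persistently for apt (check these files for
        # an old 10.1.2.2 entry first).
        echo 'Acquire::http::Proxy "http://10.1.3.1:8080/";' | \
            sudo tee /etc/apt/apt.conf.d/95proxy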

  • SharePoint Search Problem: The start address sps3://server cannot be crawled.

    - by Clara Oscura
    With this post, I'm going to start a series on problems I have encountered with SharePoint search.

    Error: The start address sps3://luapp105 cannot be crawled.

    Context: Application 'Search_Service_Application', Catalog 'Portal_Content'

    Details: Access is denied. Verify that either the Default Content Access Account has access to this repository, or add a crawl rule to crawl this repository. If the repository being crawled is a SharePoint repository, verify that the account you are using has "Full Read" permissions on the SharePoint Web Application being crawled. (0x80041205) (Event ID: 14, Task Category: Gatherer)

    Solution: give appropriate permissions to the User Profile Synchronization Service account.

    http://social.technet.microsoft.com/Forums/en-US/sharepoint2010setup/thread/64cdf879-f01e-4595-bc52-15975fefd18d
    http://www.dotnetmafia.com/blogs/dotnettipoftheday/archive/2010/03/29/how-to-set-up-people-search-in-sharepoint-2010.aspx
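
    For reference, a hedged PowerShell sketch of the permission grant the error message asks for (account name and URL are placeholders; run in the SharePoint 2010 Management Shell):

        # Grant "Full Read" on the crawled web application to the
        # crawl / profile-sync account.
        $wa = Get-SPWebApplication "http://luapp105"
        $policy = $wa.Policies.Add("DOMAIN\spcrawl", "Search Crawl Account")
        $role = $wa.PolicyRoles.GetSpecialRole([Microsoft.SharePoint.Administration.SPPolicyRoleType]::FullRead)
        $policy.PolicyRoleBindings.Add($role)
        $wa.Update()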

  • Question about mod_rewrite rule for redirecting failing pages

    - by SimpleCoder
    I'm setting up a mod_rewrite rule that redirects failing pages to a custom Page Not Found page. This is with WordPress. I'm using the guide here: http://httpd.apache.org/docs/2.2/rewrite/rewrite_guide_advanced.html#redirect404 My rule so far looks like this: RewriteCond %{REQUEST_FILENAME} !-f RewriteRule ^(.+) http://example.com/?page_id=254 [R] This works. It seems to be a combination of the first and second suggestions that worked, since the -U flag did nothing. My question, out of curiosity, is why the following happens: when I change REQUEST_FILENAME to REQUEST_URI (as the second example suggests), the page loads, but none of the style sheets load. All of my formatting is gone, and this happens on every page. Can anyone think of why this might happen?
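
    A plausible explanation, with a sketch: the -f test checks whether the test string names an existing file on disk. REQUEST_FILENAME is the request already mapped to a full filesystem path, so the test is meaningful; REQUEST_URI is only the path portion of the URL (e.g. /wp-content/themes/style.css), which rarely exists when treated as a literal filesystem path, so with REQUEST_URI nearly every request - stylesheets included - "fails" the existence test and gets rewritten to the page_id=254 page. The usual belt-and-braces form also excludes real directories:

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+) http://example.com/?page_id=254 [R]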

  • How to prevent a 404 Error when creating a subdomain and using www to access it?

    - by Chris
    I have installed a multi-site installation of WordPress onto my domain. I then added the necessary code to the wp-config.php file and .htaccess as instructed by WordPress. I also installed a plugin called Quick Page/Post Redirect Plugin, which allowed me to place a 301 redirect onto the main domain, as I only want to use the subdomain and not the main domain. Then I added the following line of code to the wp-config.php file to redirect the main domain: define( 'NOBLOGREDIRECT', 'URL Redirect Address' ); The site works fine with a redirect on the main domain, and my subdomain runs fine when you type in subdomain.example.com or http://subdomain.example.com. However, when I enter www.subdomain.example.com or http://www.subdomain.example.com the following error message is returned: Not Found The requested URL / was not found on this server. Apache/2.4.9 (Unix) Server at www.subdomain.domain.com Port 80 Any help with this would be much appreciated.
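
    A sketch of one way to handle the www-on-subdomain case in the multisite .htaccess (domain names assumed). Note the request must actually reach the right place first: the wildcard DNS record and the vhost's aliases have to cover www.subdomain.example.com as well, since a bare Apache 404 like the one above often means the name resolves but nothing is configured to serve it. Once the request arrives, this strips the leading www:

        RewriteEngine On

        # 301 www.subdomain.example.com/... to subdomain.example.com/...
        # Place above the WordPress multisite rules.
        RewriteCond %{HTTP_HOST} ^www\.([^.]+\.example\.com)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]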

  • What measures can be taken to make sure Google is aware of the existence of a newly created page?

    - by knorv
    Consider a website with a large number of pages. New pages are published regularly. When publishing a new page, the website operator wants to get the newly created page indexed in Google as soon as possible, minimizing the time between publication and indexing. Consider the site http://www.example.com/ with hundreds of thousands of pages. The page http://www.example.com/something/important-page.html is created at, say, 12:00. I want to get important-page.html indexed as soon as possible after 12:00 - ideally within seconds or minutes. What options are available to try to get Google to index a specific newly created page as soon as possible?
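
    One concrete lever, sketched below with an assumed sitemap URL: list the new page in an XML sitemap with a fresh lastmod the moment it is published, then ping Google so the sitemap is re-fetched promptly - in addition to the usual advice of linking the new page from a frequently crawled page such as the home page.

        # Ping Google right after regenerating sitemap.xml with the new
        # URL (endpoint as documented at the time of writing):
        curl "http://www.google.com/webmasters/tools/ping?sitemap=http://www.example.com/sitemap.xml"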

  • No sound in any web browser(s)

    - by shaneo
    Hello, I recently tried to compile and update ALSA from source via this guide: http://www.stchman.com/alsa_update.html Afterwards, in every web browser I open - Firefox, Opera, Chrome, Chromium, Iron - there is no sound on any page. I went back through the script listed on the site, found where it had installed the drivers, deleted them, and then reinstalled ALSA via Synaptic. I still have no sound in my browsers. All system sounds work as they are supposed to; only web sounds don't. Here is my alsa-base.conf: http://paste.ubuntu.com/1073135/ (a snapshot of alsamixer was also attached). Any assistance would be greatly appreciated. Thank you, and let me know if any more information is required.

  • Tales from the Coal Face - Reporting errors

    - by TATWORTH
    One of the questions that comes up frequently is, "Is it worthwhile to report errors?" Last weekend, after installing the latest StyleCop, I loaded up my copy of Power Collections. I found that StyleCop was now correctly picking up a lot of missing "this." prefixes; however, there were now a number of false positives. Anticipating the need to submit sample code, I cleaned the solution and zipped it up. I reported this at http://stylecop.codeplex.com/discussions/357319. The StyleCop administrator promoted the report to a work item (see http://stylecop.codeplex.com/workitem/7285) and I uploaded the previously prepared zip file. The StyleCop team was able to locate the problem, and it is "Fixed in upcoming 4.7.27". The conclusion: report errors! And prepare sample code illustrating the error.

  • I can not download anything

    - by Jason Machen
    I am very new to Ubuntu but decided to wipe my Windows 7 and install it. I cannot download anything from the Software Center. This is the error message I get (I can use the web in all other ways, including this site). What can I do? Thanks, Jason W:Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/main/source/Sources 404 Not Found [IP: 91.189.91.13 80] W:Failed to fetch http://security.ubuntu.com/ubuntu/dists/raring-security/restricted Plus about 20 other lines.
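
    One possibility worth checking, sketched below: raring (13.04) was a short-support release, and once an Ubuntu release reaches end of life its packages move to the old-releases archive, after which every fetch from the normal mirrors 404s much like this. If that turns out to be the cause here, the conventional repointing is:

        # Back up, then point the sources at old-releases.
        sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
        sudo sed -i \
            -e 's|http://[a-z.]*archive\.ubuntu\.com|http://old-releases.ubuntu.com|g' \
            -e 's|http://security\.ubuntu\.com|http://old-releases.ubuntu.com|g' \
            /etc/apt/sources.list
        sudo apt-get update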

  • Read data from a folder in main domain folder (CPanel\WHM)

    - by Memphis Raines
    I have defined a host on my cPanel/WHM server and put all my websites under one hosting account. The account's main domain is domain.com, and all other websites are add-on domains: domain.com --folder --domain1 --domain2 --domain3 ... What I need is that when domain.com is called in a browser, the server reads files from another folder. For example, a call to http://domain.com should show the content of http://domain.com/folder, BUT I don't mean a redirection - I want the server to do this in the background, without showing visitors the real path. I couldn't do this with Domain WildCard Redirection because it produced an error. How can I do this? With .htaccess, or something else?
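
    This is the classic "serve the primary domain from a subfolder" rewrite; a sketch for the .htaccess in the account's web root, with the folder and domain names taken from the question. The host condition keeps the add-on domains out of it, and because there is no R flag the rewrite is internal, so visitors never see /folder in the address bar:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^(www\.)?domain\.com$ [NC]
        # Don't rewrite requests that already point into the folder.
        RewriteCond %{REQUEST_URI} !^/folder/
        RewriteRule ^(.*)$ /folder/$1 [L]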

  • JDeveloper does not recognize existing subversion working directory

    - by Bob Webster
    Just a quick note about an issue where JDeveloper no longer recognized an existing Subversion working directory.

    Symptom: the JDeveloper Versioning menu offers to version an application that is already versioned in svn.

    Cause: the repository URL contained in the hidden .svn folders of the working directory is no longer valid.

    Solution: determine the correct URL for the Subversion repository and update the .svn working directory, fixing the URL with the svn switch command.

    Example: in a shell, change directory to the application folder and run the svn info command to confirm the current settings:

        $ svn info
        Path: .
        URL: http://192.168.1.128/repos/jdeveloperrepo/AsyncExamples/BPELCallAsync/trunk
        Repository Root: http://192.168.1.128/repos/jdeveloperrepo
        Repository UUID: 3dc5eb88-3001-0010-8d6e-fd6f73825647
        Revision: 145
        Node Kind: directory
        Schedule: normal
        Last Changed Rev: 145
        Last Changed Date: 2012-06-07 07:15:56 -0700 (Thu, 07 Jun 2012)

    In this case, the IP address in the repository URL is incorrect; the svn server is located at 192.168.56.1. (The IP address currently set is displayed after the project name in the Application Navigator - see the screen snapshot in the original post.)

    Run the svn switch command with the --relocate option, providing as much of the URLs as necessary to correctly rewrite the URL from current to new. For example, to change the repository server address from 192.168.1.128 to 192.168.56.1:

        $ svn switch --relocate http://192.168.1.128 http://192.168.56.1 .

    (Note the trailing period in the above command.) When the URL is correct, JDeveloper should recognize the Subversion working directory.

  • Figuring out complex REST queries for SharePoint

    - by Sahil Malik
    A little while ago, I showed the REST query for a relatively complex query. Some readers have emailed me asking how to figure out further queries, especially for complex insert/delete/update scenarios. Well, it is quite easy to figure out almost any query for the SharePoint REST API. Okay, this is not just about SharePoint - you can apply what you read here to any REST API interface supported by Microsoft, like WCF Data Services. But sometimes, when you have many columns, or complex update operations, or are working with weird providers, it is tough to figure out the specific HTTP request you need to craft, error free, using REST. Well, fear not, there is hope. As an example, I created a SharePoint site at http://sp2010.winsmarts.internal/sampledata with 3 lists in it - 1. Artists (with one column, Title) ...
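
    For context, a sketch of what a query against that list looks like via SharePoint 2010's REST endpoint, listdata.svc (site URL and list name from the post; the filter itself is an invented example):

        # Fetch Artists whose Title starts with "B", two at a time, as JSON:
        curl --ntlm --user 'DOMAIN\user:password' \
            -H 'Accept: application/json' \
            "http://sp2010.winsmarts.internal/sampledata/_vti_bin/listdata.svc/Artists?\$filter=startswith(Title,'B')&\$top=2"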

  • URL parameter names being changed by user agents

    - by Mike Deck
    In reviewing one of our site's web logs, I'm seeing instances where we return a 404 to requests because we expect an id parameter but instead receive a di parameter. The resource in question is an image, and which image file actually gets served depends on the id parameter. The expected URL is something like http://images.mysite.com/photo.gif?id=123&width=200&height=300 What I'm seeing in the logs is requests for http://images.mysite.com/photo.gif?di=123&width=200&height=300 We are only seeing this on the id parameter. It seems unlikely that this is due to a server-side or JavaScript bug, since it only affects a small percentage of our traffic. We are seeing this across a wide variety of user agents (both mobile and desktop) and IPs. Has anyone else seen this? Is there a browser plugin or other software you're aware of that could be causing this, and if so, is there a good way to work around the issue?
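
    Whatever the culprit (a proxy or browser add-on rewriting query strings is one guess, though that is speculation), a defensive sketch on the serving side costs little: accept the transposed parameter and log it, so the offending user agents can be correlated later. Hypothetical PHP, assuming the image handler is a script:

        <?php
        // Fall back to the transposed "di" parameter when "id" is absent.
        if (isset($_GET['id'])) {
            $id = $_GET['id'];
        } elseif (isset($_GET['di'])) {
            $id = $_GET['di'];
            error_log('di= instead of id= from UA: '
                . (isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-'));
        } else {
            header('HTTP/1.1 404 Not Found');
            exit;
        }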

  • Is there a way I can sort traffic by page type based upon URL structure in Google Analytics or Google Webmaster Tools?

    - by Felix
    I have a local business directory site. I'm trying to segment my incoming traffic by page type, so that I can find out what percentage of traffic goes to zip code pages exclusively and what percentage goes to city/state-level pages. I basically want to filter by URL structure to find out what percentage of total traffic the zip code pages account for. The reason for doing this is to find out if... Does Google Tag Manager help with this? Here are the two URL paths: http://www.example.com/ny/new-york/10011/ http://www.example.com/ny/new-york Thanks all!
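
    In Google Analytics this kind of split is typically done with advanced segments (or custom report filters) matching regular expressions against the Page dimension - no Tag Manager needed for a purely URL-based breakdown. Sketched patterns derived from the two example paths above (adjust to the real URL scheme):

        # Zip code pages: /state/city-slug/5-digit-zip/
        ^/[a-z]{2}/[^/]+/[0-9]{5}/?$

        # City/state pages: /state/city-slug with nothing after.
        ^/[a-z]{2}/[^/]+/?$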
