Search Results

Search found 20873 results on 835 pages for 'url fetch'.

Page 11/835

  • Can I use ZoneEdit to do a URL rewrite?

    - by chilly-child
    This is our scenario: our DNS is hosted by one company, but they don't manage it; we use ZoneEdit (www.zoneedit.com) to manage the records themselves (nameservers, CNAMEs, etc.). Then we have our web host, where we just have our files hosted, and we have a subdomain created in ZoneEdit. We would like to do a URL rewrite so that subdomain.ourdomain.com is displayed as www.ourdomain.com/subdomain. Do I use ZoneEdit to do the URL rewrite, or the web host, or the DNS host? I checked the ZoneEdit docs but I could not find a way to do a URL rewrite. Need some advice. Thanks
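
    A note on where this belongs: DNS services such as ZoneEdit only map hostnames to addresses, so path-level rewriting has to happen at the web host. A minimal sketch of the idea as an .htaccess rule, assuming the host runs Apache with mod_rewrite enabled:

        RewriteEngine On
        # requests arriving for the subdomain get sent to the path form
        RewriteCond %{HTTP_HOST} ^subdomain\.ourdomain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.ourdomain.com/subdomain/$1 [R=301,L]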

    Read the article

  • Disabling URL decoding in nginx proxy

    - by Tomasz Nurkiewicz
    When I browse to this URL: http://localhost:8080/foo/%5B-%5D the server (nc -l 8080) receives it as-is:

        GET /foo/%5B-%5D HTTP/1.1

    However, when I proxy this application via nginx:

        location /foo {
            proxy_pass http://localhost:8080/foo;
        }

    the same request routed through the nginx port is forwarded with the path decoded:

        GET /foo/[-] HTTP/1.1

    The decoded square brackets in the GET path cause errors in the target server (HTTP Status 400 - Illegal character in path...) because they arrive unescaped. Is there a way to disable URL decoding, or to encode the path again, so that the target server gets exactly the same path when routed through nginx? Some clever URL rewrite rule?
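
    One commonly suggested workaround, sketched here on the assumption that the upstream expects the same /foo prefix: when proxy_pass is given with no URI part, nginx forwards the client's original request URI untouched, still percent-encoded.

        location /foo {
            # no URI after the upstream address, so the original
            # (still-encoded) request URI is passed through as-is
            proxy_pass http://localhost:8080;
        }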

    Read the article

  • Shortcut to open URL from clipboard

    - by good boy
    I want to create a shortcut that opens links from the clipboard. I frequently switch browsers, and it's very annoying to copy and paste hundreds of URLs from one to another. I have created a shortcut to launch a page in each browser, but how can I make the URL field take its data from the clipboard, so that when I copy a URL and click the shortcut, it opens the URL that is currently on the clipboard? If this is not possible, is there an AutoHotkey script or something similar that can accomplish this? I would prefer a desktop shortcut, but whatever works.
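
    AutoHotkey can do this in one line; a minimal v1 sketch, assuming the clipboard holds a bare URL (the Ctrl+Alt+O hotkey is an arbitrary choice, and a desktop shortcut pointing at the script gives the click-to-open behaviour):

        ; Ctrl+Alt+O: open whatever URL is currently on the clipboard
        ; in the default browser
        ^!o::Run, %Clipboard%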

    Read the article

  • Remove Trailing Slash from WordPress URL (The site also doesn't have www)

    - by mrintech
    I need help, as I am quite confused by .htaccess. Some months back I removed www from my domain's URLs using the following .htaccess lines:

        RewriteCond %{HTTP_HOST} !^example.com$ [NC]
        RewriteRule ^(.*)$ http://example.com/$1 [L,R=301]

    Now I also want to remove the trailing slash from the URL, because I am using WordPress and a page/post will open whether or not there is a trailing slash. Please provide the .htaccess code to remove the trailing slash, keeping in mind that I don't want www either and have already set the rule above for its removal. Note: three years back, when I started the blog, I set the permalink structure without a trailing slash. Now Google Webmaster Tools is suddenly showing warnings, and the URL in rel="canonical" is without the trailing slash. If you require any more details, I will be happy to provide them.
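
    A minimal sketch of a trailing-slash removal rule, assuming Apache mod_rewrite; the condition keeps real directories working, and it can sit alongside the www-removal rule above:

        # redirect /some-post/ to /some-post, but leave actual directories alone
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.+)/$ http://example.com/$1 [R=301,L]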

    Read the article

  • Squid URL rewrites: https to http

    - by bobfran
    I'm exploring some uses of Squid proxy 2.7, and I have seen a good number of examples of URL rewriters that take URLs such as http://somesitename.com and change them to https://somesitename.com. Those examples work great. What I'm wondering, though, is whether it's possible to do the reverse with a Squid URL rewriter, that is, to go from https://somesitename.com to http://somesitename.com. Simply editing the script file that handles the rewrites doesn't seem to do the trick, so I was wondering whether there are certain things I have to configure in Squid first, if what I'm asking is even possible. I have my browser manually set up to use Squid as a proxy for all requests, and I can see HTTPS requests showing up in my squid access.log file (via the CONNECT method).

    Read the article

  • How to get variables from a URL in Flash AS3

    - by Leon
    So I have a URL from which my Flash movie needs to extract variables. Example link: http://www.example.com/example_xml.php?aID=1234&bID=5678 - I need to get the aID and bID numbers. I'm able to get the full URL into a String via ExternalInterface:

        var url:String = ExternalInterface.call("window.location.href.toString");
        if (url) testField.text = url;

    I'm just unsure how to manipulate the String to get just the 1234 and 5678 values. I'd appreciate any tips, links or help with this!
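
    A minimal AS3 sketch, assuming the query string is always present; flash.net.URLVariables takes care of splitting and decoding the name/value pairs:

        import flash.net.URLVariables;

        var query:String = url.split("?")[1];            // "aID=1234&bID=5678"
        var vars:URLVariables = new URLVariables(query); // decodes name/value pairs
        var aID:String = vars.aID;                       // "1234"
        var bID:String = vars.bID;                       // "5678"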

    Read the article

  • jquery - get url path?

    - by KittyYoung
    I know I can use window.location.pathname to return the path part of the URL, but how do I parse it? I have a URL like this: http://localhost/messages/mine/9889 and I'm trying to check whether "mine" exists in it. If "mine" is the second segment of that path, I want to write an if statement based on that: if (second segment == 'mine') { do something }
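
    A sketch in plain JavaScript (no jQuery needed), assuming paths always have the /messages/mine/9889 shape; note that splitting on "/" leaves an empty first element because the path starts with a slash:

        var segments = window.location.pathname.split('/');
        // "/messages/mine/9889" -> ["", "messages", "mine", "9889"]
        if (segments[2] === 'mine') {
            // do something
        }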

    Read the article

  • URL for SEO link

    - by k0ni
    Hi, I need a function (C#) or regular expression that makes a nice URL out of a string and replaces invalid characters, something like here on Stack Overflow. Example: "Short URL or long URL for SEO" becomes short-url-or-long-url-for-seo. Thanks
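
    A minimal C# sketch of the usual approach: lower-case the string, strip everything except letters, digits, spaces and hyphens, then collapse runs into single hyphens. A production version would also transliterate accented characters first.

        using System.Text.RegularExpressions;

        static string ToSlug(string title)
        {
            string slug = title.ToLowerInvariant();
            slug = Regex.Replace(slug, @"[^a-z0-9\s-]", "");  // drop invalid characters
            slug = Regex.Replace(slug, @"[\s-]+", "-");       // collapse spaces/hyphens
            return slug.Trim('-');
        }

        // ToSlug("Short URL or long URL for SEO") -> "short-url-or-long-url-for-seo"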

    Read the article

  • URL Routing with WebForms, what to do with additional querystrings like page number, filter, etc?

    - by Scott
    I am switching from Intelligencia's UrlRewriter to the new Web Forms routing in ASP.NET 4.0. I have it working great for basic pages; however, in my e-commerce site, when browsing category pages, I previously used query strings built into my pager control to control paging. An old URL (with URL rewriting) would be http://www.mysite.com/Category/Arts-and-Crafts_17 and I have a MapPageRoute defined in global.asax as:

        routes.MapPageRoute("category-browse", "Category/{name}_{id}", "~/CategoryPage.aspx");

    This works great. Now somebody clicks to go to page 2. Previously I would have just tacked on ?page=2 as the query string. How do I handle this using Web Forms routing? I know I can do something like http://www.mysite.com/Category/Arts-and-Crafts_17/page/2 but in addition to page I can have filters, age ranges, gender, and so on. Should I keep defining routes that handle these variables, or should I continue using query strings, and can I define a route that allows me to use my query strings as before?
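
    Route patterns in ASP.NET match only the path, so a query string travels alongside the route untouched; a sketch, assuming the route above, of reading both kinds of value inside CategoryPage.aspx:

        // /Category/Arts-and-Crafts_17?page=2&gender=f still matches the
        // "category-browse" route; the query string is simply preserved
        string id   = (string)Page.RouteData.Values["id"];  // "17"
        string page = Request.QueryString["page"];          // "2", or null if absent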

    Read the article

  • REST/ROA Architecture - Send database search/querying/filter/sorting parameters in URL? (>, <, IN, etc.)

    - by DutrowLLC
    I'm building a REST interface to my application using ROA (Resource Oriented Architecture). I'd like to give the client the ability to specify search parameters in the URL, so a client could say "give me all people whose first_name is equal to BOB and whose age is greater than 30, sorted by last_name". I was thinking of something like:

        GET /PEOPLE/{query_parameters}/{sort_parameters}

    or perhaps:

        GET /PEOPLE?query=<query_string>&sort=<sort_string>

    but I'm unsure what syntax would be good for specifying the COLUMN_NAME-OPERATOR-VALUE triplets. I was thinking perhaps something like column_name.operator.value, so the client could say:

        GET /PEOPLE?query=first_name.EQUALS.bob&query=age.GREATER_THAN.30&sort=last_name.ASCENDING

    I really don't want to reinvent the wheel here; are there accepted ways of doing this? I am using Restlets, I don't know if that makes a difference.
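
    There is no single standard, but one common shape keeps the column as the parameter name and folds the operator into the value; a hypothetical request in that style (the same operator-in-value idea that PostgREST uses, for example):

        GET /people?first_name=eq.bob&age=gt.30&order=last_name.asc

    FIQL/RSQL is another established answer to the same problem, with expressions such as age=gt=30 carried inside a single filter parameter.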

    Read the article

  • Is it advisable to have non-ascii characters in the URL?

    - by Ravi Gummadi
    We are currently working on an I18N project. I was just wondering what the complications are of having non-ASCII characters in the URL. If it's not advisable, what are the alternatives for dealing with this problem? EDIT (in response to Maxym's answer): The site is going to be local to a specific country and I need not worry about the worldwide public accessing it. I understand that from a usability point of view it is really annoying. What other technical problems are associated with this?
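
    For context, browsers transmit non-ASCII path characters as UTF-8 percent-encoded bytes, so the two forms below identify the same resource; the encoded form is what older clients, logs and some proxies will show (the URL is hypothetical):

        http://example.com/café
        http://example.com/caf%C3%A9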

    Read the article

  • Mod_rewrite Mask URL

    - by user37143
    Hi, I need to rewrite https://mydomain1.com/changepassword to https://mydomain2.com/ssp, then hide the mydomain2.com/ssp URL in the browser and keep showing https://mydomain1.com/changepassword. I got the first part working with this rule:

        Options +FollowSymLinks
        RewriteEngine on
        RewriteRule /changepassword https://mydomain2.com/ssp/ [R=301,L]

    However, it shows the https://mydomain2.com/ssp URL in the browser. How can I mask this and show https://mydomain1.com/changepassword? Please support. Thanks.
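
    Masking requires proxying rather than redirecting: R=301 tells the browser to fetch the new URL itself, while the [P] flag makes Apache fetch it server-side and leaves the address bar alone. A sketch, assuming mod_proxy is enabled:

        RewriteEngine on
        # [P] proxies instead of redirecting; an https backend additionally
        # needs "SSLProxyEngine on" in the vhost configuration
        RewriteRule ^/?changepassword$ https://mydomain2.com/ssp/ [P,L]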

    Read the article

  • Load balancers, multiple data centers and URL-based routing

    - by kunkunur
    There is one data center, dc1. There is a business need to set up another data center, dc2, in another geography, and there might be more in the future, say dc3.

    Within data center dc1 there are two web servers, say WS1 and WS2. These two web servers do not currently share anything, and there is no foreseen need for more web servers within each DC. dc1 also has a local load balancer set up with session stickiness, so if a user u1 lands on dc1 and the load balancer routes his first request to WS1, then all of u1's requests from then on get routed to WS1. The local load balancer and the web servers are invisible to the user: the load balancer listens for traffic on a virtual IP assigned to the virtual cluster of web servers WS1 and WS2, and that virtual IP is what the host name resolves to in DNS. There are no client-specific subdomains as of now; instead there is a client-specific URL (context), e.g. www.example.com/client1 and www.example.com/client2.

    Given the above, when dc2 is onboarded I want to route traffic between dc1 and dc2 based on the client. The options I have found so far are:

    1. Have client-specific subdomains, e.g. client1.example.com and client2.example.com, and assign each of them the virtual IP of the data center to which I want to route that client.

    2. Assign www.example.com and www1.example.com to the first data center, i.e. dc1, and assign www2.example.com to dc2. All requests first get routed to dc1, where WS1 and WS2 redirect the user to www1.example.com or www2.example.com based on whether the URL ends with /client1 or /client2.

    I need help with the following. If I set up a global load balancer between dc1 and dc2, do I have any alternative solutions; that is, can a global load balancer route traffic based on the URL? And are there drawbacks to the subdomain-based solution compared with the www1 solution? With the www1 solution I am worried that it creates a dependency on dc1, at least for the first request, and that the user will see he is getting redirected to a different URL.

    Read the article

  • Curly braces in a URL

    - by lipton
    I have often come across URLs like the following: http://www.isthisahacker.com/{7B643FB915-845C-4A76-A071-677D62157FE07D}.htm - do the curly braces in the URL above indicate some kind of attempt to access the registry, or is that a legitimate URL? It looks kind of suspicious to me.

    Read the article

  • Problem with URL encoded parameters in URL view helper

    - by Richard Knop
    So my problem is kind of weird: it only occurs when I test the application offline (on my PC with WampServer); the same code works 100% correctly online. Here is how I use the helper (just an example):

        <a href="<?php echo $this->url(array('module' => 'admin', 'controller' => 'index', 'action' => 'approve-photo', 'id' => $this->escape($a->id), 'ret' => urlencode('admin/index/photos')), null, true); ?>" class="blue">approve</a>

    Online, this link works great; it goes to the action, which looks similar to this:

        public function approvePhotoAction()
        {
            $request = $this->getRequest();
            $photos = $this->_getTable('Photos');
            $dbTrans = false;
            try {
                $db = $this->_getDb();
                $dbTrans = $db->beginTransaction();
                $photos->edit($request->getParam('id'), array('status' => 'approved'));
                $db->commit();
            } catch (Exception $e) {
                if (true === $dbTrans) {
                    $db->rollBack();
                }
            }
            $this->_redirect(urldecode($request->getParam('ret')));
        }

    So online it approves the photo and redirects back to the URL encoded in the "ret" param ("admin/index/photos"). But offline with WampServer I click the link and get a 404 error like this:

        Not Found
        The requested URL /public/admin/index/approve-photo/id/1/ret/admin/index/photos was not found on this server.

    What the hell? I'm using the latest version of WampServer (WampServer 2.0i [11/07/09]). Everything else works. Here is my .htaccess file, just in case:

        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ /index.php [NC,L]
        # Turn off magic quotes
        #php_flag magic_quotes_gpc off

    I'm using virtual hosts to test ZF projects on my local PC. Here is how I add virtual hosts. httpd.conf:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName myproject
            DocumentRoot "C:\wamp\www\myproject"
        </VirtualHost>

    The hosts file:

        127.0.0.1 myproject

    Any help would be appreciated, because this makes testing and debugging projects on my localhost a nightmare and an almost impossible task; I have to upload everything online to check whether it works :(

    UPDATE: Source code of the _redirect helper (built-in ZF helper):

        /**
         * Set redirect in response object
         *
         * @return void
         */
        protected function _redirect($url)
        {
            if ($this->getUseAbsoluteUri() && !preg_match('#^(https?|ftp)://#', $url)) {
                $host  = (isset($_SERVER['HTTP_HOST']) ? $_SERVER['HTTP_HOST'] : '');
                $proto = (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] !== "off") ? 'https' : 'http';
                $port  = (isset($_SERVER['SERVER_PORT']) ? $_SERVER['SERVER_PORT'] : 80);
                $uri   = $proto . '://' . $host;
                if ((('http' == $proto) && (80 != $port)) || (('https' == $proto) && (443 != $port))) {
                    $uri .= ':' . $port;
                }
                $url = $uri . '/' . ltrim($url, '/');
            }
            $this->_redirectUrl = $url;
            $this->getResponse()->setRedirect($url, $this->getCode());
        }

    UPDATE 2: Output of the helper offline:

        /admin/index/approve-photo/id/1/ret/admin%252Findex%252Fphotos

    And online (the same):

        /admin/index/approve-photo/id/1/ret/admin%252Findex%252Fphotos

    UPDATE 3: OK. The problem was actually with the virtual host configuration. The document root was set to C:\wamp\www\myproject instead of C:\wamp\www\myproject\public, and I was using this .htaccess to redirect to the public folder:

        RewriteEngine On
        php_value upload_max_filesize 15M
        php_value post_max_size 15M
        php_value max_execution_time 200
        php_value max_input_time 200
        # Exclude some directories from URI rewriting
        #RewriteRule ^(dir1|dir2|dir3) - [L]
        RewriteRule ^\.htaccess$ - [F]
        RewriteCond %{REQUEST_URI} =""
        RewriteRule ^.*$ /public/index.php [NC,L]
        RewriteCond %{REQUEST_URI} !^/public/.*$
        RewriteRule ^(.*)$ /public/$1
        RewriteCond %{REQUEST_FILENAME} -f
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^public/.*$ /public/index.php [NC,L]

    Damn it, I don't know why I forgot about this; I thought the virtual host was configured correctly to point at the public folder. I was 100% sure about that, I double-checked it and saw no problem (wtf? am I blind?).
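
    The fix from UPDATE 3, sketched as configuration: point the virtual host straight at the public folder, so the extra front-controller .htaccess (and the /public prefix it leaks into rewritten URLs) is no longer needed:

        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName myproject
            DocumentRoot "C:\wamp\www\myproject\public"
        </VirtualHost>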

    Read the article

  • PHP errors -> Warning: mysqli_stmt::execute(): Couldn't fetch mysqli_stmt | Warning: mysqli_stmt::close(): Couldn't fetch mysqli_stmt

    - by Tunji Gbadamosi
    I keep getting this error while trying to modify some tables. Here's my code (the function spans lines 320-406 of FormDB.php):

        /**
         * @param array   $guests_array
         * @param array   $tickets_array
         * @param integer $seat_count
         * @param integer $order_count
         * @param integer $guest_count
         */
        private function book_guests($guests_array, $tickets_array, &$seat_count, &$order_count, &$guest_count)
        {
            /* @var $guests_array ArrayObject */
            $success = false;
            if (sizeof($guests_array) >= 1) {
                //$this->mysqli->autocommit(FALSE);
                //insert the guests into guest, person, order, seat
                $menu_stmt = $this->mysqli->prepare("SELECT id FROM menu WHERE name=?");
                $menu_stmt->bind_param('s', $menu);
                //$menu_stmt->bind_result($menu_id);
                $table_stmt = $this->mysqli->prepare("SELECT id FROM tables WHERE name=?");
                $table_stmt->bind_param('s', $table);
                //$table_stmt->bind_result($table_id);
                $seat_stmt = $this->mysqli->prepare("SELECT id FROM seat WHERE name=? AND table_id=?");
                $seat_stmt->bind_param('ss', $seat, $table_id);
                //$seat_stmt->bind_result($seat_id);
                for ($i = 0; $i < sizeof($guests_array); $i++) {
                    $menu  = $guests_array[$i]['menu'];
                    $table = $guests_array[$i]['table'];
                    $seat  = $guests_array[$i]['seat'];
                    //get menu id
                    if ($menu_stmt->execute()) {
                        $menu_stmt->bind_result($menu_id);
                        while ($menu_stmt->fetch());
                    }
                    $menu_stmt->close();
                    //get table id
                    if ($table_stmt->execute()) {
                        $table_stmt->bind_result($table_id);
                        while ($table_stmt->fetch());
                    }
                    $table_stmt->close();
                    //get seat id
                    if ($seat_stmt->execute()) {
                        $seat_stmt->bind_result($seat_id);
                        while ($seat_stmt->fetch());
                    }
                    $seat_stmt->close();
                    $dob = $this->create_date($guests_array[$i]['dob_day'], $guests_array[$i]['dob_month'], $guests_array[$i]['dob_year']);
                    $id = $this->add_person($guests_array[$i]['first_name'], $guests_array[$i]['surname'], $dob, $guests_array[$i]['sex']);
                    if (is_string($id)) {
                        $seat = $this->add_seat($table_id, $seat_id, $id);
                        /* @var $tickets_array ArrayObject */
                        $guest = $this->add_guest($id, $tickets_array[$i+1], $menu_id, $this->volunteer_id);
                        /* @var $order integer */
                        $order = $this->add_order($this->volunteer_id, $table_id, $seat_id, $id);
                        if ($guest == 1 && $seat == 1 && $order == 1) {
                            $seat_count  += $seat;
                            $guest_count += $guest;
                            $order_count += $order;
                            $success = true;
                        }
                    }
                }
            }
            return $success;
        }

    Here are the warnings. The first guest books cleanly; after that, the six warnings below repeat and every later guest ends up registered at table (1), seat (13):

        The person PRSN10500000LZPH has been added to the guest table
        PRSN10500000LZPH added to table (1), seat (1)
        The order for person(PRSN10500000LZPH) is registered with volunteer (PRSN10500000LZPH) at table (1) and seat (1)
        PRSN10600000LZPH added to table (1), seat (13)
        The person PRSN10600000LZPH has been added to the guest table
        The order for person(PRSN10600000LZPH) is registered with volunteer (PRSN10500000LZPH) at table (1) and seat (13)

        Warning: mysqli_stmt::execute(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 358
        Warning: mysqli_stmt::close(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 363
        Warning: mysqli_stmt::execute(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 366
        Warning: mysqli_stmt::close(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 371
        Warning: mysqli_stmt::execute(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 374
        Warning: mysqli_stmt::close(): Couldn't fetch mysqli_stmt in /Users/olatunjigbadamosi/Sites/ST_Ambulance/FormDB.php on line 379

    (The same block of six warnings appears again for PRSN10700000LZPH and for PRSN10800000LZPH.)
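
    A diagnosis sketched from the warnings rather than a tested fix: the three prepared statements are closed inside the for loop, so later iterations call execute() and close() on already-freed statements (hence "Couldn't fetch mysqli_stmt"), and guests booked after the failure re-use the last successfully fetched $table_id and $seat_id (table 1, seat 13). Moving the close() calls after the loop would look roughly like:

        for ($i = 0; $i < sizeof($guests_array); $i++) {
            // ... execute(), bind_result() and fetch() exactly as before,
            // but with the three close() calls removed from the loop body ...
        }
        // free each statement once, after all iterations are done
        $menu_stmt->close();
        $table_stmt->close();
        $seat_stmt->close();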

    Read the article

  • I can't update my system properly, "no package header" error

    - by joel
    Every time I try to run sudo apt-get update, or run updates from the GUI, I run into the following problem or something similar:

        Reading package lists... Error!
        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_restricted_binary-i386_Packages
        E: The package lists or status file could not be parsed or opened.

    I've tried purging with sudo rm -rf <filename>, where <filename> is the file listed above, and then running sudo apt-get update to fix it (as listed elsewhere in this forum), and no luck; I just keep getting this message. I'm running Ubuntu 12.04 and this is getting really frustrating. I just want a system that runs smoothly and doesn't require its hand to be held when it comes to updates.

    I tried the solutions posted below and am still receiving the same errors; sample output:

        W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_binary-amd64_Packages  Encountered a section with no Package: header
        W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages  Encountered a section with no Package: header
        ...
        W: Failed to fetch gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise-security_multiverse_i18n_Translation-en%5fCA  Encountered a section with no Package: header
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    (The same "Encountered a section with no Package: header" warning repeats for every precise, precise-updates, precise-backports and precise-security package and translation index.)
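
    The commonly suggested recovery for corrupted list files is to clear the entire lists directory rather than the single file named in the error, then rebuild it; a sketch (assumes a working network connection):

        sudo rm -rf /var/lib/apt/lists/*
        sudo apt-get update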

    Read the article

  • Can't install software

    - by user53209
    So I just installed Ubuntu 11.10. When I go to the Software Center and try to download any software, all I get is a window saying "Failed to download repository information" and "check your internet connection", along with a long list of warnings:

        W: Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric_restricted_binary-i386_Packages  Hash Sum mismatch
        W: Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric_universe_binary-i386_Packages  Hash Sum mismatch
        W: Failed to fetch bzip2:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric_multiverse_binary-i386_Packages  Hash Sum mismatch
        W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/oneiric/main/i18n/Index  No Hash entry in Release file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_oneiric_main_i18n_Index
        ...
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    (The warnings alternate between "Hash Sum mismatch" on the bzip2 package lists and "No Hash entry in Release file" on the i18n indexes, repeated for every oneiric, oneiric-updates, oneiric-backports and oneiric-security component.)

    I tried making a file in the apt.conf.d folder and adding proxy entries, and I also set the system proxy and the "network proxy", but nothing works. And now I can't install any software! Help needed.
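
    Since a proxy is involved, one thing worth checking: apt does not read the desktop proxy settings, it wants its own directive. A sketch of an apt proxy entry (the file name and proxy address below are placeholders):

        # /etc/apt/apt.conf.d/95proxy
        Acquire::http::Proxy "http://proxy.example.com:8080/";

    After correcting the proxy, clearing /var/lib/apt/lists/* and running sudo apt-get update again gives apt a clean start on the indexes.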

    Read the article

  • Cannot update 12.04 due to "Failed to fetch" warning

    - by harsha
    I can't update my Ubuntu 12.04. I have tried from the terminal, from Update Manager, and from the Synaptic package manager. The error is:

        W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en_IN  Unable to connect to 172.20.0.100:8080:
        W: Failed to fetch http://in.archive.ubuntu.com/ubuntu/dists/precise-backports/universe/i18n/Translation-en  Unable to connect to 172.20.0.100:8080:
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • terminal failed to fetch and some index files failed to download

    - by firstson
    My terminal fails to fetch, and some index files fail to download:

        W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/precise-security/Release.gpg  Something wicked happened resolving 'security.ubuntu.com:http' (-5 - No address associated with hostname)
        E: Some index files failed to download. They have been ignored, or old ones used instead.

    Please help me solve this problem in my terminal; I would really appreciate a solution.
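
    That "(-5 - No address associated with hostname)" is a DNS failure rather than an apt problem; a couple of hedged first checks from the same terminal:

        # can the machine resolve the archive host at all?
        nslookup security.ubuntu.com

        # if not, look at which resolvers are configured
        cat /etc/resolv.conf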

    Read the article

  • How to manipulate default document with rewrite module on IIS7?

    - by eugeneK
    Until a few months ago I was using IIS 6, where I could give a different default document to each of several websites that share the same physical directory. Since IIS 7 puts the default document value into web.config, I could no longer use that technique, because changing web.config changed it for the whole directory. So I found a simple solution with the rewrite module to change the default document per domain:

        <defaultDocument enabled="false" />
        <rewrite>
          <rewriteMaps>
            <rewriteMap name="ResolveDefaultDocForHost">
              <add key="site1.com" value="Default1.aspx" />
              <add key="site2.com" value="Default2.aspx" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="DefaultDoc Redirect If No Trailing Slash" stopProcessing="true">
              <match url=".*[^/]$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" />
              </conditions>
              <action type="Redirect" url="{R:0}/" />
            </rule>
            <rule name="PerHostDefaultDocSlash" stopProcessing="true">
              <match url="$|.*/$" />
              <conditions>
                <add input="{REQUEST_FILENAME}" matchType="IsDirectory" />
                <add input="{ResolveDefaultDocForHost:{HTTP_HOST}}" pattern="(.+)" />
              </conditions>
              <action type="Rewrite" url="{R:0}{C:1}" appendQueryString="true" />
            </rule>
          </rules>
        </rewrite>

    Now I have two other issues. First, I can't use a canonical-URL rewrite: if I set one, then both site1.com and site2.com get redirected to www.site1.com, instead of each getting its own www. prefix. Second, there is a directory called members within site1's and site2's shared physical directory, in which Default.aspx should always be the default document no matter which domain name was used; that doesn't work either. Please help me with this issue, because I never thought I would have such problems with the supposedly better IIS 7...
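
    For the first issue, a sketch of a host-agnostic canonical rule; because it reuses {HTTP_HOST} instead of hard-coding a name, each site redirects to its own www. form (untested against the map-based rules above, so rule ordering may need care):

        <rule name="CanonicalPerHost" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" negate="true" pattern="^www\." />
          </conditions>
          <action type="Redirect" url="http://www.{HTTP_HOST}/{R:1}" redirectType="Permanent" />
        </rule>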

    Read the article

  • IIS URL Rewriting: How can I reliably keep relative paths when serving multiple files?

    - by NVRAM
    My web app is part CMS, and when I serve an HTML page to the user it typically contains relative paths in a.href and img.src attributes. I currently have those resources accessed by URLs like ~/get-data.aspx/instance/user/page.html, where instance indicates the particular instance for the report and "user/page.html" is a path created by an external application that generates the content. This works pretty reliably with code in the application's BeginRequest method that translates the text after ".aspx" into a query string, then uses Context.RewritePath(). So far so good, but I've just tripped over something that took me by surprise: it appears that if any part of the trailing path ("instance/user/page.html") contains a plus sign ("+"), the BeginRequest method is never called and a 404 is immediately returned to the user. So my questions are: (1) am I correct in my belief that a "+" would cause the 404, and if so, are there other things that could cause similar problems? (2) Is there a way around that problem (perhaps a different method than BeginRequest)? (3) Is there a better way to preserve relative URL paths for generated content than what I'm using? I'd rather not require site admins to install a third-party rewrite tool if I can help it.
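
    The 404-before-BeginRequest symptom matches IIS 7 request filtering, which by default rejects "+" (and doubly escaped sequences) in the URL path before the application pipeline ever runs. The commonly cited relaxation, sketched here with the usual caveat that it widens what reaches your code:

        <system.webServer>
          <security>
            <!-- allows '+' and double-escaped sequences through to the app;
                 only relax this if the app validates incoming paths itself -->
            <requestFiltering allowDoubleEscaping="true" />
          </security>
        </system.webServer>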

    Read the article

  • Cannot 301 redirect with IIS URL Rewrite Module

    - by Justin
    I am trying to troubleshoot an issue with the URL Rewrite Module on IIS 7. I migrated a WordPress blog over to BlogEngine.NET. There were only about 5 entries that I wanted to 301-redirect to the new blog, so I wanted to simply create 5 exact-match redirect rules using the rewrite module. For some reason the exact-match rule never seems to take effect; I always get a 404 error when the original URL is requested. I verified that my exact-match pattern matches the existing backlinks, and it does. I then tried a simple test and got the same behavior: no redirection. I created a page, test.html, on my site, then a second page, test2.html. My exact-match pattern is "http://www.mydomain.com/test.html", and the rule is supposed to do a 301 redirect to "http://www.mydomain.com/test2.html". The redirect never happens. I created the rule following the steps on this page: http://learn.iis.net/page.aspx/461/creating-rewrite-rules-for-the-url-rewrite-module/ and I don't see that I left out a step. After applying the rule I've even gone as far as doing an iisreset to make sure it would be in effect, but still no luck. Any thoughts on what I might have left out? (Note: my rewrite rules don't include the quotes around the URLs; I had to add them here because Server Fault thought I was trying to spam the system with multiple URLs.)
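
    One detail that commonly bites here: the rule's match pattern is tested against the URL path relative to the site root (no scheme, no host, no leading slash), so an exact match written as a full http://... URL can never fire. A sketch of the same rule in the form the module expects:

        <rule name="Redirect old test page" stopProcessing="true">
          <match url="^test\.html$" />
          <action type="Redirect" url="http://www.mydomain.com/test2.html"
                  redirectType="Permanent" />
        </rule>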

    Read the article

  • Nginx removes the index.php from the URL

    - by codeHead
    I have a CodeIgniter PHP application behind nginx. It works as expected on Apache, but after moving to nginx I noticed that index.php is automatically removed from the URL in all my links. In fact, when I try using index.php it does not go to the desired URL but gets redirected to my default controller. Below is a copy of my nginx.conf file:

        server {
            listen 80;
            server_name mydomainname.com;
            root /var/www/domain/current;
            # index index.php;
            error_log /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log main;

            location / {
                # Check if a file or directory index file exists, else route it to index.php.
                try_files $uri $uri/ /index.php;
            }

            location ~* \.php {
                fastcgi_pass backend;
                include fastcgi.conf;
                fastcgi_buffer_size 128k;
                fastcgi_buffers 4 256k;
                fastcgi_busy_buffers_size 256k;
                fastcgi_read_timeout 500;
                #fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                add_header Expires "Thu, 01 Jan 1970 00:00:01 GMT";
                add_header Cache-Control "no-cache, no-store, private, proxy-revalidate, must-revalidate, post-check=0, pre-check=0";
                add_header Pragma no-cache;
                add_header X-Served-By $hostname;
            }

            location ~* ^.+\.(css|js)$ {
                expires 7d;
                add_header Pragma public;
                add_header Cache-Control "public";
            }

            # set expiration of assets to MAX for caching
            location ~* \.(ico|gif|jpe?g|png)(\?[0-9]+)?$ {
                expires max;
                log_not_found on;
            }
        }

    I need to use my URLs with index.php in them. Please help.
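
    A hedged guess at the missing piece: for URLs of the /index.php/controller/method form, the PHP location has to split the script name from the trailing path and hand that path to CodeIgniter as PATH_INFO. Roughly (the directives are standard nginx FastCGI ones, but the exact block would need testing against this configuration):

        location ~* \.php {
            # separate "/index.php" from the "/controller/method" tail
            fastcgi_split_path_info ^(.+\.php)(/.*)$;
            include fastcgi.conf;                        # sets SCRIPT_FILENAME etc.
            fastcgi_param PATH_INFO $fastcgi_path_info;  # what CodeIgniter routes on
            fastcgi_pass backend;
        }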

    Read the article
