Search Results

Search found 3028 results on 122 pages for 'urls'.

Page 13/122 | < Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >

  • IIS7 301 Redirect from a List of URLs

    - by corymathews
    Recent changes are forcing me to add a bunch of 301 redirects. IIS7 seems to be my best bet, as opposed to handling the redirects within the files. I have found how to add them one by one, but that requires the page or folder to exist (most no longer do, and creating them seems to defeat the point of the redirect), and it does not work on dynamic URLs. I also cannot go to every page and add the redirects at the page level, because some older pages are in PHP, which is no longer supported on the new server. There is also no obvious pattern to the changes, so each one must be mapped on its own. Samples of the redirects:

        page.htm - /page/
        /folder/folder/ - /folder/folder.cfm
        /folder/folder/ - /folder/
        /page.php?id=1 - page.htm
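
    One route worth considering (a sketch, not a confirmed fix: it assumes the IIS7 URL Rewrite module is installed, and the rule and map names are made up) is a rewrite map in web.config, which keeps the whole list in one place and matches on the request URI rather than on files that still exist:

        <rewrite>
          <rewriteMaps>
            <!-- one entry per old URL; query strings are part of the key because REQUEST_URI includes them -->
            <rewriteMap name="Redirects301">
              <add key="/page.htm" value="/page/" />
              <add key="/folder/folder/" value="/folder/folder.cfm" />
              <add key="/page.php?id=1" value="/page.htm" />
            </rewriteMap>
          </rewriteMaps>
          <rules>
            <rule name="Redirect from map" stopProcessing="true">
              <match url=".*" />
              <conditions>
                <add input="{Redirects301:{REQUEST_URI}}" pattern="(.+)" />
              </conditions>
              <action type="Redirect" url="{C:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>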

    Read the article

  • URLs with query stripped of ampersands appearing in error logs

    - by Jeremy DeGroot
    I've noticed a curious phenomenon popping up in my error logs recently. If, as the result of processing a form, I redirect my users to the URL http://www.example.com/index.php?foo=bar&bar=baz, I will see the following two URLs in my log:

        http://www.example.com/index.php?foo=barbar=baz
        http://www.example.com/index.php?foo=bar&bar=baz

    The first one is obviously incorrect and causes my application to redirect to a 404. It always appears first, usually a second before the second one. The 404 page is not doing the redirection, so it appears that the browser is trying both versions. At first, looking at my server logs made me believe it affected only Firefox 3.6.3, but I've found an example of Safari being afflicted as well. It happens fairly intermittently, though it can occur multiple times in a user's session. I've never been able to get it to happen to me. Any thoughts as to the nature of the problem or a solution?

    Read the article

  • Django "Page not found" error page shows only one of two expected urls

    - by Frank V
    I'm working with Django, admittedly for the first time doing anything real. The URL config looks like the following:

        urlpatterns = patterns('my_site.core_prototype.views',
            (r'^newpost/$', 'newPost'),
            (r'^$', 'NewPostAndDisplayList'),  # capture nothing...
            # more here... - perhaps the perma-links?
        )

    This is in the app's urls.py, which is loaded from the project's urls.py via:

        urlpatterns = patterns('',
            # only app for now.
            (r'^$', include('my_site.core_prototype.urls')),
        )

    The problem is: when I receive a 404 attempting to use newpost, the error page only shows the ^$ pattern -- it seems to ignore the newpost pattern... I'm sure the solution is probably stupid-simple, but right now I'm missing it. Can someone help get me on the right track?
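
    A sketch of the likely culprit (an assumption based on the config above, not a confirmed answer): the include() in the project urls.py is anchored with r'^$', which only matches an empty path, so a request for newpost/ never reaches the app's patterns. Dropping the $ forwards everything:

        # project urls.py - minimal sketch; r'^' instead of r'^$' so newpost/ is
        # still passed down to the included app's urlpatterns
        from django.conf.urls.defaults import *

        urlpatterns = patterns('',
            (r'^', include('my_site.core_prototype.urls')),
        )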

    Read the article

  • AnkhSVN - How to remove old URLs from list of URLs in the "Open from Subversion" dialog box

    - by user2942597
    I work for a small company and am the sole developer using AnkhSVN to version my code. For the server side I am using VisualSVN v2.5.8. The server is installed on my own machine (on a different drive). I have a few repositories that I created about two years ago that have been working fine. We recently completed an Active Directory domain rename (that's another story), so the FQDN of my machine changed and the domain portion is no longer the same as it was when the server was installed. I managed to get AnkhSVN to connect to the repositories, so everything is working again, but the URL list that comes up in the "Open from Subversion" dialog box still has all the old URLs. How can I remove them? I've searched everywhere I can think of looking for this list but can't seem to find it anywhere. Any suggestions would be greatly appreciated. Chuck R.

    Read the article

  • Regular expressions and matching URLs with metacharacters

    - by James P.
    I'm having trouble finding a regular expression that matches the following string:

        Korben;http://feeds.feedburner.com/KorbensBlog-UpgradeYourMind?format=xml;1

    One problem is escaping the question mark. Java's pattern matcher doesn't seem to accept \? as a valid escape sequence, but it also fails to work with the tester at myregexp.com. Here's what I have so far:

        ([a-zA-Z0-9])+;http://([a-zA-Z0-9./-]+);[0-9]+

    Any suggestions? Edit: The original intent was to match all URLs that could be found after the first semicolon.
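
    A sketch of one way this can work (the class name is made up): inside a Java string literal the backslash itself has to be doubled, so \? must be written "\\?"; alternatively, [^;]+ matches everything up to the next semicolon and sidesteps the escaping problem entirely:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class FeedLineMatcher {
            public static void main(String[] args) {
                String line = "Korben;http://feeds.feedburner.com/KorbensBlog-UpgradeYourMind?format=xml;1";
                // [^;]+ swallows the ?format=xml part without any escaping
                Pattern p = Pattern.compile("([a-zA-Z0-9]+);(https?://[^;]+);([0-9]+)");
                Matcher m = p.matcher(line);
                if (m.matches()) {
                    System.out.println(m.group(2)); // the URL between the semicolons
                }
            }
        }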

    Read the article

  • Can you suggest good ways of generating URLS for viewing tagged content

    - by rikh
    For example, here on Stack Overflow the URL http://stackoverflow.com/questions/tagged/javascript+php will give you all questions tagged with javascript and php. The system I have allows tags with spaces in them, so the approach used here would not be a good fit for me. What character would you use to separate the tags, so the URLs are still human readable, Google readable and web browser compatible? My gut feeling was to use commas, e.g.

        http://example.com/tagged/first+tag,second+tag

    Any feedback or suggestions would be welcome.

    Read the article

  • Downloading HTTP URLs asynchronously in C++

    - by Joey Adams
    What's a good way to download HTTP URLs (e.g. http://0.0.0.0/foo.htm ) in C++ on Linux? I strongly prefer something asynchronous. My program will have an event loop that repeatedly initiates multiple (very small) downloads and acts on them when they finish (either by polling or by being notified somehow). I would rather not have to spawn multiple threads/processes to accomplish this; that shouldn't be necessary. Should I look into libraries like libcurl? I suppose I could implement it manually with non-blocking TCP sockets and select() calls, but that would likely be less convenient.
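
    For what it's worth, libcurl's multi interface is built for exactly this single-threaded, many-transfers-at-once pattern. A minimal sketch (assumes a reasonably recent libcurl for curl_multi_wait; older versions expose the fd sets via curl_multi_fdset so the handles can be folded into an existing select() loop; compile with -lcurl):

        #include <curl/curl.h>
        #include <cstdio>
        #include <string>

        static size_t collect(char *data, size_t size, size_t nmemb, void *userp) {
            static_cast<std::string *>(userp)->append(data, size * nmemb);
            return size * nmemb;
        }

        int main() {
            curl_global_init(CURL_GLOBAL_DEFAULT);
            CURLM *multi = curl_multi_init();

            std::string body;
            CURL *easy = curl_easy_init();
            curl_easy_setopt(easy, CURLOPT_URL, "http://0.0.0.0/foo.htm");
            curl_easy_setopt(easy, CURLOPT_WRITEFUNCTION, collect);
            curl_easy_setopt(easy, CURLOPT_WRITEDATA, &body);
            curl_multi_add_handle(multi, easy);   // add as many easy handles as you like

            int still_running = 0;
            do {
                curl_multi_perform(multi, &still_running);          // drive all transfers
                curl_multi_wait(multi, nullptr, 0, 1000, nullptr);  // block until activity or timeout
            } while (still_running > 0);

            // harvest completed transfers
            int msgs_left = 0;
            while (CURLMsg *msg = curl_multi_info_read(multi, &msgs_left)) {
                if (msg->msg == CURLMSG_DONE)
                    std::printf("done, %zu bytes\n", body.size());
            }

            curl_multi_remove_handle(multi, easy);
            curl_easy_cleanup(easy);
            curl_multi_cleanup(multi);
            curl_global_cleanup();
            return 0;
        }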

    Read the article

  • Storing millions of URLs in a database for fast pattern matching

    - by Paras Chopra
    I am developing a web-analytics kind of system which needs to log the referring URL, landing page URL and search keywords for every visitor on the website. What I want to do with this collected data is to allow the end user to query it, such as "Show me all visitors who came from Bing.com searching for a phrase that contains 'red shoes'" or "Show me all visitors who landed on a URL that contained 'campaign=twitter_ad'", etc. Because this system will be used on many big websites, the amount of data that needs to be logged will grow really, really fast. So, my questions: a) what would be the best strategy for logging so that scaling the system doesn't become a pain; b) how to use that architecture for rapid querying of arbitrary requests? Is there a special method of storing URLs so that querying them gets faster? In addition to the MySQL database that I use, I am exploring (and open to) other alternatives better suited for this task.

    Read the article

  • Apache mod_rewrite & 301 redirect - dynamic URLs with characters

    - by Ben Chesters
    I've been trying for weeks, literally, to rename these URLs and also ensure the old one is 301 redirected to the new one:

        www.example.com/?mod=11&p=215 - www.example.com/clean-url-section
        www.example.com/?mod=96&tab=6 - www.example.com/clean-url-section-2

    Does anyone have any idea why I am having no luck? I get 500 server errors or nothing at all! Is it because of the question marks and characters? I'd be grateful for any help. I have tried this (below) and it seems to be redirecting to http://www.example.com/new-page (this page doesn't exist, as I only want it to rename the page BUT use a 301 so that search engines continue to value it):

        RewriteCond %{query_string} mod=96&tab=6
        RewriteRule (.*) http://www.example.com/new-page? [R=301,L]

    Scratching my head!
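
    A sketch of the shape that usually works for this (assuming the rules live in the site-root .htaccess, so the pattern for the home page is ^$; the trailing ? on each target is what stops the old query string being re-appended):

        RewriteEngine On

        RewriteCond %{QUERY_STRING} ^mod=11&p=215$
        RewriteRule ^$ /clean-url-section? [R=301,L]

        RewriteCond %{QUERY_STRING} ^mod=96&tab=6$
        RewriteRule ^$ /clean-url-section-2? [R=301,L]

    The clean URLs themselves still need internal rewrites back to the real scripts, otherwise the redirect just lands on a 404.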

    Read the article

  • Python: replace urls with title names from a string

    - by Hellnar
    Hello, I would like to remove URLs from a string and replace them with the titles of the pages they point to. For example:

        mystring = "Ah I like this site: http://www.stackoverflow.com. Also I must say I like http://www.digg.com"
        sanitize(mystring)
        # it becomes "Ah I like this site: Stack Overflow. Also I must say I like Digg - The Latest News Headlines, Videos and Images"

    For replacing a URL with its title, I have written this snippet:

        # get_title: string -> string
        def get_title(url):
            """Returns the title of the input URL"""
            output = BeautifulSoup.BeautifulSoup(urllib.urlopen(url))
            return output.title.string
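
    A sketch of how sanitize() could be wired on top of that get_title() (Python 2 / BeautifulSoup 3 to match the snippet above; the URL regex is deliberately simple and is an assumption, not a complete URL grammar):

        import re
        import urllib
        import BeautifulSoup

        def get_title(url):
            """Returns the title of the input URL."""
            output = BeautifulSoup.BeautifulSoup(urllib.urlopen(url))
            return output.title.string

        def sanitize(text):
            # stop before whitespace and before a trailing . , or ) so the
            # sentence punctuation is not swallowed into the URL
            url_re = re.compile(r'https?://[^\s.,)]+(?:\.[^\s.,)]+)*')
            return url_re.sub(lambda match: get_title(match.group(0)), text)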

    Read the article

  • mod_rewrite and relative urls

    - by Davide Gualano
    I'm setting up some simple URL rewriting rules using mod_rewrite and a .htaccess file, but I've got some problems. If I set up the .htaccess this way:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /
        RewriteRule /index.html /index.php [L]

    when I call this URL from the browser: http://localhost/~dave/mySite/index.html I get a 404 error. Using this .htaccess instead:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /
        RewriteRule /index.html http://localhost/~dave/mySite/index.php [L]

    everything works fine and I get the index.php page I'm expecting. Am I forced to use absolute URLs as the target of a rewrite? Is this an Apache configuration problem? I'm using Mac OS X 10.6.2 with its standard Apache installation.
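
    A sketch of what usually resolves this (assuming the .htaccess sits in the mySite directory): in per-directory context the pattern is compared against the path relative to that directory, with no leading slash, and RewriteBase should name the public URL of that directory rather than /:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /~dave/mySite/
        # per-directory rules see "index.html", never "/index.html"
        RewriteRule ^index\.html$ index.php [L]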

    Read the article

  • Why is GXHC_gx_session_id appended to URLs?

    - by Nariman
    Based on my limited understanding of this parameter [1], it seems to be used for representing cookieless session IDs in Java applications... but strangely, a 3-year-old .NET stack of ours is now appearing in Bing SERPs with GXHC_gx_session_id appended to the domain - and we're not alone:

        http://www.bing.com/search?q=GXHC_gx_session_id
        http://www.google.ca/#hl=en&q=GXHC_gx_session_id

    When comparing Google SERPs to Bing SERPs there are some inconsistencies in whether a particular site carries this parameter - is it then a Bing-specific issue? What else could cause this parameter to be appended to indexed URLs if the target environment (anything behind the load balancers) isn't running Java?

    [1] - http://java.itags.org/java-web-tier-apis/72018/

    Read the article

  • How to replace plain URLs with links?

    - by Sergio del Amo
    I am using the function below to match URLs inside a given text and replace them with HTML links. The regular expression is working great, but currently I am only replacing the first match. How can I replace all the URLs? I guess I should be using the exec command, but I did not really figure out how to do it.

        function replaceURLWithHTMLLinks(text) {
            var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/i;
            return text.replace(exp, "<a href='$1'>$1</a>");
        }
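
    A sketch of the smallest change that gets there: with the g (global) flag on the regex, String.replace substitutes every match, so no exec loop is needed (case-insensitivity is kept via i):

        function replaceURLWithHTMLLinks(text) {
            // ig = case-insensitive + global, so every URL in the text is replaced
            var exp = /(\b(https?|ftp|file):\/\/[-A-Z0-9+&@#\/%?=~_|!:,.;]*[-A-Z0-9+&@#\/%=~_|])/ig;
            return text.replace(exp, "<a href='$1'>$1</a>");
        }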

    Read the article

  • ASP.NET MVC - Organizing Site / URLs

    - by CocoB
    My question is about the best practice for dividing up an ASP.NET MVC web app. I am building a fairly simple application which has two main sections, public and private. Basically I am running up against the issue of collisions between controllers. What I want is to have URLs like /public/portfolio, but also /private/portfolio. Looking into some options, it seems that areas would work well for this situation. Are there other alternatives, such as some creative routing scheme, that I should consider?
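
    One such routing-only alternative, sketched below (the namespace and route names are assumptions): two MapRoute calls that share the controller/action shape but restrict controller lookup to different namespaces, which is essentially what areas automate for you:

        using System.Web.Mvc;
        using System.Web.Routing;

        public static class RouteConfigSketch
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.MapRoute(
                    "Public",
                    "public/{controller}/{action}/{id}",
                    new { action = "Index", id = UrlParameter.Optional },
                    new[] { "MyApp.Controllers.Public" });   // PortfolioController for /public/... lives here

                routes.MapRoute(
                    "Private",
                    "private/{controller}/{action}/{id}",
                    new { action = "Index", id = UrlParameter.Optional },
                    new[] { "MyApp.Controllers.Private" });  // same-named controller, different namespace
            }
        }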

    Read the article

  • In what way does Wordpress rewrite page URLs?

    - by Mac Taylor
    Recently I've become interested in the post structure of WordPress. It uses a table named wp_posts, and in this table it saves three related fields:

        post_title
        post_name
        guid

    It's clear that it saves the title of each story in the post_title field, the slug in post_name, and the full URL of a post in the guid field. But where do they rewrite these URLs into the way they appear in browsers:

        http://localhost/wordpress/about/

    There are no .htaccess rules for this! I checked rewrite.php and didn't understand an inch. I need to create similar pages - what steps should I take?
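
    For reference, the mechanism is roughly this (a summary, not WordPress documentation verbatim): when pretty permalinks are enabled, WordPress writes a single catch-all block to .htaccess, every request that isn't a real file or directory is funnelled to index.php, and WP_Rewrite then matches the path against rewrite rules it keeps in the rewrite_rules option in the database - which is why no per-URL rules show up in .htaccess. The standard block looks like this (RewriteBase /wordpress/ assumes the install lives under /wordpress/):

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /wordpress/
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /wordpress/index.php [L]
        </IfModule>
        # END WordPress

    To do something similar outside WordPress you would need the same two pieces: one catch-all rewrite to a front controller, plus your own table of slug-to-content mappings.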

    Read the article

  • automatically rewrite URLs in ASP.NET

    - by Ali_dotNet
    I use VS2010/C# to develop an ASP.NET web site. My customers want me to have their pages like this: mysite.com/customer (in fact they call mysite.com/customer/default.aspx), so I've manually created a folder for each customer and put a default.aspx file in it so that users can view the customer page by typing mysite.com/customer. Is there a better way of handling this scenario? I don't want to have mysite.com/customer1.aspx, I want mysite.com/customer1. Is there any way I can remove the folders (and their default.aspx files) and generate something automatic using my customers database? Should I use URL rewriting? Is there any way I can create the page mysite.com/customer1.aspx and have users view it by typing mysite.com/customer1? I think it is possible to rewrite URLs in web.config, but I don't want to do it manually in web.config, as my pages would increase on a daily basis. Thanks
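
    A sketch of the URL-rewriting route (assumes IIS7 with the URL Rewrite module; customer.aspx and its name parameter are made-up stand-ins): one rule maps every unknown top-level segment onto a single page, and that page looks the customer up in the database, so no per-customer folders or per-customer web.config entries are needed:

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="CustomerPages" stopProcessing="true">
                <match url="^([a-zA-Z0-9_-]+)/?$" />
                <conditions>
                  <!-- leave real files and folders (existing pages, images, css) alone -->
                  <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
                  <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
                </conditions>
                <action type="Rewrite" url="customer.aspx?name={R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>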

    Read the article

  • Wordpress & Vanity User URLs

    - by st4ck0v3rfl0w
    Hi all, I was wondering if anyone had any smart approaches to creating dynamic vanity user URLs upon user registration. My site basically uses emails as usernames. I have the regex to strip the text before the "@" symbol (e.g. "[email protected]" becomes "name"). I would then like to take the "name" and create a vanity URL (e.g. domain.com/name or name.domain.com). Any thoughts on how to accomplish this with WordPress? I'm a pretty advanced WordPress user and my first thoughts were to do the following:

        1. Verify user registration
        2. Upon successful registration, create a page with the username as the title (this would help me achieve www.domain.com/username)
        3. Apply a preset template to that page with the desired view

    Any and all thoughts are welcome.
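
    A sketch of steps 1-3 as a tiny plugin (the function name, template file name and exact slug rules are assumptions): hook user_register, derive the slug from the email local part, create a published page, and pin the preset template on it via the _wp_page_template meta key:

        <?php
        add_action('user_register', 'create_vanity_page');

        function create_vanity_page($user_id) {
            $user = get_userdata($user_id);
            $name = substr($user->user_email, 0, strpos($user->user_email, '@'));

            $page_id = wp_insert_post(array(
                'post_title'  => $name,
                'post_name'   => sanitize_title($name),  // becomes domain.com/name with pretty permalinks
                'post_type'   => 'page',
                'post_status' => 'publish',
            ));

            // assumed preset template shipped with the theme
            update_post_meta($page_id, '_wp_page_template', 'vanity-profile.php');
        }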

    Read the article

  • Paperclip generating wrong URLs in Heroku

    - by Tony
    Paperclip is generating wrong URLs in Heroku. I have an Audio model which has an mp3 field as follows:

        class Audio < ActiveRecord::Base
          has_attached_file :mp3,
            :storage => :s3,
            :s3_credentials => S3_CREDENTIALS,
            :bucket => S3_CREDENTIALS[:bucket],
            :path => ":rails_root/public/system/:attachment/:id/:style/:filename",
            :url => "/system/:attachment/:id/:style/:filename"

    I am calling audio.mp3.url from a controller, and it returns

        http://s3.amazonaws.com/MyApp/audios/mp3s//original/96a9ae89302fdf8462ee05eb829f2e17578b144e20120908-2-11f61zr.mp3?1347135050

    instead of

        http://s3.amazonaws.com/MyApp/audios/mp3s/000/000/004/original/96a9ae89302fdf8462ee05eb829f2e17578b144e20120908-2-11f61zr.mp3?1347135050

    (which works). Why is it missing the '000/000/004' part of the route? The same model is generating the right URL when used in a view. I am using paperclip 3.2.0 and Rails 3.1.8. Any help?
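
    For context, the 000/000/004 segment is what Paperclip's :id_partition interpolation produces. A sketch of a configuration that requests it explicitly (an assumption, not the poster's confirmed fix; note that with S3 the :path is the object key, so the local :rails_root/public prefix is normally dropped):

        class Audio < ActiveRecord::Base
          has_attached_file :mp3,
            :storage        => :s3,
            :s3_credentials => S3_CREDENTIALS,
            :bucket         => S3_CREDENTIALS[:bucket],
            # :id_partition always yields the zero-padded 000/000/004 form
            :path           => ":attachment/:id_partition/:style/:filename",
            :url            => ":s3_path_url"
        end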

    Read the article

  • php string search - grabbing specific urls

    - by MEM
    Hello, I have a string that may contain some URLs that I need to grab. For instance, the user might type:

        www.youtube ... or www.vimeo ... or http://www.youtube ... or HttP://WwW.viMeo

    I need to grab the URL (up to the next space, perhaps) and store it in an already-created array. The need is to separate the Vimeo links from the YouTube ones and place each of them in the appropriate video object. I'm not sure if this is possible - I mean, whether the URL coming from the browser could be used to be placed in a predefined video object. If it is, then this is the way to go (so I believe). If all this is feasible, can I have your help in building such a rule? Thanks in advance
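
    A sketch of the grabbing-and-sorting part (the bucket array names are made up; the pattern is case-insensitive, tolerates a missing http:// or www., and simply stops at the next whitespace):

        <?php
        $text = 'HttP://WwW.viMeo.com/123 and also www.youtube.com/watch?v=abc';

        $youtube_urls = array();
        $vimeo_urls   = array();

        preg_match_all('~(?:https?://)?(?:www\.)?(youtube|vimeo)\.\S+~i', $text, $matches, PREG_SET_ORDER);

        foreach ($matches as $m) {
            if (strtolower($m[1]) === 'youtube') {
                $youtube_urls[] = $m[0];   // full matched URL
            } else {
                $vimeo_urls[] = $m[0];
            }
        }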

    Read the article

  • Does redirecting old site's URLs to new site's front page hurt a page's ranking?

    - by Kaivosukeltaja
    An old site that is being rewritten needs to have its URLs redirected to the new site. There are a few hundred pages that may or may not have corresponding pages on the new site, probably with different slugs, and adding the mappings manually will require more hours than we can spare. It was suggested that all old URLs be redirected to the new front page, but I remember reading somewhere that this carries a penalty in page rank because it's what link farmers do. Is this true, or can we take the easy way out?

    Read the article

  • Sitecore not resolving rich text editor URLS in page renders

    - by adam
    Hi, we're having issues inserting links into rich text in Sitecore 6.1.0. When a link to a Sitecore item is inserted, it is output as:

        http://domain/~/link.aspx?_id=8A035DC067A64E2CBBE2662F6DB53BC5&_z=z

    rather than the actual resolved URL:

        http://domain/path/to/page.aspx

    This article confirms that this should be resolved in the render pipeline: "in Sitecore 6 it inserts a specially formatted link that contains the Guid of the item you want to link to, then when the item is rendered the special link is replaced with the actual link to the item". The pipeline has the ShortenLinks processor added in web.config:

        <convertToRuntimeHtml>
          <processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.PrepareHtml, Sitecore.Kernel"/>
          <processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.ShortenLinks, Sitecore.Kernel"/>
          <processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.SetImageSizes, Sitecore.Kernel"/>
          <processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.ConvertWebControls, Sitecore.Kernel"/>
          <processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.FixBullets, Sitecore.Kernel"/>
          <processor type="Sitecore.Pipelines.ConvertToRuntimeHtml.FinalizeHtml, Sitecore.Kernel"/>
        </convertToRuntimeHtml>

    So I really can't see why links are still rendering in ID format rather than as full SEO-tastic URLs. Anyone got any clues? Thanks, Adam
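
    One thing worth ruling out (an assumption about the cause, not a confirmed diagnosis; the "Body" field name is made up): the ~/link.aspx dynamic links are normally expanded at render time by the renderField pipeline, which only runs when the field is output through FieldRenderer or an sc:Text / sc:FieldRenderer control - reading the field value straight off the item skips it:

        // expanded through the renderField pipeline - dynamic links become friendly URLs
        string html = Sitecore.Web.UI.WebControls.FieldRenderer.Render(Sitecore.Context.Item, "Body");

        // read raw - dynamic links stay as ~/link.aspx?_id=...
        string raw = Sitecore.Context.Item["Body"];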

    Read the article

  • HTTP POST parameters order / REST URLs

    - by pq
    Let's say that I'm uploading a large file via a POST HTTP request. Let's also say that I have another parameter (other than the file) that names the resource which the file is updating. The resource cannot be part of the URL the way you can do it with REST (e.g. foo.com/bar/123); let's say this is due to a combination of technical and political reasons. The server needs to ignore the file if the resource name is invalid or, say, the IP address and/or the logged-in user are not authorized to update the resource. It looks like, if this POST came from an HTML form that contains the resource name first and the file field second, most (all?) browsers preserve this order in the POST request. But it would be naive to rely on that fully, no? In other words, the order of HTTP parameters is insignificant and a client is free to construct the POST in any order. Isn't that true? Which means that, at least in theory, the server may end up storing the whole large file before it can deny the request. It seems to me that this is a clear case where RESTful URLs have an advantage, since you don't have to look at the POST content to perform certain authorization/error checking on the request. Do you agree? What are your thoughts and experiences?

    Read the article

  • 301 Redirecting URLs based on GET variables in .htaccess

    - by technicalbloke
    I have a few messy old URLs like...

        http://www.example.com/bunch.of/unneeded/crap?opendocument&part=1
        http://www.example.com/bunch.of/unneeded/crap?opendocument&part=2

    ...that I want to redirect to the newer, cleaner form...

        http://www.example.com/page.php/welcome
        http://www.example.com/page.php/prices

    I understand I can redirect one page to another with a simple redirect, i.e.

        Redirect 301 /bunch.of/unneeded/crap http://www.example.com/page.php

    But here the source page doesn't change, only its GET vars, and I can't figure out how to base the redirect on the value of those GET variables. Can anybody help, please? I'm fairly handy with the old regexes, so I can have a pop at using mod_rewrite if I have to, but I'm not clear on the syntax for rewriting GET vars, and I'd prefer to avoid the performance hit and use the cleaner Redirect directive. Is there a way? And if not, can anyone clue me in as to the right mod_rewrite syntax please? Cheers, Roger.
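
    For what it's worth, mod_alias's Redirect only ever matches the URL path, never the query string, so this does need mod_rewrite. A sketch (assuming the rules live in the site-root .htaccess, hence no leading slash in the patterns; the trailing ? on each target stops the old query string being carried over):

        RewriteEngine On

        RewriteCond %{QUERY_STRING} (^|&)part=1(&|$)
        RewriteRule ^bunch\.of/unneeded/crap$ /page.php/welcome? [R=301,L]

        RewriteCond %{QUERY_STRING} (^|&)part=2(&|$)
        RewriteRule ^bunch\.of/unneeded/crap$ /page.php/prices? [R=301,L]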

    Read the article

  • ExpertPDF and Caching of URLs

    - by Josh
    We are using ExpertPDF to take URLs and turn them into PDFs. Everything we do is through memory, so we build up the request and then read the stream into ExpertPDF and write the bits to file. All the files we have been requesting so far are plain HTML documents. Our designers update CSS files or change the HTML and re-request the documents as PDFs, but often things are getting cached. For example, if I rename the only CSS file and view the HTML page through a web browser, the page looks broken because the CSS doesn't exist. But if I request that page through the PDF generator, it still looks OK, which means somewhere the CSS is cached. Here's the relevant PDF creation code:

        // Create a request
        HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url);
        request.UserAgent = "IE 8.0";
        request.ContentType = "application/x-www-form-urlencoded";
        request.Method = "GET";

        // Send the request
        HttpWebResponse resp = (HttpWebResponse)request.GetResponse();
        if (resp.IsFromCache)
        {
            System.Web.HttpContext.Current.Trace.Write("FROM THE CACHE!!!");
        }
        else
        {
            System.Web.HttpContext.Current.Trace.Write("not from cache");
        }

        // Read the response
        pdf.SavePdfFromHtmlStream(resp.GetResponseStream(), System.Text.Encoding.UTF8, "Output.pdf");

    When I check the trace file, nothing is being loaded from cache. I checked the IIS log file and found a 200 response coming from the request, even after a file had been updated (I would expect a 302). We've tried putting the No-Cache attribute on all HTML pages, but still no luck. I even turned off all caching at the IIS level. Is there anything in ExpertPDF that might be caching somewhere, or something I can do to the request object to force a hard refresh of all resources?

    UPDATE: I put ?foo at the end of my style href links and this updates the CSS every time. Is there a setting someplace that can prevent stylesheets from being cached so I don't have to use this inelegant solution?

    Read the article

  • (RoR) How to: link multiple apps, multiple URLs, one database

    - by Samson
    Hello. I am currently developing a site using Ruby on Rails. I am still a beginner who started only around a month ago. I use InstantRails on Windows 7. Here's my question. Let's say app A is functional, using the MySQL database A_development. The files, such as views and controllers, are under folder 'A'. I now know how to, for example, point www.app.com at this app by opening port 80 and changing some lines in the MySQL config. In this app, you can register a username, log in, and post some messages. I now want to create some pretty much identical apps, say B and C. The only things different will be the posts that show, and the views. You can still log in with the same username, and everything is saved in the same database. I now want the URLs to look something like A.app.com leading to app A, B.app.com leading to app B, etc. Can that be achieved? How? I've been googling for a few days already and I'm still lost. As I'm new to this forum, I'm not quite sure what info you guys need. Please list it and I'll provide it asap. Any help will be appreciated! Thanks.
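
    One common way to get the A.app.com / B.app.com behaviour without maintaining three copies of the code is a single app that branches on the subdomain. A sketch (the site/column names are assumptions; the syntax below is the older Rails 2-era style to match InstantRails):

        class ApplicationController < ActionController::Base
          before_filter :set_current_site

          private

          def set_current_site
            # a.app.com -> "a", b.app.com -> "b", plain app.com -> default "a"
            @current_site = request.subdomains.first || "a"
          end
        end

        # a controller can then scope what it shows and which views it uses:
        class PostsController < ApplicationController
          def index
            @posts = Post.find(:all, :conditions => { :site => @current_site })
            render :template => "posts/index_#{@current_site}"
          end
        end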

    Read the article
