Search Results

Search found 8013 results on 321 pages for 'clean urls'.

  • Difference between two kinds of Bing URL Referers

    - by joshuahedlund
    Most of the referral URLs that I get from Bing have the following syntax:

        http://www.bing.com/search?q=keywords+keywords&[some other variables]

    However, I just noticed that maybe 10-20% of them are coming in like this:

        http://www.bing.com/url?source=search&[some other variables]&url=http%3A%2F%2Fwww.example.com/user-landing-page-on-my-site&yrktarget=_top&q=keywords+keywords&[some other variables]

    The first syntax gives me the keywords the user typed in, but the second actually gives me the keywords the user typed in and their landing page on my site. I was originally unaware of this second kind altogether because I have a customized referral report that filters out URLs containing my domain. But now that I have noticed them, I want to know why they occur, to see if I can get more to occur this way, because the second syntax contains more valuable information.

    If I go to one of the first URLs, it gives me a typical Bing query page. The second URLs seem to just redirect me to the Bing home page. I'm not sure if it has to do with the kind of search being performed (I also get a few http://www.bing.com/shopping/search?q= referers) or some other metric. Does anyone know what causes some referral URLs from Bing to have the /search?q syntax and others to have the /url?source syntax?

    P.S. I have verified that I am getting both kinds of URLs from non-advertising clicks.

    P.P.S. I am not talking about data in Google Analytics or similar software but the raw $_SERVER['HTTP_REFERER'] value coming from the client's original request.
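
    Whatever the cause turns out to be, both referrer forms can be mined for the keywords and, in the second form, the landing page. A minimal parsing sketch, in PHP since the question references $_SERVER['HTTP_REFERER']; the helper name is hypothetical:

        <?php
        // Hypothetical helper: extract keywords and, when present, the landing page
        // from either Bing referrer form (/search?q=... or /url?...&url=...&q=...).
        function parse_bing_referer($referer)
        {
            $parts = parse_url($referer);
            if (empty($parts['query'])) {
                return null;
            }
            parse_str($parts['query'], $params);   // decodes the percent-encoded values
            return array(
                'keywords'     => isset($params['q'])   ? $params['q']   : null,
                'landing_page' => isset($params['url']) ? $params['url'] : null, // only in the /url? form
            );
        }

        $info = parse_bing_referer($_SERVER['HTTP_REFERER']);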

  • De-index URL parameters

    - by Doug Firr
    This question is lengthy, so allow me to provide a one-sentence summary: I need to get Google to de-index URLs that have certain parameters appended.

    I have a website, example.com, with language translations. There used to be many translations, but I deleted them all so that only English (default) and French remain. When one selects a language option, a parameter is added to the URL. For example, the home page:

        https://example.com (default)
        https://example.com/main?l=fr_FR (French)

    I added a robots.txt to stop Google from crawling any of the language translations:

        # robots.txt generated at http://www.mcanerin.com
        User-agent: *
        Disallow:
        Disallow: /cgi-bin/
        Disallow: /*?l=

    So any pages containing "?l=" should not be crawled. I checked in GWT using the robots testing tool, and it works. But under HTML improvements, the previously crawled language translation URLs remain indexed.

    The internet says to add a 404 to the header of the removed URLs so that Google knows to de-index them. I checked to see what my CMS would throw up if I visited one of the URLs that should no longer exist. This URL was listed in GWT under duplicate title tags (one of the reasons I want to scrub up my URLs):

        https://example.com/reports/view/884?l=vi_VN&l=hy_AM

    This URL should not exist - I removed the language translations. The page loads when it should not! I played around and typed example.com?whatever123. It seems that parameters always load as long as everything before the question mark is a real URL.

    So if Google has indexed all these URLs with parameters, how do I remove them? I cannot check whether a 404 is being generated, because the page always loads for every parameterised URL that needs to be de-indexed.
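
    One thing worth noting in this situation: Googlebot cannot see a 404/410 (or a noindex) on URLs that robots.txt blocks it from crawling, so the Disallow: /*?l= rule actually keeps the stale URLs in the index. A sketch of one approach, assuming an Apache front end (the question does not say which server or CMS is in use): allow crawling of the ?l= URLs again and answer them with 410 Gone.

        # Sketch only - assumes Apache with mod_rewrite, e.g. in the site's .htaccess.
        # Adjust fr_FR to whichever language parameters should remain valid.
        RewriteEngine On
        # Any query string carrying an l= value other than fr_FR gets 410 Gone.
        RewriteCond %{QUERY_STRING} (^|&)l=(?!fr_FR)[^&]* [NC]
        RewriteRule ^ - [G]

    Google then drops the URLs as it recrawls them; the Remove URLs tool in GWT can speed that up.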

  • How can I do a clean Mod_Rewrite that hides the variable numbers passed in the query string but just

    - by Jay Bee
    Hi, I have been developing web applications for a while now. My applications have been faring poorly in search engine results because of the dynamic links that my websites generate. I admire the way some developers set up mod_rewrite to produce something like:

        http://www.mycompany.com/accommodation/europe/

    as a substitute for "index.php?category_id=2&country=23". How can I achieve that in my URLs? Warm regards, JB
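
    A minimal .htaccess sketch of the usual pattern, assuming Apache with mod_rewrite enabled; the parameter names below are hypothetical, and the mapping from slugs back to numeric IDs happens inside index.php rather than in the rule:

        # .htaccess sketch - rewrite /accommodation/europe/ to index.php internally
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([a-z-]+)/([a-z-]+)/?$ index.php?category=$1&country=$2 [L,QSA]

    index.php then resolves category=accommodation and country=europe to the numeric IDs (2 and 23 in the example), typically via a slug column in the database, and the site's HTML links only to the clean form so search engines never see the query-string version.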

  • How do you get Lighttpd to compress CodeIgniter's "clean urls"?

    - by ocdcoder
    I was looking at PageSpeed on my test website and noticed that Lighttpd wasn't compressing my HTML (but was compressing my JavaScript and CSS files). I'm assuming this is because I'm using CodeIgniter and its clean URL system: since the requests don't have file extensions, Lighttpd doesn't have a rule to compress them. That being the case, how do I get Lighttpd to compress my HTML? Is this something I shouldn't be doing, or something I need to specially configure Lighttpd for?
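
    For what it's worth, Lighttpd's mod_compress in the 1.4.x line only compresses static files, so dynamically generated CodeIgniter pages will not be gzipped by it regardless of clean URLs; the usual approach is to compress the dynamic HTML at the PHP layer instead. A sketch, offered as an assumption about this setup rather than a verified fix:

        ; php.ini - gzip the dynamic HTML that CodeIgniter emits via FastCGI
        zlib.output_compression = On
        zlib.output_compression_level = 6

    CodeIgniter's own $config['compress_output'] = TRUE; in application/config/config.php is an alternative to the php.ini directive, while compress.filetype in lighttpd.conf stays responsible for the static CSS and JavaScript that is already being compressed.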

  • How do I return clean JSON from a WCF Service?

    - by user208662
    I am trying to return some JSON from a WCF service. This service simply returns some content from my database. I can get the data. However, I am concerned about the format of my JSON. Currently, the JSON that gets returned is formatted like this:

        {"d":"[{\"Age\":35,\"FirstName\":\"Peyton\",\"LastName\":\"Manning\"},{\"Age\":31,\"FirstName\":\"Drew\",\"LastName\":\"Brees\"},{\"Age\":29,\"FirstName\":\"Tony\",\"LastName\":\"Romo\"}]"}

    In reality, I would like my JSON to be formatted as cleanly as possible. I believe (I may be incorrect) that the same collection of results, represented in clean JSON, should look like so:

        [{"Age":35,"FirstName":"Peyton","LastName":"Manning"},{"Age":31,"FirstName":"Drew","LastName":"Brees"},{"Age":29,"FirstName":"Tony","LastName":"Romo"}]

    I have no idea where the “d” is coming from. I also have no clue why the escape characters are being inserted. My entity looks like the following:

        [DataContract]
        public class Person
        {
            [DataMember]
            public string FirstName { get; set; }

            [DataMember]
            public string LastName { get; set; }

            [DataMember]
            public int Age { get; set; }

            public Person(string firstName, string lastName, int age)
            {
                this.FirstName = firstName;
                this.LastName = lastName;
                this.Age = age;
            }
        }

    The service that is responsible for returning the content is defined as:

        [ServiceContract(Namespace = "")]
        [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
        public class TestService
        {
            [OperationContract]
            [WebGet(ResponseFormat = WebMessageFormat.Json)]
            public string GetResults()
            {
                List<Person> results = new List<Person>();
                results.Add(new Person("Peyton", "Manning", 35));
                results.Add(new Person("Drew", "Brees", 31));
                results.Add(new Person("Tony", "Romo", 29));

                // Serialize the results as JSON
                DataContractJsonSerializer serializer = new DataContractJsonSerializer(results.GetType());
                MemoryStream memoryStream = new MemoryStream();
                serializer.WriteObject(memoryStream, results);

                // Return the results serialized as JSON
                string json = Encoding.Default.GetString(memoryStream.ToArray());
                return json;
            }
        }

    How do I return “clean” JSON from a WCF service? Thank you!
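
    The escaped string is the result of serializing twice (once by hand, once by WCF treating the return value as a plain string), and the "d" wrapper is added by the ASP.NET AJAX-style (enableWebScript) endpoint. A sketch of the usual fix, assuming the endpoint can be hosted with WebServiceHostFactory or a plain <webHttp/> behavior: return the typed list and let WCF serialize it exactly once.

        // Sketch: let WCF do the JSON serialization instead of returning a pre-built string.
        [OperationContract]
        [WebGet(ResponseFormat = WebMessageFormat.Json)]
        public List<Person> GetResults()
        {
            return new List<Person>
            {
                new Person("Peyton", "Manning", 35),
                new Person("Drew", "Brees", 31),
                new Person("Tony", "Romo", 29)
            };
        }

    Switching the endpoint from <enableWebScript/> to <webHttp/> (or hosting via WebServiceHostFactory) is what removes the "d" wrapper, since that wrapper is an ASP.NET AJAX convention rather than part of WCF's JSON format.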

  • How to remove repeated records from results (LINQ to SQL)

    - by Sadegh
    Hi, I want to remove repeated records from the results, but Distinct doesn't do this for me! Why?

        var results = (from words in _Xplorium.Words
                       join wordFiles in _Xplorium.WordFiles on words.WordId equals wordFiles.WordId
                       join files in _Xplorium.Files on wordFiles.FileId equals files.FileId
                       join urls in _Xplorium.Urls on files.UrlId equals urls.UrlId
                       where files.Title.Contains(query) || files.Description.Contains(query)
                       orderby wordFiles.Count descending
                       select new SearchResultItem()
                       {
                           Title = files.Title,
                           Url = urls.Address,
                           Count = wordFiles.Count,
                           CrawledOn = files.CrawledOn,
                           Description = files.Description,
                           Lenght = files.Lenght,
                           UniqueKey = words.WordId + "-" + files.FileId + "-" + urls.UrlId
                       }).Distinct();
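
    A sketch of one likely explanation rather than a confirmed diagnosis: Distinct() only removes rows whose projected values are all identical (and, when it runs in memory on a reference type that doesn't override Equals/GetHashCode, it compares references). Here UniqueKey includes words.WordId, so the same file matched by several words produces rows that are not identical and survive Distinct(). One workaround is to group on whatever actually defines a duplicate and keep the best row per group:

        // Sketch: keep one result per file, choosing the best-ranked row in each group.
        // AsEnumerable() performs the grouping in memory, which sidesteps provider
        // limitations on translating GroupBy(...).First() into SQL.
        var distinctResults = results
            .AsEnumerable()
            .GroupBy(r => new { r.Title, r.Url })   // pick the key that defines "duplicate"
            .Select(g => g.OrderByDescending(r => r.Count).First())
            .ToList();

    Alternatively, overriding Equals and GetHashCode on SearchResultItem (based on whichever fields should make two results equal) lets the original Distinct() call behave as expected when evaluated in memory.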

  • How should I loop a Nokogiri search in Ruby?

    - by kim
    I have the following code, which retrieves the title of each URL from an array that contains a list of URLs.

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        @urls = ["http://google.com", "http://yahoo.com", "http://rubyonrails.org"]
        @found_titles = Array.new

        @found_titles[0] = Nokogiri::HTML(open("#{@urls[0]}")).search("title").inner_html
        # this can go on forever... but
        # @found_titles[1] = Nokogiri::HTML(open("#{@urls[1]}")).search("title").inner_html
        # @found_titles[2] = Nokogiri::HTML(open("#{@urls[2]}")).search("title").inner_html

        puts "#{@found_titles[0]}"

    How should I form a loop for this, so I can get the titles even when the list in the @urls array gets longer?
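
    A minimal sketch of the usual Ruby idiom - map over the array so the result list grows with @urls:

        require 'rubygems'
        require 'nokogiri'
        require 'open-uri'

        @urls = ["http://google.com", "http://yahoo.com", "http://rubyonrails.org"]

        # One title per URL, in the same order as @urls.
        @found_titles = @urls.map do |url|
          Nokogiri::HTML(open(url)).search("title").inner_html
        end

        @found_titles.each { |title| puts title }

    each_with_index works just as well if the numeric index itself is needed.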

  • Sessions enabled - do we have to clean them up ourselves?

    - by user246114
    Hi, when we turn sessions on in Google App Engine like this:

        // appengine-web.xml
        <sessions-enabled>true</sessions-enabled>

    does App Engine automatically clean up expired sessions, or do we have to do it ourselves? After turning them on, I see in the datastore that some entries are being generated, like _ah_session - I'm wondering if those are them? Thanks
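
    At the time this was asked, App Engine did not clean these entities up on its own, and a common workaround was a cron-triggered task that deletes expired session entities. A sketch, under the assumption that sessions are stored under the _ah_SESSION kind with an _expires timestamp in milliseconds (worth verifying against the datastore viewer for the app in question):

        import com.google.appengine.api.datastore.DatastoreService;
        import com.google.appengine.api.datastore.DatastoreServiceFactory;
        import com.google.appengine.api.datastore.Entity;
        import com.google.appengine.api.datastore.Query;

        // Sketch: delete expired session entities; wire this up behind a cron.xml entry.
        public class SessionCleanup {
            public static void cleanUp() {
                DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
                // Assumption: kind "_ah_SESSION" with an "_expires" property (millis).
                Query query = new Query("_ah_SESSION")
                        .addFilter("_expires", Query.FilterOperator.LESS_THAN, System.currentTimeMillis());
                for (Entity session : datastore.prepare(query).asIterable()) {
                    datastore.delete(session.getKey());
                }
            }
        }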

  • IIS7.5 Outbound Rule for lower case URLs in <a href="...">

    - by Quog
    Hi, I know how to canonicalise the case of URLs on incoming requests to IIS 7.5; in fact, there's a built-in rule template to start from. But how about outbound (without changing the code)? This is where I got to so far:

        <outboundRules>
          <rule name="Outbound lowercase" preCondition="IsHTML" enabled="true">
            <match filterByTags="A" pattern="[A-Z]" ignoreCase="false" />
            <action type="Rewrite" value="{ToLower:{R:0}}" />
          </rule>
          <preConditions>
            <preCondition name="IsHTML">
              <add input="{RESPONSE_CONTENT_TYPE}" pattern="^text/html" />
            </preCondition>
          </preConditions>
        </outboundRules>

    However, IIS barfs on the action with a 500, implying an invalid web.config - probably on the {ToLower:XXXX}, which I stole from the MS-supplied inbound rule template. Anyone know how to do this? Anyone know where the options are fully documented? (My GoogleNinja skills failed me: I found this, but "Specifies value syntax for the rule. This element is available only for the Rewrite action type" is not really comprehensive.) Thanks, Damian

  • How to route tree-structured URLs with ASP.NET Routing?

    - by Venemo
    Hello everyone, I would like to achieve something very similar to this question, with some enhancements. There is an ASP.NET MVC web application. I have a tree of entities. For example, a Page class which has a property called Children, which is of type IList<Page>. (An instance of the Page class corresponds to a row in a database.) I would like to assign a unique URL to every Page in the database. I handle Page objects with a controller called PageController. Example URLs:

        http://mysite.com/Page1/
        http://mysite.com/Page1/SubPage/
        http://mysite.com/Page/ChildPage/GrandChildPage/

    You get the picture. So, I'd like every single Page object to have its own URL that is equal to its parent's URL plus its own name. In addition to that, I would also like the ability to map a single Page to the / (root) URL. I would like to apply these rules:

        1. If a URL can be handled by any other route, or a file exists in the filesystem at the specified URL, let the default URL mapping happen.
        2. If a URL can be handled by the virtual path provider, let that handle it.
        3. If there is no other handler, map the remaining URLs to the PageController class.

    I also found this question, and also this one and this one, but they weren't of much help, since they don't provide an explanation about my first two points. I see the following possible solutions:

        1. Map a route for each page individually. This requires me to go over the entire tree when the application starts, adding an exact-match route to the end of the route table.
        2. Add a route with {*path} and write a custom IRouteHandler that handles it, but I can't see how I could deal with the first two rules then, since this handler would get to handle everything.

    So far, the first solution seems to be the right one, because it is also the simplest. I would really appreciate your thoughts on this. Thank you in advance!
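
    A third option worth sketching (an assumption about how the pieces fit, not a drop-in implementation): register the {*path} catch-all last and attach a route constraint that only matches when the path resolves to a Page in the database, so physical files, the virtual path provider, and every earlier route keep winning. PageRepository, the "Pages" route name and the Show action are hypothetical names.

        // Sketch - PageRepository.Exists is a hypothetical lookup against the Page tree.
        public class PageExistsConstraint : IRouteConstraint
        {
            public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                              RouteValueDictionary values, RouteDirection routeDirection)
            {
                // path is e.g. "Page1/SubPage"; null/empty for the root URL.
                var path = values[parameterName] as string ?? "";
                return PageRepository.Exists(path);   // only claim known pages
            }
        }

        // In RegisterRoutes, after all other MapRoute calls:
        routes.MapRoute(
            "Pages",
            "{*path}",
            new { controller = "Page", action = "Show" },
            new { path = new PageExistsConstraint() });

    Because the constraint refuses URLs that are not pages, unmatched requests still fall through to the later handlers as usual, and routes.RouteExistingFiles defaults to false, so files on disk are served before routing runs at all. Mapping one Page to the root URL amounts to having the constraint and controller treat the empty path as that home page.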

  • How do I use a jQuery not selector to select relative URLs?

    - by Matt
    I'm working on a little jQuery script to add Google Analytics pageTracker onclick data to the URLs on my forum, allowing me to track clicks to external sites. I don't want to add the onclick to internal links on forum.sitename or sitename, and I don't want to add them to any hrefs marked # or that start with /. My script below works nicely, but for one minor problem: all of the forum's URLs are relative and don't start with /. I appear to have no way to change that, so I need to modify the jQuery below to stop it adding the onclick to those relative links, as it currently does.

    What I want to do is write a .not() function like .not("[href!^=http") to prevent jQuery from adding the onclick to any hrefs which do not start with http. However, .not() appears not to support this. I'm new to jQuery and can't figure this out. Any pointers would be massively appreciated.

        $(document).ready(function(){
          // Get URL from a href
          var URL = $("a").attr('href');

          // Add pageTracker data for GA tracking
          $("a")
            .not("[href^=#]")
            .not("[href^=http://forum.sitename]")
            .not("[href^=http://www.sitename]")
            .attr("onclick","pageTracker._trackEvent('Outgoing_Links', 'Forum', " + URL + ");");
        });

    Thanks!
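
    A sketch of one way around the missing "does not start with" selector: filter() with a predicate keeps only the absolute http links, and looping with each() also fixes a subtle bug in the original, where var URL = $("a").attr('href') captures only the first anchor's href for every link. The site names are kept from the question.

        $(document).ready(function () {
          $("a")
            .filter(function () {
              // Keep only absolute links, i.e. hrefs that start with "http".
              var href = $(this).attr("href") || "";
              return href.indexOf("http") === 0;
            })
            .not("[href^='http://forum.sitename']")
            .not("[href^='http://www.sitename']")
            .each(function () {
              var url = $(this).attr("href"); // this link's own href
              $(this).attr(
                "onclick",
                "pageTracker._trackEvent('Outgoing_Links', 'Forum', '" + url + "');"
              );
            });
        });

    Binding a click handler that calls pageTracker._trackEvent directly would be more idiomatic than building an onclick attribute string, but the sketch stays close to the original approach.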

  • How to handle Clean URIs in Classic ASP using PATH_INFO?

    - by Mario
    I'm trying to handle clean URIs in a Classic ASP application. In PHP, I was able to use URIs like http://example.com/index.php/foo/bar/baz and have /foo/bar/baz available in the PATH_INFO environment variable. (I usually add a rewrite rule so I do not need the index.php segment.) However, I don't seem to be able to mimic this in Classic ASP. If I try http://example.com/index.asp/foo/bar/baz, I get a 404 error. Is there a way to add a path after the index.asp segment and get the PHP-like behaviour in ASP? Note: I'm currently using the workaround of rewriting URLs of the form http://example.com/foo/bar/baz/ to index.asp?path=/foo/bar/baz, since I can't seem to get index.asp/foo/bar/baz to work.

  • NGINX rewrite rules help. Redirect not working and want to get rid of index.php in URLs

    - by Tamerax
    Hey! I have two questions for nginx users.

    1) I'm trying to set up my Joomla server on my new Linode running NGINX and, after much (like days) of searching and testing, I finally have a config that works with SEF URL plugins... sorta. I was using an Apache system on the old server and it used mod_rewrite, and life was fine in terms of SEF. Since NGINX doesn't have mod_rewrite, I found something that works BUT it constantly leaves index.php in the URLs, e.g. http://mysite.com/index.php/forum. I want it to be just http://mysite.com/forum, but without mod_rewrite that doesn't seem to be possible in Joomla that I'm aware of. I know in WordPress it IS possible, but I have to use a plugin. Here is my config file:

        server {
            listen 80;
            server_name mysite.com www.mysite.com;

            access_log /home/public_html/mysite.com/log/access.log;
            error_log /home/public_html/mysite.com/log/error.log;

            root /home/public_html/mysite.com/public/;

            large_client_header_buffers 4 8k; # prevent some 400 errors

            index index.php index.html;
            fastcgi_index index.php;

            location / {
                expires 30d;
                error_page 404 = @joomla;
                log_not_found off;
            }

            # Rewrite
            location @joomla {
                rewrite ^(.*)$ /index.php?q=last;
            }

            # Static files
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
                access_log off;
                expires 30d;
            }

            # PHP
            location ~ \.php {
                keepalive_timeout 0;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/public_html/mysite.com/public/$fastcgi_script_name;
            }
        }

    2) The second question should be easy, but I can't get it to work. I want to use the same config I posted above and have either mysite.com or www.mysite.com forward to mysite.com/portal. Basically, when you hit the front page with or without the www, it all gets forwarded to a subdirectory on the server I called portal. I have tried several variations of:

        rewrite ^/(.*) http://www.example.com/portal/$1 permanent;

    but it usually ends with Firefox telling me there is some crazy loop happening, the address bar saying something like mysite.com/portalportalportalportalportal... on and on.

    So, any help on either of these issues would be awesome!! Thanks!!
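
    Two hedged sketches, assuming a reasonably recent nginx (try_files appeared in 0.7.27) and that the paths above are correct; neither is verified against this exact Joomla install. The first replaces the existing location / block, the second sits alongside it:

        # 1) Joomla SEF without index.php: hand unknown paths straight to index.php.
        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        # 2) Redirect only the site root to /portal/, so /portal/ itself is never
        #    rewritten again (which is what causes the /portalportalportal... loop).
        location = / {
            rewrite ^ /portal/ permanent;
        }

    On the Joomla side, the "Use URL Rewriting" SEO option still has to be switched on so that the links Joomla generates omit index.php.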

  • How do I tell Eclipse to auto-generate or retain stubs when it starts and does a clean build?

    - by Erick Robertson
    I'm working on a Java application that uses JavaSpaces. We're developing this in Eclipse. There are a couple of instances where we are inserting code into the JavaSpace to do some more advanced space notification logic. Doing this requires that we generate stubs for the classes used within the JavaSpace. We use an external script to generate these stubs. The problem is that whenever Eclipse restarts, it does a clean build of the whole application. When it does this, it deletes all the stubs and we have to regenerate them. I would like to find a way either to tell Eclipse not to remove the _stub.class and _skel.class files within the bin folder where the .class files are placed, or to somehow teach Eclipse to generate the stub files whenever it does a rebuild (and, I suppose, whenever the source files from which the stubs are generated change). How can I do one of these, so that we don't have to manually build the stubs every time we start up Eclipse?
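
    One approach worth sketching: attach a project builder (Project Properties > Builders > New... > Ant Builder) that re-runs the stub script after the Java builder, and enable it for "After a 'Clean'" and auto builds so the stubs come back whenever Eclipse wipes bin/. The build file below is only a wrapper around the existing external script, whose name and location are assumed here:

        <!-- generate-stubs.xml - Ant wrapper around the (hypothetical) external stub script -->
        <project name="generate-stubs" default="stubs" basedir=".">
            <target name="stubs">
                <!-- Assumption: the existing Unix shell script lives in the project root. -->
                <exec executable="bash" failonerror="true">
                    <arg value="generate_stubs.sh"/>
                </exec>
            </target>
        </project>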

  • Java: Best practices for turning foreign horror-code into clean API...?

    - by java.is.for.desktop
    Hello, everyone! I have a project (related to graph algorithms). It is written by someone else. The code is horrible:

        - public fields, no getters/setters
        - huge methods, all public
        - some classes have over 20 fields
        - some classes have over 5 constructors (which are also huge)
        - some of those constructors just left many fields null (so I can't make some fields final, because then every second constructor signals errors)
        - methods and classes rely on each other in both directions

    I have to rewrite this into a clean and understandable API. Problem is: I myself don't understand anything in this code. Please give me hints on analyzing and understanding such code. I was thinking, perhaps, there are tools which perform static code analysis and give me call graphs and things like this.

  • Rails 3 routes and using GET to create clean URLs?

    - by Hard-Boiled Wonderland
    I am a little confused with the routes in Rails 3, as I am just starting to learn the language. I have a form generated here:

        <%= form_tag towns_path, :method => "get" do %>
          <%= label_tag :name, "Search for:" %>
          <%= text_field_tag :name, params[:name] %>
          <%= submit_tag "Search" %>
        <% end %>

    Then in my routes:

        get "towns/autocomplete_town_name"
        get "home/autocomplete_town_name"

        match 'towns' => 'towns#index'
        match 'towns/:name' => 'towns#index'

        resources :towns, :module => "town"
        resources :businesses, :module => "business"

        root :to => "home#index"

    So why, when submitting the form, do I get the URL:

        /towns?utf8=?&name=townname&commit=Search

    The question is: how do I make that URL into a clean URL like:

        /towns/townname

    Thanks, Andrew
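
    For what it's worth, a plain GET form always serializes its fields into the query string - the browser builds /towns?name=... before Rails ever sees the request - so the clean form has to come from either JavaScript rewriting the form action or a server-side redirect. A sketch of the redirect approach, assuming the match 'towns/:name' route above stays in place:

        class TownsController < ApplicationController
          before_filter :redirect_to_clean_url, :only => :index

          def index
            # params[:name] is set either by the redirect below or by /towns/:name
          end

          private

          # If the request arrived as /towns?name=townname&..., bounce it to /towns/townname.
          def redirect_to_clean_url
            if params[:name].present? && !request.query_string.empty?
              redirect_to "/towns/#{CGI.escape(params[:name])}"
            end
          end
        end

    The redirect also drops the utf8 and commit parameters as a side effect, since the clean path is rebuilt from the town name alone.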

  • Data clean up: are there libraries of common permutations that we can use? Or is there a better appr

    - by anyaelena
    We are working on clean-up and analysis of a lot of human-entered customer data. We need to decide programmatically whether 2 addresses (for example) are the same, even though the data was entered with slight variations. Right now we run each address through fairly simplistic string replacement (replacing avenue with ave, for example), concatenate the fields and compare the results. We are doing something similar with names. At the very least, it seems like our list of search-replace values should already exist somewhere. Or perhaps you can suggest a totally different and superior way to detect matches?
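
    A sketch of the normalize-then-compare idea already in use, written in C# as an assumption rather than a known part of the existing pipeline; the abbreviation table is the piece that, in practice, usually comes from postal-style suffix lists (e.g. the USPS street suffix abbreviations) rather than being hand-rolled:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text.RegularExpressions;

        static class AddressMatcher
        {
            // A tiny sample of the search-replace list; real suffix tables are much longer.
            static readonly Dictionary<string, string> Abbreviations = new Dictionary<string, string>
            {
                { "avenue", "ave" }, { "street", "st" }, { "road", "rd" },
                { "boulevard", "blvd" }, { "suite", "ste" }, { "apartment", "apt" }
            };

            public static string Normalize(string address)
            {
                // Lower-case, strip punctuation, collapse whitespace, apply abbreviations.
                var lowered = Regex.Replace(address.ToLowerInvariant(), @"[^\w\s]", " ");
                var words = lowered
                    .Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
                    .Select(w => Abbreviations.ContainsKey(w) ? Abbreviations[w] : w);
                return string.Join(" ", words);
            }

            public static bool SameAddress(string a, string b)
            {
                return Normalize(a) == Normalize(b);
            }
        }

    Beyond exact comparison of the normalized strings, fuzzy measures (Levenshtein or Jaro-Winkler distance) or an external geocoding/address-validation service can catch variations that a substitution list never will.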

  • Real-world examples of populating a GWT CellTable using a clean MVP pattern?

    - by piehole
    We are using the GWT-Presenter framework and attempting to use CellTable to put together an updateable grid. It seems as though several of the GWT constructs for CellTable don't lend themselves easily to breaking up the logic into clean view and presenter code. Examples:

        1) Within the View's constructor, the CellTable is defined and each column is created by anonymous inner classes that extend the Column class to provide the onValue() method.

        2) The FieldUpdater interface must be implemented to provide logic to execute when a user alters data in a cell. This seems like it would best fit in the Presenter's onBind() method, but FieldUpdaters often need access to the Cell or Column, which belong in the view. CellTable does not have accessor methods to get hold of the Columns or Cells, so it seems the only way for the Presenter to get them is for me to create a multitude of member variables on the View and accessors on my Display interface.

    Can anyone provide good examples for dealing with CellTable in GWT-Presenter or a comparable MVP
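
    One commonly used arrangement, sketched with stock GWT CellTable APIs rather than anything GWT-Presenter-specific: the view owns the columns, but routes the FieldUpdater through a narrow hook on the display interface that the presenter fills in during onBind(), so no Cell or Column ever has to leave the view. Person, its getName/setName accessors, and the display interface name are hypothetical.

        // In the Display/View interface (sketch):
        public interface PersonDisplay {
            void setNameUpdater(FieldUpdater<Person, String> updater);
        }

        // In the view - the column stays private, only the updater hook is exposed:
        private final Column<Person, String> nameColumn =
                new Column<Person, String>(new EditTextCell()) {
                    @Override
                    public String getValue(Person person) {
                        return person.getName();
                    }
                };

        @Override
        public void setNameUpdater(FieldUpdater<Person, String> updater) {
            nameColumn.setFieldUpdater(updater);
        }

        // In the presenter's onBind() - no widget code leaks in:
        display.setNameUpdater(new FieldUpdater<Person, String>() {
            @Override
            public void update(int index, Person person, String value) {
                person.setName(value);   // plus whatever RPC/validation the presenter owns
            }
        });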

  • How can I use a clean URL only in a subfolder of my website?

    - by tibin mathew
    Hi, I have a web site, http://www.mydomain.com, and I have created a sub-folder, http://www.mydomain.com/products. I want all the pages inside the products folder to use clean URLs. I know the .htaccess should be inside the products folder. If it's enabled, will it affect all the parent directories and files of my site? I mean, will it also affect the pages under http://www.mydomain.com/ itself? I have one more doubt about the .htaccess file: is there a way I can enable mod_rewrite through code, without directly editing the httpd.conf file? Please help me. Thanks
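
    A sketch of the usual answer, with the caveat that it assumes Apache defaults: an .htaccess file only applies to its own directory and the directories below it, so rules placed in /products/ leave the rest of the site alone. The mod_rewrite module itself can only be enabled in the server configuration (httpd.conf or a2enmod rewrite, with AllowOverride permitting it), not from .htaccess or application code, although the rewrite rules themselves can live entirely in the .htaccess file once the module is loaded. The target script and parameter below are hypothetical:

        # /products/.htaccess - affects only URLs under /products/
        RewriteEngine On
        RewriteBase /products/
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        # e.g. /products/blue-widget/  ->  /products/item.php?slug=blue-widget
        RewriteRule ^([a-z0-9-]+)/?$ item.php?slug=$1 [L,QSA]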

  • How do you remove/clean-up code which is no longer used?

    - by clarke ching
    So, we have a project which had to be radically descoped in order to ship on time. It's got a lot of code left in it which is not actually used. I want to clean up the code, removing any dead-wood. I have the authority to do it and I can convince people that it's a commercially sensible thing to do. [I have a lot of automated unit tests, some automated acceptance tests and a team of testers who can manually regression test.] My problem: I'm a manager and I don't know technically how to go about it. Any help?
