Search Results

Search found 3028 results on 122 pages for 'urls'.

Page 23/122

  • Can zlib.crc32 or zlib.adler32 be safely used to mask primary keys in URLs?

    - by David Eyk
    In Django Design Patterns, the author recommends using zlib.crc32 to mask primary keys in URLs. After some quick testing, I noticed that crc32 produces negative integers about half the time, which seems undesirable for use in a URL. zlib.adler32 does not appear to produce negatives, but is described as "weaker" than CRC. Is this method (either CRC or Adler-32) safe for use in a URL as an alternative to a primary key? (i.e. is it collision-safe?) Is the "weaker" Adler-32 a satisfactory alternative for this task? How the heck do you reverse this?! That is, how do you determine the original primary key from the checksum?
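
    For a concrete picture of the first two points, here is a minimal Python sketch (the key values are made up): masking the checksum with 0xffffffff forces a non-negative 32-bit integer, but a checksum of either kind is neither collision-proof nor reversible, so the original key still has to be looked up (or encoded reversibly) on the way back.

        import zlib

        def mask_key(pk):
            """Return a non-negative 32-bit checksum of the primary key."""
            # crc32 can return a signed int on some builds; force it unsigned.
            return zlib.crc32(str(pk).encode()) & 0xffffffff

        # A checksum cannot be inverted, so keep a lookup (or use a reversible
        # scheme such as XOR-ing the key with a secret) to get the pk back.
        masked_to_pk = {mask_key(pk): pk for pk in range(1, 1000)}

        print(mask_key(42))                 # always >= 0
        print(masked_to_pk[mask_key(42)])   # recovers 42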


  • How should I handle pages that move to a new URL with regard to search engines?

    - by Anders Juul
    Hi all, I have done some refactoring on an ASP.NET MVC application that is already deployed to a live web site. Part of the refactoring was moving functionality to a new controller, causing some URLs to change. Shortly afterwards the various search engine robots started hammering the old URLs. What is the right way to handle this in general? Ignore it? In time the search engines should find out that they get nothing but 400s from the old URLs. Block the old URLs with robots.txt? Keep catching the old URLs and redirect them to the new ones? Users navigating the site would never hit the redirects, since the URLs are updated throughout the new version of the site, so I see it as garbage code - unless it could be handled by some fancy routing? Something else? As always, all comments are welcome... Thanks, Anders, Denmark


  • How do I allow inline images with data urls on .NET 4 without triggering request validation?

    - by Johan Driessen
    I'm using the jQuery jstree plugin (http://jstree.com) in an ASP.NET MVC 2 project on .NET 4 RC. It comes with some stylesheets with inline images with data URLs, like this: .tree-checkbox ul { background-image:url(data:image/gif;base64,R0lGODlhAgACAIAAAB4dGf///yH5BAEAAAEALAAAAAACAAIAAAICRF4AOw==); } Now, the URL for the background image contains a colon, which .NET 4 thinks is an unsafe character, so I get this error message: A potentially dangerous Request.Path value was detected from the client (:). According to the documentation, I am supposed to be able to prevent this by adding <pages validateRequest="false" /> to my Web.config, but that doesn't seem to help. I have tried adding it to the main Web.config for the application, as well as to a special Web.config in the /config folder, but to no avail. Is there any way to get .NET to allow this?
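
    One setting worth experimenting with (a hedged sketch, not verified against this exact project): in .NET 4 the ":" check on Request.Path is governed by the requestPathInvalidCharacters attribute of <httpRuntime>, not by page-level request validation, which would explain why <pages validateRequest="false" /> has no effect. The value below is the usual default list with the colon removed; double-check the default for your framework version.

        <!-- Web.config (application root) -->
        <system.web>
          <!-- drop ":" from the list of characters rejected in the request path -->
          <httpRuntime requestPathInvalidCharacters="&lt;,&gt;,*,%,&amp;,\" />
        </system.web>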


  • SQL: select random row from table where the ID of the row isn't in another table?

    - by johnrl
    I've been looking at fast ways to select a random row from a table and found the following site: http://74.125.77.132/search?q=cache:http://jan.kneschke.de/projects/mysql/order-by-rand/&hl=en&strip=1 What I want to do is select a random URL from my table 'urls' that I DON'T have in my other table 'urlinfo'. The query I am using now selects a random URL from 'urls', but I need it modified to only return a random URL that is NOT in the 'urlinfo' table. Here's the query: SELECT url FROM urls JOIN (SELECT CEIL(RAND() * (SELECT MAX(urlid) FROM urls ) ) AS urlid ) AS r2 USING(urlid); And the two tables: CREATE TABLE urls ( urlid INT NOT NULL AUTO_INCREMENT PRIMARY KEY, url VARCHAR(255) NOT NULL ) ENGINE=INNODB; CREATE TABLE urlinfo ( urlid INT NOT NULL PRIMARY KEY, urlinfo VARCHAR(10000), FOREIGN KEY (urlid) REFERENCES urls (urlid) ) ENGINE=INNODB;
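
    A sketch of one way to add the missing anti-join (slower than the max-id trick because of ORDER BY RAND(), but it only ever returns URLs that have no urlinfo row):

        -- pick one random url with no matching row in urlinfo
        SELECT u.url
        FROM urls AS u
        LEFT JOIN urlinfo AS i ON i.urlid = u.urlid
        WHERE i.urlid IS NULL
        ORDER BY RAND()
        LIMIT 1;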


  • REST - why do we need millions of URLs and different HTTP requests?

    - by Andre
    I asked this question, but I still don't understand why we need to utilize different HTTP requests (DELETE/PUT/POST/GET) in order to build a nice API. Wouldn't it be a lot simpler to pass all the information in request parameters and have a SINGLE ENTRY POINT for your API?: GET www.example.com/api?id=1&method=delete&returnformat=JSON GET www.example.com/api?id=1&method=delete&returnformat=XML or POST www.example.com/api {post data: id=1&method=delete&returnformat=JSON} POST www.example.com/api {post data: id=1&method=delete&returnformat=XML} and then we can handle all methods and data internally without the need for hundreds of URLs... What would you call this type of API? It's not REST, apparently, and it's not SOAP - then what is it? UPDATE: I'm not proposing any new standards here. I'm merely asking a question in order to better understand why web services work the way they do.
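
    For illustration only (the endpoint is hypothetical), here is the same delete expressed both ways with Python's requests library; one practical difference is that caches and link prefetchers are allowed to repeat GET requests, so a GET that deletes data can be replayed without the client asking for it.

        import requests

        BASE = "https://www.example.com/api"   # hypothetical endpoint

        # REST style: the URL names the resource, the HTTP verb carries the intent.
        requests.delete(BASE + "/items/1", headers={"Accept": "application/json"})

        # Single-entry-point style from the question: the intent rides in parameters.
        requests.get(BASE, params={"id": 1, "method": "delete", "returnformat": "JSON"})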


  • PHP/MySQL - Special characters in URLs. How to avoid?

    - by RC
    Hey everyone, My database contains information extracted from an external feed. In this raw text feed, the following text is used in place of special characters: & - &amp; ' - &#39; é - &eacute; I extract some of this text to form URLs. For example, a URL that I construct from data containing these characters might look like this: http://url.com/search/?brand=Franklin&Hédgson's I use the GET variables in this URL to construct further lookups, which leads to a couple of specific problems: The é and ' characters are sent back to MySQL as they appear, and so they don't trigger any results (because the characters take the full HTML-entity form in the database text). The & within the URL acts as a parameter separator, so the GET returns only Franklin when it should return the whole string. Are there any straightforward ways of dealing with this? Thanks.
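
    The stack here is PHP (html_entity_decode() plus urlencode() would be the corresponding calls there), but the general recipe is the same in any language: decode the HTML entities once when reading the feed, then percent-encode the value before putting it into a query string, and the receiving side gets the literal characters back. A small Python sketch of the idea (the stored string is a guess at the feed format):

        import html
        from urllib.parse import quote_plus, parse_qs, urlsplit

        raw = "Franklin &amp; H&eacute;dgson&#39;s"       # guess at how the feed stores it

        brand = html.unescape(raw)                        # -> "Franklin & Hédgson's"
        url = "http://url.com/search/?brand=" + quote_plus(brand)
        print(url)    # &, é and ' travel as %26, %C3%A9 and %27

        # the receiving side (or parse_qs) undoes the percent-encoding
        query = parse_qs(urlsplit(url).query)
        print(query["brand"][0])                          # -> "Franklin & Hédgson's"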


  • input URL, output contents of "view page source", i.e. after javascript / etc, library or command-line?

    - by Ryan Berckmans
    I need a scalable, automated method of dumping the contents of "view page source" (the DOM) to a file. Programs such as wget or curl will non-interactively retrieve a set of URLs, but they do not execute JavaScript or any of that 'fancy stuff'. My ideal solution looks like any of the following (fantasy solutions): cat urls.txt | google-chrome --quiet --no-gui \ --output-sources-directory=~/urls-source (fantasy command line, no idea if flags like these exist) or cat urls.txt | python -c "import some-library; \ ... use some-library to process urls.txt ; output sources to ~/urls-source" As a secondary concern, I also need to: dump all included JavaScript source to file (a la Firebug), and dump a PDF/image of the page to file (print to file).
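
    A sketch of the second fantasy, assuming the selenium package plus a matching chromedriver are acceptable dependencies: driver.page_source returns the DOM after scripts have run, which is the "generated source" that wget and curl cannot produce. (The secondary wishes - dumping each script and a PDF/image of the page - are not covered here.)

        # assumes: pip install selenium, and chromedriver on PATH
        import os
        from selenium import webdriver

        out_dir = os.path.expanduser("~/urls-source")
        os.makedirs(out_dir, exist_ok=True)

        options = webdriver.ChromeOptions()
        options.add_argument("--headless")
        driver = webdriver.Chrome(options=options)

        with open("urls.txt") as f:
            urls = [line.strip() for line in f if line.strip()]

        for n, url in enumerate(urls):
            driver.get(url)                               # blocks until the load event
            path = os.path.join(out_dir, "%04d.html" % n)
            with open(path, "w", encoding="utf-8") as out:
                out.write(driver.page_source)             # DOM after JavaScript ran

        driver.quit()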


  • How to use personalized URLs in an ASP.NET MVC application?

    - by Bootcamp
    I am working on a website in which many users can create an account and have a personalized page. I wish to provide them a Twitter-like URL to access their pages, for example www.mysite.com/smith or www.mysite.com/john. I am using ASP.NET MVC 1.0. I understand that I can add routes in the Global.asax file, but I am not able to figure out how to add a route that will work for such URLs. Please provide some help / suggestions. Thanks.


  • Is it bad for SEO to have an 'article' published under 2 urls?

    - by Alichad
    Hi, On our new website we publish an article once and can tag it to appear in several sections, e.g. blahblah.com/insight/10-05-21/Buzzcity-releases-mobile-game-library.aspx blahblah.com/international_media/10-05-21/Buzzcity-releases-mobile-game-library.aspx Is it better for SEO to have the two different URLs, which include important keywords like 'insight' and 'international media', or is it better to have a single generic URL, e.g. blahblah.com/articles/10-05-21/Buzzcity_releases_mobile_game_library.aspx? I read somewhere that Google doesn't like the same content 'duplicated' in two (or three) places - I am not a techie. Thanks
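
    One widely used option (shown as a sketch, with the generic address from the question standing in as the preferred URL): keep both section URLs for navigation but declare a single canonical address in the <head> of each copy, so search engines fold the duplicates into one page.

        <link rel="canonical" href="http://blahblah.com/articles/10-05-21/Buzzcity_releases_mobile_game_library.aspx" />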


  • How can I map URLs to filenames with perl?

    - by eugene y
    In a simple webapp I need to map URLs to filenames or filepaths. This app has a requirement that it can depend only on modules in the core Perl distribution (5.6.0 and later). The problem is that filename length on most filesystems is limited to 255. Another limit is about 32k subdirectories in a single folder. My solution: my $filename = $url; if (length($filename) > $MAXPATHLEN) { # if filename longer than 255 my $part1 = substr($filename, 0, $MAXPATHLEN - 13); # first 242 chars my $part2 = crypt(0, substr($filename, $MAXPATHLEN - 13)); # 13-char hash $filename = $part1.$part2; } $filename =~ s!/!_!g; # escape directory separator Is it reliable? How can it be improved?


  • RichEdit VCL and URLs. Workarounds for OnPaint Issues.

    - by HX_unbanned
    So, the issue is with the thing Delphi programmers are scared to death of - the Rich Edit control in Windows (XP and pre-XP versions). Situation: I have added EM_AUTOURLDETECTION in the form's OnCreate. Target - RichEdit1. Then I have a form that is "collapsed" after being shown. The RichEdit control is static, visible and enabled, but it is "hidden" because the form window is collapsed. I can expand and collapse the form using Button1 and changing the form's Constraints and Size properties. The first time I expand the form, the URL inside the RichEdit1 control is highlighted. BUT - after the second, third, fourth, etc. time I collapse and expand the form, the RichEdit1 control does not highlight the URL anymore. I have tried EM_SETTEXTMODE messages, also WM_UPDATEUISTATE, also the basic WM_TEXT message - no luck. It seems like this message really works (enables detection) while sending keyboard strokes (virtual keycodes), but not when the text has been modified. Also - I am thinking of rewriting the code to create the RichEdit control dynamically. Would this fix the problem? Maybe the solution is to override the OnPaint / OnDraw method to avoid losing the highlighting (formatting) when collapsing or expanding the form? What is weird is that the Embarcadero documentation says this should work any time the text is modified. Why doesn't it? Any help appreciated. I am making this Community Wiki because this is a common problem and together we can find a solution, right? :) Also - follow-ups and related questions: http://stackoverflow.com/questions/738694/override-onpaint http://stackoverflow.com/questions/478071/how-to-autodetect-urls-in-richedit-2-0 http://www.vbforums.com/archive/index.php/t-59959.html


  • For business people to manage, keep binary images in MySQL or just the urls?

    - by Michael Mao
    Hello everyone: I am working on a task to enable image uploading and auto-scaling (from full size to thumbnail) with jQuery & PHP. I can naturally come up with two approaches: first, store both images as binary objects directly in MySQL; second, store only URLs to the images and keep the images somewhere on the server. The images are for everyone to view, so there are no security restrictions, as far as I know. Personally I don't have any preference; however, at the end of the day, it is the business people who are going to manage the images as part of the system (CRUD). So I am wondering which seems to be a bit better for them? Of course I am building an easy-to-use, visual web interface for the staff to control the process, but I am not sure if that is enough. Lessons have taught me that if I don't think about the future and seek the most flexible approach, then I will probably screw myself sooner or later. PS. The following link is what I've found so far, which is pretty cool, no Flash involved :) Andrew Valum's ajax image upload jQuery plugin


  • Rails: Obfuscating Image URLs on Amazon S3? (security concern)

    - by neezer
    To make a long explanation short, suffice it to say that my Rails app allows users to upload images to the app that they will want to keep in the app (meaning, no hotlinking). So I'm trying to come up with a way to obfuscate the image URLs so that access to an image depends on whether or not the user is logged in to the site; if anyone tried hotlinking to the image, they would get a 401 access denied error. I was thinking that if I could route the request through a controller, I could re-use a lot of the authorization I've already built into my app, but I'm stuck there. What I'd like is for my images to be accessible through a URL to one of my controllers, like: http://railsapp.com/images/obfuscated?member_id=1234&pic_id=7890 If the user were to right-click on the image displayed on the website, select "Copy Address" and paste it in, it would be the SAME URL (as in, it wouldn't betray where the image is actually hosted). The actual image would be living at a URL like this: http://s3.amazonaws.com/s3username/assets/member_id/pic_id.extension Is this possible to accomplish? Perhaps using Rails' render method? Or something else? I know it's possible for PHP to return the correct headers to make the browser think it's an image, but I don't know how to do this in Rails... UPDATE: I want all users of the app to be able to view the images if and ONLY if they are currently logged on to the site. If the user does not have a currently active session on the site, accessing the images directly should yield a generic image, or an error message.


  • Apache - How to disable gzip content encoding (e.g. DEFLATE) for one set of URLs?

    - by Rory McCann
    I have an Ubuntu Apache webserver and I have enabled mod_deflate to gzip all the content. However, there's one folder I'd like to disable mod_deflate for. I was going to do something like this: <Location /myfolder> RemoveOutputFilter DEFLATE </Location> But that doesn't work. Rationale: I am trying to debug an XML-RPC server and I am using Wireshark to see what gets passed in the HTTP requests; since the replies are gzipped, I can't see what's going on.
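
    One workaround to try (a sketch, untested against this exact server): mod_deflate skips compression whenever the no-gzip environment variable is set for a request, and mod_setenvif can set it early enough based on the request path; dont-vary additionally stops Apache adding Accept-Encoding to the Vary header for those responses.

        # in the server or virtual host config
        SetEnvIf Request_URI "^/myfolder" no-gzip dont-vary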


  • How to prevent mod_proxy from rewriting redirects into absolute URLs?

    - by Yang
    I have: nginx (port 80) reverse-proxying to apache2 (port 88) reverse-proxying to a web app (port 5001). However, when the web app responds with a redirect like Location: /foo, apache2 rewrites this into Location: http://host.com:88/sub/foo, even though port 88 is publicly inaccessible. I'd like it to just redirect to the relative URL Location: /sub/foo. Any ideas? My apache config (using mod_proxy_http, mod_proxy_html, mod_substitute): <Location /notes/> Allow from all ProxyPass http://127.0.0.1:5001/ SetOutputFilter proxy-html ProxyPassReverse / ProxyHTMLURLMap / /notes/ RequestHeader unset Accept-Encoding AddOutputFilterByType SUBSTITUTE application/atom+xml Substitute "s|127.0.0.1:5001|host.com/notes|" </Location>


  • Google not indexing new forum

    - by Tom Gullen
    We installed a new forum a few months ago. The URL is: https://www.scirra.com/forum I've 301'd the old topics/threads, as well as included all the new URLs in the sitemap. Yet they are still not appearing. Webmaster Tools is showing 139,512 URLs submitted and 50,544 URLs indexed, and it has been stuck there for quite some time. There has also been a massive drop in indexed pages since we updated the forum. Any help much appreciated


  • In Google Webmaster Tools we have 3 sitemaps attributed to 1 domain

    - by Frank
    Thanks for your advice and help ahead of time. I have a website that has been on the internet for almost 10 years, created in Microsoft FrontPage, with over 900 pages. Currently in Google Webmaster Tools it shows up as 2 domains and 3 sitemaps: http://www.example.com, example.com, and hostedsitemaps.com. Furthermore, since we were having a hard time placing the XML sitemap on our site (FrontPage issues), we decided to hire pro-sitemaps.com to create, host and upload the XML file, which they did. Thus, we have another site, hostedsitemaps.com, in our Webmaster Tools for the site. Hostedsitemaps shows: 900 URLs submitted, 800 indexed; crawl errors and search queries: no data available. http://www.example.com shows: 889 URLs submitted, 1 URL indexed; crawl errors: 14 soft 404, 796 not found; search queries: 8104. example.com shows: 889 URLs submitted, 1 URL indexed; crawl errors: 48 soft 404, 91 not found; search queries: 8104. My questions and need for help are as follows: 1. Why are our domain-based sites in Webmaster Tools (example.com and http://www.example.com) showing only 1 URL indexed while the hosted sitemap has 800 indexed? 2. Should we have 3 domains configured for this one domain in Google Webmaster Tools? 3. Should we eliminate/delete the hosted sitemap from Webmaster Tools completely and take off that XML sitemap? 4. Does having example.com and http://www.example.com impact web ranking? 5. Any other thoughts or help in this very complicated matter for us. Thanks.


  • Moving from http to https - Google Webmaster Tools | Bing Webmaster Tools

    - by user2240778
    I'm moving from http to https for my entire site. The site is currently added to Google Webmaster Tools as www.example.com and all the pages are indexed as http. How do I go about moving to the new https URLs in Google Webmaster Tools? Do I just submit an updated sitemap which has the https URLs, OR do I add a new site as https://www.example.com and submit the sitemap with https URLs? All the http URLs are set to redirect to their https counterparts.


  • I run Webmin and I want it to be accessed with two URLs, both using proxypass in apache

    - by user36644
    This is what I am trying to do: NameVirtualHost * <VirtualHost *> ServerName testsite.org ServerAdmin [email protected] DocumentRoot /var/www/ </VirtualHost> <VirtualHost *> ServerName panel.testsite.org ProxyPass / http://panel.testsite.org:10000/ ProxyPassReverse / http://panel.testsite.org:10000/ </VirtualHost> <VirtualHost 12.34.56.78> ServerName newsite.com ServerAdmin [email protected] DocumentRoot /var/newsite/ </VirtualHost> <VirtualHost 12.34.56.78> ServerName panel.newsite.com ProxyPass / http://panel.newsite.com:10000/ ProxyPassReverse / http://panel.newsite.com:10000/ </VirtualHost> The problem is that it won't accept the second vhost with the IP 12.34.56.78 because it says one already exists. panel.newsite.com and newsite.com have the same IP... so I am not sure how I can make it so that only the URL "panel.newsite.com" gets proxied to port 10000 but no other URL on newsite.com does.
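
    One thing to try (a sketch of the usual Apache 2.2 pattern rather than a verified fix for this exact config): name-based virtual hosting has to be switched on for that address as well, otherwise the second <VirtualHost 12.34.56.78> looks like a duplicate IP-based host. With the extra NameVirtualHost line, ServerName decides which of the two blocks answers, so only panel.newsite.com is proxied to port 10000.

        NameVirtualHost *
        NameVirtualHost 12.34.56.78

        <VirtualHost 12.34.56.78>
            ServerName newsite.com
            DocumentRoot /var/newsite/
        </VirtualHost>

        <VirtualHost 12.34.56.78>
            ServerName panel.newsite.com
            ProxyPass / http://panel.newsite.com:10000/
            ProxyPassReverse / http://panel.newsite.com:10000/
        </VirtualHost>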


  • How to let mod_wsgi only handle certain URLs under Apache?

    - by Frederik
    I have a Django app that handles "/admin/" and "/myapp/". All the other requests should be handled by Apache. I've tried using LocationMatch but then I'd have to write a negative regex. I've tried WSGIScriptAlias with the /admin/ prefix but then the wsgi_handler receives the request with the /admin/ part cut off. Is there a cleaner way to make mod_wsgi only handle certain requests?


  • Make shortened and long urls play together on the same domain (RewriteRule).

    - by Renato Renato
    Long story short, I want to have both example.com/aJ5 and example.com/any-other-url working together. I'm using Apache and am not very good at writing regexes. I already have a global RewriteRule which sends everything to the app entry point. What I need is to tell Apache: if length($path) is <= 5 chars, then rewrite to another location. I know I can use {1,5}-style syntax in a regex, but I don't really know if that's what I'm looking for. I'd like to implement this at the web-server level rather than the PHP level. Any help is appreciated.
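
    A minimal mod_rewrite sketch of that idea (the target script name and the allowed character set are assumptions - adjust both, and keep this rule above the existing catch-all so short codes are matched first):

        RewriteEngine On

        # 1-5 URL-safe characters and nothing else -> treat it as a short code
        RewriteRule ^/?([A-Za-z0-9]{1,5})$ /expand.php?code=$1 [L,QSA]

        # the existing catch-all front-controller rule stays below this one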

