Search Results

Search found 8013 results on 321 pages for 'clean urls'.

Page 38/321 | < Previous Page | 34 35 36 37 38 39 40 41 42 43 44 45  | Next Page >

  • C# : Forcing a clean run in a long-running SQL reader loop?

    - by Wardy
    I have a SQL data reader that reads 2 columns from a SQL database table. Once it has done its bit, it starts again, selecting another 2 columns. I would pull the whole lot in one go, but that presents a whole other set of challenges. My problem is that the table contains a large amount of data (some 3 million rows or so), which makes working with the entire set a bit of a problem. I'm trying to validate the field values, so I'm pulling the ID column plus one of the other columns and running each value in that column through a validation pipeline where the results are stored in another database. My problem is that when the reader finishes handling one column I need to force it to immediately clean up every little block of RAM used, as this process uses about 700MB and it has about 200 columns to go through. Without a full garbage collect I will definitely run out of RAM. Anyone got any ideas how I can do this? I'm using lots of small reusable objects; my thought was that I could just call GC.Collect() at the end of each read cycle and that would flush everything out, but unfortunately that isn't happening for some reason.

    Read the article

  • Can zlib.crc32 or zlib.adler32 be safely used to mask primary keys in URLs?

    - by David Eyk
    In Django Design Patterns, the author recommends using zlib.crc32 to mask primary keys in URLs. After some quick testing, I noticed that crc32 produces negative integers about half the time, which seems undesirable for use in a URL. zlib.adler32 does not appear to produce negatives, but is described as "weaker" than CRC. Is this method (either CRC or Adler-32) safe to use in a URL as an alternative to a primary key? (i.e. is it collision-safe?) Is the "weaker" Adler-32 a satisfactory alternative for this task? And how the heck do you reverse this?! That is, how do you determine the original primary key from the checksum?
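
    The negative values are the checksum being interpreted as a signed 32-bit integer (this is how zlib.crc32 behaved on Python 2; on Python 3 it already returns an unsigned value). A minimal sketch of the usual fix, masking to an unsigned 32-bit value:

        import zlib

        def masked_id(pk):
            """Map an integer primary key to a non-negative 32-bit checksum."""
            data = str(pk).encode("utf-8")
            # Masking with 0xffffffff forces an unsigned result, so the value
            # is URL-friendly on Python 2 as well as Python 3.
            return zlib.crc32(data) & 0xffffffff

        print(masked_id(1234))  # deterministic, but neither reversible nor collision-proof

    Keep in mind that CRC-32 and Adler-32 are checksums, not ciphers: there is no direct way to invert the value, so in practice you keep your own mapping (or brute-force candidate keys), and collisions become likely as the number of rows grows. This masks IDs without being "safe" in any cryptographic sense.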

    Read the article

  • How should I clean up hung grandchild processes when an alarm trips in Perl?

    - by brian d foy
    I have a parallelized automation script which needs to call many other scripts, some of which hang because they (incorrectly) wait for standard input. That's not a big deal because I catch those with alarm. The trick is to shut down those hung grandchild processes when the child shuts down. I thought various incantations of SIGCHLD, waiting, and process groups could do the trick, but they all block and the grandchildren aren't reaped. My solution, which works, just doesn't seem like it is the right solution. I'm not especially interested in the Windows solution just yet, but I'll eventually need that too. Mine only works for Unix, which is fine for now. I wrote a small script that takes the number of simultaneous parallel children to run and the total number of forks:

        $ fork_bomb <parallel jobs> <number of forks>
        $ fork_bomb 8 500

    This will probably hit the per-user process limit within a couple of minutes. Many solutions I've found just tell you to increase the per-user process limit, but I need this to run about 300,000 times, so that isn't going to work. Similarly, suggestions to re-exec and so on to clear the process table aren't what I need. I'd like to actually fix the problem instead of slapping duct tape over it. I crawl the process table looking for the child processes and shut down the hung processes individually in the SIGALRM handler, which needs to die because the rest of the real code has no hope of success after that. The kludgey crawl through the process table doesn't bother me from a performance perspective, but I wouldn't mind not doing it:

        use Parallel::ForkManager;
        use Proc::ProcessTable;

        my $pm = Parallel::ForkManager->new( $ARGV[0] );

        my $alarm_sub = sub {
            kill 9,
                map  { $_->{pid} }
                grep { $_->{ppid} == $$ }
                @{ Proc::ProcessTable->new->table };
            die "Alarm rang for $$!\n";
        };

        foreach ( 0 .. $ARGV[1] ) {
            print ".";
            print "\n" unless $count++ % 50;

            my $pid = $pm->start and next;

            local $SIG{ALRM} = $alarm_sub;
            eval {
                alarm( 2 );
                system "$^X -le '<STDIN>'"; # this will hang
                alarm( 0 );
            };
            $pm->finish;
        }

    If you want to run out of processes, take out the kill. I thought that setting a process group would work so I could kill everything together, but that blocks:

        my $alarm_sub = sub {
            kill 9, -$$; # blocks here
            die "Alarm rang for $$!\n";
        };

        foreach ( 0 .. $ARGV[1] ) {
            print ".";
            print "\n" unless $count++ % 50;

            my $pid = $pm->start and next;
            setpgrp(0, 0);

            local $SIG{ALRM} = $alarm_sub;
            eval {
                alarm( 2 );
                system "$^X -le '<STDIN>'"; # this will hang
                alarm( 0 );
            };
            $pm->finish;
        }

    The same thing with POSIX's setsid didn't work either, and I think that actually broke things in a different way, since I'm not really daemonizing this. Curiously, Parallel::ForkManager's run_on_finish happens too late for the same clean-up code: the grandchildren are apparently already disassociated from the child processes at that point.
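
    For comparison, here is the process-group idea (give each hang-prone command its own session, then signal the whole group when the timeout fires) sketched in Python rather than Perl, purely to illustrate the pattern. The key difference from the attempt above is that the kill is sent by the process that owns the timeout, not from inside the group being killed:

        import os
        import signal
        import subprocess

        def run_with_timeout(cmd, timeout=2):
            """Run cmd in its own process group; kill the whole group on timeout."""
            # start_new_session=True calls setsid() in the child, so the child
            # and any grandchildren it spawns share one process group.
            proc = subprocess.Popen(cmd, shell=True, start_new_session=True)
            try:
                return proc.wait(timeout=timeout)
            except subprocess.TimeoutExpired:
                # killpg reaches every member of the group, including
                # grandchildren that are stuck waiting on stdin.
                os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
                return proc.wait()

        if __name__ == "__main__":
            run_with_timeout("perl -le '<STDIN>'")  # hangs, gets killed after 2s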

    Read the article

  • How should I handle pages that move to a new URL with regard to search engines?

    - by Anders Juul
    Hi all, I have done some refactoring on an ASP.NET MVC application already deployed to a live web site. Among the refactoring was moving functionality to a new controller, causing some URLs to change. Shortly after, the various search engine robots started hammering the old URLs. What is the right way to handle this in general? Ignore it? In time the search engines should find out that they get nothing but 400 from the old URLs. Block the old URLs with robots.txt? Continue to catch the old URLs, then redirect to the new ones? Users navigating the site would never get the redirection, as the URLs are updated throughout the new version of the site. I see it as garbage code - unless it could be handled by some fancy routing? Other? As always, all comments welcome... Thanks, Anders, Denmark
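
    If the redirect option is chosen, the usual advice is to answer the old URLs with a permanent (301) redirect so crawlers update their index rather than retrying. A minimal sketch of that idea in Python with Flask, purely for illustration (the application in question is ASP.NET MVC, where a route or IIS rewrite rule would play the same role, and the controller names here are invented):

        from flask import Flask, redirect

        app = Flask(__name__)

        # Catch anything under the old controller path and point it at the
        # new location with a 301, so search engines learn about the move.
        @app.route("/oldcontroller/<path:rest>")
        def legacy(rest):
            return redirect(f"/newcontroller/{rest}", code=301)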

    Read the article

  • How do I allow inline images with data URLs on .NET 4 without triggering request validation?

    - by Johan Driessen
    I'm using the jQuery jstree plugin (http://jstree.com) in an ASP.NET MVC 2 project on .NET 4 RC. It comes with some stylesheets that use inline images with data URLs, like this:

        .tree-checkbox ul {
            background-image: url(data:image/gif;base64,R0lGODlhAgACAIAAAB4dGf///yH5BAEAAAEALAAAAAACAAIAAAICRF4AOw==);
        }

    Now, the URL for the background image contains a colon, which .NET 4 thinks is an unsafe character, so I get this error message: "A potentially dangerous Request.Path value was detected from the client (:)". According to the documentation, I am supposed to be able to prevent this by adding <pages validateRequest="false" /> to my Web.config, but that doesn't seem to help. I have tried adding it to the main Web.config for the application, as well as to a special Web.config in the /config folder, but to no avail. Is there any way to get .NET to allow this?

    Read the article

  • SQL: select random row from table where the ID of the row isn't in another table?

    - by johnrl
    I've been looking at fast ways to select a random row from a table and have found the following site: http://74.125.77.132/search?q=cache:http://jan.kneschke.de/projects/mysql/order-by-rand/&hl=en&strip=1 What I want to do is to select a random URL from my table 'urls' that I DON'T have in my other table 'urlinfo'. The query I am using now selects a random URL from 'urls', but I need it modified to only return a random URL that is NOT in the 'urlinfo' table. Here's the query:

        SELECT url
        FROM urls
        JOIN (SELECT CEIL(RAND() * (SELECT MAX(urlid) FROM urls)) AS urlid) AS r2
        USING (urlid);

    And the two tables:

        CREATE TABLE urls (
            urlid INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            url VARCHAR(255) NOT NULL
        ) ENGINE=INNODB;

        CREATE TABLE urlinfo (
            urlid INT NOT NULL PRIMARY KEY,
            urlinfo VARCHAR(10000),
            FOREIGN KEY (urlid) REFERENCES urls (urlid)
        ) ENGINE=INNODB;

    Read the article

  • REST - why do we need a million URLs and different HTTP requests?

    - by Andre
    I asked this question, but I still don't understand why we need to use the different HTTP methods DELETE/PUT/POST/GET in order to build a nice API. Wouldn't it be a lot simpler to pass all the information in request parameters and have a SINGLE ENTRY POINT for your API?

        GET www.example.com/api?id=1&method=delete&returnformat=JSON
        GET www.example.com/api?id=1&method=delete&returnformat=XML

    or

        POST www.example.com/api {post data: id=1&method=delete&returnformat=JSON}
        POST www.example.com/api {post data: id=1&method=delete&returnformat=XML}

    Then we can handle all methods and data internally without the need for hundreds of URLs. What would you call this type of API? It's not REST, apparently, and it's not SOAP - so what is it? UPDATE: I'm not proposing any new standards here. I'm merely asking a question in order to better understand why web services work the way they work.
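
    One way to see what the verb-per-resource style buys you, sketched in Python with Flask purely for illustration (the resource name and handlers are invented): the URL names the thing and the method names the action, so caches, proxies, crawlers and client libraries can treat GET as safe and repeatable and DELETE as destructive without having to parse a method parameter out of the query string.

        from flask import Flask, jsonify

        app = Flask(__name__)
        ITEMS = {1: {"id": 1, "name": "example"}}

        # One URL per resource; the HTTP method carries the intent.
        @app.route("/items/<int:item_id>", methods=["GET"])
        def read_item(item_id):
            return jsonify(ITEMS.get(item_id, {}))

        @app.route("/items/<int:item_id>", methods=["DELETE"])
        def delete_item(item_id):
            ITEMS.pop(item_id, None)
            return "", 204  # intermediaries can tell this was a destructive call

    A single GET entry point with ?method=delete works, but it looks "safe" to anything that only inspects the verb, which is exactly how a prefetching proxy or an over-eager crawler can end up deleting data.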

    Read the article

  • PHP/MySQL - Special characters in URLs. How to avoid?

    - by RC
    Hey everyone, my database contains information extracted from an external feed. In this raw text feed, the following entities are used in place of special characters:

        &  -  &amp;
        '  -  &#39;
        é  -  &eacute;

    I extract some of this text to form URLs. For example, a URL that I construct from data containing these characters might look like this: http://url.com/search/?brand=Franklin&Hédgson's I use the GET variables in this URL to construct further lookups, which leads to a couple of specific problems: the é and ' characters are sent back to MySQL as they appear, and so they don't trigger any results (because the characters take the full HTML form in the database text); and the & within the URL separates the variable, so the GET returns only Franklin when it should return the whole string. Are there any straightforward ways of dealing with this? Thanks.
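
    The usual fix is to decode the HTML entities when the feed is imported (or at least before building URLs), and then percent-encode any value that goes into a query string. Sketched in Python purely for illustration; in PHP, html_entity_decode() and urlencode()/http_build_query() play the same roles:

        import html
        from urllib.parse import urlencode

        raw = "Franklin &amp; H&eacute;dgson&#39;s"  # as stored from the feed

        decoded = html.unescape(raw)           # -> "Franklin & Hédgson's"
        query = urlencode({"brand": decoded})  # percent-encodes &, é and '

        print("http://url.com/search/?" + query)
        # http://url.com/search/?brand=Franklin+%26+H%C3%A9dgson%27s

    Because the ampersand and apostrophe arrive percent-encoded, they no longer split the query string, and the value read back out of the GET parameter matches the decoded text you look up in the database.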

    Read the article

  • input URL, output contents of "view page source", i.e. after JavaScript / etc. - library or command-line?

    - by Ryan Berckmans
    I need a scalable, automated method of dumping the contents of "view page source" (the DOM) to a file. Programs such as wget or curl will non-interactively retrieve a set of URLs, but they do not execute JavaScript or any of that 'fancy stuff'. My ideal solution looks like either of the following (fantasy solutions):

        cat urls.txt | google-chrome --quiet --no-gui \
            --output-sources-directory=~/urls-source

    (fantasy command line, no idea if flags like these exist) or

        cat urls.txt | python -c "import some-library; \
            ... use some-library to process urls.txt; output sources to ~/urls-source"

    As a secondary concern, I also need to dump all included JavaScript source to file (a la Firebug) and to dump a PDF/image of the page to file (print to file).
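
    A minimal sketch of the second fantasy, assuming Selenium plus a headless Chrome/chromedriver install (neither of which is given in the question); driver.page_source is the DOM after scripts have run, rather than the raw HTTP response:

        import os
        import sys
        from selenium import webdriver

        options = webdriver.ChromeOptions()
        options.add_argument("--headless")  # no GUI, suitable for batch runs
        driver = webdriver.Chrome(options=options)

        os.makedirs("urls-source", exist_ok=True)

        # Read URLs from stdin, write each rendered DOM to a numbered file.
        for i, line in enumerate(sys.stdin):
            url = line.strip()
            if not url:
                continue
            driver.get(url)  # JavaScript executes here
            path = os.path.join("urls-source", f"{i}.html")
            with open(path, "w", encoding="utf-8") as out:
                out.write(driver.page_source)  # DOM after scripts, not the raw response

        driver.quit()

    Used as: cat urls.txt | python dump_sources.py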

    Read the article

  • How to use personalized URLs in an ASP.NET MVC application.

    - by Bootcamp
    I am working on a website in which many users can create their accounts and have a personalized page. I wish to provide them with a Twitter-like URL to access their pages, for example www.mysite.com/smith or www.mysite.com/john. I am using ASP.NET MVC 1.0. I understand that I can add routes to the Global.asax file, but I am not able to figure out how to add a route that will work for such URLs. Please provide some help / suggestions. Thanks.

    Read the article

  • Is it bad for SEO to have an 'article' published under two URLs?

    - by Alichad
    Hi, on our new website we publish an article once and can tag it to appear in several sections, e.g. blahblah.com/insight/10-05-21/Buzzcity-releases-mobile-game-library.aspx and blahblah.com/international_media/10-05-21/Buzzcity-releases-mobile-game-library.aspx. Is it better for SEO to have the two different URLs, which include important keywords like 'insight' and 'international media', or is it better to have a single generic URL, e.g. blahblah.com/articles/10-05-21/Buzzcity_releases_mobile_game_library.aspx? I read somewhere that Google doesn't like the same content 'duplicated' in two (or three) places - I am not a techie. Thanks

    Read the article

  • execvp: ./chk: Permission denied

    - by Hiranth
    I'm trying to install Xen 4.0.1 on Ubuntu 10.10. When I run "make world" it gives the following error at the end:

        make -C check clean
        make[4]: Entering directory `/home/hirantha/xen-4.0.1/tools/check'
        ./chk clean
        make[4]: execvp: ./chk: Permission denied
        make[4]: *** [clean] Error 127
        make[4]: Leaving directory `/home/hirantha/xen-4.0.1/tools/check'
        make[3]: *** [subdir-clean-check] Error 2
        make[3]: Leaving directory `/home/hirantha/xen-4.0.1/tools'
        make[2]: *** [subdirs-clean] Error 2
        make[2]: Leaving directory `/home/hirantha/xen-4.0.1/tools'
        make[1]: *** [clean] Error 2
        make[1]: Leaving directory `/home/hirantha/xen-4.0.1'
        make: *** [world] Error 2

    Why is that? Please help me to solve this.

    Read the article

  • How can I map URLs to filenames with Perl?

    - by eugene y
    In a simple webapp I need to map URLs to filenames or filepaths. This app has a requirement that it can depend only on modules in the core Perl distribution (5.6.0 and later). The problem is that filename length on most filesystems is limited to 255. Another limit is about 32k subdirectories in a single folder. My solution:

        my $filename = $url;
        if (length($filename) > $MAXPATHLEN) {                          # if filename longer than 255
            my $part1 = substr($filename, 0, $MAXPATHLEN - 13);         # first 242 chars
            my $part2 = crypt(0, substr($filename, $MAXPATHLEN - 13));  # 13 chars hash
            $filename = $part1.$part2;
        }
        $filename =~ s!/!_!g;  # escape directory separator

    Is it reliable? How can it be improved?
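
    One thing to watch with the approach above: on most platforms a traditional crypt() salt is only two characters, so crypt(0, $tail) effectively depends on just the first two characters of the truncated tail, which makes collisions between long, similar URLs quite likely. For comparison, here is the same truncate-plus-hash scheme sketched in Python with a real digest over the overflow (the question is restricted to core Perl, so this is only to illustrate the idea, not a drop-in answer):

        import hashlib

        MAXPATHLEN = 255
        HASH_LEN = 13  # same budget as the 13-character suffix above

        def url_to_filename(url):
            """Truncate long URLs and append a digest of the part that was cut off."""
            name = url.replace("/", "_")  # escape directory separators
            if len(name) > MAXPATHLEN:
                head = name[:MAXPATHLEN - HASH_LEN]
                tail = name[MAXPATHLEN - HASH_LEN:]
                # digest the overflow, truncated to fit the length budget
                name = head + hashlib.sha1(tail.encode("utf-8")).hexdigest()[:HASH_LEN]
            return name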

    Read the article

  • SQL SERVER – Three Methods to Insert Multiple Rows into Single Table – SQL in Sixty Seconds #024 – Video

    - by pinaldave
    One of the biggest requests I have received from developers is whether there is any way to insert multiple rows into a single table in a single statement. Currently, when developers have to insert values into a table, they write multiple INSERT statements. Not only is this boring, it is also very time-consuming, and one has to repeat the same syntax so many times that the word boring becomes an understatement. In the following quick video we have demonstrated three different methods to insert multiple values into a single table.

        -- Insert Multiple Values into SQL Server
        CREATE TABLE #SQLAuthority (ID INT, Value VARCHAR(100));

    Method 1: Traditional Method of INSERT…VALUES

        -- Method 1 - Traditional Insert
        INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First');
        INSERT INTO #SQLAuthority (ID, Value) VALUES (2, 'Second');
        INSERT INTO #SQLAuthority (ID, Value) VALUES (3, 'Third');

        -- Clean up
        TRUNCATE TABLE #SQLAuthority;

    Method 2: INSERT…SELECT

        -- Method 2 - Select Union Insert
        INSERT INTO #SQLAuthority (ID, Value)
        SELECT 1, 'First'
        UNION ALL
        SELECT 2, 'Second'
        UNION ALL
        SELECT 3, 'Third';

        -- Clean up
        TRUNCATE TABLE #SQLAuthority;

    Method 3: SQL Server 2008+ Row Construction

        -- Method 3 - SQL Server 2008+ Row Construction
        INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'), (2, 'Second'), (3, 'Third');

        -- Clean up
        DROP TABLE #SQLAuthority;

    Related Tips in SQL in Sixty Seconds: SQL SERVER – Insert Multiple Records Using One Insert Statement – Use of UNION ALL; SQL SERVER – 2008 – Insert Multiple Records Using One Insert Statement – Use of Row Constructor. I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can, and if we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • RichEdit VCL and URLs. Workarounds for OnPaint Issues.

    - by HX_unbanned
    So, the issue is with the thing Delphi progies are scared to death of - the RichEdit control in Windows (XP and pre-XP versions). Situation: I have added EM_AUTOURLDETECTION in the OnCreate of the form, with RichEdit1 as the target. Then I have a form that is "collapsed" after being shown. The RichEdit control is static, visible and enabled, but it is "hidden" because the form window is collapsed. I can expand and collapse the form using Button1 and changing the form's Constraints and Size properties. After the first time I expand the form, the URL inside the RichEdit1 control is highlighted. BUT - after the second, third, fourth, etc. time I collapse and expand the form, the RichEdit1 control no longer highlights the URL. I have tried EM_SETTEXTMODE messages, also WM_UPDATEUISTATE, also the basic WM_TEXT message - no luck. It seems like this message really works (enables detection) when sending keyboard strokes (virtual keycodes), but not when the text has been modified. Also - I am thinking of rewriting the code to make the RichEdit control dynamic. Would this fix the problem? Maybe the solution is to override the OnPaint / OnDraw method to avoid losing the highlight (formatting) when collapsing or expanding the form? What is weird is that the Embarcadero documentation says this function must work at any moment the text has been modified. Why does it not work? Any help appreciated. I am making this Community Wiki because this is a common problem and together we can find a solution, right? :) Also - follow-ups and related questions:

        http://stackoverflow.com/questions/738694/override-onpaint
        http://stackoverflow.com/questions/478071/how-to-autodetect-urls-in-richedit-2-0
        http://www.vbforums.com/archive/index.php/t-59959.html

    Read the article

  • For business people to manage, keep binary images in MySQL or just the URLs?

    - by Michael Mao
    Hello everyone: I am working on a task to enable image uploading and auto-scaling (from full-sized to thumbnail) with jQuery & PHP. I can naturally come up with two approaches: first, store the images as binary objects directly in MySQL; second, store only URLs to the images and keep the images somewhere on the server. The images are for everyone to view, so there are no security restrictions, as far as I know. Personally I don't have any preference; however, at the end of the day it is the business people who are going to manage the images as part of the system (CRUD). So I am wondering which seems to be a bit better for them? Of course I am building an easy-to-use, visual web interface for the staff to control the process, but I am not sure if that is enough. Past lessons have taught me that if I don't think about the future and seek the most flexible approach, then I will probably screw myself sooner or later. PS. The following link is what I've found so far, which is pretty cool, no Flash involved :) Andrew Valum's ajax image upload jQuery plugin

    Read the article

  • Rails: Obfuscating Image URLs on Amazon S3? (security concern)

    - by neezer
    To make a long explanation short, suffice it to say that my Rails app allows users to upload images that they will want to keep in the app (meaning, no hotlinking). So I'm trying to come up with a way to obfuscate the image URLs so that the address of the image depends on whether or not that user is logged in to the site; if anyone tried hotlinking to the image, they would get a 401 access denied error. I was thinking that if I could route the request through a controller, I could re-use a lot of the authorization I've already built into my app, but I'm stuck there. What I'd like is for my images to be accessible through a URL to one of my controllers, like: http://railsapp.com/images/obfuscated?member_id=1234&pic_id=7890 If the user were to right-click on the image displayed on the website and select "Copy Address", then paste it in, it would be the SAME URL (as in, it wouldn't betray where the image is actually hosted). The actual image would be living on a URL like this: http://s3.amazonaws.com/s3username/assets/member_id/pic_id.extension Is this possible to accomplish? Perhaps using Rails' render method? Or something else? I know it's possible for PHP to return the correct headers to make the browser think it's an image, but I don't know how to do this in Rails... UPDATE: I want all users of the app to be able to view the images if and ONLY if they are currently logged on to the site. If the user does not have a currently active session on the site, accessing the images directly should yield a generic image, or an error message.
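
    A related technique worth weighing (not what the question describes, and sketched here in Python with boto3 rather than Ruby): keep the bucket private and have the authorized controller hand out short-lived pre-signed S3 URLs, so a copied link simply expires instead of the app having to proxy every image. The bucket name and key layout below are taken from the example URL in the question and are placeholders:

        import boto3

        s3 = boto3.client("s3")

        def temporary_image_url(member_id, pic_id, extension):
            """Return an S3 URL that works for a few minutes, then stops resolving."""
            key = f"assets/{member_id}/{pic_id}.{extension}"
            return s3.generate_presigned_url(
                "get_object",
                Params={"Bucket": "s3username", "Key": key},  # placeholder bucket
                ExpiresIn=300,  # the link dies after 5 minutes
            )

    The trade-off is that the URL the user copies does reveal the S3 host, so if keeping the address completely opaque is a hard requirement, streaming the object through the controller (as described above) is still the way to go.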

    Read the article

  • Apache - How to disable gzip content encoding (eg DEFLATE) for one set of URLs?

    - by Rory
    I have an Ubuntu Apache web server and I have enabled mod_deflate to gzip all the content. However, there's one folder I'd like to disable mod_deflate for. I was going to do something like this:

        <Location /myfolder>
            RemoveOutputFilter DEFLATE
        </Location>

    But that doesn't work. Rationale: I am trying to debug an XMLRPC server and I am using Wireshark to see what gets passed in the HTTP requests; since the replies are gzipped, I can't see what's going on.

    Read the article

  • How to prevent mod_proxy from rewriting redirects into absolute URLs?

    - by Yang
    I have: nginx (port 80) reverse-proxying to apache2 (port 88) reverse-proxying to a web app (port 5001). However, when the web app responds with a redirect like Location: /foo, apache2 rewrites this into Location: http://host.com:88/sub/foo, even though port 88 is publicly inaccessible. I'd like it to just redirect to the relative URL Location: /sub/foo. Any ideas? My apache config (using mod_proxy_http, mod_proxy_html, mod_substitute):

        <Location /notes/>
            Allow from all
            ProxyPass http://127.0.0.1:5001/
            SetOutputFilter proxy-html
            ProxyPassReverse /
            ProxyHTMLURLMap / /notes/
            RequestHeader unset Accept-Encoding
            AddOutputFilterByType SUBSTITUTE application/atom+xml
            Substitute "s|127.0.0.1:5001|host.com/notes|"
        </Location>

    Read the article

  • Google not indexing new forum

    - by Tom Gullen
    We installed a new forum a few months ago now. The URL is: https://www.scirra.com/forum I've 301'd the old topics/threads, as well as included all the new URLs in the sitemap. Yet they still are not appearing. Webmaster Tools is showing:

        139,512 URLs submitted
        50,544 URLs indexed

    and it has been stuck there for quite some time. There has also been a massive drop in indexed pages since we updated the forum. Any help much appreciated

    Read the article

  • In Google Webmaster Tools we have 3 sitemaps attributed to 1 domain

    - by Frank
    Thanks for your advice and help ahead of time. I have a website that has been on the internet for almost 10 years, created in Microsoft FrontPage, with over 900 pages. Currently in Google Webmaster Tools it shows up as 2 domains and 3 sitemaps: http://www.example.com, example.com and hostedsitemaps.com. Furthermore, since we were having a hard time placing the XML sitemap on our site (FrontPage issues), we decided to hire pro-sitemaps.com to create, host and upload the XML file, which they did. Thus, we have another site, hostedsitemaps.com, in our Webmaster Tools for the site.

    Hosted sitemaps shows: 900 URLs submitted, 800 indexed. Crawl errors and search queries: no data available.
    http://www.example.com shows: 889 URLs submitted, 1 URL indexed. Crawl errors: 14 soft 404, 796 not found. Search queries: 8104.
    example.com shows: 889 URLs submitted, 1 URL indexed. Crawl errors: 48 soft 404, 91 not found. Search queries: 8104.

    My questions and need for help are as follows:
    1. Why are our domain-based sites in Webmaster Tools (example.com and http://www.example.com) showing only 1 URL indexed, while the hosted sitemap has 800 indexed?
    2. Should we have 3 domains configured for this "one" domain in Google Webmaster Tools?
    3. Should we eliminate/delete the hosted sitemap from Webmaster Tools completely and take off that XML sitemap?
    4. Does having example.com and http://www.example.com impact web ranking?
    5. Any other thoughts or help in this very complicated matter for us. Thanks.

    Read the article

  • Moving from http to https - Google webmaster tools | Bing webmaster tools

    - by user2240778
    I'm moving from http to https for my entire site. The site is currently added to Google Webmaster Tools as www.example.com and all the pages are indexed as http. How do I go about moving to the new https URLs in Google Webmaster Tools? Do I just submit an updated sitemap which has the https URLs, OR do I add a new site as https://www.example.com and submit the sitemap with https URLs? All the http URLs are set to redirect to their https counterparts.

    Read the article
