Search Results

Search found 5595 results on 224 pages for 'mod perl'.


  • Why does my regex fail when the number ends in 0?

    - by Russell C.
    This is a really basic regex question, but since I can't seem to figure out why the match fails in certain circumstances, I figured I'd post it to see if anyone else can point out what I'm missing. I'm trying to pull out the two sets of digits from strings of the form:

        12309123098_102938120938120938
        1321312_103810312032123
        123123123_10983094854905490
        38293827_1293120938129308

    I'm using the following code to process each string:

        if ($string && $string =~ /^(\d)+_(\d)+$/) {
            if (IsInteger($1) && IsInteger($2)) {
                print "success ('$1','$2')";
            } else {
                print "fail";
            }
        }

    where the IsInteger() function is as follows:

        sub IsInteger {
            my $integer = shift;
            if ($integer && $integer =~ /^\d+$/) {
                return 1;
            }
            return;
        }

    This seems to work most of the time but fails on the following for some reason:

        1287123437_1268098784380
        1287123437_1267589971660

    Any ideas on why these fail while others succeed? Thanks in advance for your help!
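
    For reference, a minimal runnable sketch of the capturing behaviour involved (an assumption about the cause, not part of the original post): a quantifier applied outside a capture group keeps only the last repetition, so $1 and $2 end up as single digits.

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $string = "1287123437_1268098784380";

        # Quantifier outside the group: each capture keeps only its last digit.
        if ($string =~ /^(\d)+_(\d)+$/) {
            print "outside: '$1' and '$2'\n";   # outside: '7' and '0'
        }

        # Quantifier inside the group: the full digit runs are captured.
        if ($string =~ /^(\d+)_(\d+)$/) {
            print "inside:  '$1' and '$2'\n";   # inside:  '1287123437' and '1268098784380'
        }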

    Read the article

  • Why is it necessary to put comments on check-ins? [closed]

    - by Mik Kardash
    In fact, I always have something to put in when I check in my code. However, the question I have is: is it really necessary? Does it help that much? How? From one point of view, comments help you keep track of the changes made with every check-in, so I would be able to analyze the changes and identify a hypothetical problem a little more quickly. On the other hand, it takes some time to write useful information into a check-in. Is it worth it? What are the pros and cons of writing a comment for every check-in? Is there any way to write "efficient" check-in comments?

    Read the article

  • What's the simplest way to detect a file in a directory?

    - by Nano HE
    My code looks like this. I looked up the open command, but I am not sure whether it is the simplest way or not.

        if    # Parser.exe exists in the Debug directory
        {
            move("bin/Debug/Parser.exe", "Parser.exe");
        }
        elsif # Parser.exe exists in the Release directory
        {
            move("bin/Release/Parser.exe", "Parser.exe");
        }
        else
        {
            die "Can't find the Parser.exe.";
        }

    Thank you.
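
    A minimal sketch of one way to write this with Perl's -e file test (the paths are taken from the question; the use of File::Copy is an assumption):

        use strict;
        use warnings;
        use File::Copy qw(move);

        # -e tests whether a path exists on disk.
        if (-e "bin/Debug/Parser.exe") {
            move("bin/Debug/Parser.exe", "Parser.exe") or die "move failed: $!";
        }
        elsif (-e "bin/Release/Parser.exe") {
            move("bin/Release/Parser.exe", "Parser.exe") or die "move failed: $!";
        }
        else {
            die "Can't find the Parser.exe.";
        }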

    Read the article

  • Online job-searching is tedious. Help me automate it.

    - by ehsanul
    Many job sites have broken searches that don't let you narrow down jobs by experience level, and even when they do, it's usually wrong. This forces you to wade through hundreds of postings that you can't apply for before finding a relevant one, which is quite tedious. Since I'd rather focus on writing cover letters and so on, I want to write a program to look through a large number of postings and save the URLs of just those jobs that don't require years of experience. I don't need help writing the scraper to get the HTML bodies of possibly relevant job posts. The issue is accurately detecting the level of experience required for the job. This should not be too difficult, as job posts are usually very explicit about it ("must have 5 years experience in..."), but there may be some issues with overly simple solutions. In my case, I'm looking for entry-level positions. Often postings don't say "entry-level", but when they do, the job should probably be saved. Next, I can safely exclude a job that says it requires "5 years" of experience in whatever, so a regex like /\d\syears/ seems reasonable for excluding jobs. But then I realized that some jobs say they'll take 0-2 years of experience, which matches the exclusion regex but is clearly a job I want to look at. Hmm, I can handle that with another regex. But some say "less than 2 years" or "fewer than 2 years". I can handle that too, but it makes me wonder what other patterns I'm not thinking of, and whether I'm excluding many jobs. That's what brings me here: to find a better way to do this than regexes, if there is one. I'd like to minimize the false negative rate and save all the jobs that seem like they might not require many years of experience. Does excluding anything that matches /[3-9]\syears|1\d\syears/ seem reasonable? Or is there a better way? Training a Bayesian filter, maybe?
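
    A rough sketch of the regex-based filter being discussed (the patterns, sample text, and helper name are illustrative assumptions, not a tested classifier):

        use strict;
        use warnings;

        # Returns true if the posting text looks like it might be entry-level.
        sub might_be_entry_level {
            my ($text) = @_;
            return 1 if $text =~ /entry[-\s]?level/i;                              # explicit keep
            return 1 if $text =~ /\b(?:0\s*-\s*2|less than 2|fewer than 2)\s*years/i;
            return 0 if $text =~ /\b(?:[3-9]|1\d)\+?\s*years/i;                    # clearly senior
            return 1;                                                              # default: keep for review
        }

        print might_be_entry_level("Must have 5 years experience in Perl") ? "keep" : "skip", "\n";   # skip
        print might_be_entry_level("0-2 years of experience preferred")    ? "keep" : "skip", "\n";   # keep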

    Read the article

  • How can I quickly sum all numbers in a file?

    - by Mark Roberts
    I have a file which contains several thousand numbers, each on its own line:

        34
        42
        11
        6
        2
        99
        ...

    I'm looking to write a script which will print the sum of all the numbers in the file. I've got a solution, but it's not very efficient (it takes several minutes to run). I'm looking for a more efficient solution. Any suggestions?
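
    A minimal sketch of a streaming approach, assuming the file is called numbers.txt (the name is an assumption); it keeps a running total instead of holding the whole file in memory:

        use strict;
        use warnings;

        my $sum = 0;
        open my $fh, '<', 'numbers.txt' or die "Can't open numbers.txt: $!";
        while (my $line = <$fh>) {
            $sum += $line;    # numeric context ignores the trailing newline
        }
        close $fh;
        print "$sum\n";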

    Read the article

  • Critiquing PHP-code / PerlCritic for PHP?

    - by jeekl
    I'm looking for an equivalent of PerlCritic for PHP. PerlCritic is a static source code analyzer that critiques code and warns about everything from unused variables to unsafe ways of handling data to almost anything. Is there such a thing for PHP that could (preferably) be run outside of an IDE, so that source code analysis could be automated?

    Read the article

  • How to read the file

    - by muruga
    I want to get a file from one host to another host. We can fetch the file using the Net::FTP module, where the get method downloads the file. But I want the file content instead of the downloaded file. I know that a read method can read the file content, but how do I call it, and how do I get the file content? Please help me.

    Read the article

  • How to print lines from a file whose first field has repeated more than six times

    - by Mike
    I have a file containing the data shown below. The first comma-delimited field may be repeated any number of times, and I want to print only the lines after the sixth repetition of any value of this field. For example, there are eight lines with 1111111 as the first field, and I want to print only the seventh and eighth of those records.

    Input file:

        1111111,aaaaaaaa,14
        1111111,bbbbbbbb,14
        1111111,cccccccc,14
        1111111,dddddddd,14
        1111111,eeeeeeee,14
        1111111,ffffffff,14
        1111111,gggggggg,14
        1111111,hhhhhhhh,14
        2222222,aaaaaaaa,14
        2222222,bbbbbbbb,14
        2222222,cccccccc,14
        2222222,dddddddd,14
        2222222,eeeeeeee,14
        2222222,ffffffff,14
        2222222,gggggggg,14
        3333333,aaaaaaaa,14
        3333333,bbbbbbbb,14
        3333333,cccccccc,14
        3333333,dddddddd,14
        3333333,eeeeeeee,14
        3333333,ffffffff,14
        3333333,gggggggg,14
        3333333,hhhhhhhh,14

    Output:

        1111111,gggggggg,14
        1111111,hhhhhhhh,14
        2222222,gggggggg,14
        3333333,gggggggg,14
        3333333,hhhhhhhh,14

    What I have tried is to transpose the 2nd and 3rd fields with respect to the 1st, so that I can use nawk on field $7 or $8:

        #!/usr/bin/ksh
        awk -F"," '{
            a[$1];
            b[$1]=b[$1]","$2;
            c[$1]=c[$1]","$3
        } END {
            for (i in a) { print i","b[i]","c[i] }
        }' file > output.txt
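
    For comparison, a minimal sketch of the same counting logic in Perl (the threshold of six comes from the question; the script name and usage are assumptions):

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Usage (assumed): perl print_after_six.pl input.txt > output.txt
        # Count how often each first field has been seen; print a line only
        # once its key has already appeared six times.
        my %seen;
        while (my $line = <>) {
            my ($key) = split /,/, $line;
            print $line if ++$seen{$key} > 6;
        }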

    Read the article

  • getting sql records

    - by droidus
    When I run this code, it returns the topic fine...

        $query = mysql_query("SELECT topic FROM question WHERE id = '$id'");
        if (mysql_num_rows($query) > 0) {
            $row = mysql_fetch_array($query) or die(mysql_error());
            $topic = $row['topic'];
        }

    but when I change it to this, it doesn't run at all. Why is this happening?

        $query = mysql_query("SELECT topic, lock FROM question WHERE id = '$id'");
        if (mysql_num_rows($query) > 0) {
            $row = mysql_fetch_array($query) or die(mysql_error());
            $topic = $row['topic'];
            $lockedThread = $row['lock'];
            echo "here: " . $lockedThread;
        }

    Read the article

  • splitting on all kinds of spaces and tabs

    - by rockyurock
    Hello, I am reading some parameters (from user input) from a .txt file and want to make sure that my script can read them even if the user leaves a space or tab before a particular parameter. Also, I want to add a comment for each parameter, following a #, after the parameter (e.g. 7870 #this is the default port number) to let the user know what the parameter is for. How can I achieve that in the same file? Here is what I am using (splitting on /\|\s/):

        $data_file = "config.txt";
        open(RAK, $data_file) || die("Could not open file!");
        @raw_data = <RAK>;
        @Ftp_Server = split(/\|\s/, $raw_data[32]);

    config.txt (user input file):

        PING_TTL    | 1
        CLIENT_PORT | 7870
        FTP_SERVER  | 192.162.522.222

    Could anybody suggest a robust way to do it? /rocky
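
    A minimal sketch of a parser that tolerates leading whitespace, flexible spacing around the pipe, and trailing # comments (the file name and keys come from the question; everything else is an assumption):

        use strict;
        use warnings;

        my %param;
        open my $cfg, '<', 'config.txt' or die "Could not open file: $!";
        while (my $line = <$cfg>) {
            chomp $line;
            $line =~ s/#.*$//;                           # drop trailing comments
            next unless $line =~ /\S/;                   # skip blank lines
            my ($key, $value) = split /\s*\|\s*/, $line, 2;
            next unless defined $value;                  # ignore malformed lines
            s/^\s+|\s+$//g for $key, $value;             # trim surrounding whitespace
            $param{$key} = $value;
        }
        close $cfg;

        print "FTP server: $param{FTP_SERVER}\n";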

    Read the article

  • How can I make this REGEX cleaner?

    - by Solignis
    I have this regex I made to compare OS names to a line in a VMX file. It started out as separate elsif statements, but I ended up making it into a single if statement. Anyhow, here is the code. I am trying to find a way to make it cleaner, but when I put each alternative on a separate line it no longer works.

        elsif ($vmx_file =~ m/guestOSAltName\s+=\s"Microsoft\sWindows\sServer\s2003,Web\sEdition"|"Microsoft\sWindows\sSmall\sBusiness\sServer\s2003"|"Microsoft\sWindows\s2000\sAdvanced\sServer"|"Microsoft\sWindows\s2000\sServer"|"Microsoft\sWindows\s2000\sProfessional"|"Microsoft\sWindows\s98"|"Microsoft\sWindows\s95"|"Microsoft\sWindows\sNT\s4"/) {
            $virtual_machines{$vm}{"Architecture"} = "32-bit";
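
    A minimal sketch of one way to keep the alternation readable: build the pattern from a list of names and quote the metacharacters (the OS names are copied from the question's regex; the variable names are assumptions):

        use strict;
        use warnings;

        my @os_32bit = (
            'Microsoft Windows Server 2003,Web Edition',
            'Microsoft Windows Small Business Server 2003',
            'Microsoft Windows 2000 Advanced Server',
            'Microsoft Windows 2000 Server',
            'Microsoft Windows 2000 Professional',
            'Microsoft Windows 98',
            'Microsoft Windows 95',
            'Microsoft Windows NT 4',
        );

        # quotemeta escapes anything the regex engine would treat specially.
        my $alt   = join '|', map { quotemeta } @os_32bit;
        my $os_re = qr/guestOSAltName\s+=\s+"(?:$alt)"/;

        # elsif ($vmx_file =~ $os_re) { $virtual_machines{$vm}{"Architecture"} = "32-bit"; }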

    Read the article

  • Get filename from path

    - by Eric
    I am trying to parse the filename from paths. I have this:

        my $filepath = "/Users/Eric/Documents/foldername/filename.pdf";
        $filepath =~ m/^.*\\(.*[.].*)$/;
        print "Linux path:";
        print $1 . "\n\n";
        print "-------\n";

        my $filepath = "c:\\Windows\eric\filename.pdf";
        $filepath =~ m/^.*\\(.*[.].*)$/;
        print "Windows path:";
        print $1 . "\n\n";
        print "-------\n";

        my $filepath = "filename.pdf";
        $filepath =~ m/^.*\\(.*[.].*)$/;
        print "Without path:";
        print $1 . "\n\n";
        print "-------\n";

    But that returns:

        Linux path:
        -------
        Windows path:Windowsic
        ilename.pdf
        -------
        Without path:Windowsic
        ilename.pdf
        -------

    I am expecting this:

        Linux path: filename.pdf
        -------
        Windows path: filename.pdf
        -------
        Without path: filename.pdf
        -------

    Can somebody please point out what I am doing wrong? Thanks! :)
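
    A minimal sketch using the core File::Basename module instead of a hand-rolled regex; telling it to use Windows conventions makes it accept both "/" and "\" as separators. The sample paths are based on the ones in the question, with the backslashes properly escaped:

        use strict;
        use warnings;
        use File::Basename qw(basename fileparse_set_fstype);

        # Treat paths using Windows syntax, so both "/" and "\" split directories.
        fileparse_set_fstype("MSWin32");

        for my $path ("/Users/Eric/Documents/foldername/filename.pdf",
                      "c:\\Windows\\eric\\filename.pdf",
                      "filename.pdf") {
            print basename($path), "\n";    # prints "filename.pdf" each time
        }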

    Read the article

  • What's the deal with reftype { } ?

    - by friedo
    I recently saw some code that reminded me to ask this question. Lately, I've been seeing a lot of this:

        use Scalar::Util 'reftype';

        if ( reftype $some_ref eq reftype { } ) {
            ...
        }

    What is the purpose of calling reftype on an anonymous hashref? Why not just say eq 'HASH'?
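
    For context, a tiny sketch of what reftype returns for a few reference types (an assumed example, not code from the post):

        use strict;
        use warnings;
        use Scalar::Util 'reftype';

        my $href = {};
        my $aref = [];
        my $sref = \"x";

        print reftype($href), "\n";   # HASH  -- the same string as the literal 'HASH'
        print reftype($aref), "\n";   # ARRAY
        print reftype($sref), "\n";   # SCALAR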

    Read the article

  • How to read a file over the network

    - by ungalnanban
    I'm using Net::FTP to fetch a remotely hosted file. I want to read the file, but I don't want to copy it from the remote host to my host (localhost); I only need to read the file's content. Is there any module to do this?

        use strict;
        use warnings;
        use Data::Dumper;
        use Net::FTP;

        my $ftp = Net::FTP->new("192.168.8.20", Debug => 0)
            or die "Cannot connect to some.host.name: $@";
        $ftp->login("root", 'root123') or die "Cannot login ", $ftp->message;
        $ftp->cwd("SUGUMAR")           or die "Cannot change working directory ", $ftp->message;
        $ftp->get("Testing.txt")       or die "get failed ", $ftp->message;
        $ftp->quit;

    In the above sample program, I get the file from 192.168.8.20, then read it and do my processing. I don't want to place the file on my local system; I need to read the content of Testing.txt directly.
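
    A minimal sketch of one way to do this with Net::FTP itself: its get method accepts a filehandle as the second argument, so pointing it at an in-memory scalar avoids writing anything to disk (host, credentials, and file name are copied from the question):

        use strict;
        use warnings;
        use Net::FTP;

        my $ftp = Net::FTP->new("192.168.8.20", Debug => 0)
            or die "Cannot connect: $@";
        $ftp->login("root", 'root123') or die "Cannot login ", $ftp->message;
        $ftp->cwd("SUGUMAR")           or die "Cannot change working directory ", $ftp->message;

        # Open a filehandle onto an in-memory scalar and let get() write into it.
        my $content = '';
        open my $fh, '>', \$content or die "Cannot open in-memory file: $!";
        $ftp->get("Testing.txt", $fh) or die "get failed ", $ftp->message;
        close $fh;
        $ftp->quit;

        print $content;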

    Read the article

  • Script to fix broken lines in a .txt file?

    - by Gravitas
    Hi, I'd love to be able to read books properly on my Kindle. To achieve my dream, I need a script to fix broken lines in a txt file. For example, if the txt file has this:

        He watched Kahlan as she walked with her shoulders slumped
        down.

    ... then it should fix it by deleting the newline before the word "down":

        He watched Kahlan as she walked with her shoulders slumped down.

    So, fellow programmers, what's (a) the easiest way to do this and (b) the best language? P.S. The solution will involve searching for a lowercase letter in column 1 and deleting the newline before it to stitch the lines together. There are 1.2 million occurrences of this "rogue line break" in the novel I am trying to fix.
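
    A minimal sketch of the rule described in the postscript, in Perl (the script name and usage are assumptions): any line beginning with a lowercase letter is stitched onto the previous line with a single space.

        #!/usr/bin/perl
        use strict;
        use warnings;

        # Usage (assumed): perl fix_lines.pl broken.txt > fixed.txt
        my @out;
        while (my $line = <>) {
            chomp $line;
            if (@out && $line =~ /^[a-z]/) {
                $out[-1] .= " $line";     # delete the rogue line break
            } else {
                push @out, $line;         # keep the line as-is
            }
        }
        print "$_\n" for @out;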

    Read the article

  • Why did File::Find finish short of completely traversing a large directory?

    - by Stan
    A directory exists with a total of 2,153,425 items (according to Windows folder Properties). It contains .jpg and .gif image files located within a few subdirectories. The task was to move the images into a different location while querying each file's name to retrieve some relevant info and store it elsewhere. The script that used File::Find finished at 20,462 files. Out of curiosity I wrote a tiny recursive function to count the items, which returned a count of 1,734,802. I suppose the difference can be accounted for by the fact that it didn't count folders, only files that passed the -f test. The problem itself can be solved differently by querying for file names first instead of traversing the directory. I'm just wondering what could've caused File::Find to finish at a small fraction of all files. The data is stored on an NTFS file system. Here is the meat of the script; I don't think including the DBI stuff would be relevant, since I reran the script with nothing but a counter in process_img(), and it returned the same number.

        find(\&process_img, $path_from);

        sub process_img {
            eval {
                return if ($_ eq "." or $_ eq "..");
                ## Omitted querying and composing new paths for brevity.
                make_path("$path_to\\img\\$dir_area\\$dir_address\\$type");
                copy($File::Find::name, "$path_to\\img\\$dir_area\\$dir_address\\$type\\$new_name");
            };
            if ($@) { print STDERR "eval barks: $@\n"; return }
        }

    And here is the other method I used to count the files:

        count_images($path_from);

        sub count_images {
            my $path = shift;
            opendir my $images, $path or die "died opening $path";
            while (my $item = readdir $images) {
                next if $item eq '.' or $item eq '..';
                $img_counter++ && next if -f "$path/$item";
                count_images("$path/$item") if -d "$path/$item";
            }
            closedir $images or die "died closing $path";
        }

        print $img_counter;

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one or few permanent sessions?

    - by Olfan
    Long story short: how can I make hundreds of simultaneously running processes communicate with a database through one or few permanent sessions?

    The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring, and result propagation happen in an Oracle database, which all (sub-)processes access at various times using an application-specific module that encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue. I now want to centralise database access so that there are only one or a few fixed database sessions which handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the changes easy, because the function calls in the worker instances can stay the same. My problem is that I cannot think of a suitable way to enhance said module in order to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or few database connectors so that bidirectional communication is possible (for collecting the query results). An asynchronous approach could help, in that all requests are written to the same queue and the database connector servicing a request "calls back" to submit the result, but I can't form a picture clear enough to turn into code. Threading instead of forking might have given me an easier start, but that would now require massive changes to a code base that I'm not prepared to make on a live system. The more I think of it, the more the basic idea looks like a pre-forked web server, except that it serves database queries instead of web pages. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, maybe ready-made solutions on CPAN?

    Read the article

  • Is `eval`ing in a CPAN module without localizing $@ a bug?

    - by rassie
    I think I've encountered a bug in Params::Validate, but I'm not sure whether I identified the problematic piece of code correctly. The code in question failed to pass exceptions up the chain (using Try::Tiny), so I started debugging and found out that a class used inside the try block has a destructor. This destructor calls object methods which use Params::Validate, and looking into the Validate.pm source I see an eval without $@ localization, i.e. the global $@ gets overwritten. Now I see two options:

     1. Params::Validate should always localize $@, and thus it's a bug that should be reported.
     2. The bug is in the class in question, because it shouldn't use Params::Validate in a destructor; Params::Validate can stay as it is now.

    Which one is it? How should I handle this situation? PS: I think that CPAN modules should be rock-solid and neither break themselves nor their environment, hence the question title.
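
    For illustration, a minimal sketch of the localization pattern under discussion (the helper is hypothetical, not code from Params::Validate): wrapping an internal eval in local $@ keeps the caller's $@ from being clobbered.

        use strict;
        use warnings;

        # Hypothetical helper: validates something internally with eval,
        # but leaves the caller's $@ exactly as it found it.
        sub _is_valid {
            my ($candidate) = @_;
            local $@;                               # protect the caller's $@
            my $ok = eval { die "bad\n" unless defined $candidate; 1 };
            return $ok ? 1 : 0;
        }

        $@ = "pre-existing error\n";
        _is_valid(undef);
        print $@;    # still prints "pre-existing error" thanks to local $@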

    Read the article

  • Using HTML::Template within a value attribute

    - by Zerobu
    Hello, my question is how I would use an HTML::Template tag inside the value attribute of a form field to change that form. For example:

        <table border="0" cellpadding="8" cellspacing="1">
          <tr>
            <td align="right">File:</td>
            <td>
              <input type="file" name="upload" value= style="width:400px">
            </td>
          </tr>
          <tr>
            <td align="right">File Name:</td>
            <td>
              <input type="text" name="filename" style="width:400px" value="" >
            </td>
          </tr>
          <tr>
            <td align="right">Title:</td>
            <td>
              <input type="text" name="title" style="width:400px" value="" />
            </td>
          </tr>
          <tr>
            <td align="right">Date:</td>
            <td>
              <input type="text" name="date" style="width:400px" value="" />
            </td>
          </tr>
          <tr>
            <td colspan="2" align="right">
              <input type="button" value="Cancel">
              <input type="submit" name="action" value="Upload" />
            </td>
          </tr>
        </table>

    I want the value to have a variable in it.
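
    A minimal sketch of the usual approach, assuming a hypothetical template file form.tmpl and parameter name FILENAME (neither appears in the question): put a TMPL_VAR tag inside the attribute and fill it from Perl.

        use strict;
        use warnings;
        use HTML::Template;

        # form.tmpl (assumed) would contain, for example:
        #   <input type="text" name="filename" style="width:400px"
        #          value="<TMPL_VAR NAME=FILENAME ESCAPE=HTML>">
        my $template = HTML::Template->new(filename => 'form.tmpl');
        $template->param(FILENAME => 'report.pdf');
        print $template->output;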

    Read the article
