Search Results

Search found 19408 results on 777 pages for 'output formats'.


  • Confusion in RegExp Reluctant quantifier? Java

    - by Dusk
    Hi, could anyone please tell me why I get the output ab for the following RegExp code using a reluctant quantifier?

        Pattern p = Pattern.compile("abc*?");
        Matcher m = p.matcher("abcfoo");
        while(m.find())
            System.out.println(m.group()); // ab

    and why I get empty output for the following code?

        Pattern p = Pattern.compile(".*?");
        Matcher m = p.matcher("abcfoo");
        while(m.find())
            System.out.println(m.group());
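
    A self-contained sketch (mine, not from the question; the class name is arbitrary) that reproduces both observations. The key point is that a reluctant quantifier matches as few characters as it can while still letting the overall pattern succeed, so "c*?" settles for zero 'c' characters and ".*?" settles for the empty string:

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class ReluctantDemo {
            public static void main(String[] args) {
                // "c*?" grabs as few 'c' characters as possible -- zero here --
                // so the only match of "abc*?" in "abcfoo" is just "ab".
                Matcher m1 = Pattern.compile("abc*?").matcher("abcfoo");
                while (m1.find()) {
                    System.out.println("[" + m1.group() + "]");   // prints [ab]
                }

                // ".*?" also matches as little as possible: the empty string.
                // find() then reports a zero-length match at every position
                // (seven of them for a 6-character input), which is why the
                // printed groups look empty.
                Matcher m2 = Pattern.compile(".*?").matcher("abcfoo");
                while (m2.find()) {
                    System.out.println("[" + m2.group() + "]");   // prints [] seven times
                }
            }
        }

    In short, reluctant quantifiers give up characters rather than grab them, so they stop at the shortest string that still lets the whole pattern match.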

    Read the article

  • Hudson jobs won't call javac?

    - by Dissonant
    Hi, I have just set up Hudson on my server. For some reason, my builds will not call javac to compile my code. I have set the path to the JDK in the Manage Hudson area, and it seems to recognise it (it doesn't give me a warning). Is there something else I'm supposed to do? Here's a sample console output of one of my jobs (note how javac isn't called at all):

        Started by user admin
        Checking out svn+ssh://myhost.com/Project1
        A  /src/Program.java
        A  build.xml
        U
        At revision 119
        no change for svn+ssh://myhost.com/Project1 since the previous build
        Finished: SUCCESS

    Read the article

  • Customizing Log4j to filter PatternLayout

    - by JavaScriptDude
    Greetings, I have just started migrating to WLS 10.x and have noticed that the thread name [%t] for WL is quite verbose and more informative than I need for my deployment. Ultimately, I only care about the thread ID, but WL gives me this:    [ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)' ~ Does anybody know if there is a way in log4j to write a custom filter that would allow me to override PatternLayout so I can parse the WLS thread name and output just the thread ID, which in the case above is 0? I'd rather extend than customize, as it makes upgrading libraries so much easier. Thanks :) - JsD
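
    A possible first step, shown only as a sketch: the WebLogic thread name can be reduced to the bare thread ID with a plain regex, independently of log4j. The class and method names below are mine; wiring the result into log4j (for example by extending PatternLayout or post-processing the event's thread name) is a separate step that depends on the log4j version in use.

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class WlsThreadName {
            // Pulls the quoted number out of names such as
            // "[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'"
            private static final Pattern THREAD_ID = Pattern.compile("ExecuteThread: '(\\d+)'");

            /** Returns just the thread ID, or the original name if it does not look like a WLS thread. */
            public static String shorten(String threadName) {
                Matcher m = THREAD_ID.matcher(threadName);
                return m.find() ? m.group(1) : threadName;
            }

            public static void main(String[] args) {
                String wls = "[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'";
                System.out.println(shorten(wls)); // prints 0
            }
        }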

    Read the article

  • What's happening in this Perl foreach loop?

    - by benjamin button
    I have this Perl code:

        foreach (@tmp_cycledef) {
            chomp;
            my ($cycle_code, $close_day, $first_date) = split(/\|/, $_, 3);
            $cycle_code =~ s/^\s*(\S*(?:\s+\S+)*)\s*$/$1/;
            $close_day  =~ s/^\s*(\S*(?:\s+\S+)*)\s*$/$1/;
            $first_date =~ s/^\s*(\S*(?:\s+\S+)*)\s*$/$1/;
            #print "$cycle_code, $close_day, $first_date\n";
            $cycledef{$cycle_code} = [ $close_day, split(/-/, $first_date) ];
        }

    The value of @tmp_cycledef comes from the output of an SQL query:

        select cycle_code, cycle_close_day, to_char(cycle_first_date,'YYYY-MM-DD')
        from cycle_definition d
        order by cycle_code;

    What exactly is happening inside the foreach loop?

    Read the article

  • Getting closure-compiler and Node.js to play nice

    - by bukzor
    Are there any projects that use node.js and closure-compiler (CC for short) together? The official CC recommendation is to compile all the code for an application together, but when I compile some simple node.js code that contains a require("./MyLib.js"), that line is put directly into the output, where it doesn't make any sense. I see a few options: code the entire application as a single file, which solves the problem by avoiding it but is bad for maintenance; assume that all files will be concatenated before execution, which again avoids the problem but makes it harder to implement an uncompiled debug mode; or get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it?

    Read the article

  • What would be the right way to declare an array within a script that will be called by cron?

    - by Nano Taboada
    I've written a Korn Shell script that sets an array the following way:

        set -A fruits Apple Orange Banana Strawberry

    but when I'm trying to run it from within cron, it raises the following error:

        Your "cron" job on myhost
        /myScript.sh
        produced the following output:
        myScript.sh: -A: bad option(s)

    I've tried many crontab syntax variants, such as:

        Attempt 1: 0,5,10,15,20,25,30,35,40,45,50,55 * * * * /path/to/script/myScript.sh
        Attempt 2: 0,5,10,15,20,25,30,35,40,45,50,55 * * * * /path/to/script/./myScript.sh
        Attempt 3: 0,5,10,15,20,25,30,35,40,45,50,55 * * * * cd /path/to/script && ./myScript.sh

    Any workaround would be sincerely appreciated. Thanks much in advance!

    Read the article

  • An Inaccessible JavaScript Object property - Why is Firebug showing this?

    - by Matty
    So, I'm attempting to access the content of an object and for the life of me can't figure out why I can't. I'm starting to believe that the object doesn't have the properties that Firebug indicates it does. More likely, I'm just not using the right syntax to access them. Given the following function:

        function(userData) {
            console.log(userData);               // statement 1
            console.log(userData.t_nodecontent); // statement 2
        }

    this generates the following Firebug output for statement 1 and unknown for statement 2. Is there something obvious that I'm overlooking in the way I'm attempting to reference the value of t_nodecontent? I'm at a loss :(

    Read the article

  • Nokogiri Not Parsing File

    - by Jesse J
    I'm using Nokogiri to parse pepXML files from different peptide search engines. I have two pepXML files, both of which appear, as far as I can tell, to be of the correct format, and puts Nokogiri::XML(IO.read(file)) will output the whole XML file for both of them. The problem is that doc.xpath("any valid xpath") will parse the tag from one of the files, but not the other. No errors are given, so I have no idea why it won't parse. Does anyone know of any reason why Nokogiri wouldn't parse something out?

    Read the article

  • selecting ppp of multiple interfaces

    - by Neeraj
    Hi everyone, I am currently having a hard time getting a mobile broadband connection running on Ubuntu 10.04 (Lucid Lynx). I am using a USB modem and used wvdial to connect to the web. It went like:

        Sending ATZ ... OK
        sending some more flags OK
        modem initialized
        connecting
        Local IP x.x.x.x
        Remote IP y.y.y.y
        Primary DNS z.z.z.z
        Secondary DNS a.a.a.a
        (Some more output)

    I tested this with an invalid username to make sure it was really connected, and I think it was, since the connection failed with invalid usernames. Now when I ping a remote host, say 8.8.8.8 (Google's DNS servers), the ping says unreachable. I think this might be because the system is still using the Ethernet interface to send packets, which was indeed disconnected. So can anyone help me out with this? How can I select ppp as the interface to send packets, or is there some other problem? A command line solution will be appreciated, as my network manager applet doesn't work correctly. Any help is much appreciated. -- thx

    Read the article

  • Markdown to text/plain and text/html for multipart email

    - by fphilipe
    I'm looking for a solution to send DRY multipart emails in Rails. By DRY I mean that the content for the mail is defined only once. I've thought about some possible solutions but haven't found any existing implementations. The solutions I've thought about are: (1) load the text from I18n, apply Markdown for the HTML mail, and apply Markdown with a special output type for the text mail, where links are put in parentheses after the link text, bold, italic and other formatting that doesn't make sense is removed, and ordered and unordered lists are maintained; or (2) generate only the HTML mail and convert it to text according to the above conditions. Is there any available solution out there? Which one is probably the better way to do it?

    Read the article

  • How do you fix "Too many open files" problem in Hudson?

    - by Randyaa
    We use Hudson as a continuous integration system to execute automated builds (nightly and based on CVS polling) of a lot of our projects. Some projects poll CVS every 15 minutes, some others poll every 5 minutes and some poll every hour. Every few weeks we'll get a build that fails with the following output:

        FATAL: java.io.IOException: Too many open files
        java.io.IOException: java.io.IOException: Too many open files
            at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)

    The next build always worked (with 0 changes), so we always chalked it up to two build jobs running at the same time and happening to have too many files open during the process. This weekend a build failed on Friday night (an automatic nightly build) with that message, and every other nightly build also failed. Somehow this triggered Hudson to continuously rebuild every project that failed until the issue was resolved. This resulted in a build of every project every 30 minutes or so until sometime Saturday night, when the issue magically disappeared.

    Read the article

  • Facing a problem when making a setup for a Windows Service

    - by prateeksaluja20
    I am trying to build a setup for a Windows service, but when I build the setup the output looks like this:

        ------ Build started: Project: TwitterService, Configuration: Debug Any CPU ------
        TwitterService -> C:\Users\Globus-n2\Desktop\LatestTweetMati\TwitterService\bin\Debug\TwitterService.exe
        ------ Starting pre-build validation for project 'Setup' ------
        ------ Pre-build validation for project 'Setup' completed ------
        ------ Build started: Project: Setup, Configuration: Debug ------
        Building file 'C:\Users\Globus-n2\Desktop\LatestTweetMati\Setup\Debug\Setup.msi'...
        Packaging file 'TwitterService.exe.config'...
        Packaging file 'GlobusLib.dll'...
        Packaging file 'TwitterService.exe'...
        Packaging file 'GlobusTwitterLib.dll'...
        ========== Build: 1 succeeded or up-to-date, 1 failed, 0 skipped ==========

    The setup fails, but I am not getting any error. I have tried making a new copy, but the problem remains. I have tried adding those DLLs again, but the problem is not solved. Can anyone please help me solve this? I will be really thankful to anyone who tries.

    Read the article

  • Custom Django tag & jQuery

    - by pocoa
    I'm new to Django. Today I created some Django custom tags, which was not that hard. But now I wonder what the best way is to include some jQuery or JavaScript code packed into my custom tag definition. What is the regular way to include a custom library into my code? For example: {% faceboxify item %} So assume that it'll create a specific HTML output for the Facebox plugin. I just want to learn some elegant way to import this plugin into my code. I want the above definition to be enough for all the functionality. Is there any way to do it? I couldn't find any example. Maybe I'm missing something. Thank you.

    Read the article

  • long running process in asp.net C#

    - by user339323
    Hello all, I have a web application that has a long-running (resource-intensive) process in the code-behind, and the end output is a PDF file (an images-to-PDF conversion tool). It runs fine, and since I am on a dedicated server, it is not at all a problem with respect to resources right now. However, I worry that the system would reach its resource limits if there were more than 20 users processing at a time. I have seen services online where the user enters their email and the processes are, I suppose, queued in the background and the results emailed on a first-in, first-out basis. Can someone please give me a start on how to implement this kind of logic in ASP.NET applications using C#? Thanks a lot in advance, Prasad.

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

        def read_csv_zip(path, timezones):
            with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
                csv_rows = csv.reader(input)
                header = csv_rows.next()
                check, converters = get_aux_stuff(header)
                for csv_row in csv_rows:
                    if check(csv_row):
                        row = { converter[0]:converter[1](value) for converter, value in zip(converters, csv_row) if allow_field(converter) }
                        ts = row['ts']
                        lng, lat = row['loc']
                        found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng-tz_lookup_radius, lat-tz_lookup_radius],[lng+tz_lookup_radius, lat+tz_lookup_radius]]}}}))
                        if found_tz_entry:
                            tz_name = found_tz_entry['tz']
                            local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                            row['tz'] = tz_name
                        else:
                            local_ts = (ts.astimezone(utc) + timedelta(hours = int(lng/15))).replace(tzinfo = None)
                        row['local_ts'] = local_ts
                        yield row

        def insert_documents(collection, source, batch_size):
            while True:
                items = list(itertools.islice(source, batch_size))
                if len(items) == 0:
                    break
                try:
                    collection.insert(items)
                except:
                    for item in items:
                        try:
                            collection.insert(item)
                        except Exception as exc:
                            print("Failed to insert record {0} - {1}".format(item['_id'], exc))

        def main(zip_path):
            with Connection() as connection:
                data = connection.mydb.data
                timezones = connection.timezones.data
                insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows: every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.). For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful, that timezone is used to convert the record timestamp (Pacific time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT The timezones collection contains 8176040 records, each containing four values:

        > db.data.findOne()
        { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2 OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I have created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, but it works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.

    EDIT3 Profile output when using collection.find_one:

        >>> p.sort_stats('cumulative').print_stats(10)
        Tue Apr 10 14:28:39 2012    ImportDataIntoMongo.profile

        64549590 function calls (64549180 primitive calls) in 1231.257 seconds

        Ordered by: cumulative time
        List reduced from 730 to 10 due to restriction <10>

        ncalls   tottime  percall  cumtime   percall  filename:lineno(function)
        1        0.012    0.012    1231.257  1231.257 ImportDataIntoMongo.py:1(<module>)
        1        0.001    0.001    1230.959  1230.959 ImportDataIntoMongo.py:187(main)
        1        853.558  853.558  853.558   853.558  {raw_input}
        1        0.598    0.598    370.510   370.510  ImportDataIntoMongo.py:165(insert_documents)
        343407   9.965    0.000    359.034   0.001    ImportDataIntoMongo.py:137(read_csv_zip)
        343408   2.927    0.000    287.035   0.001    c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
        343408   1.842    0.000    274.803   0.001    c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
        343408   2.542    0.000    271.212   0.001    c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
        343408   4.512    0.000    253.673   0.001    c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
        343408   0.971    0.000    242.078   0.001    c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

        >>> p.sort_stats('cumulative').print_stats(10)
        Wed Apr 11 16:21:31 2012    ImportDataIntoMongo.profile

        41542960 function calls (41542536 primitive calls) in 2889.164 seconds

        Ordered by: cumulative time
        List reduced from 778 to 10 due to restriction <10>

        ncalls   tottime   percall   cumtime   percall  filename:lineno(function)
        1        0.028     0.028     2889.164  2889.164 ImportDataIntoMongo.py:1(<module>)
        1        0.017     0.017     2888.679  2888.679 ImportDataIntoMongo.py:202(main)
        1        2365.526  2365.526  2365.526  2365.526 {raw_input}
        1        0.766     0.766     502.817   502.817  ImportDataIntoMongo.py:180(insert_documents)
        343407   9.147     0.000     491.433   0.001    ImportDataIntoMongo.py:152(read_csv_zip)
        343406   0.571     0.000     391.394   0.001    c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
        343406   379.957   0.001     390.824   0.001    c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
        686513   22.616    0.000     38.705    0.000    c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
        343406   6.134     0.000     33.326    0.000    ImportDataIntoMongo.py:162(<dictcomp>)
        346      0.396     0.001     30.665    0.089    c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4 I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.

    Read the article

  • iPython in Emacs. Quick code evaluation

    - by AmV
    Hi all, I would like to "send" code snippets to an iPython interpreter in Emacs 23.2 (Linux). I have two related questions about this: Q1: I have learned that Emacs provides shell-command-on-region to run selected regions in a shell. I have set shell-file-name to my iPython path, but when I run M-| after selecting a region, Emacs prompts me with the following: Shell command on region: and if I then type RET, I get the iPython man page in the *Shell Command Output* buffer, without the region being executed. Why? Q2: Assuming that I have already started an iPython shell in some other buffer in Emacs, is there a way of selecting a region in another buffer and "sending" it to the already-started iPython shell? Thanks!

    Read the article

  • Console class in java Exception in reading password

    - by satheesh
    Hi, I am trying to use the Console class in Java with this code:

        import java.io.Console;

        public class ConsoleClass {
            public static void main(String[] args) {
                Console c = System.console();
                char[] pw;
                pw = c.readPassword("%s", "pw :");
                for (char ch : pw) {
                    c.format("%c", ch);
                }
                c.format("\n");
                MyUtility mu = new MyUtility();
                while (true) {
                    String name = c.readLine("%s", "input?: ");
                    c.format("output: %s \n", mu.doStuff(name));
                }
            }
        }

        class MyUtility {
            String doStuff(String arg1) {
                return "result is " + arg1;
            }
        }

    Here I am getting a NullPointerException when I try to run it in NetBeans, but I am not getting any exception when I run it from cmd without the NetBeans IDE. Why?
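
    A likely explanation, with a hedged sketch: System.console() returns null whenever the JVM is not attached to an interactive console, which is typically the case when a program is launched from inside an IDE such as NetBeans (the IDE redirects standard input and output), so the NullPointerException comes from calling readPassword() on a null reference. A minimal null check (the class name is mine) makes the difference visible and provides a fallback:

        import java.io.BufferedReader;
        import java.io.Console;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class ConsoleCheck {
            public static void main(String[] args) throws IOException {
                Console c = System.console();
                if (c == null) {
                    // No interactive console (typical when launched from an IDE):
                    // fall back to plain stdin, where the password will echo.
                    System.out.print("pw (will echo): ");
                    BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
                    String pw = in.readLine();
                    System.out.println("read " + (pw == null ? 0 : pw.length()) + " characters");
                } else {
                    char[] pw = c.readPassword("%s", "pw :");
                    c.format("read %d characters%n", pw.length);
                }
            }
        }

    Running the same program from cmd gives the process a real console, which would match the behaviour described above.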

    Read the article

  • Perl, deleting an .xml file created and open with IO::File and XML::Writer?

    - by Sho Minamimoto
    So I'm running through a list of things and have code that creates an .xml file with IO::File called $doc, then I make a new writer with XML::Writer(OUTPUT => $doc). More code runs and I build a big XML file with XML::Writer. Then, near the end of the file, I find out if I need this file at all. If I do need it, I just call $writer->end(); $doc->close(); but if I don't need it, what should I enter to just delete all the data I've stored/saved and move on to the next file? I tried unlink($docpath) (before and after $doc->close()), but the file was not deleted.

    Read the article

  • Which collation should I use to store these country names in MySQL?

    - by morpheous
    I am trying to store a list of countries in a MySQL database. I am having problems storing (non-English) names like these: São Tomé and Príncipe, República de El Salvador. They are stored with strange characters in the db (and therefore output strangely in my HTML pages). I have tried using different combinations of collations for the database and the MySQL connection collation: the "obvious" setting was to use utf8_unicode_ci for both the database and the connection information. To my utter surprise, that did not solve the problem. Does anyone know how to resolve this issue?

    Read the article

  • Des decryption returns empty

    - by Nilambari
    Hi, I am using Des.php for decryption. For some texts the decrypt function returns correct output in readable text format. For others it just returns blank (empty). After debugging in Des.php, I found that the function _unpad($text) is returning false. The input $text is also still encoded. What could be the reason? Whenever the decrypt function calls the _unpad($text) function, I get empty results. Des.php resource: here

    Read the article

  • Testing methods called on yielded object

    - by Todd R
    I have the following controller test case:

        def test_showplain
          Cleaner.expect(:parse).with(@somecontent)
          Cleaner.any_instance.stubs(:plainversion).returns(@returnvalue)
          post :showplain, {:content => @somecontent}
        end

    This works fine, except that I want the "stubs(:plainversion)" to be an "expects(:plainversion)". Here's the controller code:

        def showplain
          Cleaner.parse(params[:content]) do | cleaner |
            @output = cleaner.plainversion
          end
        end

    And the Cleaner is simply:

        class Cleaner
          ### other code and methods ###
          def self.parse(@content)
            cleaner = Cleaner.new(@content)
            yield cleaner
            cleaner.close
          end
          def plainversion
            ### operate on @content and return ###
          end
        end

    Again, I can't figure out how to reliably test the "cleaner" that is made available from the "parse" method. Any suggestions?

    Read the article

  • System.Data.DataException thrown when launching application after install

    - by jwarzech
    I currently have a (Visual Studio 2008) installer that has a 'Primary output...' custom action under 'Install' with 'InstallerClass' set to false. During the install I get a System.Data.DataException. When I run the debugger I am getting an exception being thrown from a bit of database access code that runs when the application starts. The database access is using Windows Authentication and works correctly when the application is started manually. Is this something to do with permission settings not allowing db access when launched from the installer? (I'm out of ideas)

    Read the article

  • How to eval javascript code with iPhone SDK?

    - by overboming
    I need to fetch some results from a webpage, which uses some JavaScript code to generate the part I am interested in, like the following:

        eval(function(p,a,c,k,e,d){e=function(c){return c};if(!''.replace(/^/,String)){while(c--)d[c]=k[c]||c;k=[function(e){return d[e]}];e=function(){return'\\w+'};c=1;};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p;}('5 11=17;5 12=["/3/2/1/0/13.4","/3/2/1/0/15.4","/3/2/1/0/14.4","/3/2/1/0/7.4","/3/2/1/0/6.4","/3/2/1/0/8.4","/3/2/1/0/10.4","/3/2/1/0/9.4","/3/2/1/0/23.4","/3/2/1/0/22.4","/3/2/1/0/24.4","/3/2/1/0/26.4","/3/2/1/0/25.4","/3/2/1/0/18.4","/3/2/1/0/16.4","/3/2/1/0/19.4","/3/2/1/0/21.4"];5 20=0;',10,27,'40769|54|Images|Files|png|var|imanhua_005_140430179|imanhua_004_140430179|imanhua_006_140430226|imanhua_008_140430242|imanhua_007_140430226|len|pic|imanhua_001_140429664|imanhua_003_140430117|imanhua_002_140430070|imanhua_015_140430414||imanhua_014_140430382|imanhua_016_140430414|sid|imanhua_017_140430429|imanhua_010_140430289|imanhua_009_140430242|imanhua_011_140430367|imanhua_013_140430382|imanhua_012_140430367'.split('|'),0,{}))

    How do I get the evaluation output?

    Read the article

  • Couchbase (ex. membase) solution to dump all keys in a bucket

    - by j99
    I was googling around and found various Python + TAP solutions that should enable me to dump all keys from a bucket, but none of them worked for me. I have a bucket at port 11230 and I need to get a dump of all keys in order to feed them into the Sphinx search engine. If I execute:

        # python /opt/couchbase/lib/python/tap_example.py 127.0.0.1:11230

    I get the following output:

        info: New bin connection from None
        error: uncaptured python exception, closing channel <tap.TapConnection connected at 0x7f5d287184d0>
        (<type 'exceptions.AssertionError'>: [/usr/lib/python2.6/asyncore.py|read|78] [/usr/lib/python2.6/asyncore.py|handle_read_event|428] [/opt/couchbase/lib/python/mc_bin_server.py|handle_read|325])

    This error is the same even if I try some other host or port. I also tried many other Python scripts that I found on forums and groups, but all of them produced the same error. My primary development environment includes PHP & Perl on a Debian Linux box, but I will take any solution that can just dump all the keys into a plain text file. Thank you for any help!

    Read the article

  • Which is more efficient/faster when calling a cached image?

    - by andufo
    Hi, I made an image resizer in PHP. When an image is resized, it caches a new JPG file with the new dimensions. The next time you call the exact same img.php?file=hello.jpg&size=400, it checks whether the new JPG has already been created. If it has NOT been created yet, it creates the file and then prints the output (cool). If it ALREADY exists, no new file needs to be generated and it just serves the already cached file. My question is regarding the second scenario. Which of these is faster? Redirecting:

        header('Location: cache/hello_400.jpg'); die();

    or grabbing the data and printing the cached file:

        $data = file_get_contents('cache/hello_400.jpg');
        header('Content-type: '.$mime);
        header('Content-Length: '.strlen($data));
        echo $data;

    Any other ways to improve this?

    Read the article
