Search Results

Search found 66801 results on 2673 pages for 'near real time'.


  • Is there a way to tell JVM to optimize my code before processing?

    - by Rogach
    I have a method which takes a long time to execute the first time, but after several invocations it takes about 30 times less time. So, to make my application respond to user interaction faster, I "warm up" this method (5 times) with some sample data when the application initializes. But this increases app start-up time. I read that the JVM can optimize and compile my Java code to native code, thus speeding things up. I wanted to know: is there some way to explicitly tell the JVM that I want this method to be compiled at application startup?
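    For illustration, here is a minimal sketch of that warm-up idea in plain Java. The method and iteration count are only placeholders; the flags mentioned in the comments are standard HotSpot options, and lowering -XX:CompileThreshold or using -Xcomp is the closest thing to "compile this now" that HotSpot exposes.

        // Warm-up sketch: calling the hot method repeatedly at startup pushes its
        // invocation count past HotSpot's compile threshold, so the JIT compiles it
        // before the first real user interaction.
        // Alternatives: start the JVM with -XX:CompileThreshold=100 (compile sooner)
        // or -Xcomp (compile every method on first use, at the cost of slower startup).
        public class Warmup {

            // Stand-in for the real expensive method.
            static long expensiveMethod(int n) {
                long acc = 0;
                for (int i = 0; i < n; i++) {
                    acc += (long) i * i;
                }
                return acc;
            }

            public static void main(String[] args) {
                // 10,000 iterations matches the default server-VM CompileThreshold.
                for (int i = 0; i < 10_000; i++) {
                    expensiveMethod(1_000);
                }
                System.out.println("warm-up done; the method should now be JIT-compiled");
            }
        }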

    Read the article

  • Dealing with external processes

    - by Jesse Aldridge
    I've been working on a GUI app that needs to manage external processes. Working with external processes leads to a lot of issues that can make a programmer's life difficult, and I feel like maintenance on this app is taking an unacceptably long time. I've been trying to list the things that make working with external processes difficult so that I can come up with ways of mitigating the pain. This kind of turned into a rant, which I thought I'd post here in order to get some feedback and to provide some guidance to anybody thinking about sailing into these very murky waters. Here's what I've got so far:

    1. Output from the child can get mixed up with output from the parent. This can make both outputs misleading and hard to read, and it can be hard to tell what came from where. It becomes harder to figure out what's going on when things are asynchronous. Here's a contrived example:

        import textwrap, os, time
        from subprocess import Popen

        test_path = 'test_file.py'
        with open(test_path, 'w') as file:
            file.write(textwrap.dedent('''
                import time
                for i in range(3):
                    print 'Hello %i' % i
                    time.sleep(1)'''))

        proc = Popen('python -B "%s"' % test_path)
        for i in range(3):
            print 'Hello %i' % i
            time.sleep(1)
        os.remove(test_path)

    I guess I could have the child process write its output to a file, but it can be annoying to have to open up a file every time I want to see the result of a print statement. If I have the code for the child process I could add a label, something like print 'child: Hello %i', but it can be annoying to do that for every print, it adds some noise to the output, and of course I can't do it if I don't have access to the code. I could manually manage the process output, but then you open up a huge can of worms with threads and polling and stuff like that. A simple solution is to treat processes like synchronous functions, that is, no further code executes until the process completes. In other words, make the process block. But that doesn't work if you're building a GUI app. Which brings me to the next problem...

    2. Blocking processes cause the GUI to become unresponsive.

        import textwrap, sys, os
        from subprocess import Popen
        from PyQt4.QtGui import *
        from PyQt4.QtCore import *

        test_path = 'test_file.py'
        with open(test_path, 'w') as file:
            file.write(textwrap.dedent('''
                import time
                for i in range(3):
                    print 'Hello %i' % i
                    time.sleep(1)'''))

        app = QApplication(sys.argv)
        button = QPushButton('Launch process')

        def launch_proc():
            # Can't move the window until the process completes
            proc = Popen('python -B "%s"' % test_path)
            proc.communicate()

        button.connect(button, SIGNAL('clicked()'), launch_proc)
        button.show()
        app.exec_()
        os.remove(test_path)

    Qt provides a process wrapper of its own called QProcess which can help with this. You can connect functions to signals to capture output relatively easily. This is what I'm currently using. But I'm finding that all these signals behave suspiciously like goto statements and can lead to spaghetti code. I think I want to get sort-of blocking behavior by having the 'finished' signal from QProcess call a function containing all the code that comes after the process call. I think that should work, but I'm still a bit fuzzy on the details (see the sketch at the end of this question).

    3. Stack traces get interrupted when you go from the child process back to the parent process. If a normal function screws up, you get a nice complete stack trace with filenames and line numbers. If a subprocess screws up, you'll be lucky if you get any output at all. You end up having to do a lot more detective work every time something goes wrong.

    4. Speaking of which, output has a way of disappearing when dealing with external processes. For example, if you run something via the Windows 'cmd' command, the console will pop up, execute the code, and then disappear before you have a chance to see the output. You have to pass the /k flag to make it stick around. Similar issues seem to crop up all the time. I suppose both problems 3 and 4 have the same root cause: no exception handling. Exception handling is meant to be used with functions; it doesn't work with processes. Maybe there's some way to get something like exception handling for processes? I guess that's what stderr is for? But dealing with two different streams can be annoying in itself. Maybe I should look into this more...

    5. Processes can hang and stick around in the background without you realizing it. So you end up yelling at your computer because it's going so slow, until you finally bring up your task manager and see 30 instances of the same process hanging out in the background. Also, hanging background processes can interfere with other instances of the process in various fun ways, such as causing permission errors by holding a handle to a file or something like that. It seems like an easy solution would be to have the parent process kill the child process on exit if the child didn't close itself. But if the parent process crashes, cleanup code might not get called and the child can be left hanging. Also, if the parent waits for the child to complete, and the child is in an infinite loop or something, you can end up with two hanging processes. This problem can tie in to problem 2 for extra fun, causing your GUI to stop responding entirely and forcing you to kill everything with the task manager.

    6. F***ing quotes. Parameters often need to be passed to processes, and this is a headache in itself, especially if you're dealing with file paths. Say... 'C:/My Documents/whatever/'. If you don't have quotes, the string will often be split at the space and interpreted as two arguments. If you need nested quotes you can use ' and ", but if you need more than two layers of quotes, you have to do some nasty escaping, for example: "cmd /k 'python \'path 1\' \'path 2\''". A good solution to this problem is passing parameters as a list rather than as a single string; subprocess allows you to do this.

    7. Can't easily return data from a subprocess. You can use stdout, of course, but what if you want to throw a print in there for debugging purposes? That's going to screw up the parent if it's expecting output formatted a certain way. In functions you can print one string and return another and everything works just fine.

    8. Obscure command-line flags and a crappy terminal-based help system. These are problems I often run into when using OS-level apps. Like the /k flag I mentioned for holding a cmd window open: whose idea was that? Unix apps don't tend to be much friendlier in this regard. Hopefully you can use Google or Stack Overflow to find the answer you need, but if not, you've got a lot of boring reading and frustrating trial and error to do.

    9. External factors. This one's kind of fuzzy, but when you leave the relatively sheltered harbor of your own scripts to deal with external processes, you find yourself having to deal with the "outside world" to a much greater extent, and that's a scary place. All sorts of things can go wrong. Just to give a random example: the cwd in which a process is run can modify its behavior.
There are probably other issues, but those are the ones I've written down so far. Any other snags you'd like to add? Any suggestions for dealing with these problems?
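    For what it's worth, here is a minimal sketch of the non-blocking QProcess pattern described above (PyQt4 and Python 2 to match the snippets in the question; the handler names are illustrative and a python executable is assumed to be on PATH). Output arrives via readyReadStandardOutput, and the code that used to come "after" the blocking call lives in the finished handler, so the GUI stays responsive without threads or polling:

        import sys
        from PyQt4.QtCore import QProcess
        from PyQt4.QtGui import QApplication, QPushButton

        app = QApplication(sys.argv)
        button = QPushButton('Launch process')
        proc = QProcess()

        def on_output():
            # Label child output so it cannot be confused with the parent's prints.
            print 'child:', str(proc.readAllStandardOutput()).rstrip()

        def on_finished(*args):
            # Everything that used to come "after" the blocking call goes here.
            print 'child finished'

        proc.readyReadStandardOutput.connect(on_output)
        proc.finished.connect(on_finished)

        child_code = "import time\nfor i in range(3):\n    print 'Hello %i' % i\n    time.sleep(1)"

        # Arguments are passed as a list (no quoting headaches); -u keeps the child unbuffered
        # so its output arrives as it is printed rather than all at the end.
        button.clicked.connect(lambda *a: proc.start('python', ['-u', '-c', child_code]))
        button.show()
        app.exec_()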

    Read the article

  • Will these optimizations to my Ruby implementation of diff improve performance in a Rails app?

    - by grg-n-sox
    <tl;dr> In source version control diff patch generation, would it be worth it to use the optimizations listed at the very bottom of this writing (see <optimizations>) in my Ruby implementation of diff for making diff patches? </tl;dr>

    <introduction> I am programming something I have never done before, and there might already be tools out there to do the exact thing I am programming, but at this point I am having too much fun to care, so I am still going to do it from scratch even if there is a tool for this.

    Anyway, I am working on a Ruby on Rails app and need a certain feature. Basically I want each entry in a table of mine, say for example a table of video games, to have a stored chunk of text that represents a review or something of the sort for that table entry. However, I want this text to be both editable by any registered user and also to keep track of different submissions in a version control system. The simplest solution I could think of is to keep track of the text body and the diff patch history of different versions of the text body as objects in Ruby and then serialize it, preferably in human-readable form (so I'll most likely use YAML for this) in case it needs hand-editing due to corruption by a software bug or a mistake made by an admin doing some version editing.

    So at first I just tried to dive head first into this feature, only to find that the problem of generating a diff patch is more difficult than I thought to do efficiently. So I did some research and came across some ideas, some of which I have implemented already and some of which I have not. However, it all pretty much revolves around the longest common subsequence problem (as you would already know if you have done anything with diff or diff-like features) and optimizing the function that solves it.

    Currently I have it so it truncates the compared versions of the text body from the beginning and end until non-matching lines are found. Then it solves the problem using a comparison matrix, but instead of incrementing the value stored in a cell when it finds a matching line, as in most longest common subsequence algorithms I have seen examples of, I increment when I have a non-matching line, so as to calculate edit distance instead of longest common subsequence. As far as I can tell the two approaches are essentially two sides of the same coin, so either could be used to derive an answer. It then back-traces through the comparison matrix and notes when there was an incrementation and in which adjacent cell (west, northwest, or north) to determine that line's diff entry, and assumes all other lines to be unchanged. (A rough sketch of this trimming-plus-matrix approach follows the list of optimizations below.)

    Normally I would leave it at that, but since this is going into a Rails environment and not just some stand-alone Ruby script, I started getting worried about optimizing at least enough so that a spammer who somehow knew how I implemented the version control system, and knew my worst-case-scenario entry, still wouldn't be able to hit the server that hard. After some searching and reading of research papers and articles on the internet, I've come across several optimizations that seem decent, but all seem to have pros and cons, and I am having a hard time deciding how well the pros and cons balance out in this situation. So are the ones listed here worth it? I have listed them with known pros and cons.
    </introduction>

    <optimizations>

    1. Chop the compared sequences into multiple chunks of subsequences by splitting where lines are unchanged, and then truncate each section of unchanged lines at the beginning and end of each section. Then solve the edit distance of each subsequence. Pro: Changes the time increase as the changed area gets bigger from a quadratic increase to something closer to a linear increase. Con: Figuring out where to split already seems like you have to solve edit distance, except now you don't care how it changed. This would be fine if it were solvable by a process closer to solving Hamming distance, but a single insertion would throw it off.

    2. Use a cryptographic hash function to both convert all sequence elements into integers and ensure uniqueness. Then solve the edit distance comparing the hash integers instead of the sequence elements themselves. Pro: Comparing two integers is faster than comparing two strings, so a slight performance gain is received after every comparison, which can add up. Con: Using a cryptographic hash function takes time to convert all the sequence elements and may end up costing more time than you gain back from the integer comparisons. You could use the built-in hash function for a string, but that will not guarantee uniqueness.

    3. Use lazy evaluation to only calculate the three center-most diagonals of the comparison matrix, and then only calculate additional diagonals as needed. And then also use this approach to possibly remove the need, on some comparisons, to compare all three adjacent cells as described here. Pro: Can take an algorithm that always takes O(n * m) time and make that only the worst-case scenario; the best case becomes practically linear, and the average case is somewhere between the two. Con: It is an algorithm I've only seen implemented in functional programming languages, and I am having a difficult time comprehending how to convert it into Ruby based on how it is described at the site linked to above.

    4. Make a C module and do the hard work at the native level in C, and just make a Ruby wrapper for it so Ruby can make all the calls to it that it needs. Pro: I have to imagine that evaluating something like this in C would be a LOT faster. Con: I have no idea how Rails handles apps with Ruby code that has C extensions, and it hurts the portability of the app.

    5. This is an optimization for after the solving of edit distance, but the idea is to store additional combined diffs alongside the ones produced by each version to make a delta-tree data structure, with the most recently made diff as the root node of the tree, so getting to any version takes a worst-case time of O(log n) instead of O(n). Pro: Would make going back to an old version a lot faster. Con: It would mean that on every new commit the delta-tree gets a new root node, which costs time to reorganize the delta-tree, for an operation that will be carried out a lot more often than going back a version, not to mention the unlikelihood that it will be an old version.

    </optimizations>

    So are these things worth the effort?
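    For reference, here is a rough Ruby sketch of the baseline approach described in the introduction: trim the common prefix and suffix, then fill a standard edit-distance matrix over the remaining middle region. The method and variable names are illustrative, not taken from the actual app.

        def edit_distance(old_lines, new_lines)
          # Trim matching lines from both ends so the matrix only covers the changed area.
          start = 0
          start += 1 while start < old_lines.size && start < new_lines.size &&
                           old_lines[start] == new_lines[start]
          stop_old, stop_new = old_lines.size, new_lines.size
          while stop_old > start && stop_new > start &&
                old_lines[stop_old - 1] == new_lines[stop_new - 1]
            stop_old -= 1
            stop_new -= 1
          end

          a = old_lines[start...stop_old]
          b = new_lines[start...stop_new]

          # Standard dynamic-programming edit distance over the trimmed region.
          rows = Array.new(a.size + 1) { Array.new(b.size + 1, 0) }
          (0..a.size).each { |i| rows[i][0] = i }
          (0..b.size).each { |j| rows[0][j] = j }
          (1..a.size).each do |i|
            (1..b.size).each do |j|
              cost = a[i - 1] == b[j - 1] ? 0 : 1
              rows[i][j] = [
                rows[i - 1][j] + 1,        # deletion  (north)
                rows[i][j - 1] + 1,        # insertion (west)
                rows[i - 1][j - 1] + cost  # change    (northwest)
              ].min
            end
          end
          rows[a.size][b.size]
        end

        puts edit_distance(%w[a b c d], %w[a x c d])   # => 1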

    Read the article

  • python duration of a file object in an argument list

    - by msw
    In the pickle module documentation there is a snippet of example code: reader = pickle.load(open('save.p', 'rb')) which upon first read looked like it would allocate a system file descriptor, read its contents and then "leak" the open descriptor for there isn't any handle accessible to call close() upon. This got me wondering if there was any hidden magic that takes care of this case. Diving into the source, I found in Modules/_fileio.c that file descriptors are closed by the fileio_dealloc() destructor which led to the real question. What is the duration of the file object returned by the example code above? After that statement executes does the object indeed become unreferenced and therefore will the fd be subject to a real close(2) call at some future garbage collection sweep? If so, is the example line good practice, or should one not count on the fd being released thus risking kernel per-process descriptor table exhaustion?
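    A quick sketch of the deterministic alternative, with the pickle file created first so the snippet is self-contained: the with statement closes the descriptor as soon as the block exits, instead of leaving the close to CPython's reference counting (which does reclaim the temporary immediately in practice) or, on other implementations, to a later garbage-collection sweep.

        import pickle

        # Create the file first so the example is self-contained.
        with open('save.p', 'wb') as f:
            pickle.dump({'answer': 42}, f)

        # The with-statement closes the descriptor deterministically when the block
        # exits, rather than relying on the temporary file object being finalized.
        with open('save.p', 'rb') as f:
            reader = pickle.load(f)

        print(reader)    # {'answer': 42}
        print(f.closed)  # True -- no descriptor left dangling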

    Read the article

  • Java: versioned data structures?

    - by Jason S
    I have a data structure that is pretty simple (basically a structure containing some arrays and single values), but I need to record the history of the data structure so that I can efficiently get its contents at any point in time. Is there a relatively straightforward way to do this? The best way I can think of would be to encapsulate the whole data structure with something that handles all the mutating operations by storing data in functional data structures, and then for each mutation operation caching a copy of the data structure in a Map indexed by time-ordering (e.g. a TreeMap with real time as keys, or a HashMap with a counter of mutation operations combined with one or more indexes stored in TreeMaps mapping real time / tick count / etc. to mutation operations). Any suggestions?
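    As a point of reference, here is a minimal sketch of the simplest variant of that idea: a full snapshot per mutation stored in a TreeMap keyed by timestamp, with floorEntry() answering "what did it look like at time t". The class and method names are invented for illustration, and a real implementation would use the functional / structural-sharing approach described above rather than full copies.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.Map;
        import java.util.TreeMap;

        public class VersionedList<T> {
            private final List<T> current = new ArrayList<T>();
            private final TreeMap<Long, List<T>> history = new TreeMap<Long, List<T>>();

            public synchronized void add(T item) {
                current.add(item);
                // Snapshot keyed by wall-clock time; a monotonic tick counter would
                // avoid collisions when two mutations land in the same millisecond.
                history.put(System.currentTimeMillis(),
                            Collections.unmodifiableList(new ArrayList<T>(current)));
            }

            public synchronized List<T> contentsAt(long timestamp) {
                Map.Entry<Long, List<T>> e = history.floorEntry(timestamp);
                return e == null ? Collections.<T>emptyList() : e.getValue();
            }

            public static void main(String[] args) throws InterruptedException {
                VersionedList<String> v = new VersionedList<String>();
                v.add("a");
                long mark = System.currentTimeMillis();
                Thread.sleep(5);
                v.add("b");
                System.out.println(v.contentsAt(mark));                        // [a]
                System.out.println(v.contentsAt(System.currentTimeMillis()));  // [a, b]
            }
        }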

    Read the article

  • ActiveRecord model with datetime stamp with timezone support attribute.

    - by jtarchie
    Rails is great in that it supports an application-wide timezone with Time.zone. I need to be able to support the timezone a user selects for a record. The user will be able to select the date, time, and timezone for the record, and I would like all calculations to be done with respect to the user-selected timezone. My question is: what is the best practice for handling user-selected timezones? The model uses a time_zone_select and a datetime_select for two different attributes, timezone and scheduled_at. When the model saves, the scheduled_at attribute gets converted to the locally defined Time.zone. When a user goes back to edit the scheduled_at attribute with the datetime_select, the datetime is shown in the converted Time.zone and not in the record's timezone attribute. Is there a nice way to handle the conversion to the user-selected timezone?
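    One common pattern, as a sketch only (the Event class name and the *_local accessors are made up; the attribute names match the question), is to keep storing scheduled_at in UTC, keep the user's zone in the timezone column, and convert explicitly at the model boundary:

        class Event < ActiveRecord::Base
          # Present the stored UTC value in the record's own zone (not the
          # app-wide Time.zone) when building the edit form.
          def scheduled_at_local
            scheduled_at.in_time_zone(timezone)
          end

          # Interpret a submitted wall-clock time in the record's zone before
          # Rails writes it out as UTC.
          def scheduled_at_local=(value)
            self.scheduled_at = ActiveSupport::TimeZone[timezone].parse(value.to_s)
          end
        end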

    Read the article

  • Reused UIWebView showing previous loaded content for a brief second on iPhone

    - by Roi
    In one of my apps I reuse a webview. Each time the user enters a certain view, I reload cached data into the webview using the method - (void)loadData:(NSData *)data MIMEType:(NSString *)MIMEType textEncodingName:(NSString *)encodingName baseURL:(NSURL *)baseURL and wait for the callback - (void)webViewDidFinishLoad:(UIWebView *)webView. In the meantime I hide the webview and show a 'loading' label; only when I receive webViewDidFinishLoad do I show the webview. Many times what happens is that I see the previous data that was loaded into the webview for a brief second before the new data kicks in. I already added a delay of 0.2 seconds before showing the webview, but it didn't help. Instead of solving this by adding more time to the delay, does anyone know how to solve this issue, or maybe how to clear old data from a webview without releasing and allocating it every time?

    Read the article

  • MySQL column not found

    - by vian
    The SQL query without the WHERE clause runs great and outputs good results, but when I include the WHERE condition it shows "Unknown column 'date1' in 'where clause'". What's the problem?

        SELECT IF(e.weekly,
                  DATE_ADD(DATE(e.time), INTERVAL CEIL(DATEDIFF('2010-04-08', e.time)/7) WEEK),
                  DATE(e.time)) AS `e.date1`,
               `v`.`lat`, `v`.`lng`
        FROM `events` AS `e`
        INNER JOIN `venues` AS `v` ON e.venue_id = v.id
        WHERE e.date1 > '2010-09-01'
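    For context, a sketch of the usual fix: column aliases are not visible in the WHERE clause (it is evaluated before the SELECT list), so either repeat the expression in WHERE or filter on the alias with HAVING, which MySQL allows even without GROUP BY. The alias is also renamed to a plain date1 here, since `e.date1` with the dot is just a literal label, not a reference to table e.

        SELECT IF(e.weekly,
                  DATE_ADD(DATE(e.time), INTERVAL CEIL(DATEDIFF('2010-04-08', e.time)/7) WEEK),
                  DATE(e.time)) AS date1,
               v.lat, v.lng
        FROM events AS e
        INNER JOIN venues AS v ON e.venue_id = v.id
        HAVING date1 > '2010-09-01';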

    Read the article

  • When to use Spring Integration vs. Camel?

    - by ngeek
    As a seasoned Spring user I was assuming that Spring Integration would make the most sense in a recent project requiring some (JMS) messaging capabilities (more details). After some days working with Spring Integration, it still feels like a lot of configuration overhead given the number of channels you have to configure to bring some request-response (listening on different JMS queues) communication in place. Therefore I was looking for some background information on how Camel differs from Spring Integration, but information out there seems pretty sparse. I found: http://java.dzone.com/articles/spring-integration-and-apache (a very neutral comparison between implementing a real-world integration scenario in Spring Integration vs. Camel, from December 2009), http://hillert.blogspot.com/2009/10/apache-camel-alternatives.html (comparing Camel with other solutions, October 2009), and http://raibledesigns.com/rd/entry/taking_apache_camel_for_a (Matt Raible, October 2008). Question is: what experiences have you had using the one stack over the other? In which scenarios would you recommend Camel where Spring Integration lacks support? Where do you see the pros and cons of each? Any advice from real-world projects is highly appreciated.

    Read the article

  • Bring object to the front, in Flash AS3

    - by jimbo
    Hi all, I have the code below, which basically animates objects across the screen; when a roll-over happens it pauses the animation and displays some information. Everything works fine, but when it's paused I would like the current object to be 'on top' so the other items run behind it. I have looked at setChildIndex, but didn't have much luck.

        package {
            import flash.display.MovieClip;
            import flash.display.Sprite;
            import flash.events.MouseEvent;
            import flash.geom.Point;
            import flash.events.KeyboardEvent;
            import flash.events.*;
            import caurina.transitions.Tweener;
            import fl.motion.Color;

            public class carpurchase extends Sprite {
                public function carpurchase() {
                    var carX = 570;

                    //Set cars
                    var car1:fullCar = new fullCar();
                    car1.info.alpha = 0;
                    //var c:Color = new Color();
                    //c.setTint(0xff0000, 0.8);
                    //car2.car.transform.colorTransform=c;
                    car1.x = carX;
                    car1.y = 280;
                    car1.info.title.text = "test";
                    car1.info.desc.text = "test";
                    addChild(car1);
                    car1.addEventListener(MouseEvent.ROLL_OVER, carPause);
                    car1.addEventListener(MouseEvent.ROLL_OUT, carContinue);

                    function car1Reset():void {
                        Tweener.addTween(car1, {x:carX, time:0, onComplete:car1Tween});
                    }
                    function car1Tween():void {
                        Tweener.addTween(car1, {x:-120, time:2, delay:3, transition:"linear", onComplete:car1Reset});
                    }
                    car1Tween();

                    var car2:fullCar = new fullCar();
                    car2.info.alpha = 0;
                    var c:Color = new Color();
                    c.setTint(0xff0000, 0.8);
                    car2.car.transform.colorTransform=c;
                    car1.x = carX;
                    car2.y = 175;
                    car2.info.title.text = "test";
                    car2.info.desc.text = "test";
                    addChild(car2);
                    car2.addEventListener(MouseEvent.ROLL_OVER, carPause);
                    car2.addEventListener(MouseEvent.ROLL_OUT, carContinue);

                    function car2Reset():void {
                        Tweener.addTween(car2, {x:carX, time:0, onComplete:car2Tween});
                    }
                    function car2Tween():void {
                        Tweener.addTween(car2, {x:-120, time:3, delay:0, transition:"linear", onComplete:car2Reset});
                    }
                    car2Tween();

                    function carPause(e:MouseEvent):void {
                        Tweener.pauseTweens(e.target);
                        Tweener.addTween(e.target.info, {y:-150, alpha:1, time:.5, transition:"easeout"});
                    }
                    function carContinue(e:MouseEvent):void {
                        Tweener.addTween(e.target.info, {y:10, alpha:0, time:.5, transition:"easeout"});
                        Tweener.resumeTweens(e.target);
                    }
                }
            }
        }

    Any help welcome.
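    For illustration, a small self-contained sketch of the display-list trick (nothing here is from the question's project; the two coloured boxes just stand in for the cars). Re-indexing the rolled-over child with setChildIndex(child, numChildren - 1), or simply calling addChild(child) again, moves it to the top of its parent's display list so it renders above its siblings:

        package {
            import flash.display.DisplayObject;
            import flash.display.Sprite;
            import flash.events.MouseEvent;

            public class BringToFront extends Sprite {
                public function BringToFront() {
                    for (var i:int = 0; i < 2; i++) {
                        var box:Sprite = new Sprite();
                        box.graphics.beginFill(i == 0 ? 0xFF0000 : 0x0000FF);
                        box.graphics.drawRect(0, 0, 100, 100);
                        box.graphics.endFill();
                        box.x = i * 50; // overlap so the stacking order is visible
                        addChild(box);
                        box.addEventListener(MouseEvent.ROLL_OVER, toFront);
                    }
                }

                private function toFront(e:MouseEvent):void {
                    // Move the hovered child above its siblings.
                    setChildIndex(DisplayObject(e.currentTarget), numChildren - 1);
                }
            }
        }

    Applied to the code above, the same call at the top of carPause (using e.currentTarget) should bring the hovered car forward.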

    Read the article

  • export and import utf8 data in mysql: best practices

    - by ChrisRamakers
    We're often faced with the need to send a client a data file, exported from the database, that he or she needs to translate. Most of the time this export is CSV or XLS: we create a CSV dump with phpMyAdmin and get an XLS file back with the translated data. The problem is that the data is usually UTF-8, and when the file comes back as XLS, each and every time we load the data into MySQL again we end up with UTF-8 problems: characters not being displayed properly, etc. We've already double-checked everything in MySQL, from my.cnf to column character sets, and everything is set correctly to UTF-8. My question is not how to fix the encoding issue, since that's been solved, but how we would best proceed in the future when handling this situation. What export format should we hand over? How should we import (just MySQL LOAD DATA INFILE, or our own processing scripts)? What is the general consensus on how to handle this? We would like to continue using Excel if possible, since that's the format almost everybody expects, including our clients' translation agencies. Our clients' ease of use is the most important factor here, without overloading us with major issues each time. The best of both worlds :)
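    As a sketch of the import side (the table and file names are made up), being explicit about the character set on both the connection and the LOAD DATA statement removes one class of silent re-interpretation, whatever export format you settle on:

        -- Make the connection encoding explicit for this session.
        SET NAMES utf8;

        -- Tell LOAD DATA how the file itself is encoded.
        LOAD DATA INFILE '/tmp/translations.csv'
        INTO TABLE translations
        CHARACTER SET utf8
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        IGNORE 1 LINES;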

    Read the article

  • Practically expected transfer rates for sdhc class 6

    - by bobobobo
    I went and bought an expensive SanDisk Extreme III SDHC 8 GB class 6 card. However, when I dump data from the card to the machine via a USB 2.0 cable, it only gets 5.0 MB/second maximum according to Windows 7. It can still take up to 20 minutes to dump the card when it's near full. This is so far below the rated 20 MB/s transfer speed that I can't believe it. Is this normal, or might I have a defective card?

    Read the article

  • An affordable way to use multiple Delayed::Job queues

    - by NudeCanalTroll
    I have a Ruby on Rails app that needs to process many background jobs simultaneously: anywhere from 5-6 at a time up to 50-60 at a time, depending on the time of day. Right now my app is running on Heroku, which charges $.05/hour per worker, regardless of how much CPU or memory the worker is using. This is costing me a boatload each month... up to $1200/mo. Are there any hosts that will allow me to do what I'm doing for significantly cheaper?

    Read the article

  • mysql query: SELECT DISTINCT column1, GROUP BY column2

    - by Adam
    Right now I have the following query:

        SELECT name, COUNT(name), time, price, ip, SUM(price)
        FROM tablename
        WHERE time >= $yesterday AND time < $today
        GROUP BY name

    What I'd like to do is add a DISTINCT on the column 'ip', i.e. SELECT DISTINCT ip FROM tablename. So my final output would be all the columns, from all the rows where time is today, grouped by name (with a name count for each repeating name) and with no duplicate IP addresses. What should my query look like? (Or alternatively, how can I add the missing filter to the output with PHP?) Thanks in advance.
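    One reading of that requirement, as a sketch: keep the per-name grouping and count each IP only once within a group, rather than returning one row per IP.

        SELECT name,
               COUNT(name)        AS name_count,
               COUNT(DISTINCT ip) AS unique_ips,
               SUM(price)         AS total_price
        FROM tablename
        WHERE time >= $yesterday AND time < $today
        GROUP BY name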

    Read the article

  • Rationale behind Python's preferred for syntax

    - by susmits
    What is the rationale behind the advocated use of the for i in xrange(...)-style looping constructs in Python? For simple integer looping, the difference in overheads is substantial. I conducted a simple test using two pieces of code.

    File idiomatic.py:

        #!/usr/bin/env python

        M = 10000
        N = 10000

        if __name__ == "__main__":
            x, y = 0, 0
            for x in xrange(N):
                for y in xrange(M):
                    pass

    File cstyle.py:

        #!/usr/bin/env python

        M = 10000
        N = 10000

        if __name__ == "__main__":
            x, y = 0, 0
            while x < N:
                while y < M:
                    y += 1
                x += 1

    Profiling results were as follows:

        bash-3.1$ time python cstyle.py

        real    0m0.109s
        user    0m0.015s
        sys     0m0.000s

        bash-3.1$ time python idiomatic.py

        real    0m4.492s
        user    0m0.000s
        sys     0m0.031s

    I can understand why the Pythonic version is slower; I imagine it has a lot to do with calling xrange N times, and perhaps this could be eliminated if there were a way to rewind a generator. However, with this much difference in execution time, why would one prefer to use the Pythonic version?

    Read the article

  • APC not working as expected?

    - by Alix Axel
    I have the following function:

        function Cache($key, $value = null, $ttl = 60)
        {
            if (isset($value) === true) {
                apc_store($key, $value, intval($ttl));
            }

            return apc_fetch($key);
        }

    And I'm testing it using the following code:

        Cache('ktime', time(), 3); // Store

        sleep(1);
        var_dump(Cache('ktime') . '-' . time());
        echo '<hr />'; // Should Fetch

        sleep(5);
        var_dump(Cache('ktime') . '-' . time());
        echo '<hr />'; // Should NOT Fetch

        sleep(1);
        var_dump(Cache('ktime') . '-' . time());
        echo '<hr />'; // Should NOT Fetch

        sleep(1);
        var_dump(Cache('ktime') . '-' . time());
        echo '<hr />'; // Should NOT Fetch

    And this is the output:

        string(21) "1273966771-1273966772"
        string(21) "1273966771-1273966777"
        string(21) "1273966771-1273966778"
        string(21) "1273966771-1273966779"

    Shouldn't it look like this?

        string(21) "1273966771-1273966772"
        string(21) "-1273966777"
        string(21) "-1273966778"
        string(21) "-1273966779"

    I don't understand; can anyone help me figure out this strange behavior?
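    A sketch of one workaround, assuming the cause is APC's well-known TTL handling (expiry is checked against the request start time, so a value stored and fetched within the same request is served from cache regardless of its TTL): store the expiry timestamp alongside the value and enforce it yourself.

        function Cache($key, $value = null, $ttl = 60)
        {
            if ($value !== null) {
                // Store the expiry time with the value so it can be checked on fetch.
                apc_store($key, array('expires' => time() + intval($ttl), 'value' => $value));
            }

            $entry = apc_fetch($key);
            if ($entry === false || $entry['expires'] < time()) {
                return false; // missing or stale
            }

            return $entry['value'];
        }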

    Read the article

  • Software to store frequently used text in PC

    - by user15660
    Hi, I am looking for free software that can run in the task bar (near the system clock) where I can store frequently used text, like my full street address, paths of specific deep folders and files on the computer, etc. This way I can just click the icon, which should pop up a screen where I can copy the text/string I am looking for. Any ideas? Thanks in advance.

    Read the article

  • Simplest possible rack app -> permission error

    - by 7stud
    Here's the program (1.rb):

        require 'rack'

        my_rack = lambda { |env| [200, {}, ["Hello. The time is: #{Time.now}"]] }

        handler = Rack::Handler::WEBrick
        handler.run(my_rack, :PORT => 12_500)

    Here's the error:

        ~/ruby_programs$ ruby 1.rb
        [2012-12-07 21:49:09] INFO  WEBrick 1.3.1
        [2012-12-07 21:49:09] INFO  ruby 1.9.3 (2012-04-20) [x86_64-darwin10.8.0]
        [2012-12-07 21:49:09] WARN  TCPServer Error: Permission denied - bind(2)
        [2012-12-07 21:49:09] WARN  TCPServer Error: Permission denied - bind(2)
        /Users/7stud/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/utils.rb:85:in `initialize': Permission denied - bind(2) (Errno::EACCES)
            from /Users/7stud/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/utils.rb:85:in `new'
            from /Users/7stud/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/utils.rb:85:in `block in create_listeners'
            from /Users/7stud/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/utils.rb:82:in `each'
            from /Users/7stud/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/utils.rb:82:in `create_listeners'
            from /Users/7stud/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/server.rb:82:in `listen'
            from /Users/7stud/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/server.rb:70:in `initialize'
            from /Users/7stud/.rvm/rubies/ruby-1.9.3-p194/lib/ruby/1.9.1/webrick/httpserver.rb:45:in `initialize'
            from /Users/7stud/.rvm/gems/ruby-1.9.3-p194@programming/gems/rack-1.4.1/lib/rack/handler/webrick.rb:10:in `new'
            from /Users/7stud/.rvm/gems/ruby-1.9.3-p194@programming/gems/rack-1.4.1/lib/rack/handler/webrick.rb:10:in `run'
            from 1.rb:5:in `<main>'
        ~/ruby_programs$

    Here's line 85 of ../webrick/utils.rb:

        sock = TCPServer.new(ai[3], port)

    If I replace the code in 1.rb with this:

        require 'socket'

        server = TCPServer.new 12_000 # Server binds to port 12000

        loop do
          client = server.accept # Wait for a client to connect
          client.puts "Hello !"
          client.puts "Time is #{Time.now}"
          client.close
        end

    I don't get any errors, and if I enter the address http://localhost:12000/ in my browser, I get the expected output:

        Hello !
        Time is 2012-12-07 18:58:15 -0700

    Read the article

  • Best Practices / Patterns for Enterprise Protection/Remediation of SSNs (Social Security Numbers)

    - by Erik Neu
    I am interested in hearing about enterprise solutions for SSN handling. (I looked pretty hard for any pre-existing post on SO, including reviewing the terrific SO automated "Related Questions" list, and did not find anything, so hopefully this is not a repeat.)

    First, I think it is important to enumerate the reasons systems/databases use SSNs (note: these are reasons for the de facto current state; I understand that many of them are not good reasons):

    1. Required for interaction with external entities. This is the most valid case, where external entities your system interfaces with require an SSN. This would typically be government, tax and financial.
    2. SSN is used to ensure system-wide uniqueness.
    3. SSN has become the default foreign key used internally within the enterprise, to perform cross-system joins.
    4. SSN is used for user authentication (e.g., log-on).

    The enterprise solution that seems optimum to me is to create a single SSN repository that is accessed by all applications needing to look up SSN info. This repository substitutes a globally unique, random 9-digit number (ASN) for the true SSN. I see many benefits to this approach. First of all, it is obviously highly backwards-compatible: all your systems "just" have to go through a major, synchronized, one-time data-cleansing exercise, where they replace the real SSN with the alternate ASN. Also, it is centralized, so it minimizes the scope for inspection and compliance. (Obviously, as a negative, it also creates a single point of failure.)

    This approach would solve issues 2 and 3 without ever requiring lookups to get the real SSN. For issue #1, authorized systems could provide an ASN and be returned the real SSN. This would of course be done over secure connections, and the requesting systems would never persist the full SSN. Also, if the requesting system only needs the last 4 digits of the SSN, then that is all that would ever be passed. Issue #4 could be handled the same way as issue #1, though obviously the best thing would be to move away from having users supply an SSN for log-on. (A rough sketch of the vault table behind such a repository follows below.)

    There are a couple of papers on this:

    UC Berkeley: http://bit.ly/bdZPjQ
    Oracle Vault: bit.ly/cikbi1
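    As a sketch only (the table and column names are invented for illustration, and a real vault would encrypt the SSN at rest and audit every access), the core of such a repository can be as small as one mapping table plus a restricted lookup:

        -- Every other system stores only the ASN; only the vault knows the mapping.
        CREATE TABLE ssn_vault (
            asn     CHAR(9)  NOT NULL PRIMARY KEY,  -- random surrogate with no embedded meaning
            ssn     CHAR(9)  NOT NULL UNIQUE,       -- real value; encrypt at rest in practice
            created DATETIME NOT NULL
        );

        -- Authorized callers that only need the last four digits get only that.
        SELECT RIGHT(ssn, 4) AS ssn_last4
        FROM ssn_vault
        WHERE asn = '123456789';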

    Read the article

  • Overview of PHP shorthand

    - by James Simpson
    I've been programming in PHP for years now, but I've never learned how to use any shorthand. I come across it from time to time in code and have a hard time reading it, so I'd like to learn the different shorthand that exists for the language so that I can read it and start saving time/lines by using it, but I can't seem to find a comprehensive overview of all of the shorthand. A Google search pretty much exclusively shows the shorthand for if/else statements, but I know there must be more than just that. By shorthand, I am talking about stuff like: ($var) ? true : false;
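    For example, a few of the forms that usually fall under "PHP shorthand" (a sketch; the short ternary ?: needs PHP 5.3+, and the variable names are made up):

        <?php
        $var      = 5;
        $username = '';

        $size  = ($var > 3) ? 'big' : 'small'; // ternary if/else
        $name  = $username ?: 'anonymous';     // short ternary: fallback when falsy (5.3+)
        $items = array();
        $count = isset($items) ? count($items) : 0;

        $total  = 0;
        $total += 10;                          // combined assignment: +=, -=, *=, .=, etc.

        echo "$size $name $count $total\n";    // prints: big anonymous 0 10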

    Read the article

  • ArrayList in Java, adding code to methods

    - by yaz
    Why am I getting a compile-time error for the two method headers at the end of my code in my class ArrayListTest?

    ArrayListTest: http://pastebin.com/dUHn9vPr
    Student: http://pastebin.com/3Vz1Aytr

    I have a compiler error at the two lines:

        delete(CS242, s3);
        replace(CS242, s, s4);

    When I try to run the code it states:

        Exception in thread "main" java.lang.Error: Unresolved compilation problems:
            The method replace(ArrayList<Student>, Student, Student) in the type ArrayListTest is not applicable for the arguments (List<Student>, Student, Student)
            The method delete(ArrayList<Student>, Student) in the type ArrayListTest is not applicable for the arguments (List<Student>, Student)
            at ArrayListTest.main(ArrayListTest.java:54)

    I fixed the compile-time errors: since I use Eclipse, it offers quick fixes, and I chose "Change method 'replace(ArrayList, Student, Student)' to 'replace(List, Student, Student)'". Although that fixed the compile-time error, I don't understand why I was getting the error to begin with, or why that change fixed it. I also don't really know what code I need to write to complete these two methods:

        public static void replace(List<Student> cS242, Student oldItem, Student newItem) {

        public static void delete(List<Student> cS242, Student target) {
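    For what it's worth, a sketch of what those two method bodies could look like using only the List interface. This is also why Eclipse's fix works: the argument being passed in is typed as List<Student>, and a List reference cannot be passed to a parameter declared as ArrayList<Student>, even though an ArrayList can be passed where a List is expected. The class name and sample data below are made up for illustration:

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;

        public class ListEditSketch {

            public static <T> void replace(List<T> list, T oldItem, T newItem) {
                int i = list.indexOf(oldItem);
                if (i >= 0) {
                    list.set(i, newItem); // swap in place, keeping the position
                }
            }

            public static <T> void delete(List<T> list, T target) {
                list.remove(target);      // removes the first matching element, if any
            }

            public static void main(String[] args) {
                List<String> cs242 = new ArrayList<String>(Arrays.asList("Ann", "Bob"));
                replace(cs242, "Bob", "Cat");
                delete(cs242, "Ann");
                System.out.println(cs242); // [Cat]
            }
        }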

    Read the article

  • Why does "request.getUserPrincipal().getName()" sometimes return a blank string?

    - by Marcus
    Does anybody have an idea why the getName method of the request's getUserPrincipal() sometimes returns an empty String? Most of the time it returns the correct user name, but not every time; the behaviour occurs randomly. I can start the application, run the command, and it works. The next time I start the application and run the command (exactly the same way as before!) it does not work... Any ideas?

    Read the article

  • Custom action during Uninstall

    - by sijith
    I want to capture the system time and date during installation of my program. How do I write the current time into the system registry? Is it possible to do this in VC++? Please give me sample code to set the current time in the registry using VC++. Thanks in advance.
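    A sketch of the registry write itself in plain Win32/VC++ (this is not wired into any installer's custom-action mechanism, and the key and value names are made up for illustration):

        // Format the current local time and write it as a string value under
        // HKEY_CURRENT_USER\Software\MyApp\InstallTime.
        #include <windows.h>
        #include <stdio.h>
        #include <string.h>

        int main()
        {
            SYSTEMTIME st;
            GetLocalTime(&st);

            char stamp[64];
            sprintf_s(stamp, sizeof(stamp), "%04u-%02u-%02u %02u:%02u:%02u",
                      st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute, st.wSecond);

            HKEY key;
            if (RegCreateKeyExA(HKEY_CURRENT_USER, "Software\\MyApp", 0, NULL, 0,
                                KEY_SET_VALUE, NULL, &key, NULL) == ERROR_SUCCESS)
            {
                RegSetValueExA(key, "InstallTime", 0, REG_SZ,
                               (const BYTE*)stamp, (DWORD)strlen(stamp) + 1);
                RegCloseKey(key);
            }
            return 0;
        }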

    Read the article

  • How to order a TimeZone list in .NET?

    - by reallyJim
    I have an ASP.NET web application that requires users to select their appropriate time zone so that it can correctly show local times for events. In creating a simple approach for selecting the time zone, I started by just using the values from TimeZoneInfo.GetSystemTimeZones(), and showing that list. The only problem with this is that since our application is primarily targeted at the United States, I'd like to show those entries first, basically starting with Eastern Time and working backwards (West) until I reach Atlantic time. What's a good approach to do this in code?

    Read the article
