Search Results

Search found 613 results on 25 pages for 'spawn fcgi'.


  • Uploadify with ruby on rails 'bad content body' 500 Internal Server Error

    - by Mr_Nizzle
    I'm getting this error in my development log while Uploadify is uploading the file, and in the view I get an 'IO ERROR' beside the filename.

    ```
    /!\ FAILSAFE /!\  Thu Mar 18 11:54:53 -0500 2010
      Status: 500 Internal Server Error
      bad content body
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/utils.rb:351:in `parse_multipart'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/utils.rb:323:in `loop'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/utils.rb:323:in `parse_multipart'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/request.rb:133:in `POST'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/methodoverride.rb:15:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/params_parser.rb:15:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/session/cookie_store.rb:93:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/reloader.rb:29:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/failsafe.rb:26:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `synchronize'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/lock.rb:11:in `call'
        /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.3/lib/action_controller/dispatcher.rb:106:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/content_length.rb:13:in `call'
        /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/fastcgi.rb:58:in `serve'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:103:in `process_request'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:153:in `with_signal_handler'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:101:in `process_request'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:78:in `process_each_request'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:77:in `each'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:77:in `process_each_request'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:76:in `catch'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:76:in `process_each_request'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:51:in `process!'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.3/lib/fcgi_handler.rb:23:in `process!'
        dispatch.fcgi:24
    ```

    Any idea on this?

    Read the article

  • Logging in worker threads spawned from a pylons application does not seem to work

    - by TimM
    I have a Pylons application where, under certain circumstances, I want to spawn multiple worker threads to process items in a queue. Right now we aren't making use of a ThreadPool (that would be ideal, but we'll add that in later). The main problem is that the worker threads' logging does not get written to the log files. When I run the code outside of the Pylons application the logging works fine, so I think it's something to do with the Pylons log handler, but I'm not sure what. Here is a basic example of the code (trimmed down):

    ```python
    import logging
    log = logging.getLogger(__name__)

    import sys
    from Queue import Queue
    from threading import Thread, activeCount

    def run(input, worker, args = None, simulteneousWorkerLimit = None):
        queue = Queue()
        threads = []
        if args is not None:
            if len(args) > 0:
                args = list(args)
                args = [worker, queue] + args
                args = tuple(args)
        else:
            args = (worker, queue)
        # start threads
        for i in range(4):
            t = Thread(target = __thread, args = args)
            t.daemon = True
            t.start()
            threads.append(t)
        # add ThreadTermSignal
        inputData = list(input)
        inputData.extend([ThreadTermSignal] * 4)
        # put in the queue
        for data in inputData:
            queue.put(data)
        # block until all contents are downloaded
        queue.join()
        log.critical("** A log line that appears fine **")
        del queue
        for thread in threads:
            del thread
        del threads

    class ThreadTermSignal(object):
        pass

    def __thread(worker, queue, *args):
        try:
            while True:
                data = queue.get()
                if data is ThreadTermSignal:
                    sys.exit()
                try:
                    log.critical("** I don't appear when run under pylons **")
                finally:
                    queue.task_done()
        except SystemExit:
            queue.task_done()
            pass
    ```

    Take note that the log line within the run method will show up in the log files, but the log line within the worker method (which is run in a spawned thread) does not appear. Any help would be appreciated. Thanks.

    ** EDIT: I should mention that I tried passing the "log" variable into the worker thread, as well as redefining a new "log" variable within the thread, and neither worked.

    ** EDIT: Adding the configuration used for the Pylons application (which comes out of the INI file). The snippet below is from the INI file.

    ```ini
    [loggers]
    keys = root

    [handlers]
    keys = wsgierrors

    [formatters]
    keys = generic

    [logger_root]
    level = WARNING
    handlers = wsgierrors

    [handler_console]
    class = StreamHandler
    args = (sys.stderr,)
    level = WARNING
    formatter = generic

    [handler_wsgierrors]
    class = pylons.log.WSGIErrorsHandler
    args = ()
    level = WARNING
    format = generic
    ```
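    A possible explanation (an assumption, not something stated above): pylons.log.WSGIErrorsHandler writes records to the wsgi.errors stream of the request currently being handled, and a worker thread spawned outside any request has no such stream registered, so its records can be dropped silently. A minimal sketch of a workaround is to attach an ordinary FileHandler to the module logger before the workers start; the log path and format below are placeholders.

    ```python
    import logging

    log = logging.getLogger(__name__)

    def enable_thread_logging(path="/tmp/workers.log"):
        """Attach a plain FileHandler so records emitted from worker
        threads are written even when no WSGI request is active."""
        handler = logging.FileHandler(path)
        handler.setLevel(logging.DEBUG)
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(levelname)-8s [%(threadName)s] %(name)s: %(message)s"))
        log.addHandler(handler)
        log.setLevel(logging.DEBUG)

    # Call once before spawning the workers, e.g. at the top of run():
    # enable_thread_logging()
    ```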

    Read the article

  • Looking for a good world map generation algorithm

    - by FalconNL
    I'm working on a Civilization-like game and I'm looking for a good algorithm for generating Earth-like world maps. I've experimented with a few alternatives, but haven't hit on a real winner yet.

    One option is to generate a heightmap using Perlin noise and add water at a level so that about 30% of the world is land. While Perlin noise (or similar fractal-based techniques) is frequently used for terrain and is reasonably realistic, it doesn't offer much in the way of control over the number, size and position of the resulting continents, which I'd like to have from a gameplay perspective. See http://farm3.static.flickr.com/2792/4462870263_ff26c40365_o.jpg for an example (sorry, can't post pictures yet).

    A second option is to start with a randomly positioned one-tile seed (I'm working on a grid of tiles), determine the desired size for the continent, and each turn add a tile that is horizontally or vertically adjacent to the existing continent until you've reached the desired size. Repeat for the other continents. This technique is part of the algorithm used in Civilization 4. The problem is that after placing the first few continents, it's possible to pick a starting location that's surrounded by other continents and thus won't fit the new one. Also, it has a tendency to spawn continents too close together, resulting in something that looks more like a river than continents. See http://farm5.static.flickr.com/4069/4462870383_46e86b155c_o.jpg for an example.

    Does anyone happen to know a good algorithm for generating realistic continents on a grid-based map while keeping control over their number and relative sizes?
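    A rough Python sketch of the seed-and-grow approach described above, with one tweak aimed at the two problems mentioned: a seed is only accepted if it keeps a minimum distance from land that has already been placed, and a continent stops growing early if it runs out of free neighbouring tiles. The grid dimensions, continent sizes and spacing threshold are arbitrary placeholders, and the spacing check only constrains the seeds, not the later growth.

    ```python
    import random

    def generate_continents(width=80, height=50, sizes=(300, 250, 200),
                            min_gap=5, seed=None):
        """Grow each continent from a random seed tile by repeatedly adding a
        free tile orthogonally adjacent to land already placed for it."""
        rng = random.Random(seed)
        grid = [[0] * width for _ in range(height)]   # 0 = water, 1..n = continent id

        def far_from_land(x, y):
            for yy in range(max(0, y - min_gap), min(height, y + min_gap + 1)):
                for xx in range(max(0, x - min_gap), min(width, x + min_gap + 1)):
                    if grid[yy][xx]:
                        return False
            return True

        for cid, target in enumerate(sizes, start=1):
            # pick a seed tile that keeps some distance from existing continents
            for _ in range(1000):
                x, y = rng.randrange(width), rng.randrange(height)
                if far_from_land(x, y):
                    break
            else:
                continue  # no room left for this continent
            grid[y][x] = cid
            frontier = [(x, y)]
            placed = 1
            while placed < target and frontier:
                fx, fy = rng.choice(frontier)
                free = [(fx + dx, fy + dy)
                        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= fx + dx < width and 0 <= fy + dy < height
                        and grid[fy + dy][fx + dx] == 0]
                if not free:
                    frontier.remove((fx, fy))  # fully enclosed tile, stop using it
                    continue
                nx, ny = rng.choice(free)
                grid[ny][nx] = cid
                frontier.append((nx, ny))
                placed += 1
        return grid

    if __name__ == "__main__":
        for row in generate_continents():
            print("".join("#" if cell else "." for cell in row))
    ```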

    Read the article

  • C# Process Exited event not firing from within webservice

    - by davidpizon
    I am attempting to wrap a 3rd party command line application within a web service. If I run the following code from within a console application:

    ```csharp
    Process process = new System.Diagnostics.Process();
    process.StartInfo.FileName = "some_executable.exe";

    // Do not spawn a window for this process
    process.StartInfo.CreateNoWindow = true;
    process.StartInfo.ErrorDialog = false;

    // Redirect input, output, and error streams
    process.StartInfo.UseShellExecute = false;
    process.StartInfo.RedirectStandardOutput = true;
    process.StartInfo.RedirectStandardError = true;
    process.StartInfo.RedirectStandardInput = true;

    process.EnableRaisingEvents = true;

    process.ErrorDataReceived += (sendingProcess, eventArgs) =>
    {
        // Make note of the error message
        if (!String.IsNullOrEmpty(eventArgs.Data))
            if (this.WarningMessageEvent != null)
                this.WarningMessageEvent(this, new MessageEventArgs(eventArgs.Data));
    };

    process.OutputDataReceived += (sendingProcess, eventArgs) =>
    {
        // Make note of the message
        if (!String.IsNullOrEmpty(eventArgs.Data))
            if (this.DebugMessageEvent != null)
                this.DebugMessageEvent(this, new MessageEventArgs(eventArgs.Data));
    };

    process.Exited += (object sender, EventArgs e) =>
    {
        // Make note of the exit event
        if (this.DebugMessageEvent != null)
            this.DebugMessageEvent(this, new MessageEventArgs("The command exited"));
    };

    process.Start();
    process.StandardInput.Close();
    process.BeginOutputReadLine();
    process.BeginErrorReadLine();
    process.WaitForExit();

    int exitCode = process.ExitCode;
    process.Close();
    process.Dispose();

    if (this.DebugMessageEvent != null)
        this.DebugMessageEvent(this, new MessageEventArgs("The command exited with code: " + exitCode));
    ```

    all events, including the "process.Exited" event, fire as expected. However, when this code is invoked from within a web service method, all events EXCEPT the "process.Exited" event fire. The execution appears to hang at the line:

    ```csharp
    process.WaitForExit();
    ```

    Would anyone be able to shed some light as to what I might be missing?

    Read the article

  • Working with a large data object between ruby processes

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast.

    The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the file first; however this incurs a 1000 millisecond delay when serializing the file and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds.

    One idea is to spawn a new Ruby process to hold this hash that provides an API to the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it?

    Here is the code I'm using to generate a hash similar to the one I'm working with:

    ```ruby
    @a = []
    0.upto(500) do |r|
      @a[r] = []
      0.upto(10_000) do |c|
        if rand(10) == 0
          @a[r][c] = 1 # 10% chance of being 1
        else
          @a[r][c] = 0
        end
      end
    end

    @c = Marshal.dump(@a) # 1000 milliseconds
    Marshal.load(@c)      # 400 milliseconds
    ```

    Update: Since my original question did not receive many responses, I'm assuming there's no solution as easy as I would have hoped. Presently I'm considering two options:

    1. Create a Sinatra application to store this hash with an API to modify/access it.
    2. Create a C application to do the same as #1, but a lot faster.

    The scope of my problem has increased such that the hash may be larger than my original example, so #2 may be necessary. But I have no idea where to start in terms of writing a C application that exposes an appropriate API. A good walkthrough of how best to implement #1 or #2 may receive best answer credit.

    Read the article

  • Setting LD_LIBRARY_PATH in Apache PassEnv/SetEnv still cant find library

    - by DoMoreASAP
    I am trying to test the Cybersource 3rd party implementation. I was able to get the test files running fine from the command line, which requires that on Linux I export the path to the payment libraries to LD_LIBRARY_PATH. To try to test this on my server I have created the Apache config below:

    ```apache
    <VirtualHost 127.0.0.1:12345>
        AddHandler cgi-script .cgi
        AddHandler fcgid-script .php .fcgi
        FCGIWrapper /my/path/to/php_fcgi/bin/php-cgi .php
        AddType text/html .shtml
        AddOutputFilter INCLUDES .shtml
        DocumentRoot /my/path/to/cybersource/simapi-php-5.0.1/
        ProxyPreserveHost on
        <Directory /my/path/to/cybersource/simapi-php-5.0.1>
            SetEnv LD_LIBRARY_PATH /my/path/to/cybersource/LinkedLibraries/lib/
            AllowOverride all
            Options +Indexes
            IndexOptions Charset=UTF-8
        </Directory>
    </VirtualHost>
    ```

    I have set the environment variable there with the SetEnv command, which seems to be working when I run a page that prints <?php phpinfo(); ?>. However, the test script still won't work when called through the browser; Apache says:

    ```
    tail /my/apache/error_log
    [Tue Mar 30 23:11:46 2010] [notice] mod_fcgid: call /my/path/to/cybersource/index.php with wrapper /my/path/to/cybersource/php_fcgi/bin/php-cgi
    PHP Warning:  PHP Startup: Unable to load dynamic library '/my/path/to/cybersource/extensionsdir/php5_cybersource.so' - libspapache.so: cannot open shared object file: No such file or directory in Unknown on line 0
    ```

    So it can't find the linked file libspapache.so, even though it is in the LD_LIBRARY_PATH that is supposedly defined. I really appreciate the help. Thanks so much.

    Read the article

  • ACL architechture for a Software As a service in Spring 3.0

    - by geoaxis
    I am making a software-as-a-service application using Spring 3.0 (Spring MVC, Spring Security, Spring Roo, Hibernate) and I have to come up with a flexible access control list mechanism. I have three different kinds of users:

    - System (who can do anything to the system; includes admin and internal daemons)
    - Operations (who can add and delete users and organizations, and do maintenance work on behalf of users and organizations)
    - End Users (they belong to one or more organizations; for each organization, the user can have one or more roles, like organization admin or organization read-only member; a role like orgadmin can also add users for that organization)

    Now my question is: how should I model the User entity? If I just take the End User, it can belong to one or more organizations, so each user can contain a set of references to its organizations. But how do we model the user's role for each organization? For example, user UX belongs to organizations og1, og2 and og3; for og1 he is both orgadmin and org-read-only-user, whereas for og2 he is only orgadmin and for og3 he is only org-read-only-user.

    I have the possibility of making each user belong to one organization alone, but that makes the system bounded and I don't like that idea (although it would still satisfy the requirement). If you have a better, extensible ACL architecture, please suggest it. Since it's software as a service, one would expect that a lot of different organizations would be part of the same system.

    I had one concern that it is not a good idea to keep og1 and og2 data in the same DB (if og1 decides to spawn 100 reports on the system, og2 should not suffer), but that is something advanced for now and is not directly related to ACL, rather to the physical distribution of data and the setup of services based on those ACLs.

    This is a community wiki question; please correct anything you wish. Thanks

    Read the article

  • Why is Django/FastCGI/Apache logging HTTP status code 200 for every request, even 404s?

    - by jl6
    Edit: I have now discovered that the status code is returned correctly, it just isn't recorded correctly in Apache's access.log. Title amended. This is still a problem. Any ideas? Original question follows.

    Hi all. I run the following stack: Django(svn) on WSGI on FastCGI on Apache on Dreamhost. Every page served by Django returns HTTP status code 200, even those resulting from statements such as raise Http404.

    There is a .htaccess file which directs most pages to Django, via my dispatch.fcgi file, and other pages elsewhere. The other pages return correct status codes, e.g. trying to access /.htaccess itself results in status code 403. When I run my Django project on a local development server (Apache, not Django's built-in development server), I get correct status codes, so I don't think this is caused by my Django code.

    My current thinking is that the problem lies somewhere in how the FastCGI/WSGI interface is configured, but I'm not sure how to proceed debugging this. Any tips on how I can find out what's causing this?
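    One way to see which status line Django/WSGI is actually emitting before Apache logs anything is to wrap the WSGI application in a tiny middleware that writes the status to the error stream (which, under FastCGI, normally ends up in Apache's error log). This is only a debugging sketch, assuming the usual dispatch.fcgi layout where a WSGI application callable is handed to the FastCGI runner; the names below are placeholders.

    ```python
    import sys

    class StatusLogger(object):
        """WSGI middleware that records the status line of every response."""
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            def logging_start_response(status, headers, exc_info=None):
                # wsgi.errors normally ends up in the server's error log.
                stream = environ.get("wsgi.errors", sys.stderr)
                stream.write("WSGI status for %s: %s\n"
                             % (environ.get("PATH_INFO", "?"), status))
                return start_response(status, headers, exc_info)
            return self.app(environ, logging_start_response)

    # In dispatch.fcgi (hypothetical layout), wrap the app before serving it:
    #   application = StatusLogger(application)
    ```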

    Read the article

  • Java synchronized seems ignored

    - by viraptor
    Hi, I've got the following code, which I expected to deadlock after printing out "Main: pre-sync". But it looks like synchronized doesn't do what I expect it to. What happens here?

    ```java
    import java.util.*;

    public class deadtest {

        public static class waiter implements Runnable {
            Object obj;

            public waiter(Object obj) {
                this.obj = obj;
            }

            public void run() {
                System.err.println("Thead: pre-sync");
                synchronized (obj) {
                    System.err.println("Thead: pre-wait");
                    try {
                        obj.wait();
                    } catch (Exception e) { }
                    System.err.println("Thead: post-wait");
                }
                System.err.println("Thead: post-sync");
            }
        }

        public static void main(String args[]) {
            Object obj = new Object();
            System.err.println("Main: pre-spawn");
            Thread waiterThread = new Thread(new waiter(obj));
            waiterThread.start();
            try {
                Thread.sleep(1000);
            } catch (Exception e) { }
            System.err.println("Main: pre-sync");
            synchronized (obj) {
                System.err.println("Main: pre-notify");
                obj.notify();
                System.err.println("Main: post-notify");
            }
            System.err.println("Main: post-sync");
            try {
                waiterThread.join();
            } catch (Exception e) { }
        }
    }
    ```

    Since both threads synchronize on the created object, I expected the threads to actually block each other. Currently, the code happily notifies the other thread, joins and exits.

    Read the article

  • Recursively adding threads to a Java thread pool

    - by Leith
    I am working on a tutorial for my Java concurrency course. The objective is to use thread pools to compute prime numbers in parallel. The design is based on the Sieve of Eratosthenes. It has an array of n bools, where n is the largest integer you are checking, and each element in the array represents one integer. True is prime, false is non-prime, and the array is initially all true.

    A thread pool is used with a fixed number of threads (we are supposed to experiment with the number of threads in the pool and observe the performance). A thread is given an integer multiple to process. The thread then finds the first true element in the array that is not a multiple of the thread's integer. The thread then creates a new thread on the thread pool which is given the found number. After the new thread is formed, the existing thread continues to set all multiples of its integer in the array to false. The main program thread starts the first thread with the integer '2', and then waits for all spawned threads to finish. It then spits out the prime numbers and the time taken to compute.

    The issue I have is that the more threads there are in the thread pool, the slower it runs, with 1 thread being the fastest. It should be getting faster, not slower! All the material on the internet about Java thread pools creates n worker threads and the main thread then waits for all of them to finish. The method I use is recursive, as a worker can spawn more worker threads. I would like to know what is going wrong, and whether Java thread pools can be used recursively.

    Read the article

  • How to make Processes Run Parallel in Erlang?

    - by Ankit S
    Hello,

    ```erlang
    startTrains() ->
        TotalDist = 100,
        Trains = [trainA, trainB],
        PID = spawn(fun() -> train(1, length(Trains)) end),
        [PID ! {self(), TrainData, TotalDist} || TrainData <- Trains],
        receive
            {_From, Mesg} ->
                error_logger:info_msg("~n Mesg ~p ~n", [Mesg])
        after 10500 ->
            refresh
        end.
    ```

    So I created two processes named trainA and trainB. I want to increment each of them by 5 until it reaches 100. I made separate processes so that each train (process) increments its position in parallel, but I was surprised to get the output sequentially, i.e. process trainA ends and then process trainB starts. I want them to increment simultaneously. I want to run the processes like this:

    ```
    trainA 10
    trainB 0
    trainA 15
    trainB 5
    ....
    trainA 100
    trainB 100
    ```

    but I am getting:

    ```
    trainA 0
    ....
    trainA 90
    trainA 95
    trainA 100
    trainA ends
    trainB 0
    trainB 5
    trainB 10
    .....
    trainB 100
    ```

    How do I make the processes run in parallel/simultaneously? Hope you get my question. Please help me.

    Read the article

  • Sharing large objects between ruby processes without a performance hit

    - by Gdeglin
    I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes. Iterating through and altering this hash is very fast in Ruby (fractions of a millisecond). Even copying it is extremely fast.

    The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the file first; however this incurs a 1000 millisecond delay when serializing the file and a 400 millisecond delay when deserializing it. Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds.

    One idea is to spawn a new Ruby process to hold this hash that provides an API to the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly. Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it?

    Here is the code I'm using to generate a hash similar to the one I'm working with:

    ```ruby
    @a = []
    0.upto(500) do |r|
      @a[r] = []
      0.upto(10_000) do |c|
        if rand(10) == 0
          @a[r][c] = 1 # 10% chance of being 1
        else
          @a[r][c] = 0
        end
      end
    end

    @c = Marshal.dump(@a) # 1000 milliseconds
    Marshal.load(@c)      # 400 milliseconds
    ```

    Read the article

  • Application doesn't exit with 0 threads

    - by Bryce Wagner
    We have a WinForms desktop application which is heavily multithreaded: three threads run with Application.Run, plus a bunch of other background worker threads. Getting all the threads to shut down properly was kind of tricky, but I thought I finally got it right. But when we actually deployed the application, users started experiencing the application not exiting. There's a System.Threading.Mutex to prevent them from running the app multiple times, so they have to go into Task Manager and kill the old one before they can run it again.

    Every thread gets a Thread.Join before the main thread exits, and I added logging to each thread I spawn. According to the log, every single thread that starts also exits, and the main thread also exits. Even stranger, running Sysinternals Process Explorer shows all the threads disappear when the application exits. As in, there are 0 threads (managed or unmanaged), but the process is still running.

    I can't reproduce this on any developer's computer or in our test environment, and so far I've only seen it happen on Windows XP (not Vista or Windows 7 or any Windows Server). How can a process keep running with 0 threads?

    Read the article

  • How to stop tcpdump remotely using expect from a new telnet session

    - by The CodeWriter
    I am trying to stop the tcpdump command from running on a remote terminal. If I telnet to the terminal, start tcpdump, and then send a ^c, tcpdump stops with no issues. However if I telnet to the same terminal, start tcpdump, and then exit the telnet session, when I reconnect to the same telnet session I am unable to stop tcpdump via a ^c. When I do this, instead of stopping tcpdump it seems that it just quits the telnet session and tcpdump continues to run on the remote terminal. I provided my script below. Any help is greatly appreciated.

    ```tcl
    #!/usr/local/bin/expect -f
    exp_internal 1
    set timeout 30
    spawn /bin/bash
    expect "] "
    send "telnet 192.168.62.133 10006\r"
    expect "Escape character is '^]'."
    send "\r"
    expect "# "
    set now [clock format [clock seconds] -format {%d_%b_%Y_%H%M%S}]
    set command "tcpdump -vv -i trf400 ip proto 89 -s 65535 -w /tmp/test_term420_${now}.pcp "
    send "$command\r"
    expect "tcpdump: listening on"

    # This works correctly. tcpdump quits and I am returned to the expected prompt
    send "\x03"
    expect "# "

    send "$command\r"
    expect "tcpdump: listening on"

    # Exit telnet session
    send -- "\x1d"
    expect "telnet> "
    send -- "q\r"
    expect "] "

    # Reconnect to telnet session
    send "telnet 192.168.62.133 10006\r"
    expect "Escape character is '^]'."
    send "\r"

    # This does not work as intended. The ^c quits the telnet session instead of stopping tcpdump
    send "\x03"
    expect "] "
    send "ls\r"
    expect "] "
    ```
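    For what it's worth, the behaviour described above is consistent with the reconnected session being a brand-new shell on the device: the old tcpdump is no longer that shell's foreground job, so the ^c has nothing to interrupt. One workaround is to stop the capture by PID or name instead of sending an interrupt. Below is a rough sketch of that reconnect-and-stop step using Python's pexpect rather than Expect, purely as an illustration; the prompt string and the availability of kill/pidof on the remote device are assumptions.

    ```python
    import pexpect

    HOST, PORT = "192.168.62.133", "10006"   # taken from the script above
    PROMPT = "# "                            # assumed remote shell prompt

    # Reconnect and stop the previously started capture by name rather than ^C,
    # since the old tcpdump is not the foreground job of this new session.
    session = pexpect.spawn("telnet %s %s" % (HOST, PORT), timeout=30)
    session.expect(r"Escape character is '\^\]'\.")
    session.sendline("")
    session.expect(PROMPT)
    # 'kill `pidof tcpdump`' assumes the remote shell provides pidof; use
    # whatever the device actually supports (e.g. killall, or ps + kill).
    session.sendline("kill `pidof tcpdump`")
    session.expect(PROMPT)
    session.sendline("exit")
    session.close()
    ```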

    Read the article

  • How to terminate all [grand]child processes using C# on WXP (and newer MSWindows)

    - by NVRAM
    Question: how can I determine all processes in the child's process tree so I can kill them?

    I have an application, written in C#, that will:

    1. Get a set of data from the server,
    2. Spawn a 3rd party utility to process the data, then
    3. Return the results to the server.

    This is working fine, but since a run consumes a lot of CPU and may take as long as an hour, I want to add the ability to have my app terminate its child processes. Some issues that rule out the simple solutions I've found elsewhere:

    - My app's child process "A" (an InstallAnywhere EXE, I think) spawns the real processing app "B" (a java.exe), which in turn spawns more children "C1".."Cn" (most of which are also written in Java).
    - There will likely be multiple copies of my application (and hence multiple sets of its children) running on the same machine.
    - The child process is not in my control, so there might be some "D" processes in the future.
    - My application must run on 32-bit and 64-bit versions of MSWindows.

    On the plus side, there is no issue of data loss; a "clean" shutdown doesn't matter as long as the processes end fairly quickly.

    Read the article

  • Command not write in buffer with Expect

    - by Romuald
    Hello, I am trying to back up a Linkproof device with an Expect script and I am having some trouble. It's my first script in Expect and I have reached my limits ;)

    ```tcl
    #!/usr/bin/expect
    spawn ssh @IPADDRESS
    expect "username:"
    # Send the username, and then wait for a password prompt.
    send "@username\r"
    expect "password:"
    # Send the password, and then wait for a shell prompt.
    send "@password\r"
    expect "#"
    # Send the prebuilt command, and then wait for another shell prompt.
    send "system config immediate\r"
    # Send space to pass the pause
    expect -re "^ *--More--\[^\n\r]*"
    send ""
    expect -re "^ *--More--\[^\n\r]*"
    send ""
    expect -re "^ *--More--\[^\n\r]*"
    send ""
    # Capture the results of the command into a variable. This can be displayed, or written to disk.
    sleep 10
    expect -re .*
    set results $expect_out(buffer)
    # Copy buffer in a file
    set config [open linkproof.txt w]
    puts $config $results
    close $config
    # Exit the session.
    expect "#"
    send "logout\r"
    expect eof
    ```

    The content of the output file:

    ```
    The authenticity of host '@IP (XXX.XXX.XXX.XXX)' can't be established.
    RSA key fingerprint is XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.
    Are you sure you want to continue connecting (yes/no)? @username
    Please type 'yes' or 'no': @password
    Please type 'yes' or 'no': system config immediate
    Please type 'yes' or 'no':
    ```

    As you can see, the result of the command is not in the file. Could you please help me understand why? Thanks for your help. Romuald

    Read the article

  • Cocoa/Objective-C - Child window with text input without main window becoming inactive

    - by Josh
    Hello All, I have a need to spawn a window that will hover just above my main window in a Cocoa application. I want this window to allow the user to enter some text in an input box. All is well until the text input box actually gains focus: the main window becomes "deactivated." This window is borderless and is a slightly custom shape -- it's more like a hover card than anything else, I suppose.

    Basically, I'd like this thing to work almost exactly like Spotlight (Apple + Space) -- you can enter text, but this is such an ancillary operation that, in the context of the greater UX, you don't want the jarring effect of the main window graying out (becoming inactive). You'll notice that when you have some application open and in focus, Spotlight will not cause the window of that application to become inactive. This problem arises because text input seems to REQUIRE that the child window become the key window (it will not let you place the cursor in the text input field otherwise). When it becomes key, the main window becomes inactive.

    So far I've tried:

    - Subclassing NSWindow for my main application and overriding isKeyWindow such that it only loses key when the application is no longer the user's focus (as opposed to the window). This had the unintended effect of colliding with the key status of the child window and having very strange effects on the keyboard input (some keys are not captured, like delete).
    - Creating a view instead of a window. Doesn't work because of this problem -- you cannot draw over a WebKit WebView these days.

    Any Cocoa/OS X wizards have ideas? I've become a little obsessed with this one. An itch I can't scratch.

    Read the article

  • What is the prefered or accepted method for testing proxy settings?

    - by Mike Webb
    I have a lot of trouble with the internet connectivity in the program I am working on, and it all seems to spawn from some issue with the proxy settings. Most of the issues at this point are fixed, but the issue I am having now is that my method of testing the proxy settings makes some users wait for long periods of time. Here is what I do:

    ```csharp
    System.Net.WebClient webClnt = new System.Net.WebClient();
    webClnt.Proxy = proxy;
    webClnt.Credentials = proxy.Credentials;
    byte[] tempBytes;
    try
    {
        tempBytes = webClnt.DownloadData(url.Address);
    }
    catch
    {
        // Invalid proxy settings
        // Code to handle the exception goes here
    }
    ```

    This is the only way that I've found to test if the proxy settings are correct. I tried making a call to our web service, but no proxy settings are needed when making that call; it will work even if I have bogus proxy settings. The above method, though, has no timeout member that I can find to set, and I use DownloadData as opposed to DownloadDataAsync because I need to wait until the method is done so that I know whether the settings are correct before continuing on in the program. Any suggestions for a better method or a workaround for this method are appreciated. Mike

    Read the article

  • mod_cgi , mod_fastcgi, mod_scgi , mod_wsgi, mod_python, FLUP. I don't know how many more. what is mo

    - by claws
    I recently learnt Python. I liked it, and I just wanted to use it for web development. This thought caused all the troubles -- but I like these troubles :) Coming from the PHP world, where there is only one standardized way, I expected the same and searched for Python & Apache.

    http://stackoverflow.com/questions/449055/setting-up-python-on-windows-apache says:

        Stay away from mod_python. One common misleading idea is that mod_python is like mod_php, but for python. That is not true.

    So what is the equivalent of mod_php in Python? I need a little clarification on this one, from http://stackoverflow.com/questions/219110/how-python-web-frameworks-wsgi-and-cgi-fit-together:

        CGI, FastCGI and SCGI are language agnostic. You can write CGI scripts in Perl, Python, C, bash, or even Assembly :).

    So, I guess mod_cgi, mod_fastcgi and mod_scgi are their corresponding Apache modules. Right? WSGI is, in short, an optimized/improved, efficient version designed specifically for Python, and mod_wsgi is the way to use it. Right? This leaves out mod_python -- what is it then?

        Apache - mod_fastcgi - FLUP (via CGI protocol) - Django (via WSGI protocol)

        Flup is another way to run with wsgi for any webserver that can speak FCGI, SCGI or AJP

    What is FLUP? What is AJP? How did Django come into the picture? These questions raise questions about PHP. How is it actually running? What technology is it using? mod_php & mod_python -- what are the differences? In the future, if I want to use Perl or Java, will I have to get confused all over again? Kindly can someone explain things clearly and give a complete picture?
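    Since much of the confusion above is about what WSGI actually is: it is just a Python-level calling convention between a server (or an adapter such as mod_wsgi, or flup speaking FastCGI/SCGI/AJP to Apache) and the application. A minimal sketch of a WSGI application, runnable on its own with the standard library's reference server; the host and port are arbitrary:

    ```python
    from wsgiref.simple_server import make_server

    def application(environ, start_response):
        # environ is a dict of CGI-style request variables; start_response
        # takes the status line and the response headers.
        body = b"Hello from a WSGI app\n"
        start_response("200 OK", [("Content-Type", "text/plain"),
                                  ("Content-Length", str(len(body)))])
        return [body]

    if __name__ == "__main__":
        # For development only; under Apache the same 'application' callable
        # would be served by mod_wsgi, or by flup over FastCGI, instead.
        make_server("127.0.0.1", 8000, application).serve_forever()
    ```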

    Read the article

  • EAGLContext, EAGLSharegroups, RenderBuffers, FrameBuffers, oh my!

    - by quixoto
    Hi all, I'm trying to wrap my head around the OpenGL object model on iPhone OS. I'm currently rendering into a few different UIViews (built on CAEAGLLayers) on the screen. I currently have each of these using a separate EAGLContext, each of which has a color renderbuffer and a framebuffer. I'm rendering similar things in them, and I'd like to share textures between these instances to save memory overhead.

    My current understanding is that I could use the same setup (some number of contexts, each with an FBO/RBO), but if I spawn the later ones using the EAGLSharegroup of the first one, then I can simply use the texture names (GLuints) from the first one in the later ones. Is this accurate?

    If this is the case, I guess the followup question is: what's the benefit of having it be a "sharegroup"? Could I just reuse the same context, and attach multiple FBOs/RBOs to that context? I think I'm struggling with the abstraction layer of a sharegroup, which seems to share "objects" (textures and other named things) but not "state" (matrices, enabled/disabled states), which are owned by the context. What's the best way to think of this? Thanks for any enlightenment!

    Read the article

  • How would I go about sharing variables in a class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua. I've been working on implementing Lua scripting for logic in a game engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua.

    The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the game engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues.

    Anyway, currently the game engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with.

    So far, I'm keeping two separate tables of objects in place: Lua spawns a new game object, which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good; C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring.

    What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using libBind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua object? What's the "current" best way to do this, as of Lua 5.1.4?

    Read the article

  • C# Execute Method (with Parameters) with ThreadPool

    - by washtik
    We have the following piece of code (idea for this code was found on this website) which will spawn new threads for the method "Do_SomeWork()". This enables us to run the method multiple times asynchronously. The code is:

    ```csharp
    var numThreads = 20;
    var toProcess = numThreads;
    var resetEvent = new ManualResetEvent(false);

    for (var i = 0; i < numThreads; i++)
    {
        new Thread(delegate()
        {
            Do_SomeWork(Parameter1, Parameter2, Parameter3);
            if (Interlocked.Decrement(ref toProcess) == 0)
                resetEvent.Set();
        }).Start();
    }

    resetEvent.WaitOne();
    ```

    However we would like to make use of ThreadPool rather than create our own new threads, which can be detrimental to performance. The question is how can we modify the above code to make use of ThreadPool, keeping in mind that the method "Do_SomeWork" takes multiple parameters and also has a return type (i.e. the method is not void). Also, this is C# 2.0.

    Read the article

  • HTTP crawler in Erlang

    - by ctp
    I'm coding a simple HTTP crawler but I have an issue running the code at the bottom. I'm requesting 50 URLs and get the content of 20+ back. I've generated a few files, 150 kB in size each, to test the crawler. So I think the 20+ responses are limited by the bandwidth? BUT: how do I tell the Erlang snippet not to quit until the last file has been fetched? The test data server is online, so please try the code out -- any hints are welcome :)

    ```erlang
    -module(crawler).

    -define(BASE_URL, "http://46.4.117.69/").

    -export([start/0, send_reqs/0, do_send_req/1]).

    start() ->
        ibrowse:start(),
        proc_lib:spawn(?MODULE, send_reqs, []).

    to_url(Id) ->
        ?BASE_URL ++ integer_to_list(Id).

    fetch_ids() ->
        lists:seq(1, 50).

    send_reqs() ->
        spawn_workers(fetch_ids()).

    spawn_workers(Ids) ->
        lists:foreach(fun do_spawn/1, Ids).

    do_spawn(Id) ->
        proc_lib:spawn_link(?MODULE, do_send_req, [Id]).

    do_send_req(Id) ->
        io:format("Requesting ID ~p ... ~n", [Id]),
        Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
        case Result of
            {ok, Status, _H, B} ->
                io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]);
            Err ->
                io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err])
        end.
    ```

    That's the output:

    ```
    Requesting ID 1 ...
    Requesting ID 2 ...
    Requesting ID 3 ...
    Requesting ID 4 ...
    Requesting ID 5 ...
    Requesting ID 6 ...
    Requesting ID 7 ...
    Requesting ID 8 ...
    Requesting ID 9 ...
    Requesting ID 10 ...
    Requesting ID 11 ...
    Requesting ID 12 ...
    Requesting ID 13 ...
    Requesting ID 14 ...
    Requesting ID 15 ...
    Requesting ID 16 ...
    Requesting ID 17 ...
    Requesting ID 18 ...
    Requesting ID 19 ...
    Requesting ID 20 ...
    Requesting ID 21 ...
    Requesting ID 22 ...
    Requesting ID 23 ...
    Requesting ID 24 ...
    Requesting ID 25 ...
    Requesting ID 26 ...
    Requesting ID 27 ...
    Requesting ID 28 ...
    Requesting ID 29 ...
    Requesting ID 30 ...
    Requesting ID 31 ...
    Requesting ID 32 ...
    Requesting ID 33 ...
    Requesting ID 34 ...
    Requesting ID 35 ...
    Requesting ID 36 ...
    Requesting ID 37 ...
    Requesting ID 38 ...
    Requesting ID 39 ...
    Requesting ID 40 ...
    Requesting ID 41 ...
    Requesting ID 42 ...
    Requesting ID 43 ...
    Requesting ID 44 ...
    Requesting ID 45 ...
    Requesting ID 46 ...
    Requesting ID 47 ...
    Requesting ID 48 ...
    Requesting ID 49 ...
    Requesting ID 50 ...
    OK -- ID: 49 -- Status: "200" -- Content length: 150000
    OK -- ID: 47 -- Status: "200" -- Content length: 150000
    OK -- ID: 50 -- Status: "200" -- Content length: 150000
    OK -- ID: 17 -- Status: "200" -- Content length: 150000
    OK -- ID: 48 -- Status: "200" -- Content length: 150000
    OK -- ID: 45 -- Status: "200" -- Content length: 150000
    OK -- ID: 46 -- Status: "200" -- Content length: 150000
    OK -- ID: 10 -- Status: "200" -- Content length: 150000
    OK -- ID: 09 -- Status: "200" -- Content length: 150000
    OK -- ID: 19 -- Status: "200" -- Content length: 150000
    OK -- ID: 13 -- Status: "200" -- Content length: 150000
    OK -- ID: 21 -- Status: "200" -- Content length: 150000
    OK -- ID: 16 -- Status: "200" -- Content length: 150000
    OK -- ID: 27 -- Status: "200" -- Content length: 150000
    OK -- ID: 03 -- Status: "200" -- Content length: 150000
    OK -- ID: 23 -- Status: "200" -- Content length: 150000
    OK -- ID: 29 -- Status: "200" -- Content length: 150000
    OK -- ID: 14 -- Status: "200" -- Content length: 150000
    OK -- ID: 18 -- Status: "200" -- Content length: 150000
    OK -- ID: 01 -- Status: "200" -- Content length: 150000
    OK -- ID: 30 -- Status: "200" -- Content length: 150000
    OK -- ID: 40 -- Status: "200" -- Content length: 150000
    OK -- ID: 05 -- Status: "200" -- Content length: 150000
    ```

    Update: thanks stemm for the hint with wait_workers. I've combined your code and mine, but get the same behaviour :(

    ```erlang
    -module(crawler).

    -define(BASE_URL, "http://46.4.117.69/").

    -export([start/0, send_reqs/0, do_send_req/2]).

    start() ->
        ibrowse:start(),
        proc_lib:spawn(?MODULE, send_reqs, []).

    to_url(Id) ->
        ?BASE_URL ++ integer_to_list(Id).

    fetch_ids() ->
        lists:seq(1, 50).

    send_reqs() ->
        spawn_workers(fetch_ids()).

    spawn_workers(Ids) ->
        %% collect reference to each worker
        Refs = [ do_spawn(Id) || Id <- Ids ],
        %% wait for response from each worker
        wait_workers(Refs).

    wait_workers(Refs) ->
        lists:foreach(fun receive_by_ref/1, Refs).

    receive_by_ref(Ref) ->
        %% receive message only from worker with specific reference
        receive
            {Ref, done} ->
                done
        end.

    do_spawn(Id) ->
        Ref = make_ref(),
        proc_lib:spawn_link(?MODULE, do_send_req, [Id, {self(), Ref}]),
        Ref.

    do_send_req(Id, {Pid, Ref}) ->
        io:format("Requesting ID ~p ... ~n", [Id]),
        Result = (catch ibrowse:send_req(to_url(Id), [], get, [], [], 10000)),
        case Result of
            {ok, Status, _H, B} ->
                io:format("OK -- ID: ~2..0w -- Status: ~p -- Content length: ~p~n", [Id, Status, length(B)]),
                %% send message that work is done
                Pid ! {Ref, done};
            Err ->
                io:format("ERROR -- ID: ~p -- Error: ~p~n", [Id, Err]),
                %% repeat request if there was error while fetching a page,
                do_send_req(Id, {Pid, Ref})
                %% or - if you don't want to repeat request, put there:
                %% Pid ! {Ref, done}
        end.
    ```

    Running the crawler works fine for a handful of files, but then the code doesn't even fetch the entire files (file size 150000 bytes each) -- the crawler fetches some files only partially, see the following web server log :(

    ```
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /10 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /1 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /3 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /8 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /39 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /7 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /6 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /2 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /5 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /50 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /9 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /44 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /38 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /47 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /49 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /43 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /37 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /46 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /48 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:00 +0200] "GET /36 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /42 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /41 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /45 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /17 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /35 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /16 HTTP/1.1" 200 150000 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /15 HTTP/1.1" 200 17020 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /21 HTTP/1.1" 200 120360 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /40 HTTP/1.1" 200 117600 "-" "-"
    82.114.62.14 - - [13/Sep/2012:15:17:01 +0200] "GET /34 HTTP/1.1" 200 60660 "-" "-"
    ```

    Any hints are welcome. I have no clue what's going wrong there :(

    Read the article

  • "Scheduling restart of crashed service", but no call to onStart() follows

    - by kostmo
    In the 1.6 API, is there a way to ensure that the onStart() method of a Service is called after the service is killed due to memory pressure? From the logs, it seems that the "process" that the service belongs to is restarted, but the service itself is not. I have placed a Log.d() call in the onStart() method, and this is not reached.

    To test my service under memory pressure, I spawn it from an activity, then launch the web browser and visit some Javascript-heavy websites like Slashdot until my service is killed. The logcat reads:

    ```
    03-07 16:44:13.778: INFO/ActivityManager(52): Process com.kostmo.charbuilder.full (pid 2909) has died.
    03-07 16:44:13.778: WARN/ActivityManager(52): Scheduling restart of crashed service com.kostmo.charbuilder.full/com.kostmo.charbuilder.DownloadImagesService in 5000ms
    03-07 16:44:13.778: INFO/ActivityManager(52): Low Memory: No more background processes.
    03-07 16:44:13.778: ERROR/ActivityThread(52): Failed to find provider info for android.server.checkin
    03-07 16:44:13.778: WARN/Checkin(52): Can't log event SYSTEM_SERVICE_LOOPING: java.lang.IllegalArgumentException: Unknown URL content://android.server.checkin/events
    03-07 16:44:18.908: INFO/ActivityManager(52): Start proc com.kostmo.charbuilder.full for service com.kostmo.charbuilder.full/com.kostmo.charbuilder.DownloadImagesService: pid=3560 uid=10027 gids={3003, 1015}
    03-07 16:44:19.868: DEBUG/ddm-heap(3560): Got feature list request
    03-07 16:44:20.128: INFO/ActivityThread(3560): Publishing provider com.kostmo.charbuilder.full.provider.character: com.kostmo.charbuilder.provider.ImageFileContentProvider
    ```

    Read the article

  • Comet, responseText and memory usage

    - by ithcy
    Is there a way to clear out the responseText of an XHR object without destroying the XHR object? I need to keep a persistent connection open to a web server to feed live data to a browser. The problem is, there is a relatively large amount of data coming through (several hundred K per second constantly), so memory usage is a big problem, because this connection must remain open for at least several minutes. responseText gets very big very quickly, even though the JSON I send back has been crunched as small as it can get.

    Due to the way the server-side app works, if I use AJAX-style short polling and just destroy the XHR object when I'm done with it, I miss significant amounts of important data even in the few milliseconds it takes to parse the response, create a new XHR and send it out. I do not have the option to use overlapping requests, as the web server only accepts one connection at a time. (Don't ask.) So Comet is exactly the model I need.

    What I would like to do is parse each JSON chunk as it comes back from the server, and then clear out responseText so that I can keep using the same connection. However, responseText is read-only. It cannot be directly emptied by any method I have found. Is there a part of the picture I am missing here? Does anyone know any tricks I can use to free up responseText when I'm done reading it? Or is there another place the server responses can go? I am not including code because this is really almost a code-agnostic question. The Javascript routines that spawn the XHRs and handle the returned data are very, very simple.

    Read the article
