Search Results

Search found 613 results on 25 pages for 'spawn fcgi'.

Page 21/25 | < Previous Page | 17 18 19 20 21 22 23 24 25  | Next Page >

  • How to know if all the Thread Pool's threads are already done with their tasks?

    - by mcxiand
    I have an application that recurses through all folders in a given directory and looks for PDFs. If a PDF file is found, the application counts its pages using iTextSharp. I did this by using a thread to recursively scan all the folders for PDFs; each PDF that is found is queued to the thread pool. The code looks like this:

        // Spawn a thread to handle the processing of PDFs in each folder.
        var th = new Thread(() =>
        {
            pdfDirectories = Directory.GetDirectories(pdfPath);
            processDir(pdfDirectories);
        });
        th.Start();

        private void processDir(string[] dirs)
        {
            foreach (var dir in dirs)
            {
                pdfFiles = Directory.GetFiles(dir, "*.pdf");
                processFiles(pdfFiles);
                string[] newdir = Directory.GetDirectories(dir);
                processDir(newdir);
            }
        }

        private void processFiles(string[] files)
        {
            foreach (var pdf in files)
            {
                ThreadPoolHelper.QueueUserWorkItem(
                    new { path = pdf },
                    (data) => { processPDF(data.path); }
                );
            }
        }

    My problem is: how do I know that the thread pool's threads have finished processing all the queued items, so I can tell the user that the application is done with its intended task?
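
    One common way to signal completion (a hedged sketch, not from the original post, assuming .NET 4 for CountdownEvent; ProcessPdf stands in for the iTextSharp page counting) is to hand out one "ticket" per queued file and wait for all of them to come back:

        using System;
        using System.IO;
        using System.Threading;

        class PdfCounter
        {
            // Start at 1 so the count cannot reach zero while queueing is still in progress.
            static readonly CountdownEvent pending = new CountdownEvent(1);

            static void QueueAll(string root)
            {
                foreach (var pdf in Directory.EnumerateFiles(root, "*.pdf", SearchOption.AllDirectories))
                {
                    var path = pdf;                        // local copy for the closure
                    pending.AddCount();                    // one ticket per queued file
                    ThreadPool.QueueUserWorkItem(_ =>
                    {
                        try { ProcessPdf(path); }
                        finally { pending.Signal(); }      // return the ticket
                    });
                }
                pending.Signal();                          // release the initial count
                pending.Wait();                            // blocks until every item has signaled
                Console.WriteLine("All PDFs processed.");
            }

            static void ProcessPdf(string path) { /* count pages here */ }
        }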

    Read the article

  • Load Spikes on an Apache MySQL Server with WordPress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9 on a dedicated machine with plenty of processing power and memory. My primary web application is a WordPress MU (2.9.2) installation. I have investigated and ruled out a DoS attack and MySQL or Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go up to 100) just seems to come and go. It helps that I have a script that checks for the load every 3 minutes and restarts Apache; restarting helps, and the server comes back, until it happens again. There seems to be no set time frame, or visitor count on the site, that triggers it; even a low number of concurrent visitors (20) can set it off. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad: Apache is trying to serve something that causes it to spawn more and more processes till it keels over. My question is: given that I am convinced this is a rewrite issue or something similar, how can I try to figure out what the issue is? What should I monitor? The Apache logs are voluminous and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate it and look for something else. Thanks! Vikram

    Read the article

  • How can I solve this NHibernate querying problem in an n-tier architecture?

    - by Tyler Wright
    I've hit a wall trying to decouple NHibernate from my services layer. My architecture looks like this: web - services - repositories - nhibernate - db. I want to be able to spawn NHibernate queries from my services layer, and possibly my web layer, without those layers knowing which ORM they are dealing with. Currently, every repository has a Find method that takes an IList<object[]> of criteria, which lets me pass in criteria such as new object[] { "Username", usernameVariable } from anywhere in the architecture. NHibernate takes this in, creates a new Criteria object, and adds the passed-in criteria. This works fine for basic searches from my service layer, but I would like to be able to pass in a query object that my repository translates into an NHibernate Criteria. Really, I would love to implement something like what is described in this question: Is there value in abstracting nhibernate criterion. I'm just not finding any good resources on how to implement something like this. Is the method described in that question a good approach? If so, could anyone provide some pointers on how to implement such a solution?
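
    For what it's worth, a minimal sketch of the query-object idea (QueryFilter and NHibernateRepository are illustrative names, not from the post): the service layer builds a plain filter object, and only the repository translates it into an NHibernate ICriteria.

        using System.Collections.Generic;
        using NHibernate;
        using NHibernate.Criterion;

        // Plain query object: no ORM types, safe to hand around from services or web.
        public class QueryFilter
        {
            public string Property { get; set; }
            public object Value { get; set; }
        }

        public class NHibernateRepository<T> where T : class
        {
            private readonly ISession session;
            public NHibernateRepository(ISession session) { this.session = session; }

            // Only this layer knows that the filters become an NHibernate ICriteria.
            public IList<T> Find(IEnumerable<QueryFilter> filters)
            {
                ICriteria criteria = session.CreateCriteria(typeof(T));
                foreach (var f in filters)
                    criteria.Add(Restrictions.Eq(f.Property, f.Value));
                return criteria.List<T>();
            }
        }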

    Read the article

  • Replace low-level web-service reference call transport with a custom one

    - by hoodoos
    I'm not sure the title sounds right, actually, so I will give more explanation here, beginning from the very beginning :) I'm using C# and .NET for my development. I have an application that makes requests to a SOAP web service; for each user request it produces 3 to 10 requests to the web service, which should all run asynchronously and finish together, so I use the Async method of the generated web-service reference and then wait for the results in a callback. But it seems to start a thread (or take one from the pool) for every async call I make, so if I have 10 clients I end up spawning 30 to 100 threads, which sounds terrible even for my 16-core server :) So I wanted to replace the low-level transport implementation with my own, which uses non-blocking sockets and can handle at least 50 sockets in parallel on one thread with little overhead. But I don't actually know where best to put my override. I analyzed the System.Web.Services.Protocols.SoapHttpClientProtocol class and see that it has a GetWebRequest method which I could actually use. If only I could somehow intercept the object it creates, get an HTTP request with all headers and body from it, and then send it with my own sockets... Any ideas what approach to use? Or maybe there's something built into the framework I can use?
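
    For reference, GetWebRequest is overridable on a class derived from SoapHttpClientProtocol (the generated proxy is a partial class, so it can also be subclassed). A sketch of the hook point, with MyWebRequest as a hypothetical custom transport:

        using System;
        using System.Net;
        using System.Web.Services.Protocols;

        public class MySoapProxy : SoapHttpClientProtocol
        {
            protected override WebRequest GetWebRequest(Uri uri)
            {
                // Default behaviour: let the base class build the HttpWebRequest.
                WebRequest request = base.GetWebRequest(uri);

                // This is the interception point: a custom WebRequest subclass
                // could be returned here instead, routing the serialized SOAP
                // envelope through non-blocking sockets.
                // return new MyWebRequest(uri);   // MyWebRequest is hypothetical
                return request;
            }
        }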

    Read the article

  • ACL architecture for a Software as a Service in Spring 3.0

    - by geoaxis
    I am making a software as a service using Spring 3.0 (Spring MVC, Spring Security, Spring Roo, Hibernate), and I have to come up with a flexible access control list mechanism. I have three different kinds of users:

        System (can do anything to the system; includes admins and internal daemons)
        Operations (can add and delete users and organizations, and do maintenance work on behalf of users and organizations)
        End Users (belong to one or more organizations; for each organization, a user can have one or more roles, such as organization admin or organization read-only member; a role like orgadmin can also add users for that organization)

    Now my question is: how should I model the User entity? If I just take the End User, it can belong to one or more organizations, so each user can contain a set of references to its organizations. But how do we model the user's role for each organization? For example, user UX belongs to organizations og1, og2 and og3; for og1 he is both orgadmin and org-read-only-user, whereas for og2 he is only orgadmin and for og3 he is only org-read-only-user. I have the possibility of making each user belong to exactly one organization, but that makes the system bounded and I don't like that idea (although it would still satisfy the requirement). If you have a better extensible ACL architecture, please suggest it. Since it's a software as a service, one would expect a lot of different organizations to be part of the same system. I had one concern that it is not a good idea to keep og1 and og2 data in the same DB (if og1 decides to spawn 100 reports on the system, og2 should not suffer), but that is somewhat advanced for now and is not directly related to ACLs but to the physical distribution of data and the setup of services based on those ACLs. This is a community wiki question; please correct anything you wish to. Thanks
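
    One common relational answer to the per-organization roles (a sketch under the assumption of JPA/Hibernate annotations; all entity names are illustrative, and User/Organization are assumed to exist) is a separate membership entity between User and Organization that carries the set of roles:

        import java.util.Set;
        import javax.persistence.*;

        // One row per (user, organization) pair; roles live on the membership,
        // not on the user, so UX can be orgadmin in og1 and read-only in og3.
        @Entity
        public class Membership {
            @Id @GeneratedValue
            private Long id;

            @ManyToOne(optional = false)
            private User user;

            @ManyToOne(optional = false)
            private Organization organization;

            @ElementCollection
            @Enumerated(EnumType.STRING)
            private Set<Role> roles;   // e.g. ORG_ADMIN, ORG_READ_ONLY
        }

        enum Role { ORG_ADMIN, ORG_READ_ONLY }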

    Read the article

  • How to hide the user/password in an ssh Expect script

    - by raindrop18
    In my Perl CGI script I have the user/password in clear text, and I want to hide them or have the user enter the credentials interactively. Is that possible? Here is my code; please, any help! I am very new to Perl. (The script itself runs under Expect.)

        #!/usr/local/bin/expect
        ###############################################################################
        # Input: It will handle two arguments -> a device and a show command.
        ###############################################################################
        # ######### Start of Script ######################
        #
        # #### Set up timeouts - debugging variables
        log_user 0
        set timeout 10
        set userid "USER"
        set password "PASS"
        # ##### Get two arguments - (1) Device (2) Command to be executed
        set device  [lindex $argv 0]
        set command [lindex $argv 1]
        spawn /usr/local/bin/ssh -l $userid $device
        match_max [expr 32 * 1024]
        expect {
            -re "RSA key fingerprint" {send "yes\r"}
            timeout {puts "Host is known"}
        }
        expect {
            -re "username: " {send "$userid\r"}
            -re "(P|p)assword: " {send "$password\r"}
            -re "Warning:" {send "$password\r"}
            -re "Connection refused" {puts "Host error -> $expect_out(buffer)";exit}
            -re "Connection closed" {puts "Host error -> $expect_out(buffer)";exit}
            -re "no address.*" {puts "Host error -> $expect_out(buffer)";exit}
            timeout {puts "Timeout error. Is device down or unreachable?? ssh_expect";exit}
        }
        expect {
            -re "\[#>]$" {send "term len 0\r"}
            timeout {puts "Error reading prompt -> $expect_out(buffer)";exit}
        }
        expect {
            -re "\[#>]$" {send "$command\r"}
            timeout {puts "Error reading prompt -> $expect_out(buffer)";exit}
        }
        expect -re "\[#>]$"
        set output $expect_out(buffer)
        send "exit\r"
        puts "$output\r\n"
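
    On hiding the credentials, a sketch of two options (illustrative, not from the post): take them from environment variables set by the caller, or prompt interactively with terminal echo disabled:

        #!/usr/local/bin/expect
        # Option 1: take credentials from environment variables set by the caller.
        if {[info exists env(SSH_USER)]} {
            set userid $env(SSH_USER)
            set password $env(SSH_PASS)
        } else {
            # Option 2: prompt interactively, disabling echo for the password.
            send_user "Username: "
            expect_user -re "(.*)\n"
            set userid $expect_out(1,string)
            stty -echo
            send_user "Password: "
            expect_user -re "(.*)\n"
            set password $expect_out(1,string)
            stty echo
            send_user "\n"
        }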

    Read the article

  • C# WinForms MultiThreading in Loop

    - by Goober
    Scenario: I have a background worker in my application that runs off and does a bunch of processing. I specifically used this implementation to keep my user interface fluid and prevent it from freezing up. I want to keep the background worker, but inside that thread spawn ONLY 3 MORE threads, making them share the processing (currently the worker thread just loops through and processes each asset one by one). I would like to speed this up while using only a limited number of threads.

    Question: given the code below, how can I get the loop to choose a thread that is free, and essentially wait if there isn't one free before it continues?

    CODE:

        foreach (KeyValuePair<int, LiveAsset> kvp in laToHaganise)
        {
            Haganise h = new Haganise(kvp.Value, busDate, inputMktSet,
                                      outputMktSet, prodType, noOfAssets, bulkSaving);
            h.DoWork();
        }

    Thoughts: I'm guessing that I would have to start off by creating 3 new threads, but my concern is that if I'm instantiating a new Haganise object each time, how can I pass the correct "h" object to the correct thread?

        Thread firstThread  = new Thread(new ThreadStart(h.DoWork));
        Thread secondThread = new Thread(new ThreadStart(h.DoWork));
        Thread thirdThread  = new Thread(new ThreadStart(h.DoWork));

    Help greatly appreciated.
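
    One possible pattern (a hedged sketch; Haganise's remaining constructor arguments are elided) keeps the foreach loop but gates it with a Semaphore of size 3, so at most three threads run at once and the loop blocks until one is free. Each iteration declares its own h, so each thread's closure captures the right object:

        using System.Collections.Generic;
        using System.Threading;

        class Worker
        {
            // At most 3 concurrent workers; WaitOne blocks the loop
            // until one of the running threads calls Release.
            static readonly Semaphore slots = new Semaphore(3, 3);

            public void ProcessAll(Dictionary<int, LiveAsset> laToHaganise)
            {
                foreach (KeyValuePair<int, LiveAsset> kvp in laToHaganise)
                {
                    slots.WaitOne();                  // wait for a free slot
                    Haganise h = new Haganise(kvp.Value /* , ...other args */);
                    var t = new Thread(() =>
                    {
                        try { h.DoWork(); }
                        finally { slots.Release(); }  // free the slot when done
                    });
                    t.Start();                        // each thread captures its own h
                }
            }
        }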

    Read the article

  • Passive and active sockets

    - by davsan
    Quoting from this socket tutorial:

        Sockets come in two primary flavors. An active socket is connected to a remote active socket via an open data connection... A passive socket is not connected, but rather awaits an incoming connection, which will spawn a new active socket once a connection is established... Each port can have a single passive socket bound to it, awaiting incoming connections, and multiple active sockets, each corresponding to an open connection on the port. It's as if the factory worker is waiting for new messages to arrive (he represents the passive socket), and when one message arrives from a new sender, he initiates a correspondence (a connection) with them by delegating someone else (an active socket) to actually read the packet and respond back to the sender if necessary. This permits the factory worker to be free to receive new packets. ...

    The tutorial then explains that, after a connection is established, the active socket continues receiving data until there are no remaining bytes, and then closes the connection. What I didn't understand is this: suppose there's an incoming connection to the port, and the sender wants to send some little data every 20 minutes. If the active socket closes the connection when there are no remaining bytes, does the sender have to reconnect to the port every time it wants to send data? How do we persist an established connection for a longer time? Can you tell me what I'm missing here? My second question is: who determines the limit on the number of concurrently working active sockets?
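
    To make the terminology concrete, a minimal sketch (Python, purely illustrative): the listening socket is the passive one, each accept() spawns a new active socket, and the connection persists as long as neither side closes it, so a sender writing every 20 minutes does not need to reconnect unless the receiving side closes after reading:

        import socket

        # Passive socket: bound to a port, never carries data itself.
        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.bind(("0.0.0.0", 9000))
        listener.listen(5)

        while True:
            # accept() blocks until a client connects, then spawns
            # a NEW active socket dedicated to that connection.
            conn, addr = listener.accept()
            data = conn.recv(4096)        # read from the active socket
            # Keeping `conn` open here (not calling conn.close()) is what
            # persists the connection between the sender's messages.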

    Read the article

  • How to optimize an ASP.NET application spawning a new process for each request?

    - by Recycle Bin
    I have an ASP.NET MVC application that spawns a Process as follows:

        Process p = new Process();
        p.EnableRaisingEvents = true;
        p.Exited += new EventHandler(p_Exited);
        p.StartInfo.Arguments = "-interaction=nonstopmode " + inputpath;
        p.StartInfo.WorkingDirectory = dir;
        p.StartInfo.UseShellExecute = false;
        p.StartInfo.FileName = "pdflatex.exe";
        p.StartInfo.LoadUserProfile = true;
        p.Start();
        p.WaitForExit();

    Before going further, I need to know whether, e.g., pdflatex.exe is managed code or native code.

    Edit: I need to consider this because (hopefully I am not wrong...):

        Each ASP.NET application runs in a separate/isolated AppDomain, as opposed to a separate/isolated process.
        A native executable cannot live in an AppDomain.
        ... to be continued.

    Shortly speaking, I hope my site does not spawn a new process for each request, because a process is more expensive than an application domain.

    Read the article

  • How to receive email in a JEE application

    - by Hank
    Obviously it's not so difficult to send out emails from a JEE application via JavaMail. What I am interested in is the best pattern for receiving emails (notification bounces, mostly). I am not interested in IMAP/POP3-based approaches (polling the inbox); my application should react to inbound emails. One approach I could think of:

        Keep the existing MTA (postfix on Linux in my case) - the ops team already knows how to configure and operate it.
        For every mail that arrives, spawn a Java app that receives the data and sends it off via JMS. I could do this via an entry in /etc/aliases like myuser: "|/path/to/javahelper", with javahelper calling the Java app and passing STDIN along.
        An MDB (part of the JEE application) receives the JMS message, parses it, detects the bounce message and acts accordingly.

    Another approach could be:

        Open a listening network socket on port 25 of the JEE application container.
        Associate a SessionBean with the socket; the bean is part of the JEE application and can parse/detect bounces and handle the messages directly.
        Keep the existing MTA as an inbound relay, doing all its security/spam filtering, but have it forward emails for myuser (those that pass the filter) to the JEE application container, port 25.

    I have done the first approach before (albeit in a different language/setup). From a performance and (perceived) cleanliness point of view, I think the second approach is better, but it would require me to provide a proper SMTP transport implementation. Also, I don't know if it's at all possible to connect a network socket with a bean... What is your recommendation? Do you have details about the second approach?
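
    For the MDB leg of the first approach, the skeleton is standard EJB 3 (a sketch; the queue name and the bounce-detection test are placeholders):

        import javax.ejb.ActivationConfigProperty;
        import javax.ejb.MessageDriven;
        import javax.jms.JMSException;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.jms.TextMessage;

        @MessageDriven(activationConfig = {
            @ActivationConfigProperty(propertyName = "destinationType",
                                      propertyValue = "javax.jms.Queue"),
            @ActivationConfigProperty(propertyName = "destination",
                                      propertyValue = "queue/inboundMail")  // placeholder name
        })
        public class BounceListenerBean implements MessageListener {
            public void onMessage(Message message) {
                try {
                    // javahelper put the raw RFC 2822 text on the queue.
                    String rawMail = ((TextMessage) message).getText();
                    if (rawMail.contains("Content-Type: multipart/report")) {
                        // placeholder: handle the bounce here
                    }
                } catch (JMSException e) {
                    throw new RuntimeException(e);
                }
            }
        }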

    Read the article

  • Why can't I run a Perl program from TextMate?

    - by JZ
    I'm following a bioinformatics text, and this represents one of my first Perl scripts. When run in TextMate, it does not produce any result. Is it functioning? I added "hello world" at the bottom and I don't see that when I run the script in TextMate. What have I done wrong?

        #!/usr/local/bin/perl -w
        use lib "/Users/fogonthedowns/myperllib";
        use LWP::Simple;
        use strict;

        # Set base URL for all eutils
        my $utils = "http://eutils.ncbi.nlm.nih.gov/entrez/eutils";

        my $db     = "Pubmed";
        my $query  = "Cancer+Prostate";
        my $retmax = 10;

        my $esearch = "$utils/esearch.fcgi?" . "db=$db&retmax=$retmax&term=";
        my $esearch_result = get($esearch . $query);

        print "ESEARCH RESULT: $esearch_result\n";
        print "Using Query: \n$esearch$query\n";
        print "hello world\n";

    Read the article

  • PHP Socket Server vs node.js: Web Chat

    - by Eliasdx
    I want to program an HTTP web chat using long-held HTTP requests (Comet), Ajax and WebSockets (depending on the browser used). The user database is in MySQL. The chat is written in PHP, except maybe the chat stream itself, which could also be written in JavaScript (node.js). I don't want to start one PHP process per user, as there is no good way to send the chat messages between those PHP children. So I thought about writing my own socket server, in either PHP or node.js, which should be able to handle more than 1000 connections (chat users). As a purely web developer (PHP), I'm not very familiar with sockets, as I usually let the web server take care of connections. The chat messages won't be saved on disk or in MySQL, but in RAM as an array or object, for best speed.

    As far as I know, there is no way to handle multiple connections at the same time in a single PHP process (socket server); however, you can accept a great number of socket connections and process them successively in a loop (read and write; on each incoming message, write to all socket connections). The problem is that there will most likely be lag with ~1000 users, and MySQL operations could slow the whole thing down, which would then affect all users.

    My question is: can node.js handle a socket server with better performance? Node.js is event-based, but I'm not sure whether it can process multiple events at the same time (wouldn't that need multithreading?) or whether there is just an event queue. With an event queue it would be just like PHP: process user after user. I could also spawn a PHP process per chat room (far fewer users), but AFAIK there are single-threaded IRC servers that are capable of handling thousands of users (written in C++ or whatever), so maybe it's also possible in PHP. I would prefer PHP over node.js, because then the project would be PHP-only and not a mixture of programming languages. However, if Node can process connections simultaneously, I'd probably choose it.
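
    To make the node.js model concrete, a minimal broadcast server (a sketch using only the built-in net module; no framing, authentication or MySQL): a single thread multiplexes all sockets through the event loop, so callbacks are processed one at a time from a queue rather than in parallel:

        var net = require('net');
        var clients = [];

        var server = net.createServer(function (socket) {
            clients.push(socket);

            // 'data' events for ALL sockets are dispatched by one event loop;
            // no callback runs while another is still executing.
            socket.on('data', function (msg) {
                clients.forEach(function (c) {
                    if (c !== socket) c.write(msg);   // broadcast to everyone else
                });
            });

            socket.on('close', function () {
                clients.splice(clients.indexOf(socket), 1);
            });
        });

        server.listen(8000);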

    Read the article

  • How to automate the testing of a text-based menu

    - by Reagan Penner
    Hi there, I have a text-based menu running on a remote Linux host. I am using Expect to ssh into this host and would like to figure out how to interact with the menus. Interaction involves arrowing up and down and using the enter and back-arrow keys. For example:

        Disconnect
        Data Collection >
        Utilities >
        Save Changes

    When you enter the system, Disconnect is highlighted, so by simply pressing enter twice you can disconnect from the system (the second enter confirms the disconnect). The following code will ssh into my system and bring up the menu. If I remove the expect eof and try to send "\r", thinking that this would select the Disconnect menu option, I get the following error: "write() failed to write anything - will sleep(1) and retry..."

        #!/usr/bin/expect
        set env(TERM) vt100
        set password abc123
        set ipaddr 162.116.11.100
        set timeout -1
        match_max -d 100000
        spawn ssh root@$ipaddr
        exp_internal 1
        expect "*password:*"
        send -- "$password\r"
        expect "Last login: *\r"
        expect eof

    I have looked at the virterm and term_expect examples but cannot figure out how to tweak them to work for me. If someone can point me in the right direction I would greatly appreciate it. What I need to know is: can I interact with a text-based menu system, and what is the correct method for doing this? Examples, if any exist, would be great. Thanks, -reagan
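
    For reference, cursor keys on a vt100 arrive as three-byte escape sequences, so menu navigation from Expect comes down to sending those bytes; a sketch (the menu prompts and key order are illustrative):

        # vt100 arrow keys: ESC [ A (up), ESC [ B (down),
        #                   ESC [ C (right), ESC [ D (left/back)
        set DOWN "\033\[B"
        set UP   "\033\[A"
        set LEFT "\033\[D"

        # ...after the ssh login above, instead of `expect eof`:
        expect "Disconnect"       ;# wait until the menu has painted
        send -- $DOWN             ;# highlight "Data Collection >"
        send -- "\r"              ;# enter the submenu
        expect "Data Collection"
        send -- $LEFT             ;# back-arrow out of the submenu
        send -- $DOWN
        send -- $DOWN
        send -- "\r"              ;# select "Save Changes"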

    Read the article

  • Understanding the workflow of the messages in a generic server implementation in Erlang

    - by Chiron
    The following code is from "Programming Erlang, 2nd Edition". It is an example of how to implement a generic server in Erlang.

        -module(server1).
        -export([start/2, rpc/2]).

        start(Name, Mod) ->
            register(Name, spawn(fun() -> loop(Name, Mod, Mod:init()) end)).

        rpc(Name, Request) ->
            Name ! {self(), Request},
            receive
                {Name, Response} -> Response
            end.

        loop(Name, Mod, State) ->
            receive
                {From, Request} ->
                    {Response, State1} = Mod:handle(Request, State),
                    From ! {Name, Response},
                    loop(Name, Mod, State1)
            end.

        -module(name_server).
        -export([init/0, add/2, find/1, handle/2]).
        -import(server1, [rpc/2]).

        %% client routines
        add(Name, Place) -> rpc(name_server, {add, Name, Place}).
        find(Name)       -> rpc(name_server, {find, Name}).

        %% callback routines
        init() -> dict:new().
        handle({add, Name, Place}, Dict) -> {ok, dict:store(Name, Place, Dict)};
        handle({find, Name}, Dict)       -> {dict:find(Name, Dict), Dict}.

        server1:start(name_server, name_server).
        name_server:add(joe, "at home").
        name_server:find(joe).

    I tried so hard to understand the workflow of the messages. Would you please help me understand the workflow of this server implementation during the execution of the functions server1:start, name_server:add and name_server:find?
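
    For what it's worth, here is a step-by-step trace of name_server:add(joe, "at home") through that code (each step is one message; find follows the same path):

        %% 1. add/2 calls rpc(name_server, {add, joe, "at home"}).
        %% 2. rpc sends {CallerPid, {add, joe, "at home"}} to the process
        %%    registered as name_server, then blocks in its receive.
        %% 3. The server's loop/3 receives {From, Request} and calls
        %%    name_server:handle({add, joe, "at home"}, Dict), which
        %%    returns {ok, NewDict} with joe stored.
        %% 4. loop sends {name_server, ok} back to From (the caller)
        %%    and recurses with NewDict as the new State.
        %% 5. rpc's receive matches {name_server, Response} and returns ok.
        %%
        %% For find, handle returns {dict:find(joe, Dict), Dict}, so the
        %% caller gets {ok, "at home"} while the State stays unchanged.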

    Read the article

  • iPhone shooter game bullet physics!

    - by user298261
    Hello, I'm making a new shooter game here in the vein of "Galaga" (my favorite shooter game growing up). Here's the code I have for the bullet physics:

        - (IBAction)shootBullet:(id)sender {
            imgBullet.hidden = NO;
            timer = [NSTimer scheduledTimerWithTimeInterval:0.05
                                                     target:self
                                                   selector:@selector(fireBullet)
                                                   userInfo:nil
                                                    repeats:YES];
        }

        - (void)fireBullet {
            imgBullet.center = CGPointMake(imgBullet.center.x + bulletVelocity.x,
                                           imgBullet.center.y + bulletVelocity.y);
            if (imgBullet.center.y <= 0) {
                imgBullet.hidden = YES;
                imgBullet.center = self.view.center;
                [timer invalidate];
            }
        }

    Anyway, the obvious issue is that once the bullet leaves the screen its center is reset, so I'm reusing the same bullet for each press of the fire button. Ideally, I would like the user to be able to spam the fire button without causing the program to crash. How would I change this existing code so that a bullet object spawns on each button press and then despawns after it exits the screen or collides with an enemy? Thank you for any assistance you can offer!
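
    A sketch of a multi-bullet version (ARC assumed; it presupposes an NSMutableArray *bullets ivar, a hypothetical ship view, and one repeating NSTimer driving all bullets; the image name is illustrative):

        - (IBAction)shootBullet:(id)sender {
            // Spawn a fresh bullet per press instead of reusing one image view.
            UIImageView *bullet =
                [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"bullet.png"]];
            bullet.center = self.shipView.center;   // hypothetical ship view
            [self.view addSubview:bullet];
            [bullets addObject:bullet];
        }

        - (void)fireBullets {   // driven by ONE repeating NSTimer for all bullets
            for (UIImageView *bullet in [bullets copy]) {   // copy: safe to mutate below
                bullet.center = CGPointMake(bullet.center.x + bulletVelocity.x,
                                            bullet.center.y + bulletVelocity.y);
                if (bullet.center.y <= 0) {                 // off screen: despawn
                    [bullet removeFromSuperview];
                    [bullets removeObject:bullet];
                }
            }
        }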

    Read the article

  • How would I go about sharing variables in a C++ class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua; I've been working on implementing Lua scripting for logic in a game engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the game engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues.

    Anyway, currently the game-engine side of things sits in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far, I'm keeping two separate tables of objects in place: Lua spawns a new game object, which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good; C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring.

    What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using Luabind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua object? What's the "current" best way to do this, as of Lua 5.1.4?
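
    The metatable approach usually boils down to a userdata proxy whose __index and __newindex metamethods read and write the C members directly; a compressed sketch against the Lua 5.1 C API (GameObject and its x/y fields are stand-ins for the engine's real variables):

        #include <lua.h>
        #include <lauxlib.h>
        #include <string.h>

        typedef struct GameObject { float x, y; } GameObject;

        static GameObject *check_obj(lua_State *L) {
            return *(GameObject **)luaL_checkudata(L, 1, "GameObject");
        }

        static int obj_index(lua_State *L) {          /* reads: obj.x */
            GameObject *o = check_obj(L);
            const char *key = luaL_checkstring(L, 2);
            if (strcmp(key, "x") == 0) { lua_pushnumber(L, o->x); return 1; }
            if (strcmp(key, "y") == 0) { lua_pushnumber(L, o->y); return 1; }
            return 0;                                 /* unknown key -> nil */
        }

        static int obj_newindex(lua_State *L) {       /* writes: obj.x = 5 */
            GameObject *o = check_obj(L);
            const char *key = luaL_checkstring(L, 2);
            if (strcmp(key, "x") == 0) o->x = (float)luaL_checknumber(L, 3);
            else if (strcmp(key, "y") == 0) o->y = (float)luaL_checknumber(L, 3);
            return 0;
        }

        /* Called once per spawn: pushes a userdata proxy for the C object. */
        static void push_gameobject(lua_State *L, GameObject *obj) {
            GameObject **p = (GameObject **)lua_newuserdata(L, sizeof *p);
            *p = obj;
            if (luaL_newmetatable(L, "GameObject")) { /* created only once */
                lua_pushcfunction(L, obj_index);
                lua_setfield(L, -2, "__index");
                lua_pushcfunction(L, obj_newindex);
                lua_setfield(L, -2, "__newindex");
            }
            lua_setmetatable(L, -2);
        }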

    Read the article

  • Python Multiprocessing Question

    - by Avan
    My program has 2 parts, divided into the core and the downloader. The core handles all the app logic while the downloader just downloads URLs. Right now, I am trying to use the Python multiprocessing module to run the core as one process and the downloader as another. The first problem I noticed was that if I spawn the downloader process from the core process, so that the downloader is the child process and the core is the parent, the core (parent) process is blocked until the child is finished. I do not want this behavior, though. I would like to have a core process and a downloader process that are both able to execute their code and communicate with each other. Example:

        ...
        def main():
            jobQueue = Queue()
            jobQueue.put("http://google.com")
            d = Downloader(jobQueue)
            p = Process(target=d.start())
            p.start()

        if __name__ == '__main__':
            freeze_support()
            main()

    where Downloader's start() just takes the URL out of the queue and downloads it. In order to have the 2 processes unblocked, would I need to create 2 processes from the parent process and then share something between them?
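
    For the record, a sketch of an unblocked version (note the likely culprit in the original: Process(target=d.start()) calls d.start() immediately in the parent and passes its return value as the target, which is why the parent appears to block):

        from multiprocessing import Process, Queue, freeze_support

        class Downloader(object):
            def __init__(self, jobs):
                self.jobs = jobs

            def start(self):
                while True:
                    url = self.jobs.get()      # blocks until the core sends work
                    if url is None:            # sentinel: shut down cleanly
                        break
                    print("downloading", url)  # placeholder for the real download

        def main():
            jobs = Queue()
            d = Downloader(jobs)
            p = Process(target=d.start)        # pass the method, don't call it
            p.start()                          # parent continues immediately
            jobs.put("http://google.com")      # core keeps feeding the queue...
            jobs.put(None)
            p.join()

        if __name__ == '__main__':
            freeze_support()
            main()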

    Read the article

  • Problems with threading in Python 2.5, KeyError: 51 - help debugging?

    - by vignesh-k
    I have a Python script which runs a particular script a large number of times (for Monte Carlo purposes). The way I have scripted it is that I queue up the script the desired number of times it should be run, then I spawn threads, and each thread runs the script once and again when it's done. Once the script in a particular thread is finished, the output is written to a file by acquiring a lock (so my guess was that only one thread accesses the lock at a given time). Once the lock is released by one thread, the next thread accesses it, adds its output to the previously written file and rewrites it. I am not facing a problem when the number of iterations is small, like 10 or 20, but when it's large, like 50 or 150, Python returns a KeyError: 51, telling me an element doesn't exist, and the error it points to is within the lock, which puzzles me, since only one thread should access the lock at once and I do not expect an error. This is the class I use:

        class errorclass(threading.Thread):

            def __init__(self, queue):
                self.__queue = queue
                threading.Thread.__init__(self)

            def run(self):
                while 1:
                    item = self.__queue.get()
                    if item is None:
                        break
                    result = myfunction()
                    lock = threading.RLock()
                    lock.acquire()
                    # ADD entries from current thread to entries in file
                    # and REWRITE FILE
                    lock.release()

        queue = Queue.Queue()
        for i in range(threads):
            errorclass(queue).start()
        for i in range(desired_iterations):
            queue.put(i)
        for i in range(threads):
            queue.put(None)

    Python returns KeyError: 51 for a large number of desired iterations during the add/write file operation after the lock is acquired. I am wondering if this is the correct way to use the lock, since every thread has its own lock operation rather than all threads accessing a shared lock? What would be the way to rectify this?
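
    On the lock question: creating threading.RLock() inside run() gives every iteration its own private lock, which serializes nothing. A sketch of the shared-lock variant (one module-level lock created before the threads start; myfunction is the original's placeholder):

        import threading

        write_lock = threading.Lock()   # ONE lock object, shared by all threads

        class errorclass(threading.Thread):
            def __init__(self, queue):
                threading.Thread.__init__(self)
                self.__queue = queue

            def run(self):
                while True:
                    item = self.__queue.get()
                    if item is None:
                        break
                    result = myfunction()   # as in the original post
                    with write_lock:        # all threads contend on the SAME lock
                        pass                # add entries and rewrite the file here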

    Read the article

  • Ruby on Rails: Uploading a modified site

    - by Califer
    I'm having a heck of a time getting a site I modified to work correctly. I didn't set the site up originally, and since the person who set it up no longer works with me, I had to learn Ruby just to make some changes. I made all the changes on the development server and everything worked fine. Then I did a diff of production against development and moved only my changes over. Unfortunately, when I loaded my changes onto the production server I got a lot of errors. I've changed all of the permissions to 755, which took care of being able to access anything at all, but after that I started getting a lot of 500 errors. Nothing showed up in the production.log file. I really have no clue what's going wrong, except that perhaps something is not noticing the new changes. I moved the old site to a backup folder, and the new site crashes whenever it goes to anything that I've changed. In particular, I added a link to a new setup with an extra controller/model/view group. It works fine in development, but in production it gives me a 404. Yes, I did copy all the files up. I even put everything back how it was, but the website still shows the broken version. I checked the tmp/cache folder, but it was empty. Running dispatch.fcgi shows the old site (which I expected), but it still shows the flawed new site when I connect through a browser. I've been tearing my hair out trying to get this to work. Any ideas as to how I can get it working?

    Read the article

  • How to solve this problem with Python

    - by morpheous
    I am "porting" an application I wrote in C++ into Python. This is the current workflow: Application is started from the console Application parses CLI args Application reads an ini configuration file which specifies which plugins to load etc Application starts a timer Application iterates through each loaded plugin and orders them to start work. This spawns a new worker thread for the plugin The plugins carry out their work and when completed, they die When time interval (read from config file) is up, steps 5-7 is repeated iteratively Since I am new to Python (2 days and counting), the distinction between script, modules and packages are still a bit hazy to me, and I would like to seek advice from Pythonista as to how to implement the workflow described above, using Python as the programing language. In order to keep things simple, I have decided to leave out the time interval stuff out, and instead run the python script/scripts as a cron job instead. This is how I am thinking of approaching it: Encapsulate the whole application in a package which is executable (i.e. can be run from the command line with arguments. Write the plugins as modules (I think maybe its better to implement each module in a separate file?) I havent seen any examples of using threading in Python yet. Could someone provide a snippet of how I could spawn a thread to run a module. Also, I am not sure how to implement the concept of plugins in Python - any advice would be helpful - especially with a code snippet.

    Read the article

  • Django multiprocess problem

    - by iKiR
    I have a Django application running under lighttpd via FastCGI. The FCGI startup script looks like:

        python manage.py runfcgi socket=<path>/main.socket method=prefork \
            pidfile=<path>/server.pid \
            minspare=5 maxspare=10 maxchildren=10 maxrequests=500

    I use SQLite, so I have 10 processes which all work with the same DB. Next, I have two views:

        def view1(request):
            ...
            obj, created = MyModel.objects.get_or_create(id=1)
            obj.param1 = <some value>
            obj.save()

        def view2(request):
            ...
            obj, created = MyModel.objects.get_or_create(id=1)
            obj.param2 = <some value>
            obj.save()

    And if these views are executed in two different processes, I sometimes get a MyModel instance in the DB with id=1 and either param1 or param2 updated, but NOT both - it depends on which process was first. (Of course in real life the id changes, but sometimes two processes execute these two views with the same id.) The question is: what should I do to get an instance with both param1 and param2 updated? I need something for merging changes made in different processes. One option is to create an interprocess lock object, but in that case the views would execute sequentially rather than simultaneously, so I'm asking for help.
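
    One pattern that sidesteps the read-modify-write race (a sketch; whether it fits depends on the real views) is to update only the changed column at the SQL level, instead of loading the whole object and saving it back:

        # view1: touch only param1. The generated SQL is roughly
        # UPDATE app_mymodel SET param1 = ... WHERE id = 1,
        # so a concurrent writer to param2 is not overwritten.
        MyModel.objects.get_or_create(id=1)
        MyModel.objects.filter(id=1).update(param1=some_value)   # some_value is a placeholder

        # view2: the same idea for param2.
        MyModel.objects.get_or_create(id=1)
        MyModel.objects.filter(id=1).update(param2=some_value)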

    Read the article

  • How can I merge two Linq IEnumerable<T> queries without running them?

    - by makerofthings7
    How do I merge a List<T> of TPL-based tasks for later execution?

        public async IEnumerable<Task<string>> CreateTasks() { /* stuff */ }

    My assumption is .Concat(), but that doesn't seem to work:

        void MainTestApp()   // Full sample available upon request.
        {
            List<string> nothingList = new List<string>();
            nothingList.Add("whatever");
            cts = new CancellationTokenSource();

            delayedExecution = from str in nothingList
                               select AccessTheWebAsync("", cts.Token);
            delayedExecution2 = from str in nothingList
                                select AccessTheWebAsync("1", cts.Token);
            delayedExecution = delayedExecution.Concat(delayedExecution2);
        }

        /// SNIP
        async Task AccessTheWebAsync(string nothing, CancellationToken ct)
        {
            // return a Task
        }

    I want to make sure that this won't spawn any tasks or evaluate anything. In fact, I suppose I'm asking: what logically executes an IQueryable into something that returns data?

    Background: since I'm doing recursion and I don't want to execute this until the correct time, what is the correct way to merge the results if called multiple times? If it matters, I'm thinking of running this command to launch all the tasks:

        var AllRunningDataTasks = results.ToList();

    followed by this code:

        while (AllRunningDataTasks.Count > 0)
        {
            // Identify the first task that completes.
            Task<TableResult> firstFinishedTask = await Task.WhenAny(AllRunningDataTasks);

            // ***Remove the selected task from the list so that you don't
            // process it more than once.
            AllRunningDataTasks.Remove(firstFinishedTask);

            // Await the completed task.
            var taskOfTableResult = await firstFinishedTask;

            // TODO: (doesn't work)
            // TrustState thisState = (TrustState)firstFinishedTask.AsyncState;

            // TODO: Update the concurrent dictionary with data
            // thisState.QueryStartPoint + thisState.ThingToSearchFor
            Interlocked.Decrement(ref thisState.RunningDirectQueries);
            Interlocked.Increment(ref thisState.CompletedDirectQueries);

            if (thisState.RunningDirectQueries == 0)
            {
                thisState.TimeCompleted = DateTime.UtcNow;
            }
        }
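
    On the deferred-execution point, an illustrative sketch (not from the post): Concat over LINQ to Objects is lazy, so the selector, and therefore the task creation, runs only when something enumerates the merged sequence:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class DeferredDemo
        {
            static void Main()
            {
                var source = new List<string> { "whatever" };

                IEnumerable<int> query = source.Select(s =>
                {
                    Console.WriteLine("selector ran");  // marks actual execution
                    return s.Length;
                });

                var merged = query.Concat(query);  // still deferred: nothing printed yet
                var results = merged.ToList();     // "selector ran" prints twice HERE
            }
        }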

    Read the article

  • Does video tag (HTML 5) injection via JavaScript work in any browsers?

    - by JoshNaro
    I'm trying to dynamically spawn a video element on a page using JavaScript.

    JavaScript:

        <script type="text/javascript">
            $(document).ready(function () {
                var video = $(document.createElement('video'))
                    .attr('id', 'VideoElement')
                    .attr('controls', 'controls')
                    .attr('src', 'videopath.mp4')   // Changed 'href' attribute to 'src'
                    .css({ width: 640, height: 360 });
                $('#VideoContainer').append(video);
            });
        </script>

    HTML:

        <body>
            <div id="VideoContainer"></div>
        </body>

    In Firefox I get the video harness, but the actual video doesn't load. In IE8 the video harness doesn't even appear. Is HTML5 just not supported enough to accomplish this yet?

    Edit: Got this to work with Artiom's fix. Looks like this works fine in Chrome and Safari. I'm using a codec Firefox doesn't support, so it doesn't work there, although I suspect it would with a supported codec. IE8, sure enough, doesn't work (high five, IE).

    Read the article

  • A question about TBB/C++ code

    - by Jackie
    I am reading the Threading Building Blocks book. I do not understand this piece of code:

        FibTask& a = *new(allocate_child()) FibTask(n - 1, &x);
        FibTask& b = *new(allocate_child()) FibTask(n - 2, &y);

    What do these directives mean? How do a class object reference and new work together? Thanks for the explanation. The following code is the definition of the FibTask class:

        class FibTask: public task {
        public:
            const long n;
            long* const sum;

            FibTask(long n_, long* sum_) : n(n_), sum(sum_) {}

            task* execute() {
                if (n < CutOff) {
                    *sum = SerialFib(n);
                } else {
                    long x, y;
                    FibTask& a = *new(allocate_child()) FibTask(n - 1, &x);
                    FibTask& b = *new(allocate_child()) FibTask(n - 2, &y);
                    set_ref_count(3);
                    spawn(b);
                    spawn_and_wait_for_all(a);
                    *sum = x + y;
                }
                return 0;
            }
        };
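
    For context on the syntax (an illustrative note, not from the post): new(allocate_child()) FibTask(n-1, &x) is placement new. allocate_child() supplies memory owned by the TBB scheduler, the FibTask constructor runs in that memory, and the leading * dereferences the returned pointer so it can bind to a reference. A stripped-down analogue:

        #include <new>      // declares placement operator new

        struct Widget {
            int v;
            Widget(int v_) : v(v_) {}
        };

        int main() {
            // Raw memory from somewhere; TBB's allocate_child() plays this
            // role in the book's example (here it's just a stack buffer).
            alignas(Widget) unsigned char buffer[sizeof(Widget)];

            // Placement new: run Widget's constructor IN that memory.
            // It returns Widget*, and the leading '*' dereferences it so
            // the result can initialize a reference - exactly the shape of
            //   FibTask& a = *new(allocate_child()) FibTask(n-1, &x);
            Widget& w = *new (buffer) Widget(42);

            w.~Widget();    // objects from placement new are destroyed manually
            return 0;
        }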

    Read the article

  • Nginx treats PHP as binary

    - by Think Floyd
    We are running Nginx + FastCGI as the backend for our Drupal site. Everything seems to work fine, except for this one URL: http:///sites/all/modules/tinymce/tinymce/jscripts/tiny_mce/plugins/smimage/index.php (we use the TinyMCE module in Drupal, and the URL above is invoked when a user tries to upload an image). When we were using Apache, everything was working fine. However, nginx treats the above URL as binary and tries to download it. (We've verified that the file pointed to by the URL is a valid PHP file.) Any idea what could be wrong here? I think it's something to do with the nginx configuration, but I'm not entirely sure what. Any help is greatly appreciated.

    Config: here's the snippet from the nginx configuration file:

        root /var/www/;
        index index.php;

        if (!-e $request_filename) {
            rewrite ^/(.*)$ /index.php?q=$1 last;
        }

        error_page 404 index.php;

        location ~* \.(engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(code-style\.pl|Entries.*|Repository|Root|Tag|Template)$ {
            deny all;
        }

        location ~* ^.+\.(jpg|jpeg|gif|png|ico)$ {
            access_log off;
            expires 7d;
        }

        location ~* ^.+\.(css|js)$ {
            access_log off;
            expires 7d;
        }

        location ~ .php$ {
            include /etc/nginx/fcgi.conf;
            fastcgi_pass 127.0.0.1:8888;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REQUEST_METHOD $request_method;
            fastcgi_param CONTENT_TYPE $content_type;
            fastcgi_param CONTENT_LENGTH $content_length;
        }

        location ~ /\.ht {
            deny all;
        }

    Read the article

< Previous Page | 17 18 19 20 21 22 23 24 25  | Next Page >