Search Results

Search found 1071 results on 43 pages for 'pessimistic locking'.

  • ZeroMQ REQ/REP on ipc:// and concurrency

    - by Metiu
    I implemented a JSON-RPC server using a REQ/REP 0MQ ipc:// socket and I'm experiencing strange behavior which I suspect is due to the fact that the ipc:// underlying Unix socket is not a real socket, but rather a single pipe. From the documentation, one has to enforce strict zmq_send()/zmq_recv() alternation, otherwise the out-of-order zmq_send() will return an error. However, I expected the enforcement to be per-client, not per-socket. Of course with a Unix socket there is just one pipeline from multiple clients to the server, so the server won't know who it is talking with. Two clients could zmq_send() simultaneously and the server would see this as an alternation violation. The sequence could be:

        ClientA: zmq_send()
        ClientB: zmq_send()   <- will it block until the other send/receive completes? Will it return -1?
                                 (I suspect it will with ipc:// due to inherent low-level problems,
                                 but with TCP it could distinguish the two clients)
        ClientA: zmq_recv()
        ClientB: zmq_recv()

    So what about tcp:// sockets? Will it work concurrently? Should I use some other locking mechanism to work around this?
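
    For illustration, a minimal pyzmq sketch of the ROUTER pattern usually suggested when one server must juggle several REQ clients at once (the endpoint and the echo handler are assumptions, not from the original post). A ROUTER socket prepends a per-client identity envelope to each request, so replies can be routed back even when several clients have requests in flight - unlike the strict one-at-a-time send/recv state machine of a single REP socket:

        import zmq

        context = zmq.Context()
        server = context.socket(zmq.ROUTER)
        server.bind("tcp://127.0.0.1:5555")   # hypothetical endpoint; ipc:// works here too

        while True:
            # Each request from a REQ client arrives as [identity, empty delimiter, payload].
            identity, empty, payload = server.recv_multipart()
            reply = payload                   # echo; a real JSON-RPC handler would go here
            server.send_multipart([identity, empty, reply])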

  • Nginx, Apache, MySQL, Memcache on a server with 4G RAM. How can I optimize to have enough memory?

    - by TomSawyer
    I have 1 dedicated server running Nginx as a proxy for Apache, plus Memcache and MySQL, with 4G RAM. Lately the visitor count on my site hasn't increased, but the server always gets overloaded at certain times of day (9AM - 3PM). RAM in use climbs second by second until it is full; at that moment the server becomes overloaded. I have to kill the Apache and MySQL services and reboot to get free memory, and then it fills up again. It's a terrible circle. Here is my RAM in use at the moment:

        160 (nginx)   220 (apache)   512 (memcache)   924 (mysql)

    and the process counts:

        4 (nginx)   14 (apache)   5 (memcache)   20 (mysql)

    And here's my my.cnf config. Can someone help me optimize it?

        [mysqld]
        datadir=/var/lib/mysql
        socket=/var/lib/mysql/mysql.sock
        user=mysql
        skip-locking
        skip-networking
        skip-name-resolve
        # enable log-slow-queries
        log-slow-queries = /var/log/mysql-slow-queries.log
        long_query_time=3
        max_connections=200
        wait_timeout=64
        connect_timeout = 10
        interactive_timeout = 25
        thread_stack = 512K
        max_allowed_packet=16M
        table_cache=1500
        read_buffer_size=4M
        join_buffer_size=4M
        sort_buffer_size=4M
        read_rnd_buffer_size = 4M
        max_heap_table_size=256M
        tmp_table_size=256M
        thread_cache=256
        query_cache_type=1
        query_cache_limit=4M
        query_cache_size=16M
        thread_concurrency=8
        myisam_sort_buffer_size=128M
        # Disabling symbolic-links is recommended to prevent assorted security risks
        symbolic-links=0

        [mysqldump]
        quick
        max_allowed_packet=16M

        [mysql]
        no-auto-rehash

        [isamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [myisamchk]
        key_buffer=256M
        sort_buffer=256M
        read_buffer=64M
        write_buffer=64M

        [mysqlhotcopy]
        interactive-timeout

        [mysql.server]
        user=mysql
        basedir=/var/lib

        [mysqld_safe]
        log-error=/var/log/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid
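
    Some rough worst-case arithmetic from the my.cnf above suggests where the memory goes (a sketch, not an exact model of MySQL's allocator - several of these buffers are only allocated on demand):

        # per-connection buffers, in MB: read, join, sort and read_rnd buffers plus thread_stack
        per_connection_mb = 4 + 4 + 4 + 4 + 0.5
        worst_case_mb = 200 * per_connection_mb   # max_connections = 200
        print(worst_case_mb)                      # 3300 MB, before any global buffer,
                                                  # the 512 MB memcached, or Apache itself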

  • How many users are sufficient to make a heavy load for a web application?

    - by galymzhan
    I have a web application which has been suffering high load in recent days. The application runs on a single server with an 8-core Intel CPU and 4GB of RAM. Software: Drupal 5 (Apache 2, PHP5, MySQL5) running on Debian. After reaching 500 authenticated and 200 anonymous simultaneous users, the application's performance drops drastically, up to total failure. The biggest load comes from authenticated users, who perform activities causing inserts/updates/deletes on the db. I think MySQL is the bottleneck. Is it normal to slow down with this number of users? EDIT: I forgot to mention that I did some profiling. I ran top and htop and they showed me that all memory was being used by MySQL! After some time MySQL starts to perform terribly slowly, the site goes down, and we have to restart/stop Apache to reduce load. Administrators said there were about 200 active MySQL connections at that moment. The worst part is that we need to solve this ASAP, and I can't do deep profiling analysis or code refactoring, so I'm considering 2 options:

        1. My tables are MyISAM. I heard they use table-level locking, which is very slow - is that right? Could I change them to InnoDB without worry?
        2. What if I take MySQL and move it to a dedicated machine with a lot of RAM?
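
    If the MyISAM-to-InnoDB route is taken, the conversion itself is one statement per table; a hedged sketch (credentials and database name are hypothetical, and a backup first is strongly advised, since ALTER TABLE copies the data):

        import MySQLdb

        conn = MySQLdb.connect(user="root", passwd="...", db="drupal")
        cur = conn.cursor()
        cur.execute("SHOW TABLE STATUS WHERE Engine = 'MyISAM'")
        for row in cur.fetchall():
            cur.execute("ALTER TABLE `%s` ENGINE = InnoDB" % row[0])  # row[0] is the table name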

  • Boost threading/mutexes, why does this work?

    - by Flamewires
    Code:

        #include <iostream>
        #include "stdafx.h"
        #include <boost/thread.hpp>
        #include <boost/thread/mutex.hpp>

        using namespace std;

        boost::mutex mut;
        double results[10];

        void doubler(int x) {
            //boost::mutex::scoped_lock lck(mut);
            results[x] = x*2;
        }

        int _tmain(int argc, _TCHAR* argv[]) {
            boost::thread_group thds;
            for (int x = 10; x>0; x--) {
                boost::thread *Thread = new boost::thread(&doubler, x);
                thds.add_thread(Thread);
            }
            thds.join_all();
            for (int x = 0; x<10; x++) {
                cout << results[x] << endl;
            }
            return 0;
        }

    Output:

        0
        2
        4
        6
        8
        10
        12
        14
        16
        18
        Press any key to continue . . .

    So... my question is why does this work (as far as I can tell - I ran it about 20 times), producing the above output, even with the locking commented out? I thought the general idea was, in each thread:

        1. calculate 2*x
        2. copy results to CPU register(s)
        3. store the calculation in the correct part of the array
        4. copy results back to main (shared) memory

    I would think that under all but perfect conditions this would result in some part of the results array having 0 values. Is it only copying the required double of the array to a CPU register? Or is it just too short a calculation to get preempted before it writes the result back to RAM? Thanks.

  • Hibernate Exception Fixed By Alt+Tab

    - by Lee Theobald
    Hi all, I've got a very curious problem in Hibernate that I would like some opinions on. In my code, if I do the following:

        1. Go to page A
        2. Click a link on page A to be taken to page B
        3. Click on a data item on page B
        4. Exception thrown

    I get an error telling me:

        failed to lazily initialize a collection of role: XYZ, no session or session was closed

    Fair enough. But when I do the same thing with an Alt+Tab in the middle, everything is fine. E.g.:

        1. Go to page A
        2. Click a link on page A to be taken to page B
        3. Hit Alt+Tab to switch to another application
        4. Hit Alt+Tab to switch back to the web browser
        5. Click on a data item on page B
        6. Everything is fine

    I'm a little confused as to how switching focus away from my application makes it act as I want it to. Does anyone have any light to shed on the subject? I don't think it's a locking issue: even if I do the second set of steps more quickly than the first, there is still no error. It's a Seam application using Hibernate 3.3.2.GA & 3.4.0.GA.

  • How can I make hundreds of simultaneously running processes communicate with a database through one or few permanent sessions?

    - by Olfan
    Long story short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database which all (sub-)processes access at various times using an application-specific module which encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue. I now want to centralise database access so that there are only one or a few fixed database sessions which handle all database access for all the (sub-)processes.

    The presence of the database abstraction module should make the changes easy, because the function calls in the worker instances can stay the same. My problem is that I cannot think of a suitable way to enhance said module in order to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or a few database connectors so that bidirectional communication is possible (for collecting the query results). An asynchronous approach could help here, in that all requests are written to the same queue and the database connector servicing a request will "call back" to submit the result. But my mind fails me in generating an image clear enough that I can paint it into code. Threading instead of forking might have given me an easier start, but this would now require massive changes to the code base that I'm not prepared to make to a live system.

    The more I think of it, the more the base idea looks to me like a pre-forked web server, only one that serves database queries instead of web pages. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready solutions on CPAN maybe?
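
    A minimal sketch of that "pre-forked broker" shape (in Python's multiprocessing rather than the poster's Perl, with a stand-in for the real query call): workers push (sql, reply_queue) pairs onto one shared request queue; a single connector process owns the long-lived database session and "calls back" on each worker's private reply queue:

        import multiprocessing

        def connector(requests):
            # One long-lived database session would be opened here and reused below.
            while True:
                sql, reply_queue = requests.get()
                if sql is None:                 # sentinel: shut down
                    break
                result = "rows for: " + sql     # stand-in for the real DBI call
                reply_queue.put(result)         # "call back" to the requesting worker

        def worker(requests, reply_queue, n):
            requests.put(("SELECT %d" % n, reply_queue))
            print(reply_queue.get())            # each worker blocks only on its own result

        if __name__ == "__main__":
            manager = multiprocessing.Manager()
            requests = manager.Queue()
            multiprocessing.Process(target=connector, args=(requests,)).start()
            workers = [multiprocessing.Process(target=worker,
                                               args=(requests, manager.Queue(), n))
                       for n in range(8)]
            for w in workers: w.start()
            for w in workers: w.join()
            requests.put((None, None))          # stop the connector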

  • I have a function that creates a histogram of each Bitmap. How can I create another 3 histograms for the R, G, B of each Bitmap?

    - by Daniel Lip
    This is the histogram function I'm using today, and if I'm not wrong it creates a histogram of the gray level. What I want is another function that will return 3 histograms for each Bitmap: the first histogram for the red channel of the bitmap, the second for the green channel, and the last one for the blue channel.

        public static long[] GetHistogram(Bitmap b)
        {
            long[] myHistogram = new long[256];
            BitmapData bmData = null;
            try
            {
                //Lock it fixed with 32bpp
                bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
                int scanline = bmData.Stride;
                System.IntPtr Scan0 = bmData.Scan0;
                unsafe
                {
                    byte* p = (byte*)(void*)Scan0;
                    int nWidth = b.Width;
                    int nHeight = b.Height;
                    for (int y = 0; y < nHeight; y++)
                    {
                        for (int x = 0; x < nWidth; x++)
                        {
                            long Temp = 0;
                            Temp += p[0]; // p[0] - blue, p[1] - green, p[2] - red
                            Temp += p[1];
                            Temp += p[2];
                            Temp = (int)Temp / 3;
                            myHistogram[Temp]++;
                            // we do not need to use any offset; we can always increment
                            // by the pixel size when locking in 32bppArgb mode
                            p += 4;
                        }
                    }
                }
                b.UnlockBits(bmData);
            }
            catch
            {
                try { b.UnlockBits(bmData); } catch { }
            }
            return myHistogram;
        }

    How might I do it?
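
    The change is language-agnostic: instead of averaging p[2], p[1] and p[0] into one bin, keep three counter arrays and index each with its own channel value. A Python/Pillow sketch of the same idea (not C#, and the file path is hypothetical):

        from PIL import Image

        def rgb_histograms(path):
            red, green, blue = [0]*256, [0]*256, [0]*256
            for r, g, b in Image.open(path).convert("RGB").getdata():
                red[r] += 1       # one bin per channel value, instead of
                green[g] += 1     # one bin for the gray average
                blue[b] += 1
            return red, green, blue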

  • inconsistent setTexture behavior in cocos2d on iPhone after using CCAnimate/CCAnimation

    - by chillid
    Hi, I have a character that goes through multiple states. The state changes are reflected by means of a sprite image (texture) change, and are triggered by the user tapping on the sprite. This works consistently and quite well. I then added an animation during State0. Now, when the user taps, setTexture gets executed to change the texture to reflect State1; however, some of the time (unpredictably) it does not change the texture. The code flows as below:

        // 1. Create the animation sequence
        CGRect frame1Rect = CGRectMake(0,32,32,32);
        CGRect frame2Rect = CGRectMake(32,32,32,32);
        CCTexture2D* texWithAnimation = [[CCTextureCache sharedTextureCache]
            addImage:@"Frames0_1_thinkNthickoutline32x32.png"];
        id anim = [[[CCAnimation alloc] initWithName:@"Sports" delay:1/25.0] autorelease];
        [anim addFrame:[CCSpriteFrame frameWithTexture:texWithAnimation rect:frame1Rect offset:ccp(0,0)]];
        [anim addFrame:[CCSpriteFrame frameWithTexture:texWithAnimation rect:frame2Rect offset:ccp(0,0)]];

        // Make the animation sequence repeat forever
        id myAction = [CCAnimate actionWithAnimation: anim restoreOriginalFrame:NO];

        // 2. Run the animation:
        sports = [[CCRepeatForever alloc] init];
        [sports initWithAction:myAction];
        [self.sprite runAction:sports];

        // 3. Stop the action on state change and change the texture:
        NSLog(@"Stopping action");
        [sprite stopAction:sports];
        NSLog(@"Changing texture for kCJSports");
        [self setTexture: [[CCTextureCache sharedTextureCache] addImage:@"SportsOpen.png"]];
        [self setTextureRect:CGRectMake(0,0,32,64)];
        NSLog(@"Changed texture for kCJSports");

    Note that all the NSLog lines get logged, and the texture RECT changes, but the image/texture changes only some of the time - it fails around 10-30% of the time. A locking/threading/timing issue somewhere? My app (game) is single-threaded and I only use addImage, not the Async version. Any help much appreciated.

  • Test the sequentiality of a column with a single SQL query

    - by LauriE
    Hey, I have a table that contains sets of sequential datasets, like this:

        ID   set_ID    some_column    n
        1    'set-1'   'aaaaaaaaaa'   1
        2    'set-1'   'bbbbbbbbbb'   2
        3    'set-1'   'cccccccccc'   3
        4    'set-2'   'dddddddddd'   1
        5    'set-2'   'eeeeeeeeee'   2
        6    'set-3'   'ffffffffff'   2
        7    'set-3'   'gggggggggg'   1

    At the end of a transaction that makes several types of modifications to those rows, I would like to ensure that within a single set, all the values of "n" are still sequential (and roll back otherwise). They do not need to be in the same order as the PK, just sequential, like 1-2-3 or 3-1-2, but not like 1-3-4. Given that there might be thousands of rows within a single set, I would prefer to do the check in the db, to avoid the overhead of fetching the data just for verification after making some small changes.

    There is also the issue of concurrency. The way locking in InnoDB (repeatable read) works (as I understand it) is that if I have an index on "n", then InnoDB also locks the "gaps" between values. If I combine set_ID and n into a single index, would that eliminate the problem of phantom rows appearing? Looks to me like a common problem. Any brilliant ideas? Thanks!

    Note: using MySQL + InnoDB
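
    One way to phrase the check as a single aggregate query (a runnable SQLite sketch; the HAVING logic carries over to MySQL unchanged): a set's n values are a permutation of 1..COUNT(*) exactly when MIN(n) = 1, MAX(n) = COUNT(*) and all n are distinct:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE t (ID INTEGER, set_ID TEXT, some_column TEXT, n INTEGER)")
        conn.executemany("INSERT INTO t VALUES (?,?,?,?)",
                         [(1,'set-1','a',1), (2,'set-1','b',2), (3,'set-1','c',3),
                          (4,'set-2','d',1), (5,'set-2','e',2),
                          (6,'set-3','f',2), (7,'set-3','g',1)])

        broken = conn.execute("""
            SELECT set_ID FROM t
            GROUP BY set_ID
            HAVING MIN(n) <> 1
                OR MAX(n) <> COUNT(*)
                OR COUNT(DISTINCT n) <> COUNT(*)
        """).fetchall()
        print(broken)   # [] -- every set above is still sequential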

  • How can I make sure only a single record is inserted when multiple Apache threads are trying to access the database?

    - by Ed Gl
    I have a web service (an XML-RPC service, to be exact) that handles, among other things, writing data into the database. Here's the scenario: I often receive requests to either update or insert a record. What I would do is this:

        1. If the record already exists, append to the record.
        2. If not, create a new record.

    The issue is that at certain times I get a 'burst' of requests, which spawns several Apache threads to handle them. These bursts come within less than milliseconds of each other. I now have several threads performing steps 1 and 2. Often two threads will both 'pass' step 1 and actually create two duplicate records (except for the primary key). I'd like to use some locking mechanism to prevent other threads from accessing the table while the first thread finishes its work. I'm just afraid of using it because if something happens I don't want to leave the table locked. Is there a solid way of handling this? I'm open to using locks if I can do it properly. Thanks,
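
    One lock-free shape for this (a hedged sketch - MySQL is assumed, and the table, key and payload names are hypothetical): put a UNIQUE key on the record's natural identifier and let the database arbitrate the race; whichever thread loses the insert falls into the UPDATE branch atomically, so a burst collapses into one row and nothing is ever left locked:

        import MySQLdb

        conn = MySQLdb.connect(user="app", passwd="...", db="app")
        cur = conn.cursor()
        key, data = "record-42", "appended chunk"   # hypothetical values
        cur.execute("""
            INSERT INTO records (record_key, payload)
            VALUES (%s, %s)
            ON DUPLICATE KEY UPDATE payload = CONCAT(payload, VALUES(payload))
        """, (key, data))
        conn.commit()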

  • .NET Application with SQL Server CE Database

    - by blu
    I just started using SQL Server CE 3.5 in my WinForms application (C# in VS 2008 SP1). I've noticed a couple of interesting things I'd like some input on:

    1. Copying of the sdf file to bin

    My sdf file is located inside an Infrastructure project that houses my repository implementations. When the application is first debugged, the sdf is copied to bin\debug. This is where all future reads/writes operate. At some point when this is deployed, the file will go into a data folder using ClickOnce, but during development where should I be putting this sdf? Is having it in bin typical, or are there any other recommendations?

    2. Updating the sdf

    It appears that writing to the sdf file does not immediately update the database. I am using Linq-to-SQL and am calling SubmitChanges, but on read the values are not returned. However, if I close the application and re-open it, the added value is there. Is there an additional flush step I need to take? What is causing this - file locking, buffering, something else?

    Update

    3. Unit tests

    I have an MSTest project, and the sdf file is not being copied to the correct output directory. I have the settings:

        Build Action: Content
        Copy to Output Directory: Copy Always

    The message is:

        System.Data.SqlServerCe.SqlCeException: The database file cannot be found. Check the path to the database.

    I appreciate any guidance on these questions, thanks. If there is a tutorial other than what is on MSDN that you know about, that would be great too. Working with CE is proving to be a difficult task and I welcome any help I can find.

  • How should I lock the table in this VB6 / Access application?

    - by Brian Hooper
    I'm working on a VB6 application using an Access database. The application writes messages to a log table from time to time. Several instances of the application may be running simultaneously, and to distinguish them they each have their own run number. The run number is deduced from the log table thus...

        Set record_set = New ADODB.Recordset
        query_string = "SELECT MAX(RUN_NUMBER) + 1 AS NEW_RUN_NUMBER FROM ERROR_LOG"
        record_set.CursorLocation = adUseClient
        record_set.Open query_string, database_connection, adOpenStatic, , adCmdText
        record_set.MoveLast
        If IsNull(record_set.Fields("NEW_RUN_NUMBER")) Then
            run_number = 0
        Else
            run_number = record_set.Fields("NEW_RUN_NUMBER")
        End If
        command_string = "INSERT INTO ERROR_LOG (RUN_NUMBER, SEVERITY, MESSAGE) " & _
                         " VALUES (" & Str$(run_number) & ", " & _
                         " " & Str$(SEVERITY_INFORMATION) & ", " & _
                         " 'Run Started'); "
        database_connection.Execute command_string

    Obviously there is a small gap between the calculation of the run number and the appearance of the new row in the database, and to prevent another instance getting access between the two operations I'd like to lock the table; something along the lines of:

        SET TRANSACTION READ WRITE RESERVING ERROR_LOG FOR PROTECTED WRITE;

    How should I go about doing this? Would locking the recordset do any good? (The row in the record set doesn't match any particular row in the database.)

  • NULL pointer dereference in swiotlb_unmap_sg_attrs() on disk IO

    - by Inductiveload
    I'm getting an error I really don't understand when reading or writing files using a PCIe block device driver. I seem to be hitting an issue in swiotlb_unmap_sg_attrs(), which appears to be doing a NULL dereference of the sg pointer, but I don't know where this is coming from, as the only scatterlist I use myself is allocated as part of the device info structure and persists as long as the driver does. There is a stack trace to go with the problem. It tends to vary a bit in exact details, but it always crashes in swiotlb_unmap_sg_attrs().

    I think it's likely I have a locking issue, as I am not sure how to handle the locks around the IO functions. The lock is already held when the request function is called; I release it before the IO functions themselves are called, as they need an (MSI) IRQ to complete. The IRQ handler updates a "status" value, which the IO function is waiting for. When the IO function returns, I take the lock back up and return to request queue handling. The crash happens in blk_fetch_request() during the following:

        if (!__blk_end_request(req, res, bytes)) {
            printk(KERN_ERR "%s next request\n", DRIVER_NAME);
            req = blk_fetch_request(q);
        } else {
            printk(KERN_ERR "%s same request\n", DRIVER_NAME);
        }

    where bytes is updated by the request handler to be the total length of IO (the summed length of each scatter-gather segment).

  • Asynchronous SQL Operations

    - by Paul Hatcherian
    I've got a problem I'm not sure how best to solve. I have an application which updates a database in response to ad hoc requests. One request in particular is quite common. The request is an update that by itself is quite simple, but has some complex preconditions. For this request, the business layer first requests a set of data from the data layer. The business logic layer evaluates the data from the database and the parameters from the request; from this, the action to be performed is determined and the request's response message(s) are created. The business layer then executes the actual update command that is the purpose of the request. This last step is the problem: this command is dependent on the state of the database, which might have changed since the business logic ran. Locking down the data read in this operation across several round-trips to the database doesn't seem like a good idea either. Is there a 'best-practice' way to accomplish something like this? Thanks!
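
    One common pattern for exactly this shape (a sketch, not the poster's code) is optimistic concurrency: read a version column along with the precondition data, make the final UPDATE conditional on that version, and treat "zero rows affected" as "the state moved underneath me - re-run the business checks" instead of holding locks across round-trips:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, state TEXT, version INTEGER)")
        conn.execute("INSERT INTO account VALUES (1, 'open', 0)")

        def try_update(conn, account_id, new_state, expected_version):
            cur = conn.execute(
                "UPDATE account SET state = ?, version = version + 1 "
                "WHERE id = ? AND version = ?",
                (new_state, account_id, expected_version))
            return cur.rowcount == 1    # False -> someone else won; redo the preconditions

        print(try_update(conn, 1, 'closed', 0))   # True
        print(try_update(conn, 1, 'void', 0))     # False: the version is now 1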

  • How to avoid concurrent execution of a time-consuming task without blocking?

    - by Diego V
    I want to efficiently avoid concurrent execution of a time-consuming task in a heavily multi-threaded environment, without making threads wait for a lock when another thread is already running the task. Instead, in that scenario, I want them to gracefully fail (i.e. skip their attempt to execute the task) as fast as possible. To illustrate the idea, consider this unsafe (it has a race condition!) code:

        private static boolean running = false;

        public void launchExpensiveTask() {
            if (running) return; // Do nothing
            running = true;
            try {
                runExpensiveTask();
            } finally {
                running = false;
            }
        }

    I thought about using a variation of double-checked locking (considering that running is a primitive 32-bit field, hence atomic, it could work fine even for Java below 5 without the need for volatile). It could look like this:

        private static boolean running = false;

        public void launchExpensiveTask() {
            if (running) return; // Do nothing
            synchronized (ThisClass.class) {
                if (running) return;
                running = true;
                try {
                    runExpensiveTask();
                } finally {
                    running = false;
                }
            }
        }

    Maybe I should also use a local copy of the field as well (I'm not sure now, please tell me). But then I realized that either way I will end up with an inner synchronization block that could still hold a thread at the monitor entrance, with the right timing, until the original executor leaves the critical section (I know the odds are usually minimal, but in this case we are thinking of several threads competing for this long-running resource). So, can you think of a better approach?
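
    The fail-fast idea maps directly onto a non-blocking lock acquisition; a Python sketch of the same shape (run_expensive_task is hypothetical - in the poster's Java, AtomicBoolean.compareAndSet or ReentrantLock.tryLock plays the same role): late arrivals skip immediately instead of queueing at the monitor:

        import threading

        running = threading.Lock()

        def launch_expensive_task():
            if not running.acquire(False):   # non-blocking: don't wait, just fail fast
                return                       # another thread is already running the task
            try:
                run_expensive_task()         # hypothetical long-running task
            finally:
                running.release()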

  • threading in Python taking up too much CPU

    - by KevinShaffer
    I wrote a chat program and have a GUI running using Tkinter. To check when new messages have arrived, I create a new thread, so Tkinter keeps doing its thing without locking up while the new thread goes and grabs what I need and updates the Tkinter window. This, however, becomes a huge CPU hog, and my guess is that it has to do somehow with the fact that the thread is started and never really released when the function is done. Here's the relevant code (it's ugly and not optimized at the moment, but it gets the job done, and by itself it does not use too much processing power - when I run it un-threaded it doesn't take much CPU, but it locks up Tkinter). Note: this is inside a class, hence the extra indentation.

        def interim(self):
            threading.Thread(target=self.readLog).start()
            self.after(5000, self.interim)

        def readLog(self):
            print 'reading'
            try:
                length = len(str(self.readNumber))
                f = open('chatlog' + str(myport), 'r')
                temp = f.readline().replace('\n', '')
                while (temp[:length] != str(self.readNumber)) or temp[0] == '<':
                    temp = f.readline().replace('\n', '')
                while temp:
                    if temp[0] != '<':
                        self.updateChat(temp[length:])
                        self.readNumber += 1
                    else:
                        self.updateChat(temp)
                    temp = f.readline().replace('\n', '')
                f.close()

    Is there a way to better manage the threading so I don't consume 100% of the CPU very quickly?
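
    One common way to cut the CPU use (a sketch; the file name and the updateChat/after methods come from the poster's code): have a single long-lived daemon thread block on the file and hand finished lines to Tkinter through a Queue, which the GUI thread drains with after() - nothing spins, and only the GUI thread ever touches Tkinter:

        import queue       # named Queue on the poster's Python 2
        import threading
        import time

        incoming = queue.Queue()

        def read_log_forever(path):
            # Worker thread: touches only the file, never the GUI.
            with open(path) as f:
                while True:
                    line = f.readline()
                    if line:
                        incoming.put(line.rstrip("\n"))
                    else:
                        time.sleep(0.5)    # at EOF, back off instead of spinning

        # inside the Tkinter class, instead of spawning a new thread every 5 seconds:
        def poll_queue(self):
            try:
                while True:
                    self.updateChat(incoming.get_nowait())
            except queue.Empty:
                pass
            self.after(250, self.poll_queue)   # cheap poll; no busy loop

        # started once with:
        #   threading.Thread(target=read_log_forever, args=("chatlog",)).start()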

  • wxpython - Running threads sequentially without blocking GUI

    - by ryantmer
    I've got a GUI script with all my wxPython code in it, and a separate testSequences module that has a bunch of tasks that I run based on input from the GUI. The tasks take a long time to complete (from 20 seconds to 3 minutes), so I want to thread them, otherwise the GUI locks up while they're running. I also need them to run one after another, since they all use the same hardware. (My rationale behind threading is simply to prevent the GUI from locking up.) I'd like to have a "Running" message (with a varying number of periods after it, i.e. "Running", "Running.", "Running..", etc.) so the user knows that progress is occurring, even though it isn't visible. I'd like this script to run the test sequences in separate threads, but sequentially, so that the second thread won't be created and run until the first is complete. Since this is kind of the opposite of the purpose of threads, I can't really find any information on how to do this... Any help would be greatly appreciated. Thanks in advance!

    gui.py:

        import testSequences
        from threading import Thread

        # wxPython code for setting everything up here...

        for j in range(5):
            testThread = Thread(target=testSequences.test1)
            testThread.start()
            while testThread.isAlive():
                # wait until the previous thread is complete
                time.sleep(0.5)
                i = (i+1) % 4
                self.status.SetStatusText("Running" + '.'*i)

    testSequences.py:

        import time

        def test1():
            for i in range(10):
                print i
                time.sleep(1)

    (Obviously this isn't the actual test code, but the idea is the same.)
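
    A sketch of the usual shape for this (widget and module names follow the poster's code; wx.CallAfter is the thread-safe way to touch the GUI): one supervisor thread runs the sequences strictly in order, animating the status text while each one works, so the GUI never blocks and the hardware is never shared:

        import threading
        import wx
        import testSequences

        def run_all(status_bar):
            i = 0
            for j in range(5):
                done = threading.Event()
                def task():
                    testSequences.test1()
                    done.set()
                threading.Thread(target=task).start()
                while not done.wait(0.5):      # False on timeout -> still running
                    i = (i + 1) % 4
                    wx.CallAfter(status_bar.SetStatusText, "Running" + "." * i)
            wx.CallAfter(status_bar.SetStatusText, "Done")

        # kicked off once from the GUI, e.g. in a button handler:
        #   threading.Thread(target=run_all, args=(self.status,)).start()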

  • Core Data Errors vs Exceptions Part 3

    - by John Gallagher
    My question is similar to this one.

    Background

    I'm creating a large number of objects in a Core Data store using NSOperations to speed things up. I've followed all the Core Data multithreading rules - I've got a single persistent store coordinator and a managed object context per thread that, on save, merges back into the main managed object context.

    The Problem

    When the number of threads running at once is more than 1, I get this exception logged on save of my Core Data store:

        NSExceptionHandler has recorded the following exception:
        NSInternalInconsistencyException -- optimistic locking failure

    What I've Tried

    My code that creates new entities is quite complex - it makes entities that have relationships with other entities that could be being created in a separate thread. If I replace my object creation routine with some very simple code just making non-related entries, everything works perfectly. Initially, as well as the exceptions, I was getting a save error saying Core Data couldn't save due to the merge failing. I read the docs and realised I needed a merge policy on the managed object context I was saving to. I set this up and, as this question states, the save error goes away, but the exception remains.

    My Question

    Do I need to worry about these exceptions? If I do need to get rid of them, any ideas on how?

  • Python threading question (Working with a method that blocks forever)

    - by Nix
    I am trying to wrap a thread around some receiving logic in Python. Basically we have an app that will have a thread in the background polling for messages. The problem I ran into is that the piece that actually pulls the messages waits forever for a message, making it impossible to terminate. I ended up wrapping the pull in another thread, but I wanted to make sure there wasn't a better way to do it.

    Original code:

        class Manager:
            def __init__(self):
                receiver = MessageReceiver()
                receiver.start()
                # do other stuff...

        class MessageReceiver(Thread):
            receiver = Receiver()
            def __init__(self):
                Thread.__init__(self)
            def run(self):
                # stopped is a flag that I use to stop the thread...
                while (not stopped):  # can never stop, because the pull below blocks
                    message = receiver.pull()
                    print "Message" + message

    What I refactored it to:

        class Manager:
            def __init__(self):
                receiver = MessageReceiver()
                receiver.start()

        class MessageReceiver(Thread):
            receiver = Receiver()
            def __init__(self):
                Thread.__init__(self)
            def run(self):
                pullThread = PullThread(self.receiver)
                pullThread.start()
                # stopped is a flag that I use to stop the thread...
                while (not stopped and pullThread.last_message == None):
                    pass
                message = pullThread.last_message
                print "Message" + message

        class PullThread(Thread):
            last_message = None
            def __init__(self, receiver):
                Thread.__init__(self, target=self.get_message, args=(receiver,))
            def get_message(self, receiver):
                self.last_message = None
                self.last_message = receiver.pull()
                return self.last_message

    I know the obvious locking issues exist, but is this an appropriate way to control a receive thread that waits forever for a message? One thing I did notice is that this thing eats 100% CPU while waiting for a message... If you need to see the stopping logic, please let me know and I will post it.
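
    An alternative sketch that avoids both the busy-wait and the shutdown problem (still assuming receiver.pull() itself cannot be interrupted): let a daemon thread block in pull() and hand messages over a Queue, then wait on the queue with a timeout so the stop flag gets re-checked without spinning:

        import queue       # named Queue on Python 2
        import threading

        messages = queue.Queue()

        def pull_forever(receiver):
            while True:
                messages.put(receiver.pull())    # blocks here, but only this thread

        def consume(receiver, stopped):          # stopped: a threading.Event
            t = threading.Thread(target=pull_forever, args=(receiver,))
            t.daemon = True                      # a stuck pull() no longer prevents exit
            t.start()
            while not stopped.is_set():
                try:
                    message = messages.get(timeout=0.5)   # sleeps instead of spinning
                except queue.Empty:
                    continue
                print("Message" + message)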

  • How do I daemonize an arbitrary script in unix?

    - by dreeves
    I'd like a daemonizer that can turn an arbitrary, generic script or command into a daemon. There are two common cases I'd like to deal with:

        1. I have a script that should run forever. If it ever dies (or on reboot), restart it. Don't let there ever be two copies running at once (detect if a copy is already running and don't launch it in that case).
        2. I have a simple script or command-line command that I'd like to keep executing repeatedly forever (with a short pause between runs). Again, don't allow two copies of the script to ever be running at once.

    Of course it's trivial to write a "while(true)" loop around the script in case 2 and then apply a solution for case 1, but a more general solution will just solve case 2 directly, since that applies to the script in case 1 as well (you may just want a shorter or no pause if the script is not intended to ever die; and of course if the script really never dies, the pause doesn't actually matter). Note that the solution should not involve, say, adding file-locking code or PID recording to the existing scripts.

    More specifically, I'd like a program "daemonize" that I can run like

        % daemonize myscript arg1 arg2

    or, for example,

        % daemonize 'echo `date` >> /tmp/times.txt'

    which would keep a growing list of dates appended to times.txt. (Note that if the argument(s) to daemonize is a script that runs forever, as in case 1 above, then daemonize will still do the right thing, restarting it when necessary.) I could then put a command like the above in my .login and/or cron it hourly or minutely (depending on how worried I was about it dying unexpectedly).

    NB: The daemonize script will need to remember the command string it is daemonizing, so that if the same command string is daemonized again it does not launch a second copy. Also, the solution should ideally work on both OS X and Linux, but solutions for one or the other are welcome. (If I'm thinking of this all wrong or there are quick-and-dirty partial solutions, I'd love to hear that too.)
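
    Not a ready-made tool, but the core of such a "daemonize" fits in a few lines of Python (a sketch: single-instance detection via a non-blocking flock on a lock file derived from the command string, and a restart-on-death loop around the child; the locking lives in the wrapper, not in the supervised script, and flock works on both Linux and OS X):

        import fcntl
        import hashlib
        import subprocess
        import sys
        import time

        def daemonize(argv, pause=1.0):
            name = hashlib.md5(" ".join(argv).encode()).hexdigest()
            lock = open("/tmp/daemonize-%s.lock" % name, "w")
            try:
                fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
            except (IOError, OSError):
                return                     # another copy already supervises this command
            while True:                    # the lock is held as long as this process lives
                subprocess.call(argv)      # run the command; returns when it dies
                time.sleep(pause)          # then restart it, forever

        if __name__ == "__main__":
            daemonize(sys.argv[1:])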

  • Looking for PyQt4 embeddable terminal widget

    - by redShadow
    I wrote an application that, among other things, launches some "backend" processes to do some stuff. These subprocesses are very likely to fail or show unexpected behavior, since they have to operate in quite harsh conditions, so I prefer to give the operator full control over them. NOTE: I am running these processes using a subprocess-module-based class instead of QProcess, to have some more control functionality over the running process. At the moment I'm using a QPlainTextEdit widget, to which I append the standard output/error from the subprocess, plus some buttons to quickly send some common signals (INT, STOP, CONT, KILL, ...), but:

        - In some cases it would be useful to send some input too. Although it could be done with a text input box, I would prefer something more "professional".
        - There is no direct way to interpret special control characters, such as color codes, cursor movements, etc.
        - I had to implement auto-scroll management of the console myself, and it is not guaranteed to work nicely 100% of the time (sometimes the scroll locking doesn't work as expected, etc.)

    So: does anyone know of something I could use to accomplish these needs? I found qtermwidget, but it seems more oriented toward hosting a shell process (and the Python bindings seem to let you run /bin/bash only) than communicating with an already existing process's I/O.

  • How do I make this Java code operate properly? [Multi-threaded, race condition]

    - by Fixee
    I got this code from a student, and it does not work properly because of a race condition involving x++ and x--. He added synchronized to the run() method trying to get rid of this bug, but obviously that only excludes threads from entering run() on the same object (which was never a problem in the first place) and doesn't prevent independent objects from updating the same static variable x at the same time.

        public class DataRace implements Runnable {
            static volatile int x;

            public synchronized void run() {
                for (int i = 0; i < 10000; i++) {
                    x++;
                    x--;
                }
            }

            public static void main(String[] args) throws Exception {
                Thread[] threads = new Thread[100];
                for (int i = 0; i < threads.length; i++)
                    threads[i] = new Thread(new DataRace());
                for (int i = 0; i < threads.length; i++)
                    threads[i].start();
                for (int i = 0; i < threads.length; i++)
                    threads[i].join();
                System.out.println(x); // x not always 0!
            }
        }

    Since we cannot synchronize on x (because it is primitive), the best solution I can think of is to create a new static object like static String lock = ""; and enclose the x++ and x-- within a synchronized block, locking on lock. But this seems really awkward. Is there a better way?

  • Java - multithreaded access to a local value store which is periodically cleared

    - by Telax
    I'm hoping for some advice or suggestions on how best to handle multi-threaded access to a value store.

    My local value store is designed to hold onto objects which are currently in use; if an object is not in use then it is removed from the store. A value is pumped into my store via thread1, its entry into the store is announced to listeners, and the value is stored. Values coming in on thread1 will be either totally new values or updates to existing values. A timer is used to periodically remove from the store any value which is not currently in use, so all that remains of such a value is its ID, held locally by an intermediary.

    Now, an active element on thread2 may wake up and try to access a set of values by passing a set of value IDs which it knows about. Some values will already be stored (great) and some may not (sad face). Those values which are not already stored will be retrieved from an external source.

    My main issue is that items which have not already been stored and are currently being queried for may arrive on thread1 before the query is complete. I'd like to try to avoid locking access to the store while a query is being made, as it may take some time.

  • How to delete the plugin assembly after AppDomain.Unload(domain)

    - by Ase
    Hello, I have a weird problem. I would like to delete an assembly (a plugin.dll on the hard disk) which is already loaded, but the assembly is locked by the operating system (Vista), even though I have unloaded it. E.g.:

        AppDomainSetup setup = new AppDomainSetup();
        setup.ShadowCopyFiles = "true";
        AppDomain appDomain = AppDomain.CreateDomain(assemblyName + "_AppDomain",
            AppDomain.CurrentDomain.Evidence, setup);
        IPlugin plugin = (IPlugin)appDomain.CreateInstanceFromAndUnwrap(assemblyName, "Plugin.MyPlugins");

    I also need the assembly info, because I don't know which classes in the plugin assembly implement the IPlugin interface. It should be possible to have more than one plugin in one plugin assembly.

        Assembly assembly = appDomain.Load(assemblyName);
        if (assembly != null)
        {
            Type[] assemblyTypes = assembly.GetTypes();
            foreach (Type assemblyTyp in assemblyTypes)
            {
                if (typeof(IPlugin).IsAssignableFrom(assemblyTyp))
                {
                    IPlugin plugin = (IPlugin)Activator.CreateInstance(assemblyTyp);
                    plugin.AssemblyName = assemblyNameWithEx;
                    plugin.Host = this;
                }
            }
        }
        AppDomain.Unload(appDomain);

    How is it possible to get the assembly info from the appDomain without locking the assembly? Best regards

  • Windows Server 2012 Branchcache vs. DFS-R

    - by TheCleaner
    Warning, subjective question ahead! But hopefully a good one that won't get closed.

    SCENARIO: I have a branch office that currently has no on-premise server. They access everything, including a DC, across a 12Mbps WAN link (MPLS). The link isn't saturated, averaging around 20% utilization, and the circuit is very stable, with a high SLA and excellent uptime. However, large file transfers (mainly reads, not writes) from the file server across the WAN can be slow. We don't currently utilize DFS.

    RESEARCH DONE: I'm aware of WAN acceleration, using either dedicated hardware (Riverbed) or a dedicated software VM (Silver Peak), for example. But the pricing is outside our current budget, and from our perspective the need isn't quite there yet (since the issue is mainly in a "pull" scenario, not necessarily push/pull). I'm mainly looking at deploying a Windows server at this branch office and utilizing either DFS-R or BranchCache. Looking at a table comparison, and assuming we are looking at a "hosted BranchCache server" and not simply distributed mode, it would appear there are benefits to both, even if both are "hosted" on a server.

    QUESTIONS I ACTUALLY HAVE:

        1. In what scenarios does each of these techs shine, and where do you choose one over the other?
        2. Looking at a hosted BranchCache server, can you set "pre-fetching" of certain folders/files on the central file server so that they are immediately accessible locally at the branch? Do you have to do this on a schedule (if it is possible at all)?
        3. Looking at DFS-R, my concern (apparently solved with 3rd-party apps) is file locking and making sure a file gets updated properly during a write operation (i.e., if both copies are accessed and both are written to, which file takes precedence and what happens to the changes?). It would seem ideal to lock any alternate replicas of the data, but is it really that big an issue?
        4. Does BranchCache lock the central file for editing?
        5. Does BranchCache transmit back to the central file only the deltas of what has changed?
        6. Would either technology be ill-advised if the branch office server were also going to be utilized as a domain controller?
