Search Results

Search found 3641 results on 146 pages for 'threads'.

Page 117/146 | < Previous Page | 113 114 115 116 117 118 119 120 121 122 123 124  | Next Page >

  • What does this svn2git error mean?

    - by Hisham
    I am trying to import my repository from svn to git using svn2git, but it seems like it's failing when it hits a branch. What's the problem?

        Found possible branch point: https://s.aaa.com/repo/trunk/project => https://s.aaa.com/repo/branches/project-beta1.0, 128
        Use of uninitialized value in substitution (s///) at /opt/local/libexec/git-core/git-svn line 1728.
        Use of uninitialized value in concatenation (.) or string at /opt/local/libexec/git-core/git-svn line 1728.
        refs/remotes/trunk: 'https://s.aaa.com/repo' not found in ''
        Running command: git branch -l --no-color
        * master
        Running command: git branch -r --no-color
        trunk
        Running command: git checkout trunk
        Note: checking out 'trunk'.
        You are in 'detached HEAD' state. You can look around, make experimental changes and commit them, and you can discard any commits you make in this state without impacting any branches by performing another checkout.
        If you want to create a new branch to retain commits you create, you may do so (now or later) by using -b with the checkout command again. Example: git checkout -b new_branch_name
        HEAD is now at f4e6268... Changing svn repository in cap files
        Running command: git branch -D master
        Deleted branch master (was f4e6268).
        Running command: git checkout -f -b master
        Switched to a new branch 'master'
        Running command: git gc
        Counting objects: 450, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (368/368), done.
        Writing objects: 100% (450/450), done.
        Total 450 (delta 63), reused 450 (delta 63)

    Read the article

  • Nested multithread operations tracing

    - by Sinix
    I have code like this:

        void ExecuteTraced(Action a, string message)
        {
            TraceOpStart(message);
            a();
            TraceOpEnd(message);
        }

    The callback (a) could call ExecuteTraced again and, in some cases, asynchronously (via ThreadPool, BeginInvoke, PLINQ etc.), so I have no ability to explicitly mark the operation scope. I want all operations traced as nested (even if they run asynchronously). So, I need a way to get the last traced operation inside the logical call context (there may be a lot of concurrent threads, so it's impossible to use a lastTraced static field). There are CallContext.LogicalGetData and CallContext.LogicalSetData, but unfortunately LogicalCallContext propagates changes back to the parent context when EndInvoke() is called. Even worse, this may occur at any moment if EndInvoke() was called asynchronously. http://stackoverflow.com/questions/883486/endinvoke-changes-current-callcontext-why Also, there is Trace.CorrelationManager, but it is based on CallContext and has all the same problems. There is a workaround: use the CallContext.HostContext property, which does not propagate back when the async operation ends. Also, it doesn't clone, so the value should be immutable - not a problem. However, it's used by HttpContext, so the workaround is not usable in ASP.NET apps. The only way I see is to wrap HostContext (if it's not mine) or the entire LogicalCallContext in a dynamic and dispatch all calls besides the last traced operation. Help, please!

    Read the article

  • git branch naming best practices

    - by skiphoppy
    I've been using a local git repository interacting with my group's CVS repository for several months, now. I've made an almost neurotic number of branches, most of which have thankfully been merged back into my trunk. But naming is starting to become an issue. If I have a task easily named with a simple label, but I accomplish it in three stages which each include their own branch and merge situation, then I can repeat the branch name each time, but that makes the history a little confusing. If I get more specific in the names, with a separate description for each stage, then the branch names start to get long and unwieldy. I did learn from looking through old threads here that I could start naming branches with a / in the name, e.g., topic/task, or something like that. I may start doing that and seeing if it helps keep things better organized. What are some best practices for naming git branches? Edit: Nobody has actually suggested any naming conventions. I do delete branches when I'm done with them. I just happen to have several around due to management constantly adjusting my priorities. :) As an example of why I might need more than one branch on a task, suppose I need to commit the first discrete milestone in the task to the group's CVS repository. At that point, due to my imperfect interaction with CVS, I would perform that commit and then kill that branch. (I've seen too much weirdness interacting with CVS if I try to continue to use the same branch at that point.)

    Read the article

  • Anybody seen this behavior with Sql Server Reporting Services, a 64bit OS and an Oracle datasource?

    - by dkackman
    I'm working on a SQL Server Reporting Services solution that queries across both a SQL Server data source and an Oracle 10g data source. My dev box is Windows 7 64-bit with SQL Server 2008 R2, and I'm hosting IIS7 and SSRS on that system for development, using VS.NET for designing the reports. I have been getting errors when running the report where SSRS complains about loading the 32-bit Oracle client in a 64-bit process. There are a number of threads out there about how to solve that. The thing is, they all come down to making sure you have the 64-bit Oracle client, which I do. The weird chain of events I have goes like this:
    1. Create the initial Oracle data source and wire up the report (it works)
    2. Edit the Oracle data source connection (it stops working with a BadImageFormatException 32-bit/64-bit error message)
    3. Uninstall and reinstall the Oracle client (it works)
    4. Edit the Oracle connection again (it stops working with a BadImageFormatException 32-bit/64-bit error message)
    So short of reinstalling the client every time I change the connection string, I am at a complete loss. Has anybody seen this sort of behavior? And if so, what the heck am I doing wrong?

    Read the article

  • Java 1.4 singleton containing a mutable field

    - by Philippe
    Hi, I'm working on a legacy Java 1.4 project, and I have a factory that instantiates a CSV file parser as a singleton. In my CSV file parser, however, I have a HashSet that will store objects created from each line of my CSV file. All of that will be used by a web application, and users will be uploading CSV files, possibly concurrently. Now my question is: what is the best way to prevent my list of objects from being modified by 2 users? So far, I'm doing the following:

        final class MyParser {
            private File csvFile = null;
            private static Set myObjects = Collections.synchronizedSet(new HashSet());

            public synchronized void setFile(File file) {
                this.csvFile = file;
            }

            public void parse() {
                FileReader fr = null;
                try {
                    fr = new FileReader(csvFile);
                    synchronized(myObjects) {
                        myObjects.clear();
                        while(...) { // for each line of my CSV, create a "MyObject"
                            myObjects.add(new MyObject(...));
                        }
                    }
                } catch (Exception e) {
                    //...
                }
            }
        }

    Should I leave the lock only on the myObjects Set, or should I declare the whole parse() method as synchronized? Also, how should I synchronize both the setting of the csvFile and the parsing? I feel like my current design is broken because threads could modify the CSV file several times while a possibly long parse process is running. I hope I'm being clear enough, because I myself am a bit confused about these multi-synchronization issues. Thanks ;-)
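    A minimal sketch of one alternative design (not the original code; the use of plain String lines instead of MyObject is an assumption to keep it self-contained, and it is kept to Java 1.4 syntax with no generics): have parse() build and return its own local Set instead of mutating a shared static field, so concurrent uploads can never interfere with each other and no lock is needed around the parse loop.

        import java.io.BufferedReader;
        import java.io.File;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashSet;
        import java.util.Set;

        final class MyParser {
            // No shared mutable state: every call builds and returns its own Set,
            // so two concurrent uploads cannot clobber each other's results.
            Set parse(File csvFile) throws IOException {
                Set result = new HashSet();
                BufferedReader reader = new BufferedReader(new FileReader(csvFile));
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        result.add(line); // the real parser would add new MyObject(line) here
                    }
                } finally {
                    reader.close();
                }
                return result;
            }
        }

    The caller then decides what to do with the returned set (for example, store it per upload or per session), which also removes the need to coordinate the setFile()/parse() pair across threads.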

    Read the article

  • iPhone - Failed to save the videos metadata to the filesystem.

    - by cameron
    My application uses the UIImagePicker to allow the user to use the camera and capture a photo to edit/etc. I am getting the error message below:

        2010-02-03 10:41:24.018 LivingRoom[5333:5303] Failed to save the videos metadata to the filesystem. Maybe the information did not conform to a plist.
        Program received signal: “EXC_BAD_ACCESS”.

    A search in Google brings up a number of threads in various forums, with no ultimate response/root cause/suggestions on how to fix/debug. An example is the thread below, with code which is very similar to my app: http://groups.google.com/group/iphonesdkdevelopment/browse_thread/thread/6b7b396c62bef398 The error disappears for a while (10 tests in a row, no errors) if I reboot the iPhone. I have not been able to determine what makes it reoccur after a reboot, but it does. I am not using the video source, and the fact that a reboot solves the problem for a while points to some sort of memory leak (perhaps?). The problem always shows up on both the iPhone (even after the reboot) and the simulator when choosing a photo from the album, but the app does not crash on the iPhone or the simulator. The same app with the exact same code did not have the error message when compiled using SDK 3.0 (last August/September). But 3.1.x has always produced the error message, which means that once a week or so the iPhone needs to be rebooted for the error to disappear. The users are not happy with that solution any longer!! Any suggestions/clues would be greatly appreciated.

    Read the article

  • Are Django template tags cached?

    - by thebossman
    I have gone through the (painful) process of writing a custom template tag for use in Django. It is registered as an inclusion_tag so that it renders a template. However, this tag breaks as soon as I try to change something. I've tried changing the number of parameters and correspondingly changing the parameters when it's called. It's clear the new tag code isn't being loaded, because an error is thrown stating that there is a mismatch in the number of parameters, and it's evident that it's attempting to call the old function. The same problem occurs if I try to change the name of the template being rendered and correspondingly change the name of the template on disk. It continues to try to call the old template. I've tried clearing old .pyc files with no luck. Overall, the system is acting as though it's caching the template tags, likely due to the register command. I have dug through endless threads trying to find out if this is so, but all I could find is James Bennett stating here that register doesn't do anything. Please help!

    Read the article

  • iPhone UI addSubview causing concurrency exception

    - by Eli
    This is really odd... I run my app, and while it is opening and the views are being constructed I get:

        Collection <CALayerArray: 0x124650> was mutated while being enumerated.

    The code trace goes through the following:

        main
        UIApplicationMain
        -[UIApplication _run]
        CFRunLoopRunInMode
        CFRunLoopRunSpecific
        _UIApplicationHandleEvent
        -[UIApplication sendEvent:]
        -[UIApplication handleEvent:withNewEvent:]
        -[UIApplication _runWithURL:sourceBundleID:]
        -[UIApplication _performInitilizationWithURL:sourceBundleID:]
        -[AppDelegate applicationDidFinishLaunching:]
        +[Controller initializeController] // This is my own function
        [window addSubview: pauseMenuController.view] // This is the last point of my code it goes through
        -[UIView(Hierarchy) addSubview:]
        -[UIView(Internal) _addSubview:positioned:relativeTo:]
        -[UIView(Hierarchy) _makeSubtreePerformSelector:withObject:]
        -[UIView(Hierarchy) _makeSubtreePerformSelector:withObject:withObject:copySublayers:]
        -[UIView(Hierarchy) _makeSubtreePerformSelector:withObject:withObject:copySublayers:]
        _NSFastEnumerationMutationHandler
        objc_exception_throw

    I've run the game lots and lots and lots of times and I've never seen this, then suddenly it popped up. The weird thing is that I'm not creating any other threads (that I know of) until after this code all gets called. It'll be easier for me to debug this if someone can give me some explanation of what might be getting modified while it's being accessed in a UIView. Does it have something to do with adding something to the view while it's already adding something, maybe? Any ideas?

    Read the article

  • Asynchronously populate datagridview in Windows Forms application

    - by dcryptd
    Howzit! I'm a web developer who has recently been asked to develop a Windows Forms application, so please bear with me (or don't laugh!) if my question is a bit elementary. After many sessions with my client, we eventually decided on an interface that contains a tab control with 5 tabs. Each tab has a DataGridView that may eventually hold up to 25,000 rows of data (with about 6 columns each). I have successfully managed to bind the grids when the tab page is loaded, and it works fine for a few records, but the UI freezes when I bind the grid with 20,000 dummy records. The "freeze" occurs when I click on the tab itself, and the UI only frees up (and the tab page is rendered) once the bind is complete. I communicated this to the client and mentioned the option of paging for each grid, but she is adamant about NOT wanting this. My only option then is to look for some asynchronous method of doing this in the background. I don't know much about threading in Windows Forms, but I know that I can use the BackgroundWorker control to achieve this. My only issue, after reading up a bit on it, is that it is ideally used for "long-running" tasks and I/O operations. My questions:
    1. How does one determine a long-running task?
    2. How does one NOT MISUSE the BackgroundWorker control, i.e. is there a general guideline to follow when using it? (I understand that opening/spawning multiple threads may be undesirable in certain instances.)
    3. Most importantly: how can I achieve (asynchronously) binding the DataGridView after the tab page - and all its child controls - loads?
    Thank you for reading this (ahem) lengthy query, and I highly appreciate any responses/thoughts/directions on this matter! Cheers!

    Read the article

  • Synchronization Exception

    - by Kurru
    Hi, I have two threads: one thread processes a queue and the other thread adds stuff into the queue. I want to put the queue-processing thread to sleep when it has finished processing the queue, and I want the second thread to tell it to wake up when it has added an item to the queue. However, these calls throw System.Threading.SynchronizationLockException ("Object synchronization method was called from an unsynchronized block of code") on the Monitor.PulseAll(waiting) call, because I haven't synchronized the function on the waiting object (which I don't want to do; I want to be able to process while adding items to the queue). How can I achieve this?

        Queue<object> items = new Queue<object>();
        object waiting = new object();

    1st thread:

        public void ProcessQueue()
        {
            while (true)
            {
                if (items.Count == 0)
                    Monitor.Wait(waiting);
                object real = null;
                lock(items)
                {
                    object item = items.Dequeue();
                    real = item;
                }
                if(real == null)
                    continue;
                // .. bla bla bla
            }
        }

    2nd thread involves:

        public void AddItem(object o)
        {
            // ... bla bla bla
            lock(items)
            {
                items.Enqueue(o);
            }
            Monitor.PulseAll(waiting);
        }
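    For comparison, the same monitor rule exists in Java: wait/notifyAll are only legal while holding the lock of the object they are called on. This is a rough Java analogue of the producer/consumer hand-off described above (a sketch with illustrative names, not the original C# code):

        import java.util.LinkedList;
        import java.util.Queue;

        class WorkQueue {
            private final Queue<Object> items = new LinkedList<Object>();

            // Consumer thread: sleeps while the queue is empty, wakes on notifyAll().
            public void processQueue() throws InterruptedException {
                while (true) {
                    Object item;
                    synchronized (items) {
                        while (items.isEmpty()) {
                            items.wait(); // releases the lock while sleeping
                        }
                        item = items.poll();
                    }
                    // process item outside the lock so producers are not blocked
                }
            }

            // Producer thread: notifyAll() must hold the same lock that wait() used.
            public void addItem(Object o) {
                synchronized (items) {
                    items.add(o);
                    items.notifyAll();
                }
            }
        }

    The point carried over from the question is that waiting and pulsing happen on the same object whose lock is held, and the consumer re-checks the emptiness condition in a loop after waking.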

    Read the article

  • remove an ASIHTTPRequest thread when it is done

    - by user262325
    Hello everyone. I have a project that runs 5 download threads:

        - (void)fetchThisURLFiveTimes:(NSURL *)url
        {
            [myProgressIndicator setProgress:0];
            NSLog(@"Value: %f", [myProgressIndicator progress]);
            [myQueue cancelAllOperations];
            [myQueue setDownloadProgressDelegate:myProgressIndicator];
            [myQueue setDelegate:self];
            [myQueue setRequestDidFinishSelector:@selector(queueComplete:)];
            int i;
            for (i=0; i<5; i++) {
                ASIHTTPRequest *request = [ASIHTTPRequest requestWithURL:url];
                [request setDidFinishSelector:@selector(requestDone:)];
                [request setDelegate:self];
                [myQueue addOperation:request];
            }
            [myQueue go];
        }

        - (void)requestWentDone:(ASIHTTPRequest *)request
        {
            //..
        }

        - (void)queueComplete:(ASINetworkQueue *)queue
        {
            NSLog(@"Value: %f", [myProgressIndicator progress]);
        }

    I would like to remove a thread when it is done, but I do not know where I need to place my code: in requestWentDone:(ASIHTTPRequest *)request or in queueComplete:(ASINetworkQueue *)queue? I noticed there is no tag property for the request, so I added one to ASIHTTPRequest to help me recognize which request triggers requestWentDone. I can get the tag of each request, but I do not know how to remove the ASIHTTPRequest thread from myQueue. Any comments are welcome. Thanks, interdev

    Read the article

  • What are alternatives to Win32 PulseEvent() function?

    - by Bill
    The documentation for the Win32 API PulseEvent() function (kernel32.dll) states that this function is “… unreliable and should not be used by new applications. Instead, use condition variables”. However, condition variables cannot be used across process boundaries like (named) events can. I have a scenario that is cross-process, cross-runtime (native and managed code) in which a single producer occasionally has something interesting to make known to zero or more consumers. Right now, a well-known named event is used (and set to signaled state) by the producer using this PulseEvent function when it needs to make something known. Zero or more consumers wait on that event (WaitForSingleObject()) and perform an action in response. There is no need for two-way communication in my scenario, and the producer does not need to know if the event has any listeners, nor does it need to know if the event was successfully acted upon. On the other hand, I do not want any consumers to ever miss any events. In other words, the system needs to be perfectly reliable – but the producer does not need to know if that is the case or not. The scenario can be thought of as a “clock ticker” – i.e., the producer provides a semi-regular signal for zero or more consumers to count. And all consumers must have the correct count over any given period of time. No polling by consumers is allowed (performance reasons). The tick interval is just a few milliseconds (20 or so, but not perfectly regular). Raymond Chen (The Old New Thing) has a blog post pointing out the “fundamentally flawed” nature of the PulseEvent() function, but I do not see an alternative for my scenario from Chen or the posted comments. Can anyone please suggest one? Please keep in mind that the IPC signal must cross process boundaries on the machine, not simply threads. And the solution needs to have high performance in that consumers must be able to act within 10ms of each event.

    Read the article

  • Locking a table for getting MAX in LINQ

    - by Hossein Margani
    Hi everyone! I have a query in LINQ. I want to get the MAX of the Code column of my table, increase it, and insert a new record with the new Code - just like the IDENTITY feature of SQL Server, except here my Code column is char(5) and can contain letters and digits. My problem is that when inserting a new row, two concurrent processes get the same max and insert an equal Code into their records. My command is:

        var maxCode = db.Customers.Select(c=>c.Code).Max();
        var anotherCustomer = db.Customers.Where(...).SingleOrDefault();
        anotherCustomer.Code = GenerateNextCode(maxCode);
        db.SubmitChanges();

    I ran this command across 1000 threads, each updating 200 customers, and used a transaction with IsolationLevel.Serializable; after two or three executions an error occurred:

        using (var db = new DBModelDataContext())
        {
            DbTransaction tran = null;
            try
            {
                db.Connection.Open();
                tran = db.Connection.BeginTransaction(IsolationLevel.Serializable);
                db.Transaction = tran;
                . . . .
                tran.Commit();
            }
            catch
            {
                tran.Rollback();
            }
            finally
            {
                db.Connection.Close();
            }
        }

    error:

        Transaction (Process ID 60) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

    Other IsolationLevels generate this error:

        Row not found or changed.

    Please help me, thank you.

    Read the article

  • When long polling, why are my other requests taking so long?

    - by Pascal
    The client makes 2 concurrent requests: one which takes 60 seconds (long polling), and another which is NOT long polling and is supposed to return right away. It does return right away when I'm not doing long polling, but as soon as I start doing long polling with the other thread, the other one takes forever to execute. Firebug shows that the request is waiting for 10-50 seconds. On the server, I profiled ALL requests from the moment the PHP script starts to the time it returns to the client, and it shows that each one only took 300ms or less. This problem started about the same time I started doing long polling (with the other XHR requests). I'm using jQuery for both requests. The server shows that it is under very light load: CPU and memory less than 2%, 8 processes running out of a pool of 15 (it doesn't seem to deviate much from that number 8, even when I run more ajax requests). I guess each process can run multiple ajax threads concurrently. I made sure to EXIT from all processes as soon as they're done executing. I don't see how the process pool has run out if there are still 7 unused processes listed under prstat -J. Also, the problem happens somewhat intermittently. Firefox should be able to handle 2 concurrent ajax requests; I don't get what the problem is.

    Read the article

  • Unit Testing a Java Chat Application

    - by Epitaph
    I have developed a basic chat application in Java. It consists of a server and multiple clients. The server continually monitors for incoming messages and broadcasts them to all the clients. The client is made up of a Swing GUI with a text area (for messages sent by the server and other clients), a text field (to send text messages) and a button (SEND). The client also continually monitors for incoming messages from other clients (via the server). This is achieved with threads and event listeners, and the application works as expected. But how do I go about unit testing my chat application? As the methods involve establishing a connection with the server and sending/receiving messages from the server, I am not sure if these methods should be unit tested. As per my understanding, unit testing shouldn't be done for tasks like connecting to a database or network. The few test cases that I could come up with are:
    1) The max limit of the text field
    2) Client can connect to the server
    3) Server can connect to the client
    4) Client can send messages
    5) Client can receive messages
    6) Server can send messages
    7) Server can receive messages
    8) Server can accept connections from multiple clients
    But, since most of the above methods involve some kind of network communication, I cannot perform unit testing. How should I go about unit testing my chat application?
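    One common approach is to separate the broadcast/routing logic from the sockets and unit-test it against fake clients, leaving the network out of the test entirely. The sketch below assumes hypothetical ChatServer/ClientConnection types (they are not from the original application) and uses JUnit 4:

        import static org.junit.Assert.assertEquals;
        import java.util.ArrayList;
        import java.util.List;
        import org.junit.Test;

        public class ChatServerTest {

            // Hypothetical interface the real socket-backed client would implement.
            interface ClientConnection {
                void deliver(String message);
            }

            // Fake client that just records what the server sends it.
            static class FakeClient implements ClientConnection {
                final List<String> received = new ArrayList<String>();
                public void deliver(String message) { received.add(message); }
            }

            // Hypothetical server core: broadcast logic only, no networking.
            static class ChatServer {
                private final List<ClientConnection> clients = new ArrayList<ClientConnection>();
                void register(ClientConnection c) { clients.add(c); }
                void broadcast(String message) {
                    for (ClientConnection c : clients) {
                        c.deliver(message);
                    }
                }
            }

            @Test
            public void broadcastReachesAllClients() {
                ChatServer server = new ChatServer();
                FakeClient a = new FakeClient();
                FakeClient b = new FakeClient();
                server.register(a);
                server.register(b);

                server.broadcast("hello");

                assertEquals("hello", a.received.get(0));
                assertEquals("hello", b.received.get(0));
            }
        }

    The actual socket handling would then only need a thin integration test (or manual test), while cases 4-8 in the list above become plain unit tests of the routing code.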

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    Ok, so I've just been on a SQL Server course and we discussed the usage scenarios of multiple filegroups and files when in use over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows: I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you

    Read the article

  • AsyncTask and Contexts

    - by Michael
    So I'm working out my first multi-threaded application using Android with the AsyncTask class. I'm trying to use it to fire off a Geocoder in a second thread, then update the UI with onPostExecute, but I keep running into an issue with the proper Context. I kind of hobbled my way through using Contexts on the main thread, but I'm not exactly sure what a Context is or how to use it on background threads, and I haven't found any good examples of it. Any help? Here is an excerpt of what I'm trying to do:

        public class GeoCode extends AsyncTask<GeoThread, Void, GeoThread> {
            @Override
            protected GeoThread doInBackground(GeoThread... i) {
                List<Address> addresses = null;
                Geocoder geoCode = null;
                geoCode = new Geocoder(null); // Expects at minimum Geocoder(Context context);
                addresses = geoCode.getFromLocation(GoldenHour.lat, GoldenHour.lng, 1);
            }
        }

    It keeps failing at the line that constructs the Geocoder, because of the improper Context.
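    A common pattern (this is a sketch with illustrative names, not necessarily how the original app is structured) is to hand the task a Context through its constructor - typically the application context, to avoid holding a reference to an Activity from a background thread:

        import java.io.IOException;
        import java.util.List;
        import android.content.Context;
        import android.location.Address;
        import android.location.Geocoder;
        import android.os.AsyncTask;

        class GeoCodeTask extends AsyncTask<Double, Void, List<Address>> {
            private final Geocoder geocoder;

            GeoCodeTask(Context context) {
                // Geocoder needs a real Context; the application context is safe
                // to hold for the lifetime of the background task.
                this.geocoder = new Geocoder(context.getApplicationContext());
            }

            @Override
            protected List<Address> doInBackground(Double... coords) {
                try {
                    return geocoder.getFromLocation(coords[0], coords[1], 1);
                } catch (IOException e) {
                    return null; // geocoder service or network failure
                }
            }

            @Override
            protected void onPostExecute(List<Address> addresses) {
                // Runs on the UI thread; update views here.
            }
        }

    From an Activity, this would be launched with something like new GeoCodeTask(this).execute(lat, lng), and onPostExecute can then touch the UI safely because it runs back on the main thread.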

    Read the article

  • How can I improve my real-time behavior in multi-threaded app using pthreads and condition variables

    - by WilliamKF
    I have a multi-threaded application that is using pthreads. I have a mutex lock and condition variables. There are two threads: one thread is producing data for the second thread, a worker, which is trying to process the produced data in a real-time fashion such that one chunk is processed as close to the elapsing of a fixed time period as possible. This works pretty well; however, occasionally when the producer thread releases the condition upon which the worker is waiting, a delay of up to almost a whole second is seen before the worker thread gets control and executes again. I know this because right before the producer releases the condition upon which the worker is waiting, it does a chunk of processing for the worker if it is time to process another chunk; then immediately upon receiving the condition in the worker thread, it also does a chunk of processing if it is time to process another chunk. In this latter case, I am seeing that I am late processing the chunk many times. I'd like to eliminate this lost efficiency and do what I can to keep the chunks ticking away as close as possible to the desired frequency. Is there anything I can do to reduce the delay between the producer signaling the condition and the worker detecting that the condition has been signaled, so that the worker resumes processing sooner? For example, would it help for the producer to call something to force itself to be context-switched out? Bottom line: the worker has to wait each time it asks the producer to create work for it, so that the producer can muck with the worker's data structures before telling the worker it is ready to run in parallel again. This period of exclusive access by the producer is meant to be short, but during this period I am also checking for real-time work to be done by the producer on behalf of the worker while the producer has exclusive access. Somehow my hand-off back to running in parallel again occasionally results in significant delay that I would like to avoid. Please suggest how this might best be accomplished.

    Read the article

  • Pros & Cons of Google App Engine

    - by Rishi
    Pros & Cons of Google App Engine [An Updated List 21st Aug 09] Help me Compile a List of all the Advantages & Disadvantages of Building an Application on the Google App Engine.
    Pros:
    1) No Need to buy Servers or Server Space (no maintenance).
    2) Makes solving the problem of scaling much easier.
    Cons:
    1) Locked into Google App Engine??
    2) Developers have read-only access to the filesystem on App Engine.
    3) App Engine can only execute code called from an HTTP request (except for scheduled background tasks).
    4) Users may upload arbitrary Python modules, but only if they are pure-Python; C and Pyrex modules are not supported.
    5) App Engine limits the maximum rows returned from an entity get to 1000 rows per Datastore call.
    6) Java applications may only use a subset (The JRE Class White List) of the classes from the JRE standard edition.
    7) Java applications cannot create new threads.
    Known Issues: http://code.google.com/p/googleappengine/issues/list
    Hard limits:
    Apps per developer - 10
    Time per request - 30 sec
    Files per app - 3,000
    HTTP response size - 10 MB
    Datastore item size - 1 MB
    Application code size - 150 MB
    Pro or Con? App Engine's infrastructure removes many of the system administration and development challenges of building applications to scale to millions of hits. Google handles deploying code to a cluster, monitoring, failover, and launching application instances as necessary. While other services let users install and configure nearly any *NIX compatible software, App Engine requires developers to use Python or Java as the programming language and a limited set of APIs. Current APIs allow storing and retrieving data from a BigTable non-relational database; making HTTP requests; sending e-mail; manipulating images; and caching. Most existing Web applications can't run on App Engine without modification, because they require a relational database.

    Read the article

  • Long running stateful service in .NET

    - by Asaf R
    Hi, I need to create a service in .NET that maintains (inner) state in memory, spawns multiple threads and is generally long-running. There are a lot of options:
    - Good old Windows Service
    - Windows Communication Foundation
    - Windows Workflow Foundation
    I really don't know which to choose. Most of the functionality is in a library used by this service, so the service itself is rather simple. On one hand, it's important that the service host is as close to "simply working" as possible, which excludes Windows Service. On the other hand, it's important that the service is not taken down by the host just because there's no external activity, which makes WCF kind o' "scary". As for WF, its strongest selling point is the ability to create processes as, um..., workflows, which is something I don't need nor want. To sum it up, the plethora of Microsoft technologies has got me a bit confused. I'd appreciate help regarding the pros and cons of each solution (or others I've failed to mention) for the problem of a stateful, long-running service in .NET. Thanks, Asaf P.S., I'm using .NET 4. EDIT: What I mean by the host "simply working" is, for example, that the service I create be reactivated if it crashes. I guess the reason for this question is that I've created Windows Services in the past (I think it was in plain C++ with the Win32 API), and I don't want to miss out on something simpler if there is such a thing. Thanks for all the replies thus far! Asaf.

    Read the article

  • Why is calling Process.killProcess(Process.myPid()) a bad idea?

    - by Tal Kanel
    I've read some posts saying that using this method is "not good", that it shouldn't be used, that it's not the right way to "close" the application and that it's not how Android works... I understand and accept the fact that the Android OS knows better than me when it's the right time to terminate the process, but I haven't yet heard a good explanation of why using the killProcess() method is wrong. After all, it's part of the Android API... What I do know is that calling this method while other threads are potentially doing important work (operations on files, writing to the DB, HTTP requests, running services...) means they can be terminated in the middle, and that's clearly not good. I also know I can benefit from the fact that "re-opening" the application will be faster, because the system may still hold state in memory from the last time it was used, and killProcess() prevents that. Besides this reason, and assuming I don't have such operations and I don't care that my application will load from scratch each run, are there other reasons not to use the killProcess() method? I know about the finish() method for closing an Activity, so please don't write to me about that... finish() is only for an Activity, not for the whole application, and I think I know exactly why and when to use it. And another thing - I also develop games with the Unity3D framework and export the projects to Android. When I decompiled the generated apk, I was very surprised to find out that the Java source code generated by Unity implements Unity's Application.quit() method with Process.killProcess(Process.myPid()). Application.quit() is supposed to be the right way to close a game according to Unity3D's guides (or maybe I'm wrong and missed something), so how is it that the Unity framework developers, who seem to be doing very good work, implemented this in native Android with killProcess()? Anyway - I would like a "list of reasons" why not to use the killProcess() method, so please write down your answer if you have something interesting to say about it. TIA

    Read the article

  • How to make Stack.Pop threadsafe

    - by user260197
    I am using the BlockingQueue code posted in this question, but realized I needed to use a Stack instead of a Queue given how my program runs. I converted it to use a Stack and renamed the class as needed. For performance I removed locking in Push, since my producer code is single-threaded. My problem is: how can a thread working on the (now) thread-safe Stack know when it is empty? Even if I add another thread-safe wrapper around Count that locks on the underlying collection like Push and Pop do, I still run into the race condition that accessing Count and then Pop is not atomic. Possible solutions as I see them (which is preferred, and am I missing any that would work better?):
    1. Consumer threads catch the InvalidOperationException thrown by Pop().
    2. Pop() returns nullptr when _stack->Count == 0; however, C++/CLI does not have the default() operator a la C#.
    3. Pop() returns a boolean and uses an output parameter to return the popped element.
    Here is the code I am using right now:

        generic <typename T>
        public ref class ThreadSafeStack {
        public:
            ThreadSafeStack() {
                _stack = gcnew Collections::Generic::Stack<T>();
            }

        public:
            void Push(T element) {
                _stack->Push(element);
            }

            T Pop(void) {
                System::Threading::Monitor::Enter(_stack);
                try {
                    return _stack->Pop();
                }
                finally {
                    System::Threading::Monitor::Exit(_stack);
                }
            }

        public:
            property int Count {
                int get(void) {
                    System::Threading::Monitor::Enter(_stack);
                    try {
                        return _stack->Count;
                    }
                    finally {
                        System::Threading::Monitor::Exit(_stack);
                    }
                }
            }

        private:
            Collections::Generic::Stack<T> ^_stack;
        };
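    For comparison only (a Java analogue, not C++/CLI): option 2 - a pop that returns a sentinel value when the stack is empty - is the convention the java.util.concurrent collections use, and it makes the emptiness check and the removal a single atomic step, so there is no Count-then-Pop race:

        import java.util.concurrent.ConcurrentLinkedDeque;

        class Example {
            public static void main(String[] args) {
                ConcurrentLinkedDeque<String> stack = new ConcurrentLinkedDeque<String>();
                stack.push("item");

                // pollFirst() removes and returns the head, or returns null if the
                // deque is empty; check-and-remove is one atomic operation.
                String top = stack.pollFirst();
                if (top == null) {
                    // stack was empty
                }
            }
        }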

    Read the article

  • Sending custom PyQt signals?

    - by Enfors
    I'm practicing PyQt and (Q)threads by making a simple Twitter client. I have two QThreads: the main/GUI thread, and a Twitter fetch thread that fetches data from Twitter every X minutes. So, every X minutes my Twitter thread downloads a new set of status updates (a Python list). I want to hand this list over to the main/GUI thread, so that it can update the window with these statuses. I'm assuming that I should be using the signal/slot system to transfer the "statuses" Python list from the Twitter thread to the main/GUI thread. So, my question is twofold: How do I send the statuses from the Twitter thread? How do I receive them in the main/GUI thread? As far as I can tell, PyQt can by default only send PyQt objects via signals/slots. I think I'm supposed to somehow register a custom signal which I can then send, but the documentation I've found on this is very unclear to a newbie like me. I have a PyQt book on order, but it won't arrive for another week, and I don't want to wait until then. :-) I'm using PyQt 4.6-1 on Ubuntu. Update: This is an excerpt from the code that doesn't work. First, I try to "connect" the signal ("newStatuses", a name I just made up) to the function self.update_tweet_list in the main/GUI thread:

        QtCore.QObject.connect(self.twit_in, QtCore.SIGNAL("newStatuses (statuses)"), self.update_tweet_list)

    Then, in the Twitter thread, I do this:

        self.emit(SIGNAL("newStatuses (statuses)"), statuses)

    When this line is called, I get the following message:

        QObject::connect: Cannot queue arguments of type 'statuses' (Make sure 'statuses' is registered using qRegisterMetaType().)

    I did a search for qRegisterMetaType() but I didn't find anything relating to Python that I could understand.

    Read the article

  • Retrieving image from database using Ajax

    - by ama
    I'm trying to read an image from the database with Ajax, but I cannot get the xmlhttp.responseText into the img src. The image is saved as binary data in the database and is also retrieved as binary data. I'm using Ajax in JSP because I want to give users the ability to upload images, and I will display the last uploaded image: on a mouse-over action the Ajax call is triggered and fetches the image back. The problem is in reading the image from the response. This is the Ajax function:

        function ajaxFunction(path) {
            if (xmlhttp) {
                var s = path;
                xmlhttp.open("GET", s, true);
                xmlhttp.onreadystatechange = handleServerResponse;
                xmlhttp.send(null);
            }
        }

        function handleServerResponse() {
            if (xmlhttp.readyState == 4) {
                var Image = document.getElementById(Image_Element_Name);
                document.getElementById(Image_Element_Name).src = "data:" + xmlhttp.responseText;
            }
        }

    I also got an exception on the server:

        10415315 [TP-Processor1] WARN core.MsgContext - Error sending end packet
        java.net.SocketException: Broken pipe
            at java.net.SocketOutputStream.socketWrite0(Native Method)
            at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
            at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
            at org.apache.jk.common.ChannelSocket.send(ChannelSocket.java:537)
            at org.apache.jk.common.JkInputStream.endMessage(JkInputStream.java:127)
            at org.apache.jk.core.MsgContext.action(MsgContext.java:302)
            at org.apache.coyote.Response.action(Response.java:183)
            at org.apache.coyote.Response.finish(Response.java:305)
            at org.apache.catalina.connector.OutputBuffer.close(OutputBuffer.java:281)
            at org.apache.catalina.connector.Response.finishResponse(Response.java:478)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:154)
            at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:200)
            at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:283)
            at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:773)
            at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:703)
            at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:895)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:685)
            at java.lang.Thread.run(Thread.java:619)
        10415316 [TP-Processor1] WARN common.ChannelSocket - processCallbacks status 2
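    One way to sidestep pushing raw binary through responseText (a sketch with hypothetical names, not the original code) is to serve the image from a servlet and simply point the img src at its URL on mouse-over, so the browser fetches the bytes itself. If the data: URI route is kept instead, the response would need to be base64-encoded and prefixed with a MIME type (e.g. "data:image/jpeg;base64,...").

        import java.io.IOException;
        import java.io.OutputStream;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical servlet: looks the image up by id and streams it as-is.
        public class ImageServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                String id = req.getParameter("id");
                byte[] imageBytes = loadImageFromDatabase(id); // assumed DAO/database call

                resp.setContentType("image/jpeg");           // or whatever type was stored
                resp.setContentLength(imageBytes.length);
                OutputStream out = resp.getOutputStream();
                out.write(imageBytes);
                out.flush();
            }

            private byte[] loadImageFromDatabase(String id) {
                // placeholder for the real database lookup
                return new byte[0];
            }
        }

    On mouse-over, the page would then just set the img element's src to something like imageServlet?id=123 instead of assembling a data: URI from responseText.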

    Read the article

  • Asynchronous daemon processing / ORM interaction with Django

    - by perrierism
    I'm looking for a way to do asynchronous data processing with a daemon that uses the Django ORM. However, the ORM isn't thread-safe; it's not thread-safe to try to retrieve/modify Django objects from within threads. So I'm wondering what the correct way to achieve asynchrony is. Basically what I need to accomplish is taking a list of users in the db, querying a third-party API, and then making updates to user-profile rows for those users, as a daemon or background process. Doing this in series per user is easy, but it takes too long to be at all scalable. If the daemon is retrieving and updating the users through the ORM, how do I achieve processing 10-20 users at a time? I would use a standard threading/queue system for this, but you can't thread interactions like models.User.objects.get(id=foo) ... Django itself is an asynchronous processing system which makes asynchronous ORM calls(?) for each request, so there should be a way to do it? I haven't found anything in the documentation so far. Cheers

    Read the article
