Search Results

Search found 1047 results on 42 pages for 'locking'.


  • How can I write a clean Repository without exposing IQueryable to the rest of my application?

    - by Simucal
    So, I've read all the Q&As here on SO regarding the subject of whether or not to expose IQueryable to the rest of your project (see here, and here), and I've ultimately decided that I don't want to expose IQueryable to anything but my Model. Because IQueryable is tied to certain persistence implementations, I don't like the idea of locking myself into this. Similarly, I'm not sure how good I feel about classes further down the call chain, outside the repository, modifying the actual query. So, does anyone have any suggestions for how to write a clean and concise Repository without doing this? One problem I see is that my Repository will blow up into a ton of methods for the various things I need to filter my query on. Having a bunch of: IEnumerable GetProductsSinceDate(DateTime date); IEnumerable GetProductsByName(string name); IEnumerable GetProductsByID(int ID); If I was allowing IQueryable to be passed around I could easily have a generic repository that looked like: public interface IRepository<T> where T : class { T GetById(int id); IQueryable<T> GetAll(); void InsertOnSubmit(T entity); void DeleteOnSubmit(T entity); void SubmitChanges(); } However, if you aren't using IQueryable then methods like GetAll() aren't really practical, since lazy evaluation won't be taking place down the line. I don't want to return 10,000 records only to use 10 of them later. What is the answer here? In Conery's MVC Storefront he created another layer called the "Service" layer, which received IQueryable results from the repository and was responsible for applying various filters. Is this what I should do, or something similar? Have my repository return IQueryable but restrict access to it by hiding it behind a bunch of filter methods like GetProductByName, which will return a concrete type like IList or IEnumerable?
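    A hedged sketch of the middle ground the question is circling: keep the IQueryable inside the repository and accept filter expressions instead, so a real LINQ provider can still defer execution without the query object ever leaking out. The Product entity and InMemoryProductRepository below are hypothetical names used purely for illustration, not something from the question or from Conery's Storefront.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        // Hypothetical entity, for illustration only.
        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public DateTime Created { get; set; }
        }

        public interface IRepository<T> where T : class
        {
            T GetById(int id);
            // Callers pass a filter; the IQueryable never leaves the repository.
            IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
        }

        public class InMemoryProductRepository : IRepository<Product>
        {
            private readonly IQueryable<Product> _source;

            public InMemoryProductRepository(IEnumerable<Product> seed)
            {
                _source = seed.AsQueryable();
            }

            public Product GetById(int id)
            {
                return _source.FirstOrDefault(p => p.Id == id);
            }

            public IEnumerable<Product> Find(Expression<Func<Product, bool>> predicate)
            {
                // The filter is applied before materialization, so a real provider
                // only pulls back the matching rows rather than all 10,000.
                return _source.Where(predicate).ToList();
            }
        }

    A caller would then write something like repo.Find(p => p.Name == "Widget") rather than needing a dedicated GetProductsByName method, and the repository still hands back a concrete IEnumerable.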


  • WCF for a shared data access

    - by Audrius
    Hi all, I have a little experience with WCF and would like to get your opinion/suggestion on how the following problem can be solved: a web service needs to be accessible from multiple clients simultaneously, and the service needs to return a result from a shared data set. The concrete project I'm working on has to store a list of IP addresses/ranges. This list will be queried by a bunch of web servers for validation purposes, and we are talking about a couple of thousand or more queries per minute. My initial draft approach was to use a Windows service as a WCF host, with the service contract implementing class decorated with ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple) and holding a list object with custom locking for accessing it. So basically I have a WCF service singleton with a list = shared data - multiple clients. What I do not like about it is that the data and communication layers are merged into one, and performance-wise this doesn't feel "right". What I really want is a Windows service running an instance of an IP-list-holding container class object, a second service running the WCF service contract implementation, and a way for the latter to query the former nicely with minimal blocking. Using another WCF channel would not really take me far away from the initial draft implementation, or would it? What approach would you take? The project is still in a very early stage, so a complete design re-do is not out of the question. All ideas are appreciated. Thanks! UPDATE: The data set will be changed dynamically. The web service will have a separate method to add an IP or IP range, and on top of that there will be a scheduled task that will trigger data cleanup every 10-15 minutes according to some rules. UPDATE 2: a separate benchmark project will be kicked up that should use MSSQL as a data backend (instead of the in-memory list).
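    A rough sketch of the split the question is after, with the shared data behind its own plain class and the WCF singleton only delegating to it; IpListStore and the contract names are invented for illustration and the real project would obviously differ:

        using System.Collections.Generic;
        using System.ServiceModel;
        using System.Threading;

        // Hypothetical shared data container, kept out of the WCF layer.
        public class IpListStore
        {
            private readonly HashSet<string> _addresses = new HashSet<string>();
            private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

            public void Add(string address)
            {
                _lock.EnterWriteLock();
                try { _addresses.Add(address); }
                finally { _lock.ExitWriteLock(); }
            }

            public bool Contains(string address)
            {
                _lock.EnterReadLock();
                try { return _addresses.Contains(address); }
                finally { _lock.ExitReadLock(); }
            }
        }

        [ServiceContract]
        public interface IIpValidationService
        {
            [OperationContract]
            bool IsListed(string address);

            [OperationContract]
            void AddAddress(string address);
        }

        // The singleton service owns no locking of its own; it only delegates.
        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                         ConcurrencyMode = ConcurrencyMode.Multiple)]
        public class IpValidationService : IIpValidationService
        {
            private readonly IpListStore _store;

            public IpValidationService(IpListStore store) { _store = store; }

            public bool IsListed(string address) { return _store.Contains(address); }
            public void AddAddress(string address) { _store.Add(address); }
        }

    With InstanceContextMode.Single the pre-built instance can be handed straight to the host (new ServiceHost(new IpValidationService(store))), and the in-memory store could later be swapped for the MSSQL-backed variant mentioned in the second update without touching the contract.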


  • Right way to dispose Image/Bitmap and PictureBox

    - by kornelijepetak
    I am trying to develop a Windows Mobile 6 (WinForms/C#) application. There is only one form, and on the form there is only a PictureBox object. On it I draw all desired controls or whatever I want. There are two things I am doing: drawing custom shapes and loading bitmaps from .png files. The next line locks the file when loading (which is an undesired scenario): Bitmap bmp = new Bitmap("file.png"); So I am using another way to load the bitmap: public static Bitmap LoadBitmap(string path) { using (Bitmap original = new Bitmap(path)) { return new Bitmap(original); } } This is, I guess, much slower, but I don't know any better way to load an image while quickly releasing the file lock. Now, when drawing an image, there is a method that I use: public void Draw() { Bitmap bmp = new Bitmap(240,320); Graphics g = Graphics.FromImage(bmp); // draw something with Graphics here. g.Clear(Color.Black); g.DrawImage(Images.CloseIcon, 16, 48); g.DrawImage(Images.RefreshIcon, 46, 48); g.FillRectangle(new SolidBrush(Color.Black), 0, 100, 240, 103); pictureBox.Image = bmp; } This however seems to cause some kind of memory leak, and if I keep doing it for too long, the application eventually crashes. Therefore, I have three questions: 1.) What is a better way to load bitmaps from files without locking the file? 2.) What objects need to be manually disposed in the Draw() function (and in which order) so there's no memory leak and no ObjectDisposedException thrown? 3.) If pictureBox.Image is set to bmp, as in the last line of the code, would pictureBox.Image.Dispose() dispose only resources related to maintaining pictureBox.Image, or the underlying Bitmap set to it?
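    A hedged sketch of how Draw() might avoid the leak, reusing the question's pictureBox and Images members and assuming the usual System.Drawing types; it is one plausible cleanup, not a guaranteed fix for the crash:

        public void Draw()
        {
            Bitmap bmp = new Bitmap(240, 320);
            using (Graphics g = Graphics.FromImage(bmp))
            using (SolidBrush black = new SolidBrush(Color.Black))
            {
                g.Clear(Color.Black);
                g.DrawImage(Images.CloseIcon, 16, 48);
                g.DrawImage(Images.RefreshIcon, 46, 48);
                g.FillRectangle(black, 0, 100, 240, 103);
            }

            Image previous = pictureBox.Image;   // the Bitmap assigned last time
            pictureBox.Image = bmp;              // swap in the new frame
            if (previous != null)
            {
                // Disposing the Image reference disposes that old Bitmap itself.
                previous.Dispose();
            }
        }

    The Graphics and brush are disposed deterministically, and the previously assigned Bitmap is disposed only after it has been swapped out of the PictureBox.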


  • UIImagePickerController image editing not working

    - by Greg Reichow
    I am having a problem with implementing UIImagePickerController. When the controller loads, it displays modally, and allows the user to select the image. Good so far. Yet when it moves to the editing phase, it often displays a somewhat corrupted view (the image cropping box is halfway off the top of the screen) and there is no image. It does not crash, but all UI interaction is blocked. The strange part is that this only happens when I compile with Release settings. Under debug compile settings, the image editing works fine! I have tried checking for memory warnings during this time, but none are showing up. Here is the code calling the image picker controller for reference. When I use the camera (the first method), it always works fine. It is just when selecting images from the Library (called from the second method below) that it fails as described above. And again, only on a release build, and with various different types of images. - (IBAction) showCameraController:(id)sender { self.imagePicker =[[UIImagePickerController alloc] init]; self.imagePicker.sourceType=UIImagePickerControllerSourceTypeCamera; self.imagePicker.delegate=self; self.imagePicker.allowsEditing=YES; [self presentModalViewController:self.imagePicker animated:YES]; } - (IBAction) showPictureAlbumController:(id)sender { self.imagePicker =[[UIImagePickerController alloc] init]; self.imagePicker.sourceType=UIImagePickerControllerSourceTypePhotoLibrary; self.imagePicker.delegate=self; self.imagePicker.allowsEditing=YES; [self presentModalViewController:self.imagePicker animated:YES]; } The delegate methods are properly implemented, yet during the problem I am describing, the controller is not yet calling those methods. It is failing when displaying the editing screen, before the user is able to select cancel or save. It is just locking up with no crash. Please help!


  • Practical value for concurrent-request-timeout parameter

    - by Andrei
    In the Seam Reference Guide, one can find this paragraph: We can set a sensible default for the concurrent request timeout (in ms) in components.xml: <core:manager concurrent-request-timeout="500" /> However, we found that 500 ms is not nearly enough time for most of the cases we had to deal with, especially with the severe restriction Seam places on conversation access. In our application we have a combination of page-scoped ajax requests (triggered by various user actions), some global-scoped polling notification logic (part of the header, so included in every page) and regular links that invoke actions and/or navigate to other pages. Therefore, we get the dreaded concurrent access to conversation exception way too often, even without any significant load on the site. After researching the options for quite a bit, we ended up bumping this value to several seconds (we're debating whether to bump it up to 10s), as none of the recommended solutions seemed able to solve our issue completely (even forcing a global queue for all the ajax requests would still leave us exposed to a user deciding to click a link right when one of our polling calls was in progress). And we'd much rather have the users wait for a second or two instead of getting an error page just because they clicked a link at the wrong moment. And now to the question: is there something obvious we're missing (like a way to allow concurrent access to conversations and take care of the needed locking ourselves, for instance :)? How do people solve this problem (ajax requests mixed with user-driven interaction) in Seam? Disabling all the links on the page while ajax requests are in progress (as suggested by one blog page) is really not a viable option. Any other suggestions? TIA, Andrei


  • How to call JSON asynchronously in Xcode / iPhone development

    - by Frames84
    I'm using the JSON framework hosted on Google Code. It's a news app that loads JSON feeds; when the app goes off to load the feed I want to display the UIActivityIndicatorView, but I've found my JSON access code is not being called asynchronously, which is locking up the user interface. I have highlighted the function in the code and can't figure out how to change the code without breaking it. #import "JSON_DataAccess_Wrapper.h" #import "JSON.h" @implementation JSON_DataAccess_Wrapper @synthesize dataItemList; ////////////////////////////////////////////// /* START FEED CONNECTION/ HANDLE METHODS */ ////////////////////////////////////////////// - (void)connection:(NSURLConnection *)connection didReceiveResponse:(NSURLResponse *)response { [responseData setLength:0]; } - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { [responseData appendData:data]; } - (void)connection:(NSURLConnection *)connection didFailWithError:(NSError *)error { //label.text = [NSString stringWithFormat:@"Connection failed: %@", [error description]]; } - (void)connectionDidFinishLoading:(NSURLConnection *)connection { [connection release]; } - (NSString *)stringWithUrl:(NSURL *)url { NSURLRequest *urlRequest = [NSURLRequest requestWithURL:url cachePolicy:NSURLRequestReturnCacheDataElseLoad timeoutInterval:30]; NSData *urlData; NSURLResponse *response = nil; NSError *error = nil; /* HOW TO MAKE THE CALL BELOW ASYNCHRONOUS */ urlData = [NSURLConnection sendSynchronousRequest:urlRequest returningResponse:&response error:&error]; return [[NSString alloc] initWithData:urlData encoding:NSUTF8StringEncoding]; } -(id) objectWithUrl:(NSURL *)url { SBJSON *jsonParser = [SBJSON new]; NSString *jsonString = [self stringWithUrl:url]; return [jsonParser objectWithString:jsonString error:NULL]; } - (NSMutableArray *) downloadJSONFeed { id response = [self objectWithUrl:[NSURL URLWithString:@"http://www.mysite.co.uk/index2.php?option=JSON"]]; NSMutableArray *feed = (NSMutableArray *) response; return feed; }


  • What makes static initialization functions good, bad, or otherwise?

    - by Richard Levasseur
    Suppose you had code like this: _READERS = None _WRITERS = None def Init(num_readers, reader_params, num_writers, writer_params, ...args...): ...logic... _READERS = new ReaderPool(num_readers, reader_params) _WRITERS = new WriterPool(num_writers, writer_params) ...more logic... class Doer: def __init__(...args...): ... def Read(self, ...args...): c = _READERS.get() try: ...work with conn finally: _READERS.put(c) def Writer(...): ...similar to Read()... To me, this is a bad pattern to follow; some cons: Doers can be created without their preconditions being satisfied. The code isn't easily testable because ConnPool can't be directly mocked out. Init has to be called right the first time. If it's changed so it can be called multiple times, extra logic has to be added to check if variables are already defined, and lots of NULL values have to be passed around to skip re-initializing. In the event of threads, the above becomes more complicated by adding locking. Globals are being used to communicate state (which isn't strictly bad, but a code smell). On the other hand, some pros: it's very convenient to call Init(5, "user/pass", 2, "user/pass"). It's simple and "clean". Personally, I think the cons outweigh the pros; that is, testability and assured preconditions outweigh simplicity and convenience.


  • Reference for proper handling of PID file on Unix

    - by bignose
    Where can I find a well-respected reference that details the proper handling of PID files on Unix? On Unix operating systems, it is common practice to “lock” a program (often a daemon) by use of a special lock file: the PID file. This is a file in a predictable location, often ‘/var/run/foo.pid’. The program is supposed to check when it starts up whether the PID file exists and, if the file does exist, exit with an error. So it's a kind of advisory, collaborative locking mechanism. The file contains a single line of text, being the numeric process ID (hence the name “PID file”) of the process that currently holds the lock; this allows an easy way to automate sending a signal to the process that holds the lock. What I can't find is a good reference on expected or “best practice” behaviour for handling PID files. There are various nuances: how to actually lock the file (don't bother? use the kernel? what about platform incompatibilities?), handling stale locks (silently delete them? when to check?), when exactly to acquire and release the lock, and so forth. Where can I find a respected, most-authoritative reference (ideally on the level of W. Richard Stevens) for this small topic?


  • python / sqlite - database locked despite large timeouts

    - by Chris Phillips
    Hi, I'm sure I'm missing something pretty obvious, but I can't for the life of me stop my pysqlite scripts crashing out with a database is locked error. I have two scripts, one to load data into the database, and one to read data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I've got the timeout on both scripts set to 30 seconds: cx = sqlite.connect("database.sql", timeout=30.0) and think I can see some evidence of the timeouts in that I get what appears to be a timing stamp (e.g. 0.12343827e-06 0.1 - and how do I stop that being printed?) dumped occasionally in the middle of my curses-formatted output screen, but no delay that ever gets remotely near the 30-second timeout; still, one or the other keeps crashing again and again from this. I'm running RHEL 5.4 on a 64-bit 4-CPU HS21 IBM blade, and have heard some mention of issues with multi-threading and am not sure if this might be relevant. Packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside of RedHat's official provisions is not a great option for me. Possible, but not desirable due to the environment in general. I previously had autocommit=1 on in both scripts, but have since disabled it on both, and am now cx.commit()ing in the inserting script and not committing in the select script. Ultimately, as I only ever have one script actually making any modifications, I don't really see why this locking should ever, ever happen. I have noticed that this is significantly worse over time as the database has grown larger. It was recently at 13 MB with 3 equal-sized tables, which was about 1 day's worth of data. Creating a new file has significantly improved this, which seems understandable, but the timeout ultimately just doesn't seem to be being obeyed. Any pointers very much appreciated. Thanks, Chris


  • Delphi: Alternative to using Assign/ReadLn for text file reading

    - by Ian Boyd
    I want to process a text file line by line. In the olden days I loaded the file into a StringList: slFile := TStringList.Create(); slFile.LoadFromFile(filename); for i := 0 to slFile.Count-1 do begin oneLine := slFile.Strings[i]; //process the line end; Problem with that is once the file gets to be a few hundred megabytes, I have to allocate a huge chunk of memory; when really I only need enough memory to hold one line at a time. (Plus, you can't really indicate progress when the system is locked up loading the file in step 1.) Then I tried using the native, and recommended, file I/O routines provided by Delphi: var f: TextFile; begin Assign(f, filename); Reset(f); while not Eof(f) do begin ReadLn(f, oneLine); //process the line end; Problem with Assign is that there is no option to read the file without locking (i.e. fmShareDenyNone). The former stringlist example doesn't support no-lock either, unless you change it to LoadFromStream: slFile := TStringList.Create; stream := TFileStream.Create(filename, fmOpenRead or fmShareDenyNone); slFile.LoadFromStream(stream); stream.Free; for i := 0 to slFile.Count-1 do begin oneLine := slFile.Strings[i]; //process the line end; So now even though I've gained no locks being held, I'm back to loading the entire file into memory. Is there some alternative to Assign/ReadLn, where I can read a file line-by-line, without taking a sharing lock? I'd rather not get directly into Win32 CreateFile/ReadFile, and having to deal with allocating buffers and detecting CR, LF, CRLFs. I thought about memory-mapped files, but there's the difficulty if the entire file doesn't fit (map) into virtual memory, and having to map views (pieces) of the file at a time. Starts to get ugly. I just want Assign with fmShareDenyNone!


  • How to enable and use HTTP PUT and DELETE with Apache2 and PHP?

    - by Andreas Jansson
    Hi, It should be so simple. I've followed every tutorial and forum I could find, yet I can't get it to work. I simply want to build a RESTful API in PHP on Apache2. In my VirtualHost directive I say: <Directory /> AllowOverride All <Limit GET HEAD POST PUT DELETE OPTIONS> Order Allow,Deny Allow from all </Limit> </Directory> Yet every PUT request I make to the server, I get 405 method not supported. Someone advocated using the Script directive, but since I use mod_php, as opposed to CGI, I don't see why that would work. People mention using WebDAV, but to me that seems like overkill. After all, I don't need DAV locking, a DAV filesystem, etc. All I want to do is pass the request on to a PHP script and handle everything myself. I only want to enable PUT and DELETE for the clean semantics. Thanks, Andreas


  • How to make Stack.Pop threadsafe

    - by user260197
    I am using the BlockingQueue code posted in this question, but realized I needed to use a Stack instead of a Queue given how my program runs. I converted it to use a Stack and renamed the class as needed. For performance I removed locking in Push, since my producer code is single threaded. My problem is how a thread working on the (now) thread-safe Stack can know when it is empty. Even if I add another thread-safe wrapper around Count that locks on the underlying collection like Push and Pop do, I still run into the race condition that accessing Count and then Pop is not atomic. Possible solutions as I see them (which is preferred, and am I missing any that would work better?): Consumer threads catch the InvalidOperationException thrown by Pop(). Pop() returns nullptr when _stack->Count == 0; however, C++/CLI does not have the default() operator a la C#. Pop() returns a boolean and uses an output parameter to return the popped element. Here is the code I am using right now: generic <typename T> public ref class ThreadSafeStack { public: ThreadSafeStack() { _stack = gcnew Collections::Generic::Stack<T>(); } public: void Push(T element) { _stack->Push(element); } T Pop(void) { System::Threading::Monitor::Enter(_stack); try { return _stack->Pop(); } finally { System::Threading::Monitor::Exit(_stack); } } public: property int Count { int get(void) { System::Threading::Monitor::Enter(_stack); try { return _stack->Count; } finally { System::Threading::Monitor::Exit(_stack); } } } private: Collections::Generic::Stack<T> ^_stack; };
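    For reference, a hedged C# sketch of the third option (C# rather than the question's C++/CLI, since that is where the original BlockingQueue came from); the class and member names are illustrative, and Push is locked again here because even a single producer still races against consumers mutating the same underlying Stack:

        using System.Collections.Generic;

        public class ThreadSafeStack<T>
        {
            private readonly Stack<T> _stack = new Stack<T>();
            private readonly object _sync = new object();

            public void Push(T element)
            {
                lock (_sync)
                {
                    _stack.Push(element);
                }
            }

            // bool result + out parameter: the emptiness check and the pop
            // happen under the same lock, so Count-then-Pop cannot race.
            public bool TryPop(out T element)
            {
                lock (_sync)
                {
                    if (_stack.Count == 0)
                    {
                        element = default(T);
                        return false;        // empty: no exception, no race
                    }
                    element = _stack.Pop();
                    return true;
                }
            }
        }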


  • Shared value in parallel python

    - by Jonathan
    Hey all- I'm using ParallelPython to develop a performance-critical script. I'd like to share one value between the 8 processes running on the system. Please excuse the trivial example but this illustrates my question. def findMin(listOfElements): for el in listOfElements: if el < min: min = el import pp min = 0 myList = range(100000) job_server = pp.Server() f1 = job_server.submit(findMin, myList[0:25000]) f2 = job_server.submit(findMin, myList[25000:50000]) f3 = job_server.submit(findMin, myList[50000:75000]) f4 = job_server.submit(findMin, myList[75000:100000]) The pp docs don't seem to describe a way to share data across processes. Is it possible? If so, is there a standard locking mechanism (like in the threading module) to confirm that only one update is done at a time? l = Lock() if(el < min): l.acquire() if(el < min): min = el l.release() I understand I could keep a local min and compare the 4 in the main thread once returned, but by sharing the value I can do some better pruning of my BFS binary tree and potentially save a lot of loop iterations. Thanks- Jonathan


  • Accessing class member variables inside a BackgroundWorker's DoWork event handler, and other Backgro

    - by Justin
    Question 1 In the DoWork event handler of a BackgroundWorker, is it safe to access (for both reading and writing) member variables of the class that contains the BackgroundWorker? Is it safe to access other variables that are not declared inside the DoWork event handler itself? Obviously DoWork should not be accessing any UI objects of, say, a WinForms application, as the UI should only be updated from the UI thread. But what about accessing other (not UI-related) member variables? The reason why I ask is that I've seen the occasional comment come up while Googling saying that accessing member variables is not allowed. The only example I can find at the moment is a comment on this MSDN page, which says: Note, that the BGW can cause exceptions if it attempts to access or modify class level variables. All data must be passed to it by delegates and events. And also: NEVER. NEVER. Never try to reference variables not declared inside of DoWork. It may seem to work at times, but in reality you are just getting lucky. As far as I know, MSDN itself does not document any restrictions of this kind (although if I'm wrong, I'd appreciate a link). But comments like these do seem to pop up every now and again. (Of course if DoWork does access/modify a member variable that could be accessed/modified by the main thread at the same time, it is necessary to synchronise access to that field, eg by using a locking object. But the above quotes seem to require a blanket ban of accessing member variables, rather than just synchronising access!) Question 2 To make this into a more general question, are there any other (not documented?) restrictions that users of the BackgroundWorker should be aware of, aside from the above? Any "best practices", perhaps?
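    A minimal sketch of the pattern those comments are pushing toward: pass input through RunWorkerAsync's argument, hand output back through e.Result, and guard any member that genuinely must be shared with a lock. The Downloader class and its fields are hypothetical, not from the MSDN page:

        using System;
        using System.ComponentModel;

        public class Downloader
        {
            private readonly BackgroundWorker _worker = new BackgroundWorker();
            private readonly object _statsLock = new object();
            private int _bytesSeen;            // shared member, guarded by _statsLock

            public Downloader()
            {
                _worker.DoWork += OnDoWork;
                _worker.RunWorkerCompleted += OnCompleted;
            }

            public void Start(string url)
            {
                _worker.RunWorkerAsync(url);   // input travels in as the argument...
            }

            private void OnDoWork(object sender, DoWorkEventArgs e)
            {
                string url = (string)e.Argument;
                int length = url.Length;       // stand-in for the real work

                lock (_statsLock)              // touching a member is fine if synchronized
                {
                    _bytesSeen += length;
                }

                e.Result = length;             // ...and output travels back as the result
            }

            private void OnCompleted(object sender, RunWorkerCompletedEventArgs e)
            {
                // Marshalled back via the SynchronizationContext captured at
                // RunWorkerAsync time (the UI thread in a WinForms app), so
                // reading e.Result here needs no extra locking.
                Console.WriteLine("Processed {0} bytes", e.Result);
            }
        }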


  • What is the best practice for using lock within inherited classes

    - by JDMX
    I want to know if one class is inheriting from another, is it better to have the classes share a lock object that is defined at the base class or to have a lock object defined at each inheritance level. A very simple example of a lock object on each level of the class public class Foo { private object thisLock = new object(); private int ivalue; public int Value { get { lock( thisLock ) { return ivalue; } } set { lock( thisLock ) { ivalue= value; } } } } public class Foo2: Foo { private object thisLock2 = new object(); public int DoubleValue { get { lock( thisLock2 ) { return base.Value * 2; } } set { lock( thisLock2 ) { base.Value = value / 2; } } } } public class Foo6: Foo2 { private object thisLock6 = new object(); public int TripleDoubleValue { get { lock( thisLock6 ) { return base.DoubleValue * 3; } } set { lock( thisLock6 ) { base.DoubleValue = value / 3; } } } } A very simple example of a shared lock object public class Foo { protected object thisLock = new object(); private int ivalue; public int Value { get { lock( thisLock ) { return ivalue; } } set { lock( thisLock ) { ivalue= value; } } } } public class Foo2: Foo { public int DoubleValue { get { lock( thisLock ) { return base.Value * 2; } } set { lock( thisLock ) { base.Value = value / 2; } } } } public class Foo6: Foo2 { public int TripleDoubleValue { get { lock( thisLock ) { return base.DoubleValue * 3; } } set { lock( thisLock ) { base.DoubleValue = value / 3; } } } } Which example is the preferred way to manage locking within an inherited class?


  • Are there some cases where Python threads can safely manipulate shared state?

    - by erikg
    Some discussion in another question has encouraged me to better understand cases where locking is required in multithreaded Python programs. Per this article on threading in Python, I have several solid, testable examples of pitfalls that can occur when multiple threads access shared state. The example race condition provided on this page involves races between threads reading and manipulating a shared variable stored in a dictionary. I think the case for a race here is very obvious, and fortunately is eminently testable. However, I have been unable to evoke a race condition with atomic operations such as list appends or variable increments. This test exhaustively attempts to demonstrate such a race: from threading import Thread, Lock import operator def contains_all_ints(l, n): l.sort() for i in xrange(0, n): if l[i] != i: return False return True def test(ntests): results = [] threads = [] def lockless_append(i): results.append(i) for i in xrange(0, ntests): threads.append(Thread(target=lockless_append, args=(i,))) threads[i].start() for i in xrange(0, ntests): threads[i].join() if len(results) != ntests or not contains_all_ints(results, ntests): return False else: return True for i in range(0,100): if test(100000): print "OK", i else: print "appending to a list without locks *is* unsafe" exit() I have run the test above without failure (100x 100k multithreaded appends). Can anyone get it to fail? Is there another class of object which can be made to misbehave via atomic, incremental modification by threads? Do these implicitly 'atomic' semantics apply to other operations in Python? Is this directly related to the GIL?


  • Slow MySQL Query not using filesort

    - by Canadaka
    I have a query on my homepage that is getting slower and slower as my database table grows larger. tablename = tweets_cache rows = 572,327 This is the query I'm currently using that is slow, over 5 seconds: SELECT * FROM tweets_cache t WHERE t.province='' AND t.mp='0' ORDER BY t.published DESC LIMIT 50; If I take out either the WHERE or the ORDER BY, then the query is super fast, 0.016 seconds. I have the following indexes on the tweets_cache table: PRIMARY published mp category province author So I'm not sure why it's not using the indexes, since mp, province and published all have indexes. Doing a profile of the query shows that it's not using an index to sort the query and is using filesort, which is really slow. possible_keys = mp,province Extra = Using where; Using filesort I tried adding a new multi-column index on "province & mp". The explain shows that this new index is listed under "possible_keys" and "key", but the query time is unchanged, still over 5 seconds. Here is a screenshot of the profiler info on the query. http://i355.photobucket.com/albums/r469/canadaka_bucket/slow_query_profile.png Something weird: I made a dump of my database to test on my local desktop so I don't screw up the live site. The same query on my local machine runs super fast, milliseconds. So I copied all the same mysql startup variables from the server to my local machine to make sure there wasn't some setting that might be causing this. But even after that the local query runs super fast, while the one on the live server takes over 5 seconds. My database server is only using around 800MB of the 4GB it has available. Here are the related my.ini settings I'm using: default-storage-engine = MYISAM max_connections = 800 skip-locking key_buffer = 512M max_allowed_packet = 1M table_cache = 512 sort_buffer_size = 4M read_buffer_size = 4M read_rnd_buffer_size = 16M myisam_sort_buffer_size = 64M thread_cache_size = 8 query_cache_size = 128M # Try number of CPU's*2 for thread_concurrency thread_concurrency = 8 # Disable Federated by default skip-federated key_buffer = 512M sort_buffer_size = 256M read_buffer = 2M write_buffer = 2M key_buffer = 512M sort_buffer_size = 256M read_buffer = 2M write_buffer = 2M


  • C# WinForms. Multiple Forms in separate threads

    - by Calum Murray
    I'm trying to run an ATM Simulation in C# with Windows Forms that can have more than one instance of an ATM machine transacting with a bank account simultaneously. The idea is to use semaphores/locking to block critical code that may lead to race conditions. My question is this: How can I run two Forms simultaneously on separate threads? In particular, how does all of this fit in with the Application.Run() that's already there? Here's my main class: public class Bank { private Account[] ac = new Account[3]; private ATM atm; public Bank() { ac[0] = new Account(300, 1111, 111111); ac[1] = new Account(750, 2222, 222222); ac[2] = new Account(3000, 3333, 333333); Application.Run(new ATM(ac)); } static void Main(string[] args) { new Bank(); } } ...that I want to run two of these forms on separate threads... public partial class ATM : Form { //local reference to the array of accounts private Account[] ac; //this is a reference to the account that is being used private Account activeAccount = null; private static int stepCount = 0; private string buffer = ""; // the ATM constructor takes an array of account objects as a reference public ATM(Account[] ac) { InitializeComponent(); //Sets up Form ATM GUI in ATM.Designer.cs this.ac = ac; } ... I've tried using Thread ATM2 = new Thread(new ThreadStart(/*What goes in here?*/)); But what method do I put in the ThreadStart constructor, since the ATM form is event-driven and there's no one method controlling it? Thanks, Calum
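    A hedged sketch of one common answer: give each form its own STA thread running its own message loop, so the ThreadStart target is simply a method (or lambda) that calls Application.Run for that form. It reuses the question's ATM and Account types and is illustrative only:

        using System;
        using System.Threading;
        using System.Windows.Forms;

        public static class AtmLauncher
        {
            public static Thread LaunchAtm(Account[] accounts)
            {
                // Each call starts an independent message loop for one ATM window.
                Thread t = new Thread(() => Application.Run(new ATM(accounts)));
                t.SetApartmentState(ApartmentState.STA);   // WinForms requires STA
                t.IsBackground = true;                     // don't keep the process alive
                t.Start();
                return t;
            }
        }

    Whether two message loops are really wanted here, as opposed to one UI thread with the shared-account locking done in non-UI code, is a design choice the simulation still has to make.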


  • Calling QAxWidget method outside of the GUI thread

    - by user304361
    I'm beginning to wonder if this is impossible, but I thought I'd ask in case there's a clever way to get around the problems I'm having. I have a Qt application that uses an ActiveX control. The control is held by a QAxWidget, and the QAxWidget itself is contained within another QWidget (I needed to add additional signals/slots to the widget, and I couldn't just subclass QAxWidget because the class doesn't permit that). When I need to interact with the ActiveX control, I call a method of the QWidget, which in turn calls the dynamicCall method of the QAxWidget in order to invoke the appropriate method of the ActiveX control. All of that is working fine. However, one method of the ActiveX control takes several seconds to return. When I call this method, my entire GUI locks up for a few seconds until the method returns. This is undesirable. I'd like the ActiveX control to go off and do its processing by itself and come back to me when it's done without locking up the Qt GUI. I've tried a few things without success: Creating a new QThread and calling QAxWidget::dynamicCall from the new thread Connecting a signal to the appropriate slot method of the QAxWidget and calling the method using signals/slots instead of using dynamicCall Calling QAxWidget::dynamicCall using QtConcurrent::run Nothing seems to affect the behavior. No matter how or where I use dynamicCall (or trigger the appropriate slot of the QAxWidget), the GUI locks until the ActiveX control completes its operation. Is there any way to detach this ActiveX processing from the Qt GUI thread so that the GUI doesn't lock up while the ActiveX control is running a method? Is there something clever I can do with QAxBase or QAxObject to get my desired results?


  • Multi-accordion help (CSS issue maybe?)

    - by Josh
    So, I've been trying to develop this multi-accordion news section for this site. It's actually all working, thanks to an insightful plugin. I've modified it a little bit so it works as I want it to, but I've run into two issues, one of which is possibly CSS. Issue #1: The idea for the user is that when they view this page, they see all the recent headlines. They can also see who it has been posted by and how many comments have been made on the article. If they wish, they can then click on the headline and the field will expand into the article. They can then either make a comment or view the comments by clicking the View Comments link or clicking the "number of comments" link in the "Posted by..." area (a shortcut to the comments, basically). The problem I'm having is that if I make the AUTHOR or the "0" comments a link, it breaks the accordion, because the accordion uses an A CLASS to open it up. I'm looking for a fix; I've tried making it an H1 or a DIV but that also breaks it. Issue #2: This is a pretty picky one, but when you click the headline it expands, and at least in Firefox (haven't tested it in Chrome yet) the text jumps in from the right to the left, locking into place where the CSS tells it to (padding-left). I don't know exactly why it's doing that; if anyone has any insight on that, it'd be appreciated. A two-parter to this issue: when you open the headline to the article and then decide to close the article by clicking on the headline, parts of the accordion jump from the darker purple to the light purple before the task is finished. I'm also interested in fixing this, but this issue in its entirety is a pretty nit-picky thing. You can view the demo of the site here: http://www.notedls.com/demo Please, if anyone has any advice or fixes, I'd appreciate it. I've been trying to get this all to work to the best of my ability, but I'm clearly no guru or expert. Thanks!


  • Singleton Issues in iOS http client

    - by Andrew Lauer Barinov
    I implemented an HTTP client in iOS that is meant to be a singleton. It is a subclass of AFNetworking's excellent AFHTTPClient. My access method is pretty standard: + (LBHTTPClient*) sharedHTTPClient { static dispatch_once_t once = 0; static LBHTTPClient *sharedHTTPClient = nil; dispatch_once(&once, ^{ sharedHTTPClient = [[LBHTTPClient alloc] initHTTPClient]; }); return sharedHTTPClient; } // my init method - (id) initHTTPClient { self = [super initWithBaseURL:[NSURL URLWithString:kBaseURL]]; if (self) { // Sub class specific initializations } return self; } Super's init method is very long, and can be found here. However, the problem I experience is that when the dispatch_once block is run, the app locks up and becomes unresponsive. When I step through the code, I notice that it locks on dispatch_once and then remains frozen. Is this something to do with the main thread locking down? How come it stays locked like that? Also, none of the HTTP client methods fire. Stepping through the code some more, I find that this line in dispatch/once.h is where the app locks down: DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_NONNULL_ALL DISPATCH_NOTHROW void _dispatch_once(dispatch_once_t *predicate, dispatch_block_t block) { if (DISPATCH_EXPECT(*predicate, ~0l) != ~0l) { dispatch_once(predicate, block); // App locks here } }


  • Delphi: Alternative to using Reset/ReadLn for text file reading

    - by Ian Boyd
    I want to process a text file line by line. In the olden days I loaded the file into a StringList: slFile := TStringList.Create(); slFile.LoadFromFile(filename); for i := 0 to slFile.Count-1 do begin oneLine := slFile.Strings[i]; //process the line end; Problem with that is once the file gets to be a few hundred megabytes, I have to allocate a huge chunk of memory; when really I only need enough memory to hold one line at a time. (Plus, you can't really indicate progress when the system is locked up loading the file in step 1.) Then I tried using the native, and recommended, file I/O routines provided by Delphi: var f: TextFile; begin AssignFile(f, filename); Reset(f); while not Eof(f) do begin ReadLn(f, oneLine); //process the line end; Problem with Reset is that there is no option to read the file without locking (i.e. fmShareDenyNone). The former stringlist example doesn't support no-lock either, unless you change it to LoadFromStream: slFile := TStringList.Create; stream := TFileStream.Create(filename, fmOpenRead or fmShareDenyNone); slFile.LoadFromStream(stream); stream.Free; for i := 0 to slFile.Count-1 do begin oneLine := slFile.Strings[i]; //process the line end; So now even though I've gained no locks being held, I'm back to loading the entire file into memory. Is there some alternative to Assign/ReadLn, where I can read a file line-by-line, without taking a sharing lock? I'd rather not get directly into Win32 CreateFile/ReadFile, and having to deal with allocating buffers and detecting CR, LF, CRLFs. I thought about memory-mapped files, but there's the difficulty if the entire file doesn't fit (map) into virtual memory, and having to map views (pieces) of the file at a time. Starts to get ugly. I just want Reset with fmShareDenyNone!


  • gridview add new row problem

    - by Dominating
    My relation is between two tables - table A and table B. B has an FK ID pointing to A.ID. In the gridview I have chosen A as the source. When I append a new row in A it's OK, but with this action I want to add a new row in B too and initialize it with some values copied from the row whose number is given as the Row argument to PasteCopy. private void PasteCopy(int Row) { XPDataTableObject forCopy = gridView1.GetRow(Row) as XPDataTableObject; gridViewEcnMaster.AddNewRow(); XPDataTableObject toCopy = gridView1.GetRow(GridControl.NewItemRowHandle) as XPDataTableObject; SetA(forCopy);// working SetB(ref toCopy, forCopy); XPDataTableObject toCopy1 = gridView1.GetFocusedRow() as XPDataTableObject; XPCollection historyForCopy = toCopy1.GetMemberValue("FK___B__ID") as XPCollection; foreach (XPDataTableObject item in historyForCopy) { MessageBox.Show(item.GetMemberValue("USER").ToString()); } } public void SetB(ref XPDataTableObject toCopy, XPDataTableObject forCopy) { XPCollection historyToCopy = toCopy.GetMemberValue("FK__B__ID") as XPCollection; XPCollection historyForCopy = forCopy.GetMemberValue("FK__B__ID") as XPCollection; XPClassInfo cinfo = session.GetClassInfo(typeof(SPM_ECN_DataSet.BDataTable)); foreach (XPDataTableObject item in historyForCopy) { XPDataTableObject historyRecord = new XPDataTableObject(session, cinfo); historyRecord.SetMemberValue("USER", GetCurWinUser().ToString()); historyRecord.SetMemberValue("ID", forCopy.GetMemberValue("ID"));//if not set == null historyToCopy.Add(historyRecord); } } public void SetA(XPDataTableObject forCopy) { gridView1.SetFocusedRowCellValue("VERSION", 1); } What is wrong with this? Why is it locking up my whole application after I do this?


  • Thread-safe initialization of function-local static const objects

    - by sbi
    This question made me question a practice I had been following for years. For thread-safe initialization of function-local static const objects I protect the actual construction of the object, but not the initialization of the function-local reference referring to it. Something like this: namspace { const some_type& create_const_thingy() { lock my_lock(some_mutex); static const some_type the_const_thingy; return the_const_thingy; } } void use_const_thingy() { static const some_type& the_const_thingy = create_const_thingy(); // use the_const_thingy } The idea is that locking takes time, and if the reference is overwritten by several threads, it won't matter. I'd be interested if this is safe enough in practice? safe according to The Rules? (I know, the current standard doesn't even know what "concurrency" is, but what about trampling over an already initialized reference? And do other standards, like POSIX, have something to say that's relevant to this?) For the inquiring minds: Many such function-local static const objects I used are maps which are initialized from const arrays upon first use and used for lookup. For example, I have a few XML parsers where tag name strings are mapped to enum values, so I could later switch over the tags enum values.


  • Lightweight alternative to Manual/AutoResetEvent in C#

    - by sweetlilmre
    Hi, I have written what I hope is a lightweight alternative to using the ManualResetEvent and AutoResetEvent classes in C#/.NET. The reasoning behind this was to have Event like functionality without the weight of using a kernel locking object. Although the code seems to work well in both testing and production, getting this kind of thing right for all possibilities can be a fraught undertaking and I would humbly request any constructive comments and or criticism from the StackOverflow crowd on this. Hopefully (after review) this will be useful to others. Usage should be similar to the Manual/AutoResetEvent classes with Notify() used for Set(). Here goes: using System; using System.Threading; public class Signal { private readonly object _lock = new object(); private readonly bool _autoResetSignal; private bool _notified; public Signal() : this(false, false) { } public Signal(bool initialState, bool autoReset) { _autoResetSignal = autoReset; _notified = initialState; } public virtual void Notify() { lock (_lock) { // first time? if (!_notified) { // set the flag _notified = true; // unblock a thread which is waiting on this signal Monitor.Pulse(_lock); } } } public void Wait() { Wait(Timeout.Infinite); } public virtual bool Wait(int milliseconds) { lock (_lock) { bool ret = true; // this check needs to be inside the lock otherwise you can get nailed // with a race condition where the notify thread sets the flag AFTER // the waiting thread has checked it and acquires the lock and does the // pulse before the Monitor.Wait below - when this happens the caller // will wait forever as he "just missed" the only pulse which is ever // going to happen if (!_notified) { ret = Monitor.Wait(_lock, milliseconds); } if (_autoResetSignal) { _notified = false; } return (ret); } } }

