Search Results

Search found 5496 results on 220 pages for 'threaded comments'.

  • Where to stop/destroy threads in Android Service class?

    - by niko
    Hi, I have created a threaded service the following way: public class TCPClientService extends Service{ ... @Override public void onCreate() { ... Measurements = new LinkedList<String>(); enableDataSending(); } @Override public IBinder onBind(Intent intent) { //TODO: Replace with service binding implementation return null; } @Override public void onLowMemory() { Measurements.clear(); super.onLowMemory(); }; @Override public void onDestroy() { Measurements.clear(); super.onDestroy(); try { SendDataThread.stop(); } catch(Exception e) { } }; private Runnable backgrounSendData = new Runnable() { public void run() { doSendData(); } }; private void enableDataSending() { SendDataThread = new Thread(null, backgrounSendData, "send_data"); SendDataThread.start(); } private void addMeasurementToQueue() { if(Measurements.size() <= 100) { String measurement = packData(); Measurements.add(measurement); } } private void doSendData() { while(true) { try { if(Measurements.isEmpty()) { Thread.sleep(1000); continue; } //Log.d("TCP", "C: Connecting..."); Socket socket = new Socket(); socket.setTcpNoDelay(true); socket.connect(new InetSocketAddress(serverAddress, portNumber), 3000); //socket.connect(new InetSocketAddress(serverAddress, portNumber)); if(!socket.isConnected()) { throw new Exception("Server Unavailable!"); } try { //Log.d("TCP", "C: Sending: '" + message + "'"); PrintWriter out = new PrintWriter( new BufferedWriter( new OutputStreamWriter(socket.getOutputStream())),true); String message = Measurements.remove(); out.println(message); Thread.sleep(200); Log.d("TCP", "C: Sent."); Log.d("TCP", "C: Done."); connectionAvailable = true; } catch(Exception e) { Log.e("TCP", "S: Error", e); connectionAvailable = false; } finally { socket.close(); announceNetworkAvailability(connectionAvailable); } } catch (Exception e) { Log.e("TCP", "C: Error", e); connectionAvailable = false; announceNetworkAvailability(connectionAvailable); } } } } After I close the application the phone works really slow and I guess it is due to thread termination failure. Does anyone know what is the best way to terminate all threads before terminating the application?
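
    Thread.stop() is deprecated and unreliable, so it is unlikely to terminate the loop above. A common alternative is a volatile flag plus interrupt(), checked by the loop; a minimal sketch (class and member names here are illustrative, not from the post):

        // Cooperative shutdown: the loop re-checks a volatile flag, and
        // interrupt() wakes the thread if it is blocked in sleep().
        class Sender {
            private volatile boolean running = true;
            private final Thread worker = new Thread(new Runnable() {
                public void run() {
                    while (running) {
                        try {
                            Thread.sleep(1000); // stands in for the connect/send work
                        } catch (InterruptedException e) {
                            // woken early; the loop re-checks the flag and exits
                        }
                    }
                }
            }, "send_data");

            void start() { worker.start(); }

            void stop() throws InterruptedException {
                running = false;    // ask the loop to finish
                worker.interrupt(); // wake it if sleeping
                worker.join();      // wait for a clean exit (e.g. in onDestroy)
            }
        }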

    Read the article

  • Synchronization requirements for FileStream.(Begin/End)(Read/Write)

    - by Doug McClean
    Is the following pattern of multi-threaded calls acceptable to a .Net FileStream? Several threads calling a method like this: ulong offset = whatever; // different for each thread byte[] buffer = new byte[8192]; object state = someState; // unique for each call, hence also for each thread lock(theFile) { theFile.Seek(whatever, SeekOrigin.Begin); IAsyncResult result = theFile.BeginRead(buffer, 0, 8192, AcceptResults, state); } if(result.CompletedSynchronously) { // is it required for us to call AcceptResults ourselves in this case? // or did BeginRead already call it for us, on this thread or another? } Where AcceptResults is: void AcceptResults(IAsyncResult result) { lock(theFile) { int bytesRead = theFile.EndRead(result); // if we guarantee that the offset of the original call was at least 8192 bytes from // the end of the file, and thus all 8192 bytes exist, can the FileStream read still // actually read fewer bytes than that? // either: if(bytesRead != 8192) { Panic("Page read borked"); } // or: // issue a new call to begin read, moving the offsets into the FileStream and // the buffer, and decreasing the requested size of the read to whatever remains of the buffer } } I'm confused because the documentation seems unclear to me. For example, the FileStream class says: Any public static members of this type are thread safe. Any instance members are not guaranteed to be thread safe. But the documentation for BeginRead seems to contemplate having multiple read requests in flight: Multiple simultaneous asynchronous requests render the request completion order uncertain. Are multiple reads permitted to be in flight or not? Writes? Is this the appropriate way to secure the location of the Position of the stream between the call to Seek and the call to BeginRead? Or does that lock need to be held all the way to EndRead, hence only one read or write in flight at a time? I understand that the callback will occur on a different thread, and my handling of state and buffer deals with that in a way that would permit multiple in-flight reads. Further, does anyone know where in the documentation to find the answers to these questions? Or an article written by someone in the know? I've been searching and can't find anything. Relevant documentation: FileStream class, Seek method, BeginRead method, EndRead, IAsyncResult interface
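
    The documentation question is left open here, but one conservative pattern that sidesteps it entirely (an alternative, not an answer about FileStream's guarantees) is to give each thread its own FileStream over the same file, so no Position is shared and no lock is needed:

        using System.IO;

        // Sketch: one FileStream per thread; names are illustrative.
        static class PageReader
        {
            public static void ReadPage(string path, long offset, byte[] buffer)
            {
                using (var fs = new FileStream(path, FileMode.Open,
                                               FileAccess.Read, FileShare.Read))
                {
                    fs.Seek(offset, SeekOrigin.Begin);
                    int total = 0;
                    while (total < buffer.Length)
                    {
                        int n = fs.Read(buffer, total, buffer.Length - total);
                        if (n == 0) break; // end of file: short read is possible
                        total += n;
                    }
                }
            }
        }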

    Read the article

  • Does oneway declaration in Android .aidl guarantee that method will be called in a separate thread?

    - by Dan Menes
    I am designing a framework for a client/server application for Android phones. I am fairly new to both Java and Android (but not new to programming in general, or threaded programming in particular). Sometimes my server and client will be in the same process, and sometimes they will be in different processes, depending on the exact use case. The client and server interfaces look something like the following: IServer.aidl: package com.my.application; interface IServer { /** * Register client callback object */ void registerCallback( in IClient callbackObject ); /** * Do something and report back */ void doSomething( in String what ); . . . } IClient.aidl: package com.my.application; oneway interface IClient { /** * Receive an answer */ void reportBack( in String answer ); . . . } Now here is where it gets interesting. I can foresee use cases where the client calls IServer.doSomething(), which in turn calls IClient.reportBack(), and on the basis of what is reported back, IClient.reportBack() needs to issue another call to IServer.doSomething(). The issue here is that IServer.doSomething() will not, in general, be reentrant. That's OK, as long as IClient.reportBack() is always invoked in a new thread. In that case, I can make sure that the implementation of IServer.doSomething() is always synchronized appropriately so that the call from the new thread blocks until the first call returns. If everything works the way I think it does, then by declaring the IClient interface as oneway, I guarantee this to be the case. At least, I can't think of any way that the call from IServer.doSomething() to IClient.reportBack() can return immediately (what oneway is supposed to ensure), yet IClient.reportBack still be able to reinvoke IServer.doSomething recursively in the same thread. Either a new thread in IServer must be started, or else the old IServer thread can be re-used for the inner call to IServer.doSomething(), but only after the outer call to IServer.doSomething() has returned. So my question is, does everything work the way I think it does? The Android documentation hardly mentions oneway interfaces.
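
    Rather than relying on oneway dispatch details, the server can make the non-reentrancy explicit. A sketch (my own illustration, not from the question) using a non-reentrant guard: a second call from another thread blocks until the first returns, and a recursive call on the same thread would deadlock at acquire(), which makes the oneway assumption visible instead of silently corrupting state:

        import java.util.concurrent.Semaphore;

        // Sketch: serialize doSomething() regardless of binder threading.
        class ServerImpl {
            private final Semaphore gate = new Semaphore(1); // non-reentrant

            void doSomething(String what) throws InterruptedException {
                gate.acquire();
                try {
                    // ... non-reentrant work, may call back into the client ...
                } finally {
                    gate.release();
                }
            }
        }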

    Read the article

  • Upload large files via a webpage

    - by Hultner
    What is the best way to let users upload large files from their web browser to a server? I'm talking 200MB+, possibly up to a few gigabytes. I have been thinking of a few possible solutions to the problem (not tried them yet) and these are basically the things I came up with. Server download speed will not be a problem, but the user's connection possibly could be. Having some sort of applet on the client side, written in Java or Flash, which sends the file in parts (is this possible with an applet?) to a PHP/other script on the server, plus a checksum and some other info about the file. On the server, all the parts and the info file are saved in a temporary directory which has a unique name based on the checksum of the file and the IP of the user. When the last chunk is sent, the applet sends a signal to the server saying it's finished and the server puts the file together in the right location. If a chunk doesn't match the checksum for that part, the server will send a response to the applet telling it to reupload that chunk. I don't know how important the checksum checking is since it's all TCP packets; someone with more insight might be able to answer that. This is probably the worst way: changing the settings on your server to allow huge file uploads via a file input, and doing it like a normal transfer. Use an upload manager which does pretty much the same thing as the applet I mentioned above. Pros of the first are that it would most likely be rather secure, you could show progress as well, possibly resume an upload if the IP hasn't changed, and do a threaded upload of the chunks. Cons of the first are that the user will need Flash/Java for it to work. Pros of the second are that it will pretty much work for everyone, but the cons are big: there's no way of resuming an interrupted upload, and if something goes wrong the whole file has to be reuploaded. For the third one the pros are pretty much the same as for the first, but the con is that the user would have to download an application to their computer and run it, and the application would have to be compatible with their computer and OS. Another way may be a combination of the two: say, an applet for bigger or more files, and a simple input restricted to maybe 10-20MB max for smaller files and compatibility. There are probably other much smarter ways to tackle this, and that's why I'm asking for advice here on SO.
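
    On the checksum point in option 1: TCP already detects corruption in transit, so a per-chunk digest mainly guards against bugs in the server-side reassembly. A minimal sketch of hashing a chunk before sending (names and chunk handling are illustrative):

        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        // Sketch: per-chunk digest the server can re-verify after reassembly.
        class ChunkHasher {
            static String md5Hex(byte[] chunk) throws NoSuchAlgorithmException {
                MessageDigest md = MessageDigest.getInstance("MD5");
                byte[] digest = md.digest(chunk);
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) {
                    hex.append(String.format("%02x", b));
                }
                return hex.toString();
            }
        }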

    Read the article

  • crash in calloc

    - by mmd
    I'm trying to debug a program I wrote. I ran it inside gdb and I managed to catch a SIGABRT from inside calloc(). I'm completely confused about how this can arise. Can it be a bug in gcc or even libc?? More details: My program uses OpenMP. I ran it through valgrind in single-threaded mode with no errors. I also use mmap() to load a 40GB file, but I doubt that is relevant. Inside gdb, I'm running with 30 threads. Several identical runs (same input&CL) finished correctly, until the problematic one that I caught. On the surface this suggests there might be a race condition of some type. However, the SIGABRT comes from calloc() which is out of my control. Here is some relevant gdb output: (gdb) info threads [...] * 11 Thread 0x7ffff0056700 (LWP 73449) 0x00007ffff6a948a5 in raise () from /lib64/libc.so.6 [...] (gdb) thread 11 [Switching to thread 11 (Thread 0x7ffff0056700 (LWP 73449))]#0 0x00007ffff6a948a5 in raise () from /lib64/libc.so.6 (gdb) bt #0 0x00007ffff6a948a5 in raise () from /lib64/libc.so.6 #1 0x00007ffff6a96085 in abort () from /lib64/libc.so.6 #2 0x00007ffff6ad1fe7 in __libc_message () from /lib64/libc.so.6 #3 0x00007ffff6ad7916 in malloc_printerr () from /lib64/libc.so.6 #4 0x00007ffff6adb79f in _int_malloc () from /lib64/libc.so.6 #5 0x00007ffff6adbdd6 in calloc () from /lib64/libc.so.6 #6 0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286 #7 read_get_hit_list_per_strand (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/mapping.c:1046 #8 0x000000000041308a in read_get_hit_list (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1239 #9 handle_read (re=<value optimized out>, options=0x632010, n_options=1) at gmapper/mapping.c:1806 #10 0x0000000000404f35 in launch_scan_threads (.omp_data_i=<value optimized out>) at gmapper/gmapper.c:557 #11 0x00007ffff7230502 in ?? () from /usr/lib64/libgomp.so.1 #12 0x00007ffff6dfc851 in start_thread () from /lib64/libpthread.so.0 #13 0x00007ffff6b4a11d in clone () from /lib64/libc.so.6 (gdb) f 6 #6 0x000000000040e87f in my_calloc (re=0x7fff2867ef10, st=0, options=0x632020) at gmapper/../gmapper/../common/my-alloc.h:286 286 res = calloc(size, 1); (gdb) p size $2 = 814080 (gdb) The function my_calloc() is just a wrapper, but the problem is not in there, as the real calloc() call looks legit. These are the limits set in the shell: $ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 2067285 max locked memory (kbytes, -l) 64 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 1024 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited The program is not out of memory, it's using 41GB on a machine with 256GB available: $ top -b -n 1 | grep gmapper 73437 user 20 0 41.5g 16g 15g T 0.0 6.6 55:17.24 gmapper-ls $ free -m total used free shared buffers cached Mem: 258437 195567 62869 0 82 189677 -/+ buffers/cache: 5807 252629 Swap: 0 0 0 I compiled using gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4), with flags -g -O2 -DNDEBUG -mmmx -msse -msse2 -fopenmp -Wall -Wno-deprecated -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS.
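
    An abort raised from inside calloc() usually points to earlier heap corruption - for example a racy or out-of-bounds write elsewhere - that glibc only notices when it next walks its chunk metadata. A contrived single-threaded illustration of that displaced failure (not the poster's bug, just the pattern):

        #include <stdlib.h>
        #include <string.h>

        /* The overflow happens here, but glibc typically aborts later,
         * inside a subsequent malloc/calloc/free, when it finds the
         * trashed chunk header -- mirroring the backtrace above. */
        int main(void)
        {
            char *a = malloc(16);
            memset(a, 0, 32);            /* out-of-bounds write corrupts the heap */
            char *b = calloc(814080, 1); /* abort may surface here, not above */
            free(b);
            free(a);
            return 0;
        }

    For the multi-threaded case, Valgrind's helgrind or DRD tools can sometimes pinpoint the offending write that single-threaded runs never trigger.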

    Read the article

  • Asynchronous vs Synchronous vs Threading in an iPhone App

    - by Coocoo4Cocoa
    I'm in the design stage for an app which will utilize a REST web service and sort of have a dilemma as far as using asynchronous vs synchronous vs threading. Here's the scenario. Say you have three options to drill down into, each one having its own REST-based resource. I can either lazily load each one with a synchronous request, but that'll block the UI and prevent the user from hitting a back navigation button while data is retrieved. This case applies almost anywhere except for when your application requires a login screen. I can't see any reason to use synchronous HTTP requests vs asynchronous because of that reason alone. The only time it makes sense is to have a worker thread make your synchronous request, and notify the main thread when the request is done. This will prevent the block. The question then is benchmarking your code and seeing which has more overhead, a threaded synchronous request or an asynchronous request. The problem with asynchronous requests is you need to either set up a smart notification or delegate system, as you can have multiple requests for multiple resources happening at any given time. The other problem with them is if I have a class, say a singleton which is handling all of my data, I can't use asynchronous requests in a getter method. Meaning the following won't go: - (NSArray *)users { if(users == nil) users = do_async_request // NO GOOD return users; } whereas the following: - (NSArray *)users { if(users == nil) users == do_sync_request // OK. return users; } You also might have priority. What I mean by priority is if you look at Apple's Mail application on the iPhone, you'll notice they first suck down your entire POP/IMAP tree before making a second request to retrieve the first 2 lines (the default) of your message. I suppose my question to you experts is this. When are you using asynchronous, synchronous, threads -- and when are you using either async/sync in a thread? What kind of delegation system do you have set up to know what to do when an async request completes? Are you prioritizing your async requests? There's a gamut of solutions to this all too common problem. It's simple to hack something out. The problem is, I don't want to hack and I want to have something that's simple and easy to maintain.

    Read the article

  • Synchronizing thread communication?

    - by Roger Alsing
    Just for the heck of it I'm trying to emulate how JRuby generators work using threads in C#. Also, I'm fully aware that C# has built-in support for yield return, I'm just toying around a bit. I guess it's some sort of poor man's coroutines by keeping multiple callstacks alive using threads. (even though none of the callstacks should execute at the same time) The idea is like this: The consumer thread requests a value The worker thread provides a value and yields back to the consumer thread Repeat until worker thread is done So, what would be the correct way of doing the following? //example class Program { static void Main(string[] args) { ThreadedEnumerator<string> enumerator = new ThreadedEnumerator<string>(); enumerator.Init(() => { for (int i = 1; i < 100; i++) { enumerator.Yield(i.ToString()); } }); foreach (var item in enumerator) { Console.WriteLine(item); }; Console.ReadLine(); } } //naive threaded enumerator public class ThreadedEnumerator<T> : IEnumerator<T>, IEnumerable<T> { private Thread enumeratorThread; private T current; private bool hasMore = true; private bool isStarted = false; AutoResetEvent enumeratorEvent = new AutoResetEvent(false); AutoResetEvent consumerEvent = new AutoResetEvent(false); public void Yield(T item) { //wait for consumer to request a value consumerEvent.WaitOne(); //assign the value current = item; //signal that we have yielded the requested enumeratorEvent.Set(); } public void Init(Action userAction) { Action WrappedAction = () => { userAction(); consumerEvent.WaitOne(); enumeratorEvent.Set(); hasMore = false; }; ThreadStart ts = new ThreadStart(WrappedAction); enumeratorThread = new Thread(ts); enumeratorThread.IsBackground = true; isStarted = false; } public T Current { get { return current; } } public void Dispose() { enumeratorThread.Abort(); } object System.Collections.IEnumerator.Current { get { return Current; } } public bool MoveNext() { if (!isStarted) { isStarted = true; enumeratorThread.Start(); } //signal that we are ready to receive a value consumerEvent.Set(); //wait for the enumerator to yield enumeratorEvent.WaitOne(); return hasMore; } public void Reset() { throw new NotImplementedException(); } public IEnumerator<T> GetEnumerator() { return this; } System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return this; } } Ideas?
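
    For comparison (and assuming .NET 4 is available, which the question doesn't state), the same hand-off can be expressed with BlockingCollection, which performs the signalling the two AutoResetEvents emulate by hand - a bounded capacity of 1 gives the one-value-at-a-time yield behaviour:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        class Program
        {
            static void Main()
            {
                var items = new BlockingCollection<string>(boundedCapacity: 1);

                var worker = new Thread(() =>
                {
                    for (int i = 1; i < 100; i++)
                        items.Add(i.ToString());   // blocks until consumed
                    items.CompleteAdding();        // ends the foreach below
                });
                worker.Start();

                foreach (var item in items.GetConsumingEnumerable())
                    Console.WriteLine(item);

                worker.Join();
            }
        }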

    Read the article

  • Event sourcing: Write event before or after updating the model

    - by Magnus
    I'm reasoning about event sourcing and often I arrive at a chicken and egg problem. Would be grateful for some hints on how to reason around this. If I execute all I/O-bound processing async (i.e. writing to the event log) then how do I handle, or sometimes even detect, failures? I'm using Akka Actors so processing is sequential for each event/message. I do not have any database at this time, instead I would persist all the events in an event log and then keep an aggregated state of all the events in a model stored in memory. Queries are all against this model, you can consider it to be a cache. Example Creating a new user: Validate that the user does not exist in model Persist event to journal Update model (in memory) If step 3 breaks I still have persisted my event so I can replay it at a later date. If step 2 breaks I can handle that as well gracefully. This is fine, but since step 2 is I/O-bound I figured that I should do I/O in a separate actor to free up the first actor for queries: Updating a user while allowing queries (A0 = Front end/GUI actor, A1 = Processor Actor, A2 = IO-actor, E = event bus). (A0-E-A1) Event is published to update user 'U1'. Validate that the user 'U1' exists in model (A1-A2) Persist event to journal (separate actor) (A0-E-A1-A0) Query for user 'U1' profile (A2-A1) Event is now persisted continue to update model (A0-E-A1-A0) Query for user 'U1' profile (now returns fresh data) This is appealing since queries can be processed while I/O is churning along at its own pace. But now I can cause myself all kinds of problems where I could have two incompatible commands (delete and then update) be persisted to the event log and crash on me when replayed at a later date, since I do the validation before persisting the event and then update the model. My aim is to have a simple reasoning around my model (since the Actor processes messages sequentially, single-threaded) but not be waiting for I/O-bound updates when querying. I get the feeling I'm modeling a database, which in itself might be a problem. If things are unclear please write a comment.
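
    One way to keep the reasoning simple is to let a single actor validate, persist, and apply as one message-handling step, so no command or query can interleave between validation and the model update. A sketch only - Journal and Model here are stand-in traits I made up, not a real library API:

        import akka.actor.Actor

        trait Journal { def append(event: Any): Unit }   // blocking persist
        trait Model {
          def validate(cmd: Any): Boolean
          def apply(event: Any): Unit
        }

        class Aggregate(journal: Journal, model: Model) extends Actor {
          def receive = {
            case cmd =>
              if (model.validate(cmd)) {
                journal.append(cmd)   // persist fails => event never applied
                model.apply(cmd)      // model only ever sees persisted events
              }
            // queries would be served from a read-side copy, not this actor
          }
        }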

    Read the article

  • Handling exceptions raised in observers

    - by sparky
    I have a Rails (2.3.5) application where there are many groups, and each Group has_many People. I have a Group edit form where users can create new people. When a new person is created, they are sent an email (the email address is user-entered on the form). This is accomplished with an observer on the Person model. The problem comes when ActionMailer throws an exception - for example if the domain does not exist. Clearly that cannot be weeded out with a validation. There would seem to be 2 ways to deal with this: A begin...rescue...end block in the observer around the mailer. The problem with this is that the only way to pass any feedback to the user would be to set a global variable - as the observer is out of the MVC flow, I can't even set a flash[:error] there. A rescue_from in the Groups controller. This works fine, but has 2 problems. Firstly, there is no way to know which person threw the exception (all I can get is the 503 exception, no way to know which person caused the problem). This would be useful information to be able to pass back to the user - at the moment, there is no way for me to let them know which email address is the problem; I just have to chuck the lot back at them, and issue an unhelpful message saying that one of them is not correct. Secondly (and to a certain extent this makes the first point moot) it seems that it is necessary to call a render in the rescue_from, or it dies with a rather bizarre "can't convert Array into String" error from WEBrick, with no stack trace & nothing in the log. Thus, I have to throw it back to the user when I come across the first error and have to stop processing the rest of the emails. Neither of the solutions is optimal. It would seem that the only way to get Rails to do what I want is option 1, and loathsome global variables. This would also rely on Rails being single threaded. Can anyone suggest a better solution to this problem?
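
    For option 1, the observer can at least attach the failure to the record instead of a global, so the controller can inspect it after save. A sketch against Rails 2.3 conventions - the mailer name and the delivery_error accessor are hypothetical:

        # Sketch: rescue in the observer and record the failure on the
        # person; PersonMailer and delivery_error are made-up names.
        class PersonObserver < ActiveRecord::Observer
          def after_create(person)
            PersonMailer.deliver_welcome(person)
          rescue StandardError => e
            person.delivery_error = e.message  # e.g. attr_accessor on Person
          end
        end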

    Read the article

  • How do you prevent IDisposable from spreading to all your classes?

    - by GrahamS
    Start with these simple classes... Let's say I have a simple set of classes like this: class Bus { Driver busDriver = new Driver(); } class Driver { Shoe[] shoes = { new Shoe(), new Shoe() }; } class Shoe { Shoelace lace = new Shoelace(); } class Shoelace { bool tied = false; } A Bus has a Driver, the Driver has two Shoes, each Shoe has a Shoelace. All very silly. Add an IDisposable object to Shoelace Later I decide that some operation on the Shoelace could be multi-threaded, so I add an EventWaitHandle for the threads to communicate with. So Shoelace now looks like this: class Shoelace { private AutoResetEvent waitHandle = new AutoResetEvent(false); bool tied = false; // ... other stuff .. } Implement IDisposable on Shoelace But now FxCop will complain: "Implement IDisposable on 'Shoelace' because it creates members of the following IDisposable types: 'EventWaitHandle'." Okay, I implement IDisposable on Shoelace and my neat little class becomes this horrible mess: class Shoelace : IDisposable { private AutoResetEvent waitHandle = new AutoResetEvent(false); bool tied = false; private bool disposed = false; public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } ~Shoelace() { Dispose(false); } protected virtual void Dispose(bool disposing) { if (!this.disposed) { if (disposing) { if (waitHandle != null) { waitHandle.Close(); waitHandle = null; } } // No unmanaged resources to release otherwise they'd go here. } disposed = true; } } Or (as pointed out by commenters) since Shoelace itself has no unmanaged resources, I might use the simpler dispose implementation without needing the Dispose(bool) and Destructor: class Shoelace : IDisposable { private AutoResetEvent waitHandle = new AutoResetEvent(false); bool tied = false; public void Dispose() { if (waitHandle != null) { waitHandle.Close(); waitHandle = null; } GC.SuppressFinalize(this); } } Watch in horror as IDisposable spreads Right, that's that fixed. But now FxCop will complain that Shoe creates a Shoelace, so Shoe must be IDisposable too. And Driver creates Shoe so Driver must be IDisposable. And Bus creates Driver so Bus must be IDisposable and so on. Suddenly my small change to Shoelace is causing me a lot of work and my boss is wondering why I need to check out Bus to make a change to Shoelace. The Question How do you prevent this spread of IDisposable, but still ensure that your unmanaged objects are properly disposed?

    Read the article

  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews
      - One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found.
      - Models for hierarchical data: slides with good explanations of tradeoffs and example usage.
      - Representing hierarchies in MySQL: very good overview of Nested Set in particular.
      - Hierarchical data in RDBMSs: most comprehensive and well organized set of links I've seen, but not much in the way of explanation.
    Options (ones I am aware of, and general features)
      - Adjacency List:
          Columns: ID, ParentID. Easy to implement. Cheap node moves, inserts, and deletes. Expensive to find level (can store as a computed column), ancestry & descendants (Bridge Hierarchy combined with level column can solve), path (Lineage Column can solve). Use Common Table Expressions in those databases that support them to traverse (see the sketch below).
      - Nested Set (a.k.a. Modified Preorder Tree Traversal):
          First described by Joe Celko - covered in depth in his book Trees and Hierarchies in SQL for Smarties. Columns: Left, Right. Cheap level, ancestry, descendants. Compared to Adjacency List, moves, inserts and deletes are more expensive. Requires a specific sort order (e.g. created), so sorting all descendants in a different order requires additional work.
      - Nested Intervals:
          Combination of Nested Sets and Materialized Path where left/right columns are floating point decimals instead of integers and encode the path information.
      - Bridge Table (a.k.a. Closure Table; some good ideas about how to use triggers for maintaining this approach):
          Columns: ancestor, descendant. Stands apart from the table it describes. Can include some nodes in more than one hierarchy. Cheap ancestry and descendants (albeit not in what order). For complete knowledge of a hierarchy it needs to be combined with another option.
      - Flat Table:
          A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record. Expensive move and delete. Cheap ancestry and descendants. Good use: threaded discussion - forums / blog comments.
      - Lineage Column (a.k.a. Materialized Path, Path Enumeration):
          Column: lineage (e.g. /parent/child/grandchild/etc...). Limits how deep the hierarchy can be. Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path'). Ancestry tricky (database-specific queries).
    Database Specific Notes
      - MySQL: use session variables for Adjacency List.
      - Oracle: use CONNECT BY to traverse Adjacency Lists.
      - PostgreSQL: ltree datatype for Materialized Path.
      - SQL Server: general summary; 2008 offers the HierarchyId data type, which appears to help with the Lineage Column approach and expands the depth that can be represented.
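
    To make the CTE traversal concrete, a sketch in standard SQL against a hypothetical nodes(id, parent_id, name) table, computing level and a materialized path on the fly:

        -- Sketch: walk an adjacency list top-down with a recursive CTE.
        WITH RECURSIVE tree AS (
            SELECT id, parent_id, name,
                   0 AS level,
                   '/' || name AS path
            FROM   nodes
            WHERE  parent_id IS NULL
            UNION ALL
            SELECT n.id, n.parent_id, n.name,
                   t.level + 1,
                   t.path || '/' || n.name
            FROM   nodes n
            JOIN   tree t ON n.parent_id = t.id
        )
        SELECT * FROM tree ORDER BY path;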

    Read the article

  • Implementing events to communicate between two processes - C

    - by Jamie Keeling
    Hello all! I have an application consisting of two windows, one communicates to the other and sends it a struct containing two integers (in this case two rolls of a dice). I will be using events for the following circumstances: Process a sends data to process b, process b displays data Process a closes, in turn closing process b Process b closes a, in turn closing process a I have noticed that if the second process is constantly waiting for the first process to send data then the program will just sit waiting, which is where the idea of implementing threads on each process occurred and I have started to implement this already. The problem I'm having is that I don't exactly have a lot of experience with threads and events so I'm not sure of the best way to actually implement what I want to do. Following is a small snippet of what I have so far in the producer application: Create thread: case IDM_FILE_ROLLDICE: { hDiceRoll = CreateThread( NULL, // lpThreadAttributes (default) 0, // dwStackSize (default) ThreadFunc(hMainWindow), // lpStartAddress NULL, // lpParameter 0, // dwCreationFlags &hDiceID // lpThreadId (returned by function) ); } break; The data being sent to the other process: DWORD WINAPI ThreadFunc(LPVOID passedHandle) { HANDLE hMainHandle = *((HANDLE*)passedHandle); WCHAR buffer[256]; LPCTSTR pBuf; LPVOID lpMsgBuf; LPVOID lpDisplayBuf; struct diceData storage; HANDLE hMapFile; DWORD dw; //Roll dice and store results in variable storage = RollDice(); hMapFile = CreateFileMapping( (HANDLE)0xFFFFFFFF, // use paging file NULL, // default security PAGE_READWRITE, // read/write access 0, // maximum object size (high-order DWORD) BUF_SIZE, // maximum object size (low-order DWORD) szName); // name of mapping object if (hMapFile == NULL) { dw = GetLastError(); MessageBox(hMainHandle,L"Could not create file mapping object",L"Error",MB_OK); return 1; } pBuf = (LPTSTR) MapViewOfFile(hMapFile, // handle to map object FILE_MAP_ALL_ACCESS, // read/write permission 0, 0, BUF_SIZE); if (pBuf == NULL) { MessageBox(hMainHandle,L"Could not map view of file",L"Error",MB_OK); CloseHandle(hMapFile); return 1; } CopyMemory((PVOID)pBuf, &storage, (_tcslen(szMsg) * sizeof(TCHAR))); //_getch(); MessageBox(hMainHandle,L"Completed!",L"Success",MB_OK); UnmapViewOfFile(pBuf); return 0; } I'm trying to find out how I would integrate an event with the threaded code to signify to the other process that something has happened, I've seen an MSDN article on using events but it's just confused me if anything, I'm coming up on empty whilst searching on the internet too. Thanks for any help Edit: I can only use the Create/Set/Open methods for events, sorry for not mentioning it earlier.
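
    Given the Create/Set/Open restriction mentioned in the edit, a minimal sketch of cross-process signalling with a named auto-reset event (the event name and function names are illustrative):

        #include <windows.h>

        /* Producer (process A): create once, signal after writing the data. */
        static HANDLE hDataReady;

        void producer_init(void)
        {
            hDataReady = CreateEvent(NULL, FALSE, FALSE, L"Local\\DiceDataReady");
        }

        void producer_publish(void)
        {
            /* ... CopyMemory the struct into the shared mapping first ... */
            SetEvent(hDataReady);
        }

        /* Consumer (process B): open the same name and block until signalled. */
        DWORD WINAPI consumer_thread(LPVOID unused)
        {
            HANDLE h = OpenEvent(SYNCHRONIZE, FALSE, L"Local\\DiceDataReady");
            while (h && WaitForSingleObject(h, INFINITE) == WAIT_OBJECT_0) {
                /* MapViewOfFile and read the dice values here */
            }
            return 0;
        }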

    Read the article

  • C#: My callback function gets called twice for every Sent Request

    - by Madi D.
    I've got a program that uploads/downloads files to an online server, has a callback to report progress and log it to a text file. The program is built with the following structure: public void Upload(string source, string destination) { //Object containing Source and destination to pass to the threaded function KeyValuePair<string, string> file = new KeyValuePair<string, string>(source, destination); //Threading to make sure no blocking happens after calling upload Function Thread t = new Thread(new ParameterizedThreadStart(amazonHandler.TUpload)); t.Start(file); } private void TUpload(object fileInfo) { KeyValuePair<string, string> file = (KeyValuePair<string, string>)fileInfo; /* Some Magic goes here,Checking The file and Authorizing Upload */ var ftiObject = new FtiObject () { FileNameOnHDD = file.Key, DestinationPath = file.Value, //Has more data used for calculations. }; //Threading to make sure progress gets callback gets called. Thread t = new Thread(new ParameterizedThreadStart(amazonHandler.UploadOP)); t.Start(ftiObject); //Signal used to stop progress untill uploadCompleted is called. uploadChunkDoneSignal.WaitOne(); /* Some Extra Code */ } private void UploadOP(object ftiSentObject) { FtiObject ftiObject = (FtiObject)ftiSentObject; /* Some useless code to create the uri and prepare the ftiObject. */ // webClient.UploadFileAsync will open a thread that // will upload the file and report // progress/complete using registered callback functions. webClient.UploadFileAsync(uri, "PUT", ftiObject.FileNameOnHDD, ftiObject); } I have a callback that is registered to the WebClient's UploadProgressChanged event, however it is getting called twice per sent request. void UploadProgressCallback(object sender, UploadProgressChangedEventArgs e) { FtiObject ftiObject = (FtiObject )e.UserState; Logger.log(ftiObject.FileNameOnHDD, (double)e.BytesSent ,e.TotalBytesToSend); } Log Output: Filename: C:\Text1.txt Uploaded:1024 TotalFileSize: 665241 Filename: C:\Text1.txt Uploaded:1024 TotalFileSize: 665241 Filename: C:\Text1.txt Uploaded:2048 TotalFileSize: 665241 Filename: C:\Text1.txt Uploaded:2048 TotalFileSize: 665241 Filename: C:\Text1.txt Uploaded:3072 TotalFileSize: 665241 Filename: C:\Text1.txt Uploaded:3072 TotalFileSize: 665241 Etc... I am watching the network traffic using a watcher, and only one request is being sent. Somehow I can't figure out why the callback is being called twice; my suspicion was that the callback is getting fired by each thread opened (the main Upload and TUpload), however I don't know how to test if that's the cause. Note: The reason behind the many /**/ Comments is to indicate that the functions do more than just opening threads, and threading is being used to make sure no blocking occurs (there are a couple of "Signal.WaitOne()" calls around the code for synchronization)
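
    One way to narrow this down (a diagnostic sketch, not a fix) is to log everything the event argument carries. UploadProgressChangedEventArgs reports received bytes as well as sent ones, so two raises with identical BytesSent may still differ elsewhere, and the thread id shows whether both raises come from the same place:

        void UploadProgressCallback(object sender, UploadProgressChangedEventArgs e)
        {
            // Dump all progress fields plus the thread id.
            Console.WriteLine(
                "thread={0} sent={1}/{2} received={3}/{4} pct={5}",
                System.Threading.Thread.CurrentThread.ManagedThreadId,
                e.BytesSent, e.TotalBytesToSend,
                e.BytesReceived, e.TotalBytesToReceive,
                e.ProgressPercentage);
        }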

    Read the article

  • Linker Issues with boost::thread under linux using Eclipse and CMake

    - by OcularProgrammer
    I'm in the process of attempting to port some code across from PC to Ubuntu, and am having some issues due to limited experience developing under Linux. We use CMake to generate all our build stuff. Under Windows I'm making VS2010 projects, and under Linux I'm making Eclipse projects. I've managed to get my OpenCV stuff ported across successfully, but am having major headaches trying to port my threaded boost apps. Just so we're clear, these are the steps I have followed so far on a clean Ubuntu 12 installation (I've done 2 clean re-installs to try and fix potential library cock-ups; now I'm just giving up and asking): Install Eclipse and Eclipse CDT using my package manager Install CMake and CMake Gui using my package manager Install libboost-all-dev using my package manager So far that's all I've done. I can create the Eclipse project using CMake with no errors, so CMake is successfully finding my boost install. It's when I try to build through Eclipse that I get issues; The app I'm attempting to build uses boost::asio for some UDP I/O and boost::thread to create worker threads for the asio I/O services. I can successfully compile each module, but when I come to link I get spammed with errors such as: /usr/bin/c++ CMakeFiles/RE05DevelopmentDemo.dir/main.cpp.o CMakeFiles/RE05DevelopmentDemo.dir/RE05FusionListener/RE05FusionListener.cpp.o CMakeFiles/RE05DevelopmentDemo.dir/NewEye/NewEye.cpp.o -o RE05DevelopmentDemo -rdynamic -Wl,-Bstatic -lboost_system-mt -lboost_date_time-mt -lboost_regex-mt -lboost_thread-mt -Wl,-Bdynamic /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `void boost::call_once<void (*)()>(boost::once_flag&, void (*)()) [clone .constprop.98]': make[2]: Leaving directory `/home/david/Code/Build/Support/RE05DevDemo' (.text+0xc8): undefined reference to `pthread_key_create' /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::interruption_enabled()': (.text+0x540): undefined reference to `pthread_getspecific' make[1]: Leaving directory `/home/david/Code/Build/Support/RE05DevDemo' /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::disable_interruption::disable_interruption()': (.text+0x570): undefined reference to `pthread_getspecific' /usr/lib/gcc/x86_64-linux-gnu/4.6/../../../../lib/libboost_thread-mt.a(thread.o): In function `boost::this_thread::disable_interruption::disable_interruption()': (.text+0x59f): undefined reference to `pthread_getspecific' Some Gotchas that I have collected from other StackOverflow posts and have already checked: The boost libs are all present at /usr/lib I am not getting any compile errors for inability to find the boost headers, so they must be getting found. I am trying to link statically, but I believe Eclipse should be passing the correct arguments to make that happen since my CMakeLists.txt includes SET(Boost_USE_STATIC_LIBS ON) I'm officially out of ideas here, I have tried doing local builds of boost and a bunch of other stuff with no more success. I even re-installed Ubuntu to ensure I haven't completely fracked the libs directories and links with multiple weird versions or anything else. Any help would be much appreciated.
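
    The undefined pthread_key_create/pthread_getspecific references typically mean the final link line is missing the pthread library, which statically-linked boost_thread depends on. In CMake the portable spelling is a sketch like this (the target name is taken from the link line above):

        # Let CMake find the platform thread library and append it after
        # the boost libs on the link line.
        find_package(Threads REQUIRED)
        target_link_libraries(RE05DevelopmentDemo
            ${Boost_LIBRARIES}
            ${CMAKE_THREAD_LIBS_INIT})   # adds -lpthread on Linux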

    Read the article

  • Java consistent synchronization

    - by ring0
    We are facing the following problem in a Spring service, in a multi-threaded environment: three lists are freely and independently accessed for reading; once in a while (every 5 minutes), they are all updated to new values. There are some dependencies between the lists, meaning that, for instance, the third one should not be read while the second one is being updated and the first one already has new values; that would break the three lists' consistency. My initial idea is to make a container object having the three lists as properties. Then the synchronization would be first on that object, then one by one on each of the three lists. Some code is worth a thousand words... so here is a draft: private class Sync { final List<Something> a = Collections.synchronizedList(new ArrayList<Something>()); final List<Something> b = Collections.synchronizedList(new ArrayList<Something>()); final List<Something> c = Collections.synchronizedList(new ArrayList<Something>()); } private Sync _sync = new Sync(); ... void updateRunOnceEveryFiveMinutes() { final List<Something> newa = new ArrayList<Something>(); final List<Something> newb = new ArrayList<Something>(); final List<Something> newc = new ArrayList<Something>(); ...building newa, newb and newc... synchronized(_sync) { synchronized(_sync.a) { _synch.a.clear(); _synch.a.addAll(newa); } synchronized(_sync.b) { ...same with newb... } synchronized(_sync.c) { ...same with newc... } } // Next is accessed by clients public List<Something> getListA() { return _sync.a; } public List<Something> getListB() { ...same with b... } public List<Something> getListC() { ...same with c... } The question would be: is this draft safe (no deadlock, data consistency)? Would you have a better implementation suggestion for that specific problem? update Changed the order of _sync synchronization and newa... building. Thanks
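
    An alternative that avoids nested locks entirely (my suggestion, not from the question): build the three new lists, wrap them in one immutable holder, and publish it through a single volatile reference, so readers always see a consistent trio. A sketch, with Something standing for the question's element type:

        import java.util.Collections;
        import java.util.List;

        class ListHolder<Something> {
            static final class Snapshot<E> {
                final List<E> a, b, c;
                Snapshot(List<E> a, List<E> b, List<E> c) {
                    this.a = Collections.unmodifiableList(a);
                    this.b = Collections.unmodifiableList(b);
                    this.c = Collections.unmodifiableList(c);
                }
            }

            private volatile Snapshot<Something> current =
                new Snapshot<Something>(Collections.<Something>emptyList(),
                                        Collections.<Something>emptyList(),
                                        Collections.<Something>emptyList());

            // runs every five minutes; readers never need a lock
            void update(List<Something> a, List<Something> b, List<Something> c) {
                current = new Snapshot<Something>(a, b, c); // one atomic publish
            }

            List<Something> getListA() { return current.a; }
            List<Something> getListB() { return current.b; }
            List<Something> getListC() { return current.c; }
        }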

    Read the article

  • multi-thread access MySQL error

    - by user188916
    I have written a simple multi-threaded C program to access MySQL; it works fine except when I add a usleep() or sleep() call in each thread function. I created two pthreads in the main method: int main(){ mysql_library_init(0,NULL,NULL); printf("Hello world!\n"); init_pool(&p,100); pthread_t producer; pthread_t consumer_1; pthread_t consumer_2; pthread_create(&producer,NULL,produce_fun,NULL); pthread_create(&consumer_1,NULL,consume_fun,NULL); pthread_create(&consumer_2,NULL,consume_fun,NULL); mysql_library_end(); } void * produce_fun(void *arg){ pthread_detach(pthread_self()); //procedure while(1){ usleep(500000); printf("producer...\n"); produce(&p,cnt++); } pthread_exit(NULL); } void * consume_fun(void *arg){ pthread_detach(pthread_self()); MYSQL db; MYSQL *ptr_db=mysql_init(&db); mysql_real_connect(); //procedure while(1){ usleep(1000000); printf("consumer..."); int item=consume(&p); addRecord_d(ptr_db,"test",item); } mysql_thread_end(); pthread_exit(NULL); } void addRecord_d(MYSQL *ptr_db,const char *t_name,int item){ char query_buffer[100]; sprintf(query_buffer,"insert into %s values(0,%d)",t_name,item); //pthread_mutex_lock(&db_t_lock); int ret=mysql_query(ptr_db,query_buffer); if(ret){ fprintf(stderr,"%s%s\n","cannot add record to ",t_name); return; } unsigned long long update_id=mysql_insert_id(ptr_db); // pthread_mutex_unlock(&db_t_lock); printf("add record (%llu,%d) ok.",update_id,item); } The program outputs errors like: [Thread debugging using libthread_db enabled] [New Thread 0xb7ae3b70 (LWP 7712)] Hello world! [New Thread 0xb72d6b70 (LWP 7713)] [New Thread 0xb6ad5b70 (LWP 7714)] [New Thread 0xb62d4b70 (LWP 7715)] [Thread 0xb7ae3b70 (LWP 7712) exited] producer... producer... consumer...consumer...add record (31441,0) ok.add record (31442,1) ok.producer... producer... consumer...consumer...add record (31443,2) ok.add record (31444,3) ok.producer... producer... consumer...consumer...add record (31445,4) ok.add record (31446,5) ok.producer... producer... consumer...consumer...add record (31447,6) ok.add record (31448,7) ok.producer... Error in my_thread_global_end(): 2 threads didn't exit [Thread 0xb72d6b70 (LWP 7713) exited] [Thread 0xb6ad5b70 (LWP 7714) exited] [Thread 0xb62d4b70 (LWP 7715) exited] Program exited normally. And when I add pthread_mutex_lock in function addRecord_d, the error still exists. So what exactly is the problem?
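
    The "2 threads didn't exit" message typically appears when mysql_library_end() runs while threads that used the client library are still alive. A sketch of the usual shape - keep the threads joinable instead of detached, call mysql_thread_end() in each before it returns, and shut the library down only after joining (worker bodies elided):

        #include <mysql.h>
        #include <pthread.h>

        void *consumer(void *arg)
        {
            /* ... mysql_init / mysql_real_connect / queries ... */
            mysql_thread_end();   /* per-thread cleanup before returning */
            return NULL;
        }

        int main(void)
        {
            pthread_t c1, c2;
            mysql_library_init(0, NULL, NULL);

            pthread_create(&c1, NULL, consumer, NULL);
            pthread_create(&c2, NULL, consumer, NULL);

            pthread_join(c1, NULL);   /* no pthread_detach: wait for workers */
            pthread_join(c2, NULL);

            mysql_library_end();      /* only after every thread is done */
            return 0;
        }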

    Read the article

  • c++ multithread array

    - by user1731972
    I'm doing something for fun, trying to learn multithreading (see Problems passing array by reference to threads), but Arno pointed out that my threading via process.h wasn't going to be multi-threaded. What I'm hoping to do is something where I have an array of 100 (or 10,000, doesn't really matter I don't think), and split up the assignment of values to each thread. Example, 4 threads = 250 values per thread to be assigned. Then I can use this filled array for further calculations. Here's some code I was working on (which doesn't work): #include <process.h> #include <windows.h> #include <iostream> #include <fstream> #include <time.h> //#include <thread> using namespace std; void myThread (void *dummy ); CRITICAL_SECTION cs1,cs2; // global int main() { ofstream myfile; myfile.open ("coinToss.csv"); int rNum; long numRuns; long count = 0; int divisor = 1; float holder = 0; int counter = 0; float percent = 0.0; HANDLE hThread[1000]; int array[10000]; srand ( time(NULL) ); printf ("Runs (use multiple of 10)? "); cin >> numRuns; for (int i = 0; i < numRuns; i++) { //_beginthread( myThread, 0, (void *) (array1) ); //??? //hThread[i * 2] = _beginthread( myThread, 0, (void *) (array1) ); hThread[i*2] = _beginthread( myThread, 0, (void *) (array) ); } //WaitForMultipleObjects(numRuns * 2, hThread, TRUE, INFINITE); WaitForMultipleObjects(numRuns, hThread, TRUE, INFINITE); } void myThread (void *param ) { //thanks goes to stockoverflow //http://stackoverflow.com/questions/12801862/problems-passing-array-by-reference-to-threads int *i = (int *)param; for (int x = 0; x < 1000000; x++) { //param[x] = rand() % 2 + 1; i[x] = rand() % 2 + 1; } } Can anyone explain why it isn't working?
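
    Since the commented-out <thread> include suggests C++11 was on the table, here is a sketch of the split-the-array-across-N-threads idea described above, with each thread filling its own disjoint slice so no locking is needed (sizes are illustrative):

        #include <cstdlib>
        #include <iostream>
        #include <thread>
        #include <vector>

        // Note: rand() is not guaranteed thread-safe; a real version would
        // use <random> with one engine per thread.
        int main()
        {
            const std::size_t n = 10000;
            const unsigned nthreads = 4;
            std::vector<int> values(n);

            std::vector<std::thread> pool;
            for (unsigned t = 0; t < nthreads; ++t) {
                pool.emplace_back([&values, t, nthreads, n] {
                    for (std::size_t i = t; i < n; i += nthreads)
                        values[i] = std::rand() % 2 + 1;  // coin toss per slot
                });
            }
            for (auto &th : pool) th.join();

            std::cout << "filled " << values.size() << " values\n";
            return 0;
        }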

    Read the article

  • Yet another Memory Leak Issue (memory is still gone when program terminates)- C program on SLES

    - by user1426181
    I run my C program, which compresses several thousand large files (between 10MB and 100MB in size), on SUSE Linux Enterprise, and the program gets slower and slower as the program runs (it's running multi-threaded with 32 threads on an Intel Sandy Bridge board). When the program completes, and it's run again, it's still very slow. When I watch the program running, I see that the memory is being depleted while the program runs, which you would think is just a classic memory leak problem. But, with a normal malloc()/free() mismatch, I would expect all the memory to return when the program terminates. But, most of the memory doesn't get reclaimed when the program completes. The free or top command shows Mem: 63996M total, 63724M used, 272M free when the program is slowed down to a halt, but, after the termination, the free memory only grows back to about 3660M. When the program is rerun, the free memory is quickly used up. The top program only shows that the program, while running, is using at most 4% or so of the memory. I thought that it might be a memory fragmentation problem, but, I built a small test program that simulates all the memory allocation activity in the program (many randomized aspects were built in - size/quantity), and it always returns all the memory upon completion. So, I don't think that's it. Questions: Can there be a malloc()/free() mismatch that will lose memory permanently, i.e. even after the process completes? What other things in a C program (not C++) can cause permanent memory loss, i.e. after the program completes, and even the terminal window closes? Only a reboot brings the memory back. I've read other posts about files not being closed causing problems, but, I don't think I have that problem. Is it valid to be looking at top and free for the memory statistics, i.e. do they accurately describe the memory situation? They do seem to correspond to the slowness of the program. If the program only shows a 4% memory usage, will something like valgrind find this problem?

    Read the article

  • Swing: How do I run a job from the AWT thread, but after a window was laid out?

    - by java.is.for.desktop
    My complete GUI runs inside the AWT thread, because I start the main window using SwingUtilities.invokeAndWait(...). Now I have a JDialog which has just to display a JLabel, which indicates that a certain job is in progress, and close that dialog after the job was finished. The problem is: the label is not displayed. That job seems to be started before the JDialog was fully laid out. When I just let the dialog open without waiting for a job and closing, the label is displayed. The last thing the dialog does in its ctor is setVisible(true). Things such as revalidate(), repaint(), ... don't help either. Even when I start a thread for the monitored job, and wait for it using someThread.join(), it doesn't help, because the current thread (which is the AWT thread) is blocked by join, I guess. Replacing JDialog with JFrame doesn't help either. So, is the concept wrong in general? Or can I manage it to do a certain job after it is ensured that a JDialog (or JFrame) is fully laid out? Simplified algorithm of what I'm trying to achieve: Create a subclass of JDialog Ensure that it and its contents are fully laid out Start a process and wait for it to finish (threaded or not, doesn't matter) Close the dialog I managed to write a reproducible test case: EDIT Problem from an answer is now addressed: This use case does display the label, but it fails to close after the "simulated process", because of dialog's modality. import java.awt.*; import javax.swing.*; public class _DialogTest2 { public static void main(String[] args) throws Exception { SwingUtilities.invokeAndWait(new Runnable() { final JLabel jLabel = new JLabel("Please wait..."); @Override public void run() { JFrame myFrame = new JFrame("Main frame"); myFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); myFrame.setSize(750, 500); myFrame.setLocationRelativeTo(null); myFrame.setVisible(true); JDialog d = new JDialog(myFrame, "I'm waiting"); d.setModalityType(Dialog.ModalityType.APPLICATION_MODAL); d.add(jLabel); d.setSize(300, 200); d.setLocationRelativeTo(null); d.setVisible(true); SwingUtilities.invokeLater(new Runnable() { @Override public void run() { try { Thread.sleep(3000); // simulate process jLabel.setText("Done"); } catch (InterruptedException ex) { } } }); d.setVisible(false); d.dispose(); myFrame.setVisible(false); myFrame.dispose(); } }); } }
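
    A common cure (an illustration, not the poster's code) is to run the long job off the EDT with SwingWorker and close the dialog in done(), which runs back on the EDT once the work finishes; with a modal dialog, execute() must be called before the blocking setVisible(true):

        import javax.swing.JDialog;
        import javax.swing.SwingWorker;

        class BusyDialogExample {
            static void runWithDialog(final JDialog d, final Runnable job) {
                new SwingWorker<Void, Void>() {
                    @Override protected Void doInBackground() {
                        job.run();              // off the EDT: the label can paint
                        return null;
                    }
                    @Override protected void done() {
                        d.setVisible(false);    // back on the EDT
                        d.dispose();
                    }
                }.execute();
                d.setVisible(true);             // blocks here if modal
            }
        }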

    Read the article

  • Lighttpd + fastcgi + python (for django) slow on first request

    - by EagleOne
    I'm having a problem with a django website I host with lighttpd + fastcgi. It works great but it seems that the first request always takes up to 3 seconds. Subsequent requests are much faster (<1s). I activated access logs in lighttpd in order to track the issue. But I'm kind of stuck. Here are logs where I 'lose' 4s (from 10:04:17 to 10:04:21): 2012-12-01 10:04:17: (mod_fastcgi.c.3636) handling it in mod_fastcgi 2012-12-01 10:04:17: (response.c.470) -- before doc_root 2012-12-01 10:04:17: (response.c.471) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.472) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.473) Path : 2012-12-01 10:04:17: (response.c.521) -- after doc_root 2012-12-01 10:04:17: (response.c.522) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.523) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.524) Path : /var/www/finderauto.fcgi 2012-12-01 10:04:17: (response.c.541) -- logical -> physical 2012-12-01 10:04:17: (response.c.542) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.543) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.544) Path : /var/www/finderauto.fcgi 2012-12-01 10:04:21: (response.c.128) Response-Header: HTTP/1.1 200 OK Last-Modified: Sat, 01 Dec 2012 09:04:21 GMT Expires: Sat, 01 Dec 2012 09:14:21 GMT Content-Type: text/html; charset=utf-8 Cache-Control: max-age=600 Transfer-Encoding: chunked Date: Sat, 01 Dec 2012 09:04:21 GMT Server: lighttpd/1.4.28 I guess that if there is a problem, it's with my configuration. So here is the way I launch my django app: python manage.py runfcgi method=threaded host=127.0.0.1 port=3033 And here is my lighttpd conf: server.modules = ( "mod_access", "mod_alias", "mod_compress", "mod_redirect", "mod_rewrite", "mod_fastcgi", "mod_accesslog", ) server.document-root = "/var/www" server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) server.errorlog = "/var/log/lighttpd/error.log" server.pid-file = "/var/run/lighttpd.pid" server.username = "www-data" server.groupname = "www-data" accesslog.filename = "/var/log/lighttpd/access.log" debug.log-request-header = "enable" debug.log-response-header = "enable" debug.log-file-not-found = "enable" debug.log-request-handling = "enable" debug.log-timeouts = "enable" debug.log-ssl-noise = "enable" debug.log-condition-cache-handling = "enable" debug.log-condition-handling = "enable" fastcgi.server = ( "/finderauto.fcgi" => ( "main" => ( # Use host / port instead of socket for TCP fastcgi "host" => "127.0.0.1", "port" => 3033, #"socket" => "/home/finderadmin/finderauto.sock", "check-local" => "disable", "fix-root-scriptname" => "enable", ) ), ) alias.url = ( "/media" => "/home/user/django/contrib/admin/media/", ) url.rewrite-once = ( "^(/media.*)$" => "$1", "^/favicon\.ico$" => "/media/favicon.ico", "^(/.*)$" => "/finderauto.fcgi$1", ) index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", " index.lighttpd.html" ) url.access-deny = ( "~", ".inc" ) static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ## Use ipv6 if available #include_shell "/usr/share/lighttpd/use-ipv6.pl" dir-listing.encoding = "utf-8" server.dir-listing = "enable" compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" ) include_shell "/usr/share/lighttpd/create-mime.assign.pl" include_shell "/usr/share/lighttpd/include-conf-enabled.pl" If any of you could help me find out where I lose these 3 or 4 seconds, I would much appreciate it. Thanks in advance!

    Read the article

  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
    WCF makes it fairly easy to access WS-* Web Services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make the service clients work; however, the proper interfaces to make that happen are not easy to discover and for the most part undocumented, unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the Password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically I had a need to create a WCF message on the client that includes a WS-Security header that looks like this from their spec document:<soapenv:Header> <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <wsse:UsernameToken wsu:Id="UsernameToken-8" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <wsse:Username>TeStUsErNaMe1</wsse:Username> <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >TeStPaSsWoRd1</wsse:Password> <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce> <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soapenv:Header> Specifically, the Nonce and Created keys are what WCF doesn't create or have a built in formatting for. Why is there a nonce? My first thought here was WTF? The username and password are there in clear text, what does the Nonce accomplish? The Nonce and created keys are part of the WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request, which the server can store and check for before running another request, thus ensuring that a request is not replayed with exactly the same values. Basic ServiceUtl Import - not much Luck The first thing I did when I imported this service with a service reference was to simply import it as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropriately adds the WS-Security to the basicHttpBinding in the config file:<?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="RealTimeOnlineSoapBinding"> <security mode="Transport" /> </binding> <binding name="RealTimeOnlineSoapBinding1" /> </basicHttpBinding> </bindings> <client> <endpoint address="https://notarealurl.com:443/services/RealTimeOnline" binding="basicHttpBinding" bindingConfiguration="RealTimeOnlineSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> </configuration> If I run this as-is using code like this:var client = new RealTimeOnlineClient(); client.ClientCredentials.UserName.UserName = "TheUsername"; client.ClientCredentials.UserName.Password = "ThePassword"; … I get nothing in terms of WS-Security headers. The request is sent, but the binding expects transport-level security to be applied, rather than message-level security. 
To fix this so that a WS-Security message header is sent, the security mode can be changed to:

    <security mode="TransportWithMessageCredential" />

Now if I re-run I at least get a WS-Security header, which looks like this:

    <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <s:Header>
        <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
          <u:Timestamp u:Id="_0">
            <u:Created>2012-11-24T02:55:18.011Z</u:Created>
            <u:Expires>2012-11-24T03:00:18.011Z</u:Expires>
          </u:Timestamp>
          <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1">
            <o:Username>TheUserName</o:Username>
            <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
          </o:UsernameToken>
        </o:Security>
      </s:Header>

Closer! Now the WS-Security header is there, along with a timestamp field (which might not be accepted by some WS-Security-expecting services), but there's no Nonce or Created timestamp as required by my original service.

Using a CustomBinding instead

My next try was to go with a CustomBinding instead of basicHttpBinding, as it allows a bit more control over the protocol and transport configurations for the binding. Specifically I can explicitly specify the message protocol(s) used. Using configuration file settings, here's what the config file looks like:

    <?xml version="1.0"?>
    <configuration>
      <system.serviceModel>
        <bindings>
          <customBinding>
            <binding name="CustomSoapBinding">
              <security includeTimestamp="false"
                        authenticationMode="UserNameOverTransport"
                        defaultAlgorithmSuite="Basic256"
                        requireDerivedKeys="false"
                        messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10">
              </security>
              <textMessageEncoding messageVersion="Soap11"></textMessageEncoding>
              <httpsTransport maxReceivedMessageSize="2000000000"/>
            </binding>
          </customBinding>
        </bindings>
        <client>
          <endpoint address="https://notrealurl.com:443/services/RealTimeOnline"
                    binding="customBinding"
                    bindingConfiguration="CustomSoapBinding"
                    contract="RealTimeOnline.RealTimeOnline"
                    name="RealTimeOnline" />
        </client>
      </system.serviceModel>
      <startup>
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
      </startup>
    </configuration>

This ends up creating a cleaner header that's missing the timestamp field, which can cause problems for some services. The WS-Security header output generated with the above looks like this:

    <s:Header>
      <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
        <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1">
          <o:Username>TheUsername</o:Username>
          <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
        </o:UsernameToken>
      </o:Security>
    </s:Header>

This is closer, as it includes only the username and password. The key here is the protocol for WS-Security:

    messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"

which explicitly specifies the protocol version. There are several variants of this specification, but none of them seem to support the nonce, unfortunately.
This protocol does allow the Nonce and Created timestamp to be omitted (which effectively makes those keys optional). Some services I tried that requested a Nonce actually worked with just this protocol where the default basicHttpBinding failed to connect, so this is a possible solution for access to some services. Unfortunately for my target service that was not an option: the nonce has to be there.

Creating Custom ClientCredentials

As it turns out, WCF doesn't have support for the Digest Nonce as part of WS-Security, so as far as I can tell there's no way to do it just with configuration settings. I did a bunch of research trying to find workarounds for this, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of these are particularly clear, and I ended up using bits and pieces of several of them to arrive at a working solution in the end.

http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken
http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee

The latter forum message is the more useful of the two (the last message on the thread in particular) and it has most of the information required to make this work. But it took some experimentation for me to get this right, so I'll recount the process here a bit more comprehensively. In order for this to work, a number of classes have to be overridden:

ClientCredentials
ClientCredentialsSecurityTokenManager
WSSecurityTokenSerializer

The idea is that we need to create a custom ClientCredentials class to hold the custom properties so they can be set from the UI or via configuration settings. The token manager and token serializer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization.
Here are the three classes required and their full implementations:

    public class CustomCredentials : ClientCredentials
    {
        public CustomCredentials() { }

        protected CustomCredentials(CustomCredentials cc)
            : base(cc) { }

        public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager()
        {
            return new CustomSecurityTokenManager(this);
        }

        protected override ClientCredentials CloneCore()
        {
            return new CustomCredentials(this);
        }
    }

    public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager
    {
        public CustomSecurityTokenManager(CustomCredentials cred)
            : base(cred) { }

        public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(System.IdentityModel.Selectors.SecurityTokenVersion version)
        {
            return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11);
        }
    }

    public class CustomTokenSerializer : WSSecurityTokenSerializer
    {
        public CustomTokenSerializer(SecurityVersion sv)
            : base(sv) { }

        protected override void WriteTokenCore(System.Xml.XmlWriter writer,
                                               System.IdentityModel.Tokens.SecurityToken token)
        {
            UserNameSecurityToken userToken = token as UserNameSecurityToken;

            string tokennamespace = "o";

            // use UTC (and 24-hour HH) so the trailing Z in the timestamp is truthful
            DateTime created = DateTime.UtcNow;
            string createdStr = created.ToString("yyyy-MM-ddTHH:mm:ss.fffZ");

            // unique Nonce value - hash with SHA-1 for 'randomness'
            // in theory the nonce could just be the GUID by itself
            string phrase = Guid.NewGuid().ToString();
            var nonce = GetSHA1String(phrase);

            // in this case password is plain text
            // for digest mode password needs to be encoded as:
            // PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password))
            // and the profile needs to change to #Digest
            //string password = GetSHA1String(nonce + createdStr + userToken.Password);
            string password = userToken.Password;

            writer.WriteRaw(string.Format(
                "<{0}:UsernameToken u:Id=\"" + token.Id +
                "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
                "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
                "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" +
                password + "</{0}:Password>" +
                "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" +
                nonce + "</{0}:Nonce>" +
                "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>",
                tokennamespace));
        }

        protected string GetSHA1String(string phrase)
        {
            SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider();
            byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase));
            return Convert.ToBase64String(hashedDataBytes);
        }
    }

Realistically only the CustomTokenSerializer has any significant code in it. The code there deals with actually serializing the custom credentials using low-level XML semantics by writing output into an XML writer. I can't take credit for this code - most of it comes from the MSDN forum post mentioned earlier. I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation. Per spec the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique, and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can potentially be guessed are highly exaggerated, to say the least, IMHO.
To satisfy even that aspect, though, I added the SHA1 hashing of the GUID, which yields a more random-looking Base64 value that would be impossible to 'guess'. The original example from the forum post added another level of encoding and decoding to string in between, but that really didn't accomplish anything except extra overhead. The header output generated from this looks like this:

    <s:Header>
      <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
        <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
          <o:Username>TheUsername</o:Username>
          <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
          <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce>
          <u:Created>2012-11-23T07:10:04.670Z</u:Created>
        </o:UsernameToken>
      </o:Security>
    </s:Header>

which is exactly as it should be.

Password Digest?

In my case the password is passed in plain text over an SSL connection, so no digest is required and I was done with the code above. Since I don't have a service handy that requires a password digest, I had no way of testing the digest implementation, but here is how it is likely to work. If you need to pass a digest-encoded password, things are a little bit trickier. The password type namespace needs to change to:

    http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest

and then the password value needs to be encoded. The format for password digest encoding is this:

    Base64(SHA-1(Nonce + Created + Password))

and it can be handled in the code above with this line (commented in the snippet above):

    string password = GetSHA1String(nonce + createdStr + userToken.Password);

The entire WriteTokenCore method for the digest case looks like this:

    protected override void WriteTokenCore(System.Xml.XmlWriter writer,
                                           System.IdentityModel.Tokens.SecurityToken token)
    {
        UserNameSecurityToken userToken = token as UserNameSecurityToken;

        string tokennamespace = "o";

        // use UTC (and 24-hour HH) so the trailing Z in the timestamp is truthful
        DateTime created = DateTime.UtcNow;
        string createdStr = created.ToString("yyyy-MM-ddTHH:mm:ss.fffZ");

        // unique Nonce value - hash with SHA-1 for 'randomness'
        // in theory the nonce could just be the GUID by itself
        string phrase = Guid.NewGuid().ToString();
        var nonce = GetSHA1String(phrase);

        // digest encoding: Base64(SHA-1(Nonce + Created + Password))
        string password = GetSHA1String(nonce + createdStr + userToken.Password);

        writer.WriteRaw(string.Format(
            "<{0}:UsernameToken u:Id=\"" + token.Id +
            "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
            "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
            "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" +
            password + "</{0}:Password>" +
            "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" +
            nonce + "</{0}:Nonce>" +
            "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>",
            tokennamespace));
    }

I had no service against which to try out Digest auth - if you end up needing it and get it to work, please drop a comment…

How to use the custom Credentials

The easiest way to use the custom credentials is to create the client in code.
Here's a factory method I use to create an instance of my service client:

    public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url,
                                                                 string username,
                                                                 string password)
    {
        if (string.IsNullOrEmpty(url))
            url = "https://notrealurl.com:443/cows/services/RealTimeOnline";

        CustomBinding binding = new CustomBinding();

        var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement();
        security.IncludeTimestamp = false;
        security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256;
        security.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10;

        var encoding = new TextMessageEncodingBindingElement();
        encoding.MessageVersion = MessageVersion.Soap11;

        var transport = new HttpsTransportBindingElement();
        transport.MaxReceivedMessageSize = 20000000; // 20 megs

        binding.Elements.Add(security);
        binding.Elements.Add(encoding);
        binding.Elements.Add(transport);

        RealTimeOnlineClient client = new RealTimeOnlineClient(binding,
            new EndpointAddress(url));

        // to use the full client credential with Nonce, swap the default
        // ClientCredentials behavior for the custom one
        // (it looks like this might not be required - the service seems to work without it)
        client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>();
        client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials());

        client.ClientCredentials.UserName.UserName = username;
        client.ClientCredentials.UserName.Password = password;

        return client;
    }

This returns a service client that's ready to call other service methods. The key item in this code is the ChannelFactory endpoint behavior modification that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read-only, and this is the way it has to be replaced.

Summary

It's a bummer that WCF doesn't support WSE Security authentication with nonce values out of the box. From reading the comments in posts/articles while I was trying to find a solution, I found that this feature was omitted by design, as this protocol is considered insecure. While I agree that plain text passwords are rarely a good idea even if they go over a secured SSL connection as WSE Security does, there are unfortunately quite a few services (mostly Java services, I suspect) that use this protocol. I've run into this twice now, and trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with this. It seems there are about a dozen questions about this on StackOverflow, all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place.

Again I marvel at WCF and the breadth of support for protocol features it has in a single tool. And even when it can't handle something, there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean, there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team, or at least somebody very, very intricately involved in the innards of WCF, to figure out the interconnection of the various objects to do this from scratch. Luckily this is an older problem that has been discussed extensively online, and I was able to cobble together a solution from the online content.
I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man, am I glad I'm not dealing with SOAP services much anymore. REST service security - even when using some sort of federation - is a piece of cake by comparison :-) I'm sure once standards bodies get involved we'll be right back in security standard hell…

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in WCF  Web Services

    Read the article

  • Fixing Robocopy for SQL Server Jobs

    - by Most Valuable Yak (Rob Volk)
Robocopy is one of, if not the, best life-saving/greatest-thing-since-sliced-bread command line utilities ever to come from Microsoft.  If you're not using it already, what are you waiting for?

Of course, being a Microsoft product, it's not exactly perfect. ;)  Specifically, it sets the ERRORLEVEL to a non-zero value even if the copy is successful.  This causes a problem in SQL Server job steps, since non-zero ERRORLEVELs report as failed.

You can work around this by having your SQL job go to the next step on failure, but then you can't determine if there was a genuine error.  Plus you still see annoying red X's in your job history.  One way I've found to avoid this is to use a batch file that runs Robocopy, and I add some commands after it (the lines following the robocopy call below):

rem suppress successful robocopy exit statuses, only report genuine errors (bitmask 16 and 8 settings)
robocopy d:\backups \\BackupServer\BackupFolder *.bak
set /A errlev="%ERRORLEVEL% & 24"
rem exit batch file with errorlevel so SQL job can succeed or fail appropriately
exit /B %errlev%

(The REM statements are simply comments and don't need to be included in the batch file.)

The SET command lets you use expressions when you use the /A switch.  So I set an environment variable "errlev" to a bitwise AND of the ERRORLEVEL value with 24.  Robocopy's exit codes use a bitmap/bitmask to specify its exit status.  The bits for 1, 2, and 4 do not indicate any kind of failure, but 8 and 16 do.  So by adding 16 + 8 to get 24, and doing a bitwise AND, I suppress any of the other bits that might be set, and allow either or both of the error bits to pass.

The next step is to use the EXIT command with the /B switch to set a new ERRORLEVEL value, using the "errlev" variable.  This will now return zero (unless Robocopy had real errors) and allow your SQL job step to report success.

This technique should also work for other command-line utilities.  The only issue I've found is that it requires the commands to be part of a batch file, so if you use Robocopy directly in your SQL job step you'd need to place it in a batch.  If you also have multiple Robocopy calls, you'll need to place the SET /A command ONLY after the last one.  You'd therefore lose any errors from previous calls, unless you mask each call's ERRORLEVEL and OR the results together, as sketched below.  The SET /A syntax also permits other kinds of expressions to be calculated.  You can get a full list by running "SET /?" at a command prompt.
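For example, here's a minimal sketch of the multiple-call case, assuming two copy jobs (the paths are just illustrative): each call's ERRORLEVEL is masked down to the genuine error bits (8 and 16) and OR'd into a running total, so the batch exits non-zero if any of the copies had a real error:

rem start with no errors recorded
set /A errlev=0
robocopy d:\backups \\BackupServer\BackupFolder *.bak
rem keep only the error bits from this call and merge them in
set /A errlev="errlev | (%ERRORLEVEL% & 24)"
robocopy d:\logs \\BackupServer\LogFolder *.trn
set /A errlev="errlev | (%ERRORLEVEL% & 24)"
rem non-zero only if at least one call reported a genuine error
exit /B %errlev%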

    Read the article

  • SQL SERVER – Summary of Month – Wait Type – Day 28 of 28

    - by pinaldave
I am glad to announce that the month of Wait Types and Queues was very successful. I am glad that it was very well received and there was a great amount of participation from the community. I am fortunate to have received some excellent comments throughout the series. I want to dedicate this series to all the guest bloggers – Jonathan, Jacob, Glenn, and Feodor – for their kindness in taking part in this series. Here is the complete list of the blog posts in this series. I enjoyed writing the series and I plan to continue writing similar series. Please offer your opinion.

SQL SERVER – Introduction to Wait Stats and Wait Types – Wait Type – Day 1 of 28
SQL SERVER – Signal Wait Time Introduction with Simple Example – Wait Type – Day 2 of 28
SQL SERVER – DMV – sys.dm_os_wait_stats Explanation – Wait Type – Day 3 of 28
SQL SERVER – DMV – sys.dm_os_waiting_tasks and sys.dm_exec_requests – Wait Type – Day 4 of 28
SQL SERVER – Capturing Wait Types and Wait Stats Information at Interval – Wait Type – Day 5 of 28
SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28
SQL SERVER – CXPACKET – Parallelism – Advanced Solution – Wait Type – Day 7 of 28
SQL SERVER – SOS_SCHEDULER_YIELD – Wait Type – Day 8 of 28
SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28
SQL SERVER – IO_COMPLETION – Wait Type – Day 10 of 28
SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28
SQL SERVER – PAGELATCH_DT, PAGELATCH_EX, PAGELATCH_KP, PAGELATCH_SH, PAGELATCH_UP – Wait Type – Day 12 of 28
SQL SERVER – FT_IFTS_SCHEDULER_IDLE_WAIT – Full Text – Wait Type – Day 13 of 28
SQL SERVER – BACKUPIO, BACKUPBUFFER – Wait Type – Day 14 of 28
SQL SERVER – LCK_M_XXX – Wait Type – Day 15 of 28
SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28
SQL SERVER – WRITELOG – Wait Type – Day 17 of 28
SQL SERVER – LOGBUFFER – Wait Type – Day 18 of 28
SQL SERVER – PREEMPTIVE and Non-PREEMPTIVE – Wait Type – Day 19 of 28
SQL SERVER – MSQL_XP – Wait Type – Day 20 of 28
SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28
SQL SERVER – Guest Post – Jacob Sebastian – Filestream – Wait Types – Wait Queues – Day 22 of 28
SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28
SQL SERVER – 2000 – DBCC SQLPERF(waitstats) – Wait Type – Day 24 of 28
SQL SERVER – 2011 – Wait Type – Day 25 of 28
SQL SERVER – Guest Post – Glenn Berry – Wait Type – Day 26 of 28
SQL SERVER – Best Reference – Wait Type – Day 27 of 28
SQL SERVER – Summary of Month – Wait Type – Day 28 of 28

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Optimization, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, SQLServer, T SQL, Technology

    Read the article

  • Ask the Readers: Do You Use the Command Line?

    - by Asian Angel
Most people have heard of it, but not everyone is familiar or comfortable with how to use this bastion of geekdom. This week we would like to know if you use the command line or not.

The command line…the bastion of ultimate geekery in many people's eyes. You often hear people referring to doing things using the command line, so there must be something to it, right? For some people using the command line is the best, most efficient, and easiest way to do things on their systems. These are the people that many of us wish we were like. Next you have those who are proficient at using the command line but do not rely on it for everything they do on their systems. Then there are people who know how to perform some tasks or hacks using the command line but may not be as comfortable or knowledgeable as they wish to be using it. Moving on, you find those who are interested in learning how to use the command line and just need a small push to get started. Perhaps you feel too intimidated to learn it and just need the right opportunity to come along. And maybe you do not care one way or the other, so long as you get done what you want to do on your system. Or you may prefer to simply use a graphical interface, since that is quicker and easier for you (along with being familiar). You can find the whole range of people when it comes to using the command line…

This week we would like to know if you use the command line or not. What command line category do you fit into? Power user? Casual usage? Totally lost? Let us know in the comments!

    Read the article

< Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >