Search Results

Search found 270542 results on 10822 pages for 'default stack size'.

Page 22/10822 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • FreeBSD with Vagrant - don't know how to check guest additions version

    - by joelmaranhao
    On Mac OS X 10.9.3. Picked a box from the VagrantCloud.
    Init the vagrant box:
    $ vagrant init chef/freebsd-9.2-i386
    A `Vagrantfile` has been placed in this directory. You are now ready to `vagrant up` your first virtual environment! Please read the comments in the Vagrantfile as well as documentation on `vagrantup.com` for more information on using Vagrant.
    List the files:
    $ ls -al
    -rw-r--r-- 1 joel staff 4831 Jun 5 17:17 Vagrantfile
    Vagrantfile content:
    VAGRANTFILE_API_VERSION = "2"
    Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
      config.vm.box = "chef/freebsd-9.2-i386"
    end
    Starting my virtual box leads to errors:
    $ vagrant up
    Bringing machine 'default' up with 'virtualbox' provider...
    ==> default: Box 'chef/freebsd-9.2-i386' could not be found. Attempting to find and install...
        default: Box Provider: virtualbox
        default: Box Version: >= 0
    ==> default: Loading metadata for box 'chef/freebsd-9.2-i386'
        default: URL: https://vagrantcloud.com/chef/freebsd-9.2-i386
    ==> default: Adding box 'chef/freebsd-9.2-i386' (v1.0.0) for provider: virtualbox
        default: Downloading: https://vagrantcloud.com/chef/freebsd-9.2-i386/version/1/provider/virtualbox.box
    ==> default: Successfully added box 'chef/freebsd-9.2-i386' (v1.0.0) for 'virtualbox'!
    ==> default: Importing base box 'chef/freebsd-9.2-i386'...
    ==> default: Matching MAC address for NAT networking...
    ==> default: Checking if box 'chef/freebsd-9.2-i386' is up to date...
    ==> default: Setting the name of the VM: freebsd92-i386_default_1401982167145_49633
    ==> default: Fixed port collision for 22 => 2222. Now on port 2201.
    ==> default: Clearing any previously set network interfaces...
    ==> default: Preparing network interfaces based on configuration...
        default: Adapter 1: nat
    ==> default: Forwarding ports...
        default: 22 => 2201 (adapter 1)
    ==> default: Booting VM...
    ==> default: Waiting for machine to boot. This may take a few minutes...
        default: SSH address: 127.0.0.1:2201
        default: SSH username: vagrant
        default: SSH auth method: private key
        default: Warning: Connection timeout. Retrying...
        default: Warning: Connection timeout. Retrying...
    ==> default: Machine booted and ready!
    Sorry, don't know how to check guest version of Virtualbox Guest Additions on this platform. Stopping installation.
    ==> default: Checking for guest additions in VM...
        default: The guest additions on this VM do not match the installed version of
        default: VirtualBox! In most cases this is fine, but in rare cases it can
        default: prevent things such as shared folders from working properly. If you see
        default: shared folder errors, please make sure the guest additions within the
        default: virtual machine match the version of VirtualBox you have installed on
        default: your host and reload your VM.
        default:
        default: Guest Additions Version: 4.2.16
        default: VirtualBox Version: 4.3
    ==> default: Mounting shared folders...
        default: /vagrant => /Users/joel/Code/anybots/operations/robot/freebsd92-i386
    Vagrant attempted to execute the capability 'mount_virtualbox_shared_folder' on the detect guest OS 'freebsd', but the guest doesn't support that capability. This capability is required for your configuration of Vagrant. Please either reconfigure Vagrant to avoid this capability or fix the issue by creating the capability.
    Note that I have recently installed the latest version of VirtualBox, but somehow I can't find the Guest Additions.

    Read the article

  • How to change the default editor of a specific file type in JDeveloper

    - by [email protected]
    When you open a file in JDeveloper, the mode that is used as the default might not be what you as a developer want. If, for example, every time you open a .jsp(x) file you click on the source tab at the bottom of the window so that you can edit the jsp(x) file in source code mode, you may want to consider changing the default editor for that file type. This is easy to do in the JDeveloper tool preferences and can be a time saver in the long run, since some editors can take a while to start up and if you don't need them often, this would just be lost time. Here are the steps:
    1. From the JDeveloper menu, select Tools->Preferences...
    2. Select "File Types" in the tree component on the left side of the preferences dialog.
    3. Click on the "Default Editors" tab.
    4. Scroll to the file type you want to change.
    5. In the details section at the bottom of the dialog, use the "Default Editor" select list to change the default to your liking.

    Read the article

  • Why doesn't Windows XP show "Total Size" and "Free Space" for USB flash disks?

    - by Mehper C. Palavuzlar
    When I double click on My Computer, I can immediately see the Total size and Free space for internal and external HDDs, and inserted CD/DVD media, but in the same columns I cannot see these values for any USB flash drives. They are just empty. To see them, I have to right-click on the USB drive letter and select Properties. Is there a trick to make Windows XP display a USB drive's Total size and Free space in the My Computer window?

    Read the article

  • I want to increase the size of my boot partition (Ubuntu 14.04 version) [duplicate]

    - by Mike
    This question already has an answer here: "How do I free up more space in /boot?" and "How to resize partitions?"
    I read in another post that kernels are distributed as new releases rather than upgrades. I didn't know this when I was allocating space to my partitions during my initial install of Ubuntu. As a result I ran out of space on my boot partition. Can I increase the size of it using GParted, and how do I do this without doing damage to my system?
    1  1049kB  512MB   511MB  fat32  boot
    2  512MB   768MB   256MB  ext2
    3  768MB   1000GB  999GB         lvm
    Model: Linux device-mapper (linear) (dm)
    Disk /dev/mapper/ubuntu--vg-swap_1: 3712MB
    Sector size (logical/physical): 512B/4096B
    Partition Table: loop
    Number  Start  End     Size    File system     Flags
    1       0.00B  3712MB  3712MB  linux-swap(v1)
    Model: Linux device-mapper (linear) (dm)
    Disk /dev/mapper/ubuntu--vg-root: 996GB
    Sector size (logical/physical): 512B/4096B
    Partition Table: loop
    Number  Start  End    Size   File system  Flags
    1       0.00B  996GB  996GB  ext4
    Sorry, don't know how to capture and post the terminal output screen.

    Read the article

  • How to change the default browser from the registry? [closed]

    - by msbg
    Possible Duplicate: Which registry keys need to be edited to change the default browser?
    I am trying to change the default browser opened from Start > Run or Win+R. I have set both HKEY_CLASSES_ROOT\http\shell\open\command and HKEY_LOCAL_MACHINE\SOFTWARE\Classes\http\shell\open\command from
    "C:\Program Files\Internet Explorer\iexplore.exe" %1
    to
    "C:\Program Files (x86)\Mozilla Firefox\firefox.exe" %1
    but running an http address still opens Internet Explorer, not Firefox. How do I change this?

    Read the article

  • Android: bug in launchMode="singleTask"? -> activity stack not preserved

    - by Stefan Klumpp
    My main activity A has android:launchMode="singleTask" set in the manifest. Now, whenever I start another activity from there, e.g. B, and press the HOME BUTTON on the phone to return to the home screen, and then go back to my app again, either via pressing the app's button or via long-pressing the HOME BUTTON to show my most recent apps, it doesn't preserve my activity stack and returns straight to A instead of the expected activity B. Here are the two behaviors:
    Expected: A > B > HOME > B
    Actual: A > B > HOME > A (bad!)
    Is there a setting I'm missing or is this a bug? If the latter, is there a workaround for this until the bug is fixed? FYI: This question has already been discussed here. However, it doesn't seem that there is any real solution to this, yet.

    Read the article

  • why gcc 4.x default reserve 8 bytes for stack on linux when calling a method?

    - by nikcname
    As a beginner in asm, I am reading the asm code generated by gcc -S to learn. Why does gcc 4.x reserve 8 bytes on the stack by default when calling a method? func18 is an empty function with no return value, no parameters and no local variables defined. I can't figure out why 8 bytes are reserved here (no forum/site I found mentions the reason; people seem to take it for granted). Is it for the %ebp that was just pushed? Or the return type?! Many thanks!
    .globl _func18
    _func18:
        pushl %ebp
        movl %esp, %ebp
        subl $8, %esp
    .text
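    For reference, a minimal way to reproduce the listing above (the file name is illustrative; the flags are standard gcc options). The extra 8 bytes are commonly attributed to gcc 4.x keeping the stack 16-byte aligned by default: the return address plus the saved %ebp already occupy 8 bytes, and subtracting 8 more brings the frame to a 16-byte boundary.

        /* empty.c - an empty function like func18 in the question */
        void func18(void) { }

        /* Generate the assembly for a 32-bit target and inspect the prologue:
         *     gcc -S -m32 empty.c -o empty.s
         *
         * Lowering the preferred stack boundary makes the "subl $8, %esp"
         * disappear, which points at stack alignment as the reason:
         *     gcc -S -m32 -mpreferred-stack-boundary=2 empty.c -o empty.s
         */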

    Read the article

  • Windows Vista, Default Programs API, file format associations, and (un)installers - explosive mix!

    - by Alex T.
    My application is a rather well behaved Windows citizen, so when I ported it to Windows Vista/7 I replaced my custom file format association code with support for the Default Programs API. However I ran into a problem when trying to make uninstaller for my application - there seems to be no way to remove file format associations via Default Programs API. I tried to call IApplicationAssociationRegistration::ClearUserAssociations but it actually removes all associations, including the ones for other applications - completely restoring default state of the OS (which is of course unacceptable). I tried to call IApplicationAssociationRegistration::SetAppAsDefault to return file format associations to the previous "owner" - but it does not help, because my application handles many unique file formats which the OS does not support and there is no previous "owners". And Windows does not allow to pass empty strings to SetAppAsDefault... So what do I do? Any good solutions?
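    For context, the Default Programs calls discussed above look roughly like this (a sketch only, not the author's code: the registered application name and the extension are placeholders, and error handling is trimmed):

        #include <windows.h>
        #include <shobjidl.h>

        int wmain()
        {
            CoInitialize(NULL);
            IApplicationAssociationRegistration* pReg = NULL;
            HRESULT hr = CoCreateInstance(__uuidof(ApplicationAssociationRegistration),
                                          NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&pReg));
            if (SUCCEEDED(hr))
            {
                // Try to hand the ".xyz" association back to a (hypothetical) previous
                // owner - exactly the call that has nothing to point at when the file
                // format is unique to the application being uninstalled.
                hr = pReg->SetAppAsDefault(L"Previous Owner App", L".xyz", AT_FILEEXTENSION);
                pReg->Release();
            }
            CoUninitialize();
            return 0;
        }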

    Read the article

  • Is there any way to limit the size of an STL Map?

    - by Nathan Fellman
    I want to implement some sort of lookup table in C++ that will act as a cache. It is meant to emulate a piece of hardware I'm simulating. The keys are non-integer, so I'm guessing a hash is in order. I have no intention of inventing the wheel so I intend to use stl::map for this (though suggestions for alternatives are welcome). The question is, is there any way to limit the size of the hash to emulate the fact that my hardware is of finite size? I'd expect the hash's insert method to return an error message or throw an exception if the limit is reached. If there is no such way, I'll simply check its size before trying to insert, but that seems like an inelegant way to do it.
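    As a sketch of the "check the size before inserting" idea from the question (BoundedMap and its interface are made-up names, not a standard facility), a thin wrapper around std::map can refuse new keys once a fixed capacity is reached:

        #include <cstddef>
        #include <map>

        template <typename Key, typename Value>
        class BoundedMap {
        public:
            explicit BoundedMap(std::size_t capacity) : capacity_(capacity) {}

            // Returns false when the cache is full and the key is new;
            // existing keys may still be updated. Could throw instead.
            bool insert(const Key& k, const Value& v) {
                typename std::map<Key, Value>::iterator it = data_.find(k);
                if (it != data_.end()) { it->second = v; return true; }
                if (data_.size() >= capacity_) return false; // emulate finite hardware
                data_[k] = v;
                return true;
            }

            bool lookup(const Key& k, Value& out) const {
                typename std::map<Key, Value>::const_iterator it = data_.find(k);
                if (it == data_.end()) return false;
                out = it->second;
                return true;
            }

        private:
            std::size_t capacity_;
            std::map<Key, Value> data_;
        };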

    Read the article

  • Stack allocation fails and heap allocation succeeds!! Is it possible??

    - by Prabhu
    Hello All, I have the following snippet:
    class Sample { Obj_Class1 o1; Obj_Class2 o2; };
    But the sizes of Obj_Class1 and Obj_Class2 are so huge that the compiler shows the warning "Consider moving some space to heap". I was asked to replace Obj_Class1 o1 with Obj_Class1* o1 = new Obj_Class1(); But I feel there is no use in making this change, as heap allocation will also fail if stack allocation fails. Am I correct? Or does it make sense to make this change (other than suppressing the compiler warning)?
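    In practice the two allocations are not equivalent: a thread's stack is typically limited to a few megabytes, while the heap can serve far larger requests, which is why the compiler suggests the move. A minimal sketch of the suggested change (the member types and sizes are stand-ins for the question's classes):

        // Hypothetical stand-ins for the huge classes from the question.
        struct Obj_Class1 { char big[512 * 1024]; };
        struct Obj_Class2 { char big[512 * 1024]; };

        // Hold the large members by pointer so Sample itself stays small on the
        // stack, while the bulk of the data lives on the heap.
        class Sample {
        public:
            Sample() : o1(new Obj_Class1), o2(new Obj_Class2) {}
            ~Sample() { delete o1; delete o2; }
        private:
            Sample(const Sample&);            // non-copyable, for brevity
            Sample& operator=(const Sample&);
            Obj_Class1* o1;
            Obj_Class2* o2;
        };

        int main() {
            Sample s;   // sizeof(Sample) is now just two pointers
            return 0;
        }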

    Read the article

  • Does Apache need to be stopped to edit "/etc/apache2/sites-available/default"?

    - by webworm
    I am attempting to edit the "default" file located at .. "/etc/apache2/sites-available/default" on my Ubuntu machine running Apache 2.2.8. I want to do this in order to enable the use of .htaccess files. I have downloaded the "default" file and edited it and now I am trying to upload it back to the server via SFTP. I keep getting permission denied errors. Could it be because Apache is running and making use of the file? I am an admin on the machine so I would expect to be able to overwrite the file. Thanks for any assistance.

    Read the article

  • Using C++, why can `throw` cause `terminate()`, and when are the stack variables to be freed?

    - by nbolton
    I'm pondering a question on Brainbench. I actually realised that I could answer my question easily by compiling the code, but it's an interesting question nonetheless, so I'll ask the question anyway and answer it myself shortly. Take a look at this snippet: The question considers what happens when we throw from a destructor (which causes terminate() to be called). It's become clear to me by asking the question that the memory is indeed freed and the destructor is called, but, is this before or after throw is called from foo? Perhaps the issue here is that throw is used while the stack is unwinding that is the problem... Actually this is slightly confusing.
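    Since the snippet itself is not shown here, the following is only a reconstruction of the scenario described (foo and Guard are made-up names): the local object is destroyed after the throw in foo, as part of stack unwinding, and because its destructor throws while the first exception is still propagating, std::terminate() is called and the handler is never reached.

        #include <cstdio>
        #include <stdexcept>

        struct Guard {
            ~Guard() {
                std::puts("~Guard runs during unwinding");      // this does execute
                throw std::runtime_error("from destructor");    // second exception -> terminate()
            }
        };

        void foo() {
            Guard g;                               // stack variable with a throwing destructor
            throw std::runtime_error("original");  // unwinding begins, ~Guard() runs
        }

        int main() {
            try {
                foo();
            } catch (const std::exception&) {
                std::puts("caught");               // never reached; terminate() fires first
            }
            return 0;
        }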

    Read the article

  • Does `throw` cause stack variables (full types) to be freed from memory in C++?

    - by nbolton
    I'm pondering a question on Brainbench. I actually realised that I could answer my question easily by compiling the code, but it's an interesting question nonetheless, so I'll ask the question anyway and answer it myself shortly. Take a look at this snippet: The question considers what happens when we throw from a destructor (which causes terminate() to be called). It's become clear to me by asking the question that the memory is indeed freed and the destructor is called, but, is this before or after throw is called from foo? Perhaps the issue here is that throw is used while the stack is unwinding that is the problem... Actually this is slightly confusing.

    Read the article

  • Does `throw` cause stack variables to be freed from memory in C++?

    - by nbolton
    I'm pondering a question on Brainbench. I actually realised that I could answer my question easily by compiling the code, but it's an interesting question nonetheless, so I'll ask the question anyway and answer it myself shortly. Take a look at this snippet: The question considers what happens when we throw from a destructor (which causes terminate() to be called). It's become clear to me by asking the question that the memory is indeed freed and the destructor is called, but, is this before or after throw is called from foo? Perhaps the issue here is that throw is used while the stack is unwinding that is the problem... Actually this is slightly confusing.

    Read the article

  • Is it wise to rely on default features of a programming language?

    - by George Edison
    Should I frequently rely on default values? For example, in PHP, if you have the following: <?php $var .= "Value"; ?> This is perfectly fine - it works. But what if assignment like this to a previously unused variable is later eliminated from the language? (I'm not referring to just general assignment to an unused variable.) There are countless examples of where the default value of something has changed and so much existing code was then useless. On the other hand, without default values, there is a lot of code redundancy. What is the proper way of dealing with this?

    Read the article

  • Efficiency: what block size of kernel-mode memory allocations?

    - by Robert
    I need a big, driver-internal memory buffer of several tens of megabytes (non-paged, since it is accessed at dispatch level). Since I think that allocating chunks of non-contiguous memory is more likely to succeed than allocating one single contiguous memory block (especially when memory becomes fragmented), I want to implement that memory buffer as a linked list of memory blocks. What size should the blocks have to use the memory pages efficiently (read: not to waste any page space)? A multiple of 4096 (equal to the page size of the OS)? A multiple of 4000 (so as not to waste another page for OS-internal memory allocation information)? Another size? Target platform is Windows NT >= 5.1 (XP and above). Target architectures are x86 and amd64 (not Itanium).
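    As a rough illustration of the linked-list-of-blocks idea (a sketch, not tested driver code: the pool tag, block size handling and structure names are assumptions), whole multiples of PAGE_SIZE are the natural block size, since pool allocations of at least a page are generally served from whole pages anyway:

        #include <ntddk.h>

        #define BUF_TAG 'fuBD'   /* illustrative pool tag */

        typedef struct _MEM_BLOCK {
            struct _MEM_BLOCK *Next;
            SIZE_T Size;         /* payload follows the header in the same allocation */
        } MEM_BLOCK, *PMEM_BLOCK;

        static PMEM_BLOCK AllocateChain(SIZE_T totalBytes, SIZE_T blockBytes)
        {
            PMEM_BLOCK head = NULL;
            blockBytes = ROUND_TO_PAGES(blockBytes);   /* keep each block a page multiple */
            while (totalBytes > 0) {
                PMEM_BLOCK blk = (PMEM_BLOCK)
                    ExAllocatePoolWithTag(NonPagedPool, blockBytes, BUF_TAG);
                if (blk == NULL) {                     /* roll back what was built so far */
                    while (head != NULL) {
                        PMEM_BLOCK next = head->Next;
                        ExFreePoolWithTag(head, BUF_TAG);
                        head = next;
                    }
                    return NULL;
                }
                blk->Next = head;
                blk->Size = blockBytes;
                head = blk;
                totalBytes = (totalBytes > blockBytes) ? totalBytes - blockBytes : 0;
            }
            return head;
        }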

    Read the article

  • How to make a swing app aware of screen size change?

    - by Marton Sigmond
    Hi, while my swing app is running I change the size of the screen (e.g. from 1024x768 to 800x600). Is there any event I can listen to to be notified about this? Alternatively I could check the screen size in every couple of second, but the Toolkit.getScreenSize() keeps telling me the old value. How could I get the real screen size after the change? Environment: Linux (tested on SuSE ES 11 and Ubuntu 9.04) I appreciate your help. Marton

    Read the article

  • Why doesn't the C++ default destructor destroy my objects?

    - by Oszkar
    The C++ specification says the default destructor deletes all non-static members. Nevertheless, I can't manage to achieve that. I have this:
    class N {
    public:
        ~N() { std::cout << "Destroying object of type N"; }
    };
    class M {
    public:
        M() { n = new N; }
        // ~M() {            // this should happen by default
        //     delete n;
        // }
    private:
        N* n;
    };
    Then this should print the given message, but it doesn't:
    M* m = new M();
    delete m;   // this should invoke the default destructor
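    What the implicit destructor destroys is M's member, and that member is the pointer n, not the N it points to, so ~N() never runs and the N object leaks. A small sketch of the behaviour and two common fixes (class names are illustrative; the unique_ptr variant assumes C++11):

        #include <iostream>
        #include <memory>

        struct N {
            ~N() { std::cout << "Destroying object of type N\n"; }
        };

        struct M_leaky {               // mirrors the question: raw pointer, no ~M()
            M_leaky() : n(new N) {}
            N* n;                      // the implicit destructor destroys the pointer, not *n
        };

        struct M_fixed {               // fix 1: write the destructor yourself
            M_fixed() : n(new N) {}
            ~M_fixed() { delete n; }
            N* n;
        };

        struct M_smart {               // fix 2: let a smart pointer own the object
            std::unique_ptr<N> n{new N};
        };                             // the implicit destructor now prints the message

        int main() {
            delete new M_leaky();      // prints nothing
            delete new M_fixed();      // prints the message
            { M_smart m; }             // prints the message
            return 0;
        }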

    Read the article

  • What is the correct way to set size of elements of GUI?

    - by Roman
    I am using Swing to create my GUI. I have a JFrame containing one main JPanel which, in its turn, contains several JPanels which, in their turn, contain buttons. I would like to set certain sizes for my buttons and JPanels. How does it work? Should I set the sizes of my buttons, and then the size of the JPanel and the JFrame will be set according to the button sizes? Or does it work in the opposite direction: I set the size of the JPanel and the sizes of the buttons will be set automatically?

    Read the article

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week’s post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>.  Then in the following post we’ll discuss the ConcurrentDictionary<T> and ConcurrentBag<T>.  Finally, we shall close on the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. A brief history of collections In the beginning was the .NET 1.0 Framework.  And out of this framework emerged the System.Collections namespace, and it was good.  It contained all the basic things a growing programming language needs like the ArrayList and Hashtable collections.  The main problem, of course, with these original collections is that they held items of type object which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting. Then came .NET 2.0 and generics and our world changed forever!  With generics the C# language finally got an equivalent of the very powerful C++ templates.  As such, the System.Collections.Generic was born and we got type-safe versions of all are favorite collections.  The List<T> succeeded the ArrayList and the Dictionary<TKey,TValue> succeeded the Hashtable and so on.  The new versions of the library were not only safer because they checked types at compile-time, in many cases they were more performant as well.  So much so that it's Microsoft's recommendation that the System.Collections original collections only be used for backwards compatibility. So we as developers came to know and love the generic collections and took them into our hearts and embraced them.  The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons. Now, if you are only doing single-threaded development you may not care – after all, no locking is required.  Even if you do have multiple threads, if a collection is “load-once, read-many” you don’t need to do anything to protect that container from multi-threaded access, as illustrated below: 1: public static class OrderTypeTranslator 2: { 3: // because this dictionary is loaded once before it is ever accessed, we don't need to synchronize 4: // multi-threaded read access 5: private static readonly Dictionary<string, char> _translator = new Dictionary<string, char> 6: { 7: {"New", 'N'}, 8: {"Update", 'U'}, 9: {"Cancel", 'X'} 10: }; 11:  12: // the only public interface into the dictionary is for reading, so inherently thread-safe 13: public static char? Translate(string orderType) 14: { 15: char charValue; 16: if (_translator.TryGetValue(orderType, out charValue)) 17: { 18: return charValue; 19: } 20:  21: return null; 22: } 23: } Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner.  Looking at  today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds -- we've nearly reached the limits we can push through with today's technologies -- but more because we're adding more cores to the boxes.  
With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing to achieve higher application speeds. So let’s look at how to use collections in a thread-safe manner. Using historical collections in a concurrent fashion The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe.  This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations.  Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object: 1: public class OrderAggregator 2: { 3: private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>(); 4: private static readonly object _orderLock = new object(); 5:  6: public void Add(string accountNumber, Order newOrder) 7: { 8: List<Order> ordersForAccount; 9:  10: // a complex operation like this should all be protected 11: lock (_orderLock) 12: { 13: if (!_orders.TryGetValue(accountNumber, out ordersForAccount)) 14: { 15: _orders.Add(accountNumber, ordersForAccount = new List<Order>()); 16: } 17:  18: ordersForAccount.Add(newOrder); 19: } 20: } 21: } Notice how we’re performing several operations on the dictionary under one lock.  With the Synchronized() static methods of the early collections, you wouldn’t be able to specify this level of locking (a more macro-level).  So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead so that they could provide synchronization as needed. The need for better concurrent access to collections Here’s the problem: it’s relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, and much less to make it perform efficiently!  For example, what if you have a Dictionary that has frequent reads but infrequent updates?  Do you want to lock down the entire Dictionary for every access?  This would be overkill and would prevent concurrent reads.  In such cases you could use something like a ReaderWriterLockSlim which allows for multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell).  This is all very complex stuff to consider. Fortunately, this is where the Concurrent Collections come in.  The Parallel Computing Platform team at Microsoft went through great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general case multi-threaded use. Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to mean that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application whether it be single-threaded or multi-threaded. 
General points to consider with the concurrent collections The MSDN points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null.  Thus you should not attempt to use these properties for synchronization purposes. Note that since the concurrent collections also may have different operations than the traditional data structures you may be used to.  Now you may ask why they did this, but it was done out of necessity to keep operations safe and atomic.  For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack’s IsEmpty property and then do the Pop() another thread may have come in and made the stack empty!  This is why some of the traditional operations have been changed to make them safe for concurrent use. In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models.  I’ll try to point these out as we talk about each collection so you can be aware of any potential performance impacts.  Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration).  Once again I’ll highlight how thread-safe enumeration works for each collection. ConcurrentStack<T>: The thread-safe LIFO container The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container.  If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit. The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations.  This means that the multi-threaded access to the stack requires no traditional locking and is very, very fast! For the most part, the ConcurrentStack<T> behaves like it’s Stack<T> counterpart with a few differences: Pop() was removed in favor of TryPop() Returns true if an item existed and was popped and false if empty. PushRange() and TryPopRange() were added Allows you to push multiple items and pop multiple items atomically. Count takes a snapshot of the stack and then counts the items. This means it is a O(n) operation, if you just want to check for an empty stack, call IsEmpty instead which is O(1). ToArray() and GetEnumerator() both also take snapshots. This means that iteration over a stack will give you a static view at the time of the call and will not reflect updates. Pushing on a ConcurrentStack<T> works just like you’d expect except for the aforementioned PushRange() method that was added to allow you to push a range of items concurrently. 1: var stack = new ConcurrentStack<string>(); 2:  3: // adding to stack is much the same as before 4: stack.Push("First"); 5:  6: // but you can also push multiple items in one atomic operation (no interleaves) 7: stack.PushRange(new [] { "Second", "Third", "Fourth" }); For looking at the top item of the stack (without removing it) the Peek() method has been removed in favor of a TryPeek().  
This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed.  Thus the TryPeek() was created to be an atomic check for empty, and then peek if not empty: 1: // to look at top item of stack without removing it, can use TryPeek. 2: // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does. 3: string item; 4: if (stack.TryPeek(out item)) 5: { 6: Console.WriteLine("Top item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Stack was empty."); 11: } Finally, to remove items from the stack, we have the TryPop() for single, and TryPopRange() for multiple items.  Just like the TryPeek(), these operations replace Pop() since we need to ensure atomically that the stack is non-empty before we pop from it: 1: // to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves) 2: if (stack.TryPop(out item)) 3: { 4: Console.WriteLine("Popped " + item); 5: } 6:  7: // TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned. 8: var poppedItems = new string[2]; 9: int numPopped = stack.TryPopRange(poppedItems); 10:  11: foreach (var theItem in poppedItems.Take(numPopped)) 12: { 13: Console.WriteLine("Popped " + theItem); 14: } Finally, note that as stated before, GetEnumerator() and ToArray() gets a snapshot of the data at the time of the call.  That means if you are enumerating the stack you will get a snapshot of the stack at the time of the call.  This is illustrated below: 1: var stack = new ConcurrentStack<string>(); 2:  3: // adding to stack is much the same as before 4: stack.Push("First"); 5:  6: var results = stack.GetEnumerator(); 7:  8: // but you can also push multiple items in one atomic operation (no interleaves) 9: stack.PushRange(new [] { "Second", "Third", "Fourth" }); 10:  11: while(results.MoveNext()) 12: { 13: Console.WriteLine("Stack only has: " + results.Current); 14: } The only item that will be printed out in the above code is "First" because the snapshot was taken before the other items were added. This may sound like an issue, but it’s really for safety and is more correct.  You don’t want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all.  In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it in the old collections would cause an exception. ConcurrentQueue<T>: The thread-safe FIFO container The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class.  The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays.  Once again, this allows us to do thread-safe operations without the need for heavy locks! The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from the non-concurrent counterpart.  Most notably: Dequeue() was removed in favor of TryDequeue(). Returns true if an item existed and was dequeued and false if empty. Count does not take a snapshot It subtracts the head and tail index to get the count.  This results overall in a O(1) complexity which is quite good.  It’s still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero. ToArray() and GetEnumerator() both take snapshots. 
This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates. The Enqueue() method on the ConcurrentQueue<T> works much the same as the generic Queue<T>: 1: var queue = new ConcurrentQueue<string>(); 2:  3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5: queue.Enqueue("Second"); 6: queue.Enqueue("Third"); For front item access, the TryPeek() method must be used to attempt to see the first item of the queue.  There is no Peek() method since, as you’ll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty. 1: // to look at first item in queue without removing it, can use TryPeek. 2: // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does. 3: string item; 4: if (queue.TryPeek(out item)) 5: { 6: Console.WriteLine("First item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Queue was empty."); 11: } Then, to remove items you use TryDequeue().  Once again this is for the same reason we have TryPeek() and not Peek(): 1: // to remove items, use TryDequeue. If queue is empty returns false. 2: if (queue.TryDequeue(out item)) 3: { 4: Console.WriteLine("Dequeued first item " + item); 5: } Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator() which means that subsequent updates to the queue will not be seen when you iterate over the results.  Thus once again the code below will only show the first item, since the other items were added after the snapshot. 1: var queue = new ConcurrentQueue<string>(); 2:  3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5:  6: var iterator = queue.GetEnumerator(); 7:  8: queue.Enqueue("Second"); 9: queue.Enqueue("Third"); 10:  11: // only shows First 12: while (iterator.MoveNext()) 13: { 14: Console.WriteLine("Dequeued item " + iterator.Current); 15: } Using collections concurrently You’ll notice in the examples above I stuck to using single-threaded examples so as to make them deterministic and the results obvious.  Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe and with no locking on your part required! For example, say you have an order processor that takes an IEnumerable<Order> and handles each order in a multi-threaded fashion, then groups the responses together in a concurrent collection for aggregation.  This can be done easily with the TPL’s Parallel.ForEach(): 1: public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList) 2: { 3: var proxy = new OrderProxy(); 4: var results = new ConcurrentQueue<OrderResult>(); 5:  6: // notice that we can process all these in parallel and put the results 7: // into our concurrent collection without needing any external locking! 8: Parallel.ForEach(orderList, 9: order => 10: { 11: var result = proxy.PlaceOrder(order); 12:  13: results.Enqueue(result); 14: }); 15:  16: return results; 17: } Summary Obviously, if you do not need multi-threaded safety, you don’t need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers and the amazing way they achieve thread-safe access in an efficient manner is wonderful to behold. 
Stay tuned next week where we’ll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.   Tweet Technorati Tags: C#,.NET,Concurrent Collections,Collections,Multi-Threading,Little Wonders,BlackRabbitCoder,James Michael Hare

    Read the article

  • Unusual heap size limitations in VS2003 C++

    - by Shane MacLaughlin
    I have a C++ app that uses large arrays of data, and have noticed while testing that it is running out of memory while there is still plenty of memory available. I have reduced the code to a sample test case as follows:
    void MemTest() {
        size_t Size = 500*1024*1024; // 512mb
        if (Size > _HEAP_MAXREQ)
            TRACE("Invalid Size");
        void * mem = malloc(Size);
        if (mem == NULL)
            TRACE("allocation failed");
    }
    If I create a new MFC project, include this function, and run it from InitInstance, it works fine in debug mode (memory allocated as expected), yet fails in release mode (malloc returns NULL). Single stepping through release into the C run times, my function gets inlined and I get the following:
    // malloc.c
    void * __cdecl _malloc_base (size_t size)
    {
        void *res = _nh_malloc_base(size, _newmode);
        RTCCALLBACK(_RTC_Allocate_hook, (res, size, 0));
        return res;
    }
    Calling _nh_malloc_base:
    void * __cdecl _nh_malloc_base (size_t size, int nhFlag)
    {
        void * pvReturn;
        // validate size
        if (size > _HEAP_MAXREQ)
            return NULL;
        ...
    And (size > _HEAP_MAXREQ) returns true, and hence my memory doesn't get allocated. Putting a watch on size comes back with the expected 512MB, which suggests the program is linking into a different run-time library with a much smaller _HEAP_MAXREQ. Grepping the VC++ folders for _HEAP_MAXREQ shows the expected 0xFFFFFFE0, so I can't figure out what is happening here. Anyone know of any CRT changes or versions that would cause this problem, or am I missing something way more obvious?

    Read the article

  • Software development stack 2012

    A couple of months ago, I posted on Google+ about my evaluation period for a new software development stack in general. "Analysing existing 'jungle' of multiple applications and tools in various languages for clarification and future design decisions. Great fun and lots of headaches... #DevelopersLife" Surprisingly, there was response... ;-) - And this series of articles is initiated by this post. Thanks Olaf. The past few years... Well, after all my first choice of software development in the past was Microsoft Visual FoxPro 6.0 - 9.0 in combination with Microsoft SQL Server 2000 - 2008 and Crystal Reports 9.x - XI. Honestly, it is my main working environment due to existing maintenance and support plans with my customers, but also for new project requests. And... hands on, it is still my first choice for data manipulation and migration options. But the earth is spinning, and as a software craftsman one has to be flexible with the choice of tools. In parallel to my knowledge and expertise in the above mentioned tools, I already started very early to get my hands dirty with the Microsoft .NET Framework. If I remember correctly, I started back in 2002/2003 with the first version ever. But this was more out of curiosity. During the years this kind of development got more serious and demanding, and I focused myself on interop and integration libraries and applications. Mainly, to expose existing features of the .NET Framework to Visual FoxPro - I even had a session about that at the German Developer's Conference in Frankfurt. Observation of recent developments With the recent hype on Javascript and HTML5, especially for Windows 8 and Windows Phone 8 development, I had several 'Deja vu' events... Back in early 2006 (roughly) I had a conversation on the future of Web and Desktop development with my former colleagues Golo Roden and Thomas Wilting about the underestimation of Javascript and its root as a prototype-based, dynamic, full-featured programming language. During this talk with them I took the Mozilla applications, namely Firefox and Thunderbird, as a reference, which are mainly based on XML, CSS, Javascript and images - besides the core rendering engine. And that it is very simple to write your own extensions for the Gecko rendering engine. Looking at the Windows Vista Sidebar widgets just underlines this kind of usage. So, yes, the 'Modern UI' of Windows 8 based on HTML5, CSS3 and Javascript didn't come as any surprise to me. Just allow me to ask why did it take so long for Microsoft to come up with this step? A new set of tools Ok, coming from web development in HTML 4, CSS and Javascript prior to Visual FoxPro, I am partly going back to that combination of technologies. What is the other part of the software development stack here at IOS Indian Ocean Software Ltd? Frankly, it is easy and straightforward to describe:
    Microsoft Visual FoxPro 9.0 SP 2 - still going strong!
    Visual Studio 2012 (C# on latest .NET Framework)
    MonoDevelop
    Telerik DevCraft Suite
    WPF
    ASP.NET MVC
    Windows 8
    Kendo UI
    OpenAccess ORM
    Reporting
    JustCode
    CODE Framework by EPS Software
    MonoTouch and Mono for Android
    Subversion
    and additional tools for the daily routine: Notepad++, JustCode, SQL Compare, DiffMerge, VMware, etc.
    Following the principles of Clean Code Developer and the Agile Manifesto.
    Actually, nothing special about this combination but rather a solid fundament to work with and create line of business applications for customers. Honestly, I am really interested in your choice of 'weapons' for software development, and hopefully there might be some nice conversations in the comment section. Over the next coming days/weeks I'm going to describe a little bit more in detail about the reasons for my decision. Articles will be added bit by bit here as reference, too. Please bear with me... Regards, JoKi

    Read the article

  • SQL SERVER – Changing Default Installation Path for SQL Server

    - by pinaldave
    Earlier I wrote a blog post about SQL SERVER – Move Database Files MDF and LDF to Another Location and in the blog post we discussed how we can change the location of the MDF and LDF files after database is already created. I had mentioned that we will discuss how to change the default location of the database. This way we do not have to change the location of the database after it is created at different locations. The ideal scenario would be to specify this default location of the database files when SQL Server Installation was performed. If you have already installed SQL Server there is an easy way to solve this problem. This will not impact any database created before the change, it will only affect the default location of the database created after the change. To change the default location of the SQL Server Installation follow the steps mentioned below: Go to Right Click on Servers >> Click on Properties >> Go to the Database Settings screen You can change the default location of the database files. All the future database created after the setting is changed will go to this new location. You can also do the same with T-SQL and here is the T-SQL code to do the same. USE [master] GO EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', REG_SZ, N'F:\DATA' GO EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', REG_SZ, N'F:\DATA' GO What are the best practices do you follow with regards to default file location for your database? I am interested to know them. Reference : Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology

    Read the article
