Search Results

Search found 4860 results on 195 pages for 'parallel extensions'.

Page 54/195

  • ZeroDowntime deployment of configuration in Tomcat 7

    - by pagid
    Looking at what can be done with parallel deployments in Tomcat 7, I wonder how new or changed configuration could be provided to the various versions of the application. In a nutshell, what parallel deployment offers is that you push a new version of a war file to the webapps dir (with filenames like "App##01.war", "App##02.war"), and every user with a new session gets the newer version while all others stay with the old version. So how could one provide different or additional configuration (properties) to the various versions? Cheers.

    Read the article

  • Enable ReadyBoost on a second internal HDD?

    - by kd304
    I have two SATA HDDs in my desktop PC (one for daily activity, one for storage and backup). I can use ReadyBoost fine with pen drives, but I wonder: is there a way I could use my underutilized second HDD to participate in the caching mechanism (the same concept as having two CPU cores crunch things in parallel: have two HDDs fetch data in parallel)? Put plainly: I want to enable ReadyBoost on my separate D: drive.

    Read the article

  • Enabling "USB Printing Support" in Windows 7

    - by Kevin Dente
    I'm trying to use an old parallel-port based printer with a USB-to-parallel port adapter on Windows 7. When I plug it into the USB port on the computer it's listed as an unrecognized device. I know that these cables typically use the "USB Printing Support" driver, which makes USB ports show up as printer ports in the printer dialog. Is there a way to manually add USB Printing Support to Windows 7, since it isn't being added automatically?

    Read the article

  • Enabling "USB Printing Support" in Windows 7

    - by Kevin Dente
    I'm trying to use an old parallel-port based printer with a USB-to-parallel port adapter on Windows 7. When I plug it into the USB port on the computer it's listed as an unrecognized device. I know that these cables typically use the "USB Printing Support" driver with makes USB ports show up as printer ports in the printer dialog. Is there a way to manually add USB Printing Support to Windows 7, since it isn't being added automatically?

    Read the article

  • Linux software RAID6: 3 drives offline - how to force online?

    - by Ole Tange
    This is similar to "3 drives fell out of Raid6 mdadm - rebuilding?" except that it is not due to a failing cable. Instead the 3rd drive fell offline during the rebuild of another drive. The drive failed with:

      kernel: end_request: I/O error, dev sdc, sector 293732432
      kernel: md/raid:md0: read error not correctable (sector 293734224 on sdc).

    After rebooting, both these sectors and the sectors around them are fine. This leads me to believe the error is intermittent and thus the device simply took too long to error correct the sector and remap it. I expect that no data was written to the RAID after it failed. Therefore I hope that if I can kick the last failing device online, the RAID is fine and the xfs filesystem is OK, maybe with a few missing recent files. Taking a backup of the disks in the RAID takes 24 hours, so I would prefer that the solution works the first time. I have therefore set up a test scenario:

      export PRE=3
      parallel dd if=/dev/zero of=/tmp/raid${PRE}{} bs=1k count=1000k ::: 1 2 3 4 5
      parallel mknod /dev/loop${PRE}{} b 7 ${PRE}{} \; losetup /dev/loop${PRE}{} /tmp/raid${PRE}{} ::: 1 2 3 4 5
      mdadm --create /dev/md$PRE -c 4096 --level=6 --raid-devices=5 /dev/loop${PRE}[12345]
      cat /proc/mdstat
      mkfs.xfs -f /dev/md$PRE
      mkdir -p /mnt/disk2
      umount -l /mnt/disk2
      mount /dev/md$PRE /mnt/disk2
      seq 1000 | parallel -j1 mkdir -p /mnt/disk2/{}\;cp /bin/* /mnt/disk2/{}\;sleep 0.5 &
      mdadm --fail /dev/md$PRE /dev/loop${PRE}3 /dev/loop${PRE}4
      cat /proc/mdstat
      # Assume reboot so no process is using the dir
      kill %1; sync &
      kill %1; sync &
      # Force fail one too many
      mdadm --fail /dev/md$PRE /dev/loop${PRE}1
      parallel --tag -k mdadm -E ::: /dev/loop${PRE}? | grep Upda
      # loop 2,5 are newest. loop1 almost newest => force add loop1

    The next step is to add loop1 back - and this is where I am stuck. After that, do an xfs consistency check. When that works, check that the solution also works on real devices (such as 4 USB sticks).

    Read the article

  • How can I run a (16-bit) .COM executable that has been renamed to .COS?

    - by max16bit
    I'm trying to run a couple of 16-bit legacy DOS programs from a standard Windows XP DOS prompt. The problem is that the file extensions have been renamed from .COM to .COS, and they are stored on read-only media so I can't copy them (special environment). Any tips on how to invoke such files despite the weird extension? If they had been 32-bit EXEs, it wouldn't have been an issue running them even without their proper extensions, but with these COM files I'm unable to find a way to run them.

    Read the article

  • Windows 7 64-bit on brand new HP Pavilion

    - by Bill K
    Is anyone else experiencing Chrome extensions disappearing upon closing the browser? I do have the Click and Clean extension installed, and I'm wondering whether that is uninstalling my extensions somehow when it deletes browsing history. Thanks!

    Read the article

  • Using pip to install modules in python failing

    - by James N
    I'm having trouble installing python modules using pip. Below is the output from the command window. Note that I installed pip immediately before trying to install the GDAL module. I am on a w7 64bit machine running python 2.7.

      Microsoft Windows [Version 6.1.7601] Copyright (c) 2009 Microsoft Corporation. All rights reserved.

      C:\Users\jnunn\Desktop>python get-pip.py
      Downloading/unpacking pip Downloading pip-1.2.1.tar.gz (102Kb): 102Kb downloaded Running setup.py egg_info for package pip warning: no files found matching '*.html' under directory 'docs' warning: no previously-included files matching '*.txt' found under directory 'docs\_build' no previously-included directories found matching 'docs\_build\_sources' Installing collected packages: pip Running setup.py install for pip warning: no files found matching '*.html' under directory 'docs' warning: no previously-included files matching '*.txt' found under directory 'docs\_build' no previously-included directories found matching 'docs\_build\_sources' Installing pip-script.py script to C:\Python26\ArcGIS10.1\Scripts Installing pip.exe script to C:\Python26\ArcGIS10.1\Scripts Installing pip.exe.manifest script to C:\Python26\ArcGIS10.1\Scripts Installing pip-2.7-script.py script to C:\Python26\ArcGIS10.1\Scripts Installing pip-2.7.exe script to C:\Python26\ArcGIS10.1\Scripts Installing pip-2.7.exe.manifest script to C:\Python26\ArcGIS10.1\Scripts Successfully installed pip Cleaning up...

      C:\Users\jnunn\Desktop>pip install gdal
      Downloading/unpacking gdal Downloading GDAL-1.9.1.tar.gz (420kB): 420kB downloaded Running setup.py egg_info for package gdal Installing collected packages: gdal Running setup.py install for gdal building 'osgeo._gdal' extension c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -I../../port -I../../gcore -I../../alg -I../../ogr/ -IC:\Python26\ArcGIS10.1\include -IC:\Python26\ArcGIS10.1\PC -IC:\Python26\ArcGIS10.1\lib\site-packages\numpy\core\include /Tpextensions/gdal_wrap.cpp /Fobuild\temp.win32-2.7\Release\extensions/gdal_wrap.obj gdal_wrap.cpp c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc extensions/gdal_wrap.cpp(2853) : fatal error C1083: Cannot open include file: 'cpl_port.h': No such file or directory error: command '"c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2

      Complete output from command C:\Python26\ArcGIS10.1\python.exe -c "import setuptools;__file__='c:\\users\\jnunn\\appdata\\local\\temp\\pip-build\\gdal\\setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\jnunn\appdata\local\temp\pip-f7tgze-record\install-record.txt --single-version-externally-managed: running install running build running build_py creating build creating build\lib.win32-2.7 copying gdal.py -> build\lib.win32-2.7 copying ogr.py -> build\lib.win32-2.7 copying osr.py -> build\lib.win32-2.7 copying gdalconst.py -> build\lib.win32-2.7 copying gdalnumeric.py -> build\lib.win32-2.7 creating build\lib.win32-2.7\osgeo copying osgeo\gdal.py -> build\lib.win32-2.7\osgeo copying osgeo\gdalconst.py -> build\lib.win32-2.7\osgeo copying osgeo\gdalnumeric.py -> build\lib.win32-2.7\osgeo copying osgeo\gdal_array.py -> build\lib.win32-2.7\osgeo copying osgeo\ogr.py -> build\lib.win32-2.7\osgeo copying osgeo\osr.py -> build\lib.win32-2.7\osgeo copying osgeo\__init__.py -> build\lib.win32-2.7\osgeo running build_ext building 'osgeo._gdal' extension creating build\temp.win32-2.7 creating build\temp.win32-2.7\Release creating build\temp.win32-2.7\Release\extensions c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe /c /nologo /Ox /MD /W3 /GS- /DNDEBUG -I../../port -I../../gcore -I../../alg -I../../ogr/ -IC:\Python26\ArcGIS10.1\include -IC:\Python26\ArcGIS10.1\PC -IC:\Python26\ArcGIS10.1\lib\site-packages\numpy\core\include /Tpextensions/gdal_wrap.cpp /Fobuild\temp.win32-2.7\Release\extensions/gdal_wrap.obj gdal_wrap.cpp c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc extensions/gdal_wrap.cpp(2853) : fatal error C1083: Cannot open include file: 'cpl_port.h': No such file or directory error: command '"c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2 ---------------------------------------- Command C:\Python26\ArcGIS10.1\python.exe -c "import setuptools;__file__='c:\\users\\jnunn\\appdata\\local\\temp\\pip-build\\gdal\\setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\jnunn\appdata\local\temp\pip-f7tgze-record\install-record.txt --single-version-externally-managed failed with error code 1 in c:\users\jnunn\appdata\local\temp\pip-build\gdal Storing complete log in C:\Users\jnunn\pip\pip.log

      C:\Users\jnunn\Desktop>

    I have tried to use easy_install before too, and it came back with a common error to this:

      c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\INCLUDE\xlocale(342) : warning C4530: C++ exception handler used, but unwind semantics are not enabled. Specify /EHsc extensions/gdal_wrap.cpp(2853) : fatal error C1083: Cannot open include file: 'cpl_port.h': No such file or directory error: command '"c:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\BIN\cl.exe"' failed with exit status 2

    Plus the following additional pip.log:

      Exception information: Traceback (most recent call last): File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\basecommand.py", line 107, in main status = self.run(options, args) File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\commands\install.py", line 261, in run requirement_set.install(install_options, global_options) File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\req.py", line 1166, in install requirement.install(install_options, global_options) File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\req.py", line 589, in install cwd=self.source_dir, filter_stdout=self._filter_install, show_stdout=False) File "C:\Python26\ArcGIS10.1\lib\site-packages\pip\util.py", line 612, in call_subprocess % (command_desc, proc.returncode, cwd)) InstallationError: Command C:\Python26\ArcGIS10.1\python.exe -c "import setuptools;__file__='c:\\users\\jnunn\\appdata\\local\\temp\\pip-build\\gdal\\setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\jnunn\appdata\local\temp\pip-f7tgze-record\install-record.txt --single-version-externally-managed failed with error code 1 in c:\users\jnunn\appdata\local\temp\pip-build\gdal

    Read the article

  • Yii include PHP Excel

    - by Anton Sementsov
    I'm trying to include the PHPExcel lib in Yii. I put PHPExcel.php in the root of extensions, next to the PHPExcel folder, and added this code to config/main.php: // application components 'components'=>array( 'excel'=>array( 'class'=>'application.extensions.PHPExcel', ), and modified /protected/extensions/PHPExcel/Autoloader.php: public static function Register() { $functions = spl_autoload_functions(); foreach($functions as $function) spl_autoload_unregister($function); $functions=array_merge(array(array('PHPExcel_Autoloader', 'Load')), $functions); foreach($functions as $function) $x = spl_autoload_register($function); return $x; }// function Register() Then, when trying to create a PHPExcel object with $objPHPExcel = new PHPExcel(); I get an error: include(PHPExcel.php) [function.include]: failed to open stream: No such file or directory in Z:\home\yii.local\www\framework\YiiBase.php(418)

    Read the article

  • Hadoop: Iterative MapReduce Performance

    - by S.N
    Is it correct to say that parallel computation with iterative MapReduce can be justified only when the training data is too large for non-parallel computation of the same logic? I am aware that there is overhead for starting MapReduce jobs. This can be critical for overall execution time when a large number of iterations is required. I can imagine that in many cases sequential computation is faster than parallel computation with iterative MapReduce as long as the data set fits in memory. Is that the only benefit of using iterative MapReduce? If not, what could the other benefits be?

    Read the article

  • openmp in mex : stackoverflow error

    - by Edwin
    I have the following fragment of code that is giving me a stack overflow error: #pragma omp parallel shared(Mo1, Mo2, sum_normalized_p_gn, Data, Mean_Out, Covar_Out, Prior_Out, det) private(i) num_threads( number_threads ) { //every thread has a new copy double* normalized_p_gn = (double*)malloc(NMIX*sizeof(double)); #pragma omp critical { int id = omp_get_thread_num(); int threads = omp_get_num_threads(); mexEvalString("drawnow"); } #pragma omp for //some parallel process..... } The variables declared as shared are created by malloc, and they consume a large amount of memory. There are 2 questions regarding the above code. 1) Why would this generate the stack overflow error (i.e. segmentation fault) before it goes into the parallel for loop? It works fine when it runs in sequential mode.... 2) Am I right to dynamically allocate memory for each thread like "normalized_p_gn" above? Regards Edwin

    Read the article

  • Firefox extension dev: observing preferences, avoid multiple notifications

    - by Michael
    Let's say my Firefox extension has multiple preferences, but some of them are grouped, like check interval, fail retry interval, destination url. Those are used in just a single function. When I subscribe to the preference service and add an observer, the observe callback is called for each changed preference, so if the user happens to change all of the settings in a group, I have to run the same routine for the same subsystem as many times as there are items in that preferences group. That's a lot of redundancy! What I want is for observe to be called just once for a group of preferences, say extensions.myextension.interval1, extensions.myextension.site, extensions.myextension.retry, so that if one or all of those preferences are changed, I receive only 1 notification about it. In other words, no matter how many preferences changed in the branch, I want the observe callback to be called once.
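
    One common way to get a single callback for a burst of preference changes is to debounce the observer: every nsPref:changed notification just re-arms a short one-shot timer, and the real work runs only when the timer finally fires. The snippet below is a rough, untested sketch of that idea; the branch name, the 200 ms delay and the reloadSettings() helper are placeholders for illustration only.

      const Cc = Components.classes, Ci = Components.interfaces;

      var branch = Cc["@mozilla.org/preferences-service;1"]
                     .getService(Ci.nsIPrefService)
                     .getBranch("extensions.myextension.");
      branch.QueryInterface(Ci.nsIPrefBranch2);

      var timer = Cc["@mozilla.org/timer;1"].createInstance(Ci.nsITimer);

      var observer = {
        observe: function (subject, topic, data) {
          if (topic != "nsPref:changed") return;
          // Re-arm the timer on every change: only the last change of a burst
          // survives, so editing the whole group yields a single callback.
          timer.cancel();
          timer.initWithCallback({ notify: function () { reloadSettings(); } },
                                 200, Ci.nsITimer.TYPE_ONE_SHOT);
        }
      };

      branch.addObserver("", observer, false);  // "" = observe the whole branch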

    Read the article

  • need suggestions about content filtering project

    - by serdar
    I'm thinking of designing and implementing content filtering software as my graduation project. I want it to be user contributed, meaning users can also add/categorize websites. It should also have a web project and extensions for browsers like Chrome, Firefox, and IE. My question is: which programming language do you suggest for this project? I know that Firefox extensions are JavaScript based; maybe you would say to use .NET Framework 3.5 because it's better at communicating with the extensions. Sorry for my bad English. By the way, any other suggestions about the project would be good. Thanks a lot.

    Read the article

  • How to scale an image (in data URI format) in JavaScript (real scaling, not using styling)

    - by 103067513055141045393
    We are capturing a visible tab in a Chrome browser (by using the extensions API chrome.tabs.captureVisibleTab) and receiving a snapshot in the data URI scheme (Base64 encoded string). Is there a JavaScript library that can be used to scale down an image to a certain size? Currently we are styling it via CSS, but we have to pay performance penalties as the pictures are mostly 100 times bigger than required. An additional concern is the load on the localStorage we use to save our snapshots. Does anyone know of a way to process these data URI scheme formatted pictures and reduce their size by scaling them down? References: Data URI scheme on http://en.wikipedia.org/wiki/Data_URI_scheme Chrome Extensions API on http://code.google.com/chrome/extensions/tabs.html The "Recently Closed Tabs" Chrome Extension on http://code.google.com/p/recently-closed-tabs
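
    For what it's worth, plain canvas APIs can do real downscaling of a data URI without any extra library: draw the decoded image onto a smaller offscreen canvas and re-encode it with toDataURL. The following is a rough, untested sketch; the 0.25 scale factor, the JPEG quality and the storage key are made-up example values.

      function scaleDataUri(dataUri, scale, callback) {
        var img = new Image();
        img.onload = function () {
          var canvas = document.createElement("canvas");
          canvas.width  = Math.round(img.width * scale);
          canvas.height = Math.round(img.height * scale);
          // drawImage into the smaller destination rectangle does the actual resampling
          canvas.getContext("2d").drawImage(img, 0, 0, canvas.width, canvas.height);
          // re-encoding as JPEG keeps screenshots far smaller than the original PNG
          callback(canvas.toDataURL("image/jpeg", 0.8));
        };
        img.src = dataUri;
      }

      // e.g. shrink a captureVisibleTab snapshot before putting it into localStorage
      scaleDataUri(snapshotUri, 0.25, function (smallUri) {
        localStorage["lastSnapshot"] = smallUri;
      });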

    Read the article

  • Scala Tuple Deconstruction

    - by dbyrne
    I am new to Scala, and ran across a small hiccup that has been annoying me. Initializing two vars in parallel works great: var (x,y) = (1,2) However I can't find a way to assign new values in parallel: (x,y) = (x+y,y-x) //invalid syntax I end up writing something like this: val xtmp = x+y; y = x-y; x = xtmp I realize writing functional code is one way of avoiding this, but there are certain situations where vars just make more sense. I have two questions: 1) Is there a better way of doing this? Am I missing something? 2) What is the reason for not allowing true parallel assignment?

    Read the article

  • Upgrade existing WinForms applications to use the latest RadControls

    Upgrading projects to new versions can be a pain, especially when you have to update several assemblies from a single version, as is the case with RadControls for WinForms. Q1 2010 simplifies this process a lot by giving a couple of ways (one new and one updated) to upgrade existing applications to the latest and greatest version of RadControls for WinForms: by using the new Visual Studio Extensions (VSX), available in VS2005, VS2008 and VS2010 RC; or by using the updated Project Upgrade Utility. Here are the steps:

    Upgrading a classic Windows Forms application to the latest RadControls for WinForms by using the Visual Studio Extensions: Install RadControls for WinForms Q1 2010. Open the classic Windows Forms application (VB or C#). Open the Telerik menu and select RadControls for WinForms --> Convert to Telerik WinForms Application. Select the Telerik controls you plan to use in the application, as well as a theme, and click OK. The VSX package will add the needed assemblies to your project automatically for you. Replace the standard controls on your form with the respective Telerik controls. Run the application to see the result.

    Upgrading an older RadControls application to the latest RadControls for WinForms by using the Visual Studio Extensions: Install RadControls for WinForms Q1 2010. Open your current RadControls application (VB or C#), which uses pre-Q1 2010 assembly versions. Open the Telerik menu and select RadControls for WinForms --> Upgrade Wizard. Choose either to use the online downloader of the latest version, or to use the currently installed version. The VSX package will check which assemblies you use in your project and will upgrade them automatically. Run the application to see the result.

    Upgrading an older RadControls application to the latest RadControls for WinForms by using the Project Upgrade Utility: The Q1 2010 Project Upgrade Utility now features upgrading not only a single project, but all projects in a directory/solution (recursively). The tool is quite intuitive - simply choose your solution folder (or a folder with several projects) and click Update. Feel free to leave a comment.

    Read the article

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week’s post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>.  Then in the following post we’ll discuss the ConcurrentDictionary<T> and ConcurrentBag<T>.  Finally, we shall close on the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. A brief history of collections In the beginning was the .NET 1.0 Framework.  And out of this framework emerged the System.Collections namespace, and it was good.  It contained all the basic things a growing programming language needs like the ArrayList and Hashtable collections.  The main problem, of course, with these original collections is that they held items of type object which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting. Then came .NET 2.0 and generics and our world changed forever!  With generics the C# language finally got an equivalent of the very powerful C++ templates.  As such, the System.Collections.Generic was born and we got type-safe versions of all are favorite collections.  The List<T> succeeded the ArrayList and the Dictionary<TKey,TValue> succeeded the Hashtable and so on.  The new versions of the library were not only safer because they checked types at compile-time, in many cases they were more performant as well.  So much so that it's Microsoft's recommendation that the System.Collections original collections only be used for backwards compatibility. So we as developers came to know and love the generic collections and took them into our hearts and embraced them.  The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons. Now, if you are only doing single-threaded development you may not care – after all, no locking is required.  Even if you do have multiple threads, if a collection is “load-once, read-many” you don’t need to do anything to protect that container from multi-threaded access, as illustrated below: 1: public static class OrderTypeTranslator 2: { 3: // because this dictionary is loaded once before it is ever accessed, we don't need to synchronize 4: // multi-threaded read access 5: private static readonly Dictionary<string, char> _translator = new Dictionary<string, char> 6: { 7: {"New", 'N'}, 8: {"Update", 'U'}, 9: {"Cancel", 'X'} 10: }; 11:  12: // the only public interface into the dictionary is for reading, so inherently thread-safe 13: public static char? Translate(string orderType) 14: { 15: char charValue; 16: if (_translator.TryGetValue(orderType, out charValue)) 17: { 18: return charValue; 19: } 20:  21: return null; 22: } 23: } Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner.  Looking at  today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds -- we've nearly reached the limits we can push through with today's technologies -- but more because we're adding more cores to the boxes.  
With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing to achieve higher application speeds. So let's look at how to use collections in a thread-safe manner. Using historical collections in a concurrent fashion The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe.  This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations.  Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object: 1: public class OrderAggregator 2: { 3: private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>(); 4: private static readonly _orderLock = new object(); 5:  6: public void Add(string accountNumber, Order newOrder) 7: { 8: List<Order> ordersForAccount; 9:  10: // a complex operation like this should all be protected 11: lock (_orderLock) 12: { 13: if (!_orders.TryGetValue(accountNumber, out ordersForAccount)) 14: { 15: _orders.Add(accountNumber, ordersForAccount = new List<Order>()); 16: } 17:  18: ordersForAccount.Add(newOrder); 19: } 20: } 21: } Notice how we’re performing several operations on the dictionary under one lock.  With the Synchronized() static methods of the early collections, you wouldn’t be able to specify this level of locking (a more macro-level).  So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead so that they could provide synchronization as needed. The need for better concurrent access to collections Here’s the problem: it’s relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, and much less to make it perform efficiently!  For example, what if you have a Dictionary that has frequent reads but in-frequent updates?  Do you want to lock down the entire Dictionary for every access?  This would be overkill and would prevent concurrent reads.  In such cases you could use something like a ReaderWriterLockSlim which allows for multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell).  This is all very complex stuff to consider. Fortunately, this is where the Concurrent Collections come in.  The Parallel Computing Platform team at Microsoft went through great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general case multi-threaded use. Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to acheive. This article should not be taken to understand that these collections are always supperior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application whether it be single-threaded or multi-threaded. 
General points to consider with the concurrent collections The MSDN points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null.  Thus you should not attempt to use these properties for synchronization purposes. Note that since the concurrent collections also may have different operations than the traditional data structures you may be used to.  Now you may ask why they did this, but it was done out of necessity to keep operations safe and atomic.  For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack’s IsEmpty property and then do the Pop() another thread may have come in and made the stack empty!  This is why some of the traditional operations have been changed to make them safe for concurrent use. In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models.  I’ll try to point these out as we talk about each collection so you can be aware of any potential performance impacts.  Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration).  Once again I’ll highlight how thread-safe enumeration works for each collection. ConcurrentStack<T>: The thread-safe LIFO container The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container.  If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit. The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations.  This means that the multi-threaded access to the stack requires no traditional locking and is very, very fast! For the most part, the ConcurrentStack<T> behaves like it’s Stack<T> counterpart with a few differences: Pop() was removed in favor of TryPop() Returns true if an item existed and was popped and false if empty. PushRange() and TryPopRange() were added Allows you to push multiple items and pop multiple items atomically. Count takes a snapshot of the stack and then counts the items. This means it is a O(n) operation, if you just want to check for an empty stack, call IsEmpty instead which is O(1). ToArray() and GetEnumerator() both also take snapshots. This means that iteration over a stack will give you a static view at the time of the call and will not reflect updates. Pushing on a ConcurrentStack<T> works just like you’d expect except for the aforementioned PushRange() method that was added to allow you to push a range of items concurrently. 1: var stack = new ConcurrentStack<string>(); 2:  3: // adding to stack is much the same as before 4: stack.Push("First"); 5:  6: // but you can also push multiple items in one atomic operation (no interleaves) 7: stack.PushRange(new [] { "Second", "Third", "Fourth" }); For looking at the top item of the stack (without removing it) the Peek() method has been removed in favor of a TryPeek().  
This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed.  Thus the TryPeek() was created to be an atomic check for empty, and then peek if not empty: 1: // to look at top item of stack without removing it, can use TryPeek. 2: // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does. 3: string item; 4: if (stack.TryPeek(out item)) 5: { 6: Console.WriteLine("Top item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Stack was empty."); 11: } Finally, to remove items from the stack, we have the TryPop() for single, and TryPopRange() for multiple items.  Just like the TryPeek(), these operations replace Pop() since we need to ensure atomically that the stack is non-empty before we pop from it: 1: // to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves) 2: if (stack.TryPop(out item)) 3: { 4: Console.WriteLine("Popped " + item); 5: } 6:  7: // TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned. 8: var poppedItems = new string[2]; 9: int numPopped = stack.TryPopRange(poppedItems); 10:  11: foreach (var theItem in poppedItems.Take(numPopped)) 12: { 13: Console.WriteLine("Popped " + theItem); 14: } Finally, note that as stated before, GetEnumerator() and ToArray() gets a snapshot of the data at the time of the call.  That means if you are enumerating the stack you will get a snapshot of the stack at the time of the call.  This is illustrated below: 1: var stack = new ConcurrentStack<string>(); 2:  3: // adding to stack is much the same as before 4: stack.Push("First"); 5:  6: var results = stack.GetEnumerator(); 7:  8: // but you can also push multiple items in one atomic operation (no interleaves) 9: stack.PushRange(new [] { "Second", "Third", "Fourth" }); 10:  11: while(results.MoveNext()) 12: { 13: Console.WriteLine("Stack only has: " + results.Current); 14: } The only item that will be printed out in the above code is "First" because the snapshot was taken before the other items were added. This may sound like an issue, but it’s really for safety and is more correct.  You don’t want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all.  In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it in the old collections would cause an exception. ConcurrentQueue<T>: The thread-safe FIFO container The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class.  The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays.  Once again, this allows us to do thread-safe operations without the need for heavy locks! The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from the non-concurrent counterpart.  Most notably: Dequeue() was removed in favor of TryDequeue(). Returns true if an item existed and was dequeued and false if empty. Count does not take a snapshot It subtracts the head and tail index to get the count.  This results overall in a O(1) complexity which is quite good.  It’s still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero. ToArray() and GetEnumerator() both take snapshots. 
This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates. The Enqueue() method on the ConcurrentQueue<T> works much the same as the generic Queue<T>: 1: var queue = new ConcurrentQueue<string>(); 2:  3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5: queue.Enqueue("Second"); 6: queue.Enqueue("Third"); For front item access, the TryPeek() method must be used to attempt to see the first item if the queue.  There is no Peek() method since, as you’ll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty. 1: // to look at first item in queue without removing it, can use TryPeek. 2: // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does. 3: string item; 4: if (queue.TryPeek(out item)) 5: { 6: Console.WriteLine("First item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Queue was empty."); 11: } Then, to remove items you use TryDequeue().  Once again this is for the same reason we have TryPeek() and not Peek(): 1: // to remove items, use TryDequeue. If queue is empty returns false. 2: if (queue.TryDequeue(out item)) 3: { 4: Console.WriteLine("Dequeued first item " + item); 5: } Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator() which means that subsequent updates to the queue will not be seen when you iterate over the results.  Thus once again the code below will only show the first item, since the other items were added after the snapshot. 1: var queue = new ConcurrentQueue<string>(); 2:  3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5:  6: var iterator = queue.GetEnumerator(); 7:  8: queue.Enqueue("Second"); 9: queue.Enqueue("Third"); 10:  11: // only shows First 12: while (iterator.MoveNext()) 13: { 14: Console.WriteLine("Dequeued item " + iterator.Current); 15: } Using collections concurrently You’ll notice in the examples above I stuck to using single-threaded examples so as to make them deterministic and the results obvious.  Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe and with no locking on your part required! For example, say you have an order processor that takes an IEnumerable<Order> and handles each other in a multi-threaded fashion, then groups the responses together in a concurrent collection for aggregation.  This can be done easily with the TPL’s Parallel.ForEach(): 1: public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList) 2: { 3: var proxy = new OrderProxy(); 4: var results = new ConcurrentQueue<OrderResult>(); 5:  6: // notice that we can process all these in parallel and put the results 7: // into our concurrent collection without needing any external locking! 8: Parallel.ForEach(orderList, 9: order => 10: { 11: var result = proxy.PlaceOrder(order); 12:  13: results.Enqueue(result); 14: }); 15:  16: return results; 17: } Summary Obviously, if you do not need multi-threaded safety, you don’t need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers and the amazing way they acheive thread-safe access in an efficient manner is wonderful to behold. 
Stay tuned next week where we’ll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here. Technorati Tags: C#,.NET,Concurrent Collections,Collections,Multi-Threading,Little Wonders,BlackRabbitCoder,James Michael Hare

    Read the article

  • TFS Auto Shelve - New Visual Studio 2010 / TFS 2010 Extension

    - by MikeParks
    We've been working with the Visual Studio 2010 SDK and the TFS 2010 SDK a lot recently to create new Visual Studio Extensions. You can find these extensions in the Visual Studio Gallery. If you're a developer/programmer, you should check it out, they have some pretty cool tools out there. I'd be surprised if you told me you went there and couldn't find any tools that could help you. One of the new extensions Cory and I made is called TFS Auto Shelve. Check out the description and read about it below. If you're interested and you have VS 2010 w/TFS 2010, feel free to try it out and let us know what you think. You can download it here: http://visualstudiogallery.msdn.microsoft.com/en-us/080540cb-e35f-4651-b71c-86c73e4a633d

    Here's a description and screenshots of what it does: Automatically shelves the latest version of all pending changes from local TFS workspaces to the TFS Server every "x" number of minutes when solutions are opened.

    · Purpose
      o Created for Team Foundation Server 2010 and Visual Studio 2010
      o This tool is mainly aimed at the Programmer/Developer audience so they can always have the latest copy of their pending changes backed up to the TFS Server while coding
    · Functionality
      o Menu options become active and automatic shelving begins when a solution that is mapped to a TFS Workspace is opened in Visual Studio
      o In Tools > TFS Auto Shelve (Running/NotRunning): Automatic shelving can be turned on/off
      o In Tools > TFS Auto Shelve Now: Shelving all code can be manually triggered
      o Each TFS workspace has its own shelveset which is re-used to save the latest version of pending changes
      o Shelvesets are named as Base Name + Workspace Name
      o Shelveset comment contains item count
      o If there are no pending changes, no shelvesets will be created/updated
      o If a solution is opened that is not mapped to a TFS Workspace, menu options are disabled since shelving only works for mapped workspaces.
    · Configuration
      o In Tools > Options > TFS Auto Shelve Options: Base Name is configurable
      o In Tools > Options > TFS Auto Shelve Options: "x" number of minutes is configurable
    · Logging
      o Custom Visual Studio Activity Logging is implemented. If you run into any errors, please start Visual Studio with the /log switch, re-create the error, then close Visual Studio. You can browse to "%AppData%\Microsoft\VisualStudio\10.0\ActivityLog.XML" to view the log. Please feel free to inform us of any errors you see and we can work it out via email.
    · Other Helpful Information
      o To view shelvesets, open Source Control Explorer, click on File > Source Control > Unshelve Pending Changes
      o Workspaces can be modified by opening the Source Control Explorer > clicking the Workspaces drop down > Click Workspaces… > Click Add / Edit / Remove

    Thanks! - Mike

    Read the article

  • Backup options in SharePoint 2007

    - by sreejukg
    It is very important to make sure the server farm backup is taking properly, making sure that in case of any disaster, the administrator has the latest backup that can be used to restore. This articles addresses some of the options available for backup/restore in SharePoint 2007 Backup There are two options that can be utilized to take backup of SharePoint sites. Using SharePoint Central Administration website Using SharePoint central administration website, you can do backup/restore from user interface. Using central administration website you can back up the following · Server farm · Web application · Content databases Follow these steps to take backup of the server farm using central administration 1. Open Central administration website 2. Navigate to Operations -> Backup and Restore -> Perform a backup 3. Here you will have options to choose the item to back up. Select Farm (the top most item in the list) 4. Once you select the items to backup, click on “Continue to backup options” 5. Select “Full” as type of backup. 6. In the backup file location, enter the path where you need to store the backup. The path should be according to the UNC, for e.g. for c drive you may use \\server\c$\mybackupFolder 7. Click ok 8. Now you will be redirected to Backup and Restore Status page. This page shows the progress for the backup operation. You can use the refresh button to update the status of backup(this page will automatically refresh in every 30 seconds). Once completed you can find the files in the specified folder. Using STSADM website SharePoint comes with a STSADM command line tool. STSADM provides lot of administrative operations that can be performed on SharePoint 2007 sites. You can find STSADM command from the following location C:\Program Files\Common Files\Microsoft shared\web server extensions\12\bin (You may change the drive letter according to your installation) STSADM provides a method for performing the Office SharePoint Server 2007 administration tasks at the command line or by using batch files or scripts. STSADM provides access to operations not available by using the Central Administration site The general syntax for STSADM is as follows STSADM -operation Operation Name –parameter1 value1 –parameter2 value2 ……….. Using STSADM you can back up the following · Server farm · Web application · Content databases To perform any STSADM, operation you need to be a member of administrators group. Follow these steps to take backup of SharePoint server farm using STSADM tool. Note: make sure you are logged in to the computer where central administration website is installed. 1. Open the Command prompt (You should run command prompt with administrator privileges) 2. Change the working directory to C:\Program Files\Common Files\Microsoft shared\web server extensions\12\bin 3. Enter the command, then press enter Stsadm –o backup -directory <UNC path> -backupmethod full 4. You will get success / failure message once the command finishes. How to schedule the backup There is no option to schedule a backup using central administration site. Also there is no operation provided by STSADM to automate the backup. The farm administrators need to take backup in regular intervals. To achieve this, you can write a batch file that includes STSADM command to take full backup of the server. This batch file can be scheduled using windows task scheduler to execute in certain intervals. Sample of the batch file 1. Open notepad(or any other text editor) 2. 
Enter the following commands:

      @echo off
      echo ===============================================================
      echo Back up the farm to <C:\backup>
      echo ===============================================================
      cd %COMMONPROGRAMFILES%\Microsoft Shared\web server extensions\12\BIN
      @echo off
      stsadm.exe -o backup -directory "<\backup>" -backupmethod full
      echo completed

3. Save the file with a .bat extension. You can schedule this batch file as you require. Other Options: Using the STSADM tool, you will be able to take a backup of an individual site collection. The syntax for this is stsadm -o backup -url <URL name for site collection> -filename <file name> [-overwrite] The explanations for the parameters are as follows. -url The url of the site collection you need to back up -filename The name of the backup file. E.g. c:\backup.bak -overwrite optional. Indicates, if the filename specified exists, whether to overwrite or not. If you are creating the batch file for scheduling the backup for a site collection, you may need to have the backup filename generated automatically. One option is to generate the filename with the date so that you can keep a backup for each day. E.g. the following commands can be utilized to create a site collection backup:

      @echo off
      echo ===============================================================
      echo Back up the farm to <C:\backup>
      echo ===============================================================
      echo ===============================================================
      echo getting todays date to a variable
      echo ===============================================================
      @For /F "tokens=1,2,3 delims=/ " %%A in ('Date /t') do @(
      Set Day=%%A
      Set Month=%%B
      Set Year=%%C
      Set todayDate=%%C%%B%%A
      )
      cd %COMMONPROGRAMFILES%\Microsoft Shared\web server extensions\12\BIN
      @echo off
      stsadm -o backup -url <sitecollection url> -filename \\ServerName\ShareName\Backup_%todayDate%.bak -overwrite
      echo completed

To read more about the backup STSADM operation, read this: http://technet.microsoft.com/en-us/library/cc263441.aspx

    Read the article

  • SQL SERVER – CXPACKET – Parallelism – Usual Solution – Wait Type – Day 6 of 28

    - by pinaldave
    CXPACKET has to be most popular one of all wait stats. I have commonly seen this wait stat as one of the top 5 wait stats in most of the systems with more than one CPU. Books On-Line: Occurs when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if contention on this wait type becomes a problem. CXPACKET Explanation: When a parallel operation is created for SQL Query, there are multiple threads for a single query. Each query deals with a different set of the data (or rows). Due to some reasons, one or more of the threads lag behind, creating the CXPACKET Wait Stat. There is an organizer/coordinator thread (thread 0), which takes waits for all the threads to complete and gathers result together to present on the client’s side. The organizer thread has to wait for the all the threads to finish before it can move ahead. The Wait by this organizer thread for slow threads to complete is called CXPACKET wait. Note that not all the CXPACKET wait types are bad. You might experience a case when it totally makes sense. There might also be cases when this is unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query. Reducing CXPACKET wait: We cannot discuss about reducing the CXPACKET wait without talking about the server workload type. OLTP: On Pure OLTP system, where the transactions are smaller and queries are not long but very quick usually, set the “Maximum Degree of Parallelism” to 1 (one). This way it makes sure that the query never goes for parallelism and does not incur more engine overhead. EXEC sys.sp_configure N'cost threshold for parallelism', N'1' GO RECONFIGURE WITH OVERRIDE GO Data-warehousing / Reporting server: As queries will be running for long time, it is advised to set the “Maximum Degree of Parallelism” to 0 (zero). This way most of the queries will utilize the parallel processor, and long running queries get a boost in their performance due to multiple processors. EXEC sys.sp_configure N'cost threshold for parallelism', N'0' GO RECONFIGURE WITH OVERRIDE GO Mixed System (OLTP & OLAP): Here is the challenge. The right balance has to be found. I have taken a very simple approach. I set the “Maximum Degree of Parallelism” to 2, which means the query still uses parallelism but only on 2 CPUs. However, I keep the “Cost Threshold for Parallelism” very high. This way, not all the queries will qualify for parallelism but only the query with higher cost will go for parallelism. I have found this to work best for a system that has OLTP queries and also where the reporting server is set up. Here, I am setting ‘Cost Threshold for Parallelism’ to 25 values (which is just for illustration); you can choose any value, and you can find it out by experimenting with the system only. In the following script, I am setting the ‘Max Degree of Parallelism’ to 2, which indicates that the query that will have a higher cost (here, more than 25) will qualify for parallel query to run on 2 CPUs. This implies that regardless of the number of CPUs, the query will select any two CPUs to execute itself. EXEC sys.sp_configure N'cost threshold for parallelism', N'25' GO EXEC sys.sp_configure N'max degree of parallelism', N'2' GO RECONFIGURE WITH OVERRIDE GO Read all the post in the Wait Types and Queue series. Additionally a must read comment of Jonathan Kehayias. 
Note: The information presented here is from my experience and I no way claim it to be accurate. I suggest you all to read the online book for further clarification. All the discussion of Wait Stats over here is generic and it varies from system to system. It is recommended that you test this on the development server before implementing on the production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • Ubuntu for Android on the ASUS Transformer Prime

    - by sola
    I would like to use Ubuntu on my Transformer Prime in parallel with Android (not as a dual booting solution, I want to be able to switch between them instantaneously). I am aware of the traditional chrooting/VNC solution but I heard that it performs very poorly, so I would like to use Ubuntu for Android (UFA), which was recently announced by Canonical. That looks like a polished, highly integrated solution for Android devices. The Prime would be the ideal device for Ubuntu for Android since it has a powerful processor (Tegra3) capable of running a lot of processes in parallel on its 4 cores. Does anyone know if Canonical or anybody else is working on supporting UFA on the ASUS Transformer Prime? As far as I understand, the X11 driver is available for Tegra3, so the biggest hurdle may be easily overcome.

    Read the article
