Search Results

Search found 3000 results on 120 pages for 'computing capacity'.

Page 96/120

  • Why are mainframes still around?

    - by ThaDon
    It's a question you've probably asked or been asked several times: what's so great about mainframes? The answer you've probably been given is "they are fast", "normal computers can't process as many 'transactions' per second as they do". Jeez, it's not like Google is running a bunch of mainframes, and look how many transactions/sec they do! The real question here is "why?". When I ask the mainframe devs I know this question, they can't answer; they simply restate "it's fast". With the advent of cloud computing, I can't imagine mainframes being able to compete either cost-wise or mindshare-wise (aren't all the Cobol devs going to retire at some point, or will offshore just pick up the slack?). And yet I know a few companies that still pump out net-new Cobol/mainframe apps, even for things we could do easily in, say, .NET or Java. Does anyone have a really good answer as to why "the mainframe is faster", or can anyone point me to some good articles on the topic?

    Read the article

  • What to use to wait on an indeterminate number of tasks?

    - by Scott Chamberlain
    I am still fairly new to parallel computing, so I am not too sure which tool to use for the job. I have a System.Threading.Tasks.Task that needs to wait for n tasks to finish before starting. The tricky part is that some of its dependencies may start after this task starts (you are guaranteed to never hit 0 dependent tasks until they are all done). Here is roughly what is happening:
    1. Parent thread creates somewhere between 1 and (NUMBER_OF_CPU_CORES - 1) tasks.
    2. Parent thread creates a task to be run when all of the worker tasks are finished.
    3. Parent thread creates a monitoring thread.
    4. Monitoring thread may kill a worker task or spawn a new task depending on load.
    I can figure out everything up to step 4. How do I get the task from step 2 to wait to run until any new worker tasks created in step 4 finish?
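    One possible approach (a minimal sketch, not the only tool for the job): System.Threading.CountdownEvent (.NET 4) allows the count to be increased after waiting has begun, which matches the "dependencies may appear later" requirement. The names SpawnWorker, NoMoreWorkers and RunWhenAllDone below are hypothetical illustrations, not APIs from the question:

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class WorkerCoordinator
        {
            // start at 1 so the count cannot reach zero while the monitor may
            // still spawn new workers (a guard; arguably redundant given the
            // question's "never hit 0 until all are done" guarantee)
            private readonly CountdownEvent _pending = new CountdownEvent(1);

            // the monitor calls this for every worker it creates (steps 1 and 4)
            public Task SpawnWorker(Action work)
            {
                _pending.AddCount();                // register before the task starts
                return Task.Factory.StartNew(() =>
                {
                    try { work(); }
                    finally { _pending.Signal(); }  // deregister even on failure
                });
            }

            // the monitor calls this once no further workers will ever be spawned
            public void NoMoreWorkers()
            {
                _pending.Signal();                  // releases the initial count of 1
            }

            // the step-2 task: blocks until every registered worker has finished
            public Task RunWhenAllDone(Action continuation)
            {
                return Task.Factory.StartNew(() =>
                {
                    _pending.Wait();
                    continuation();
                });
            }
        }

    The key design point is that AddCount() is called before the worker task starts, so the count can never transiently drop to zero while work remains outstanding.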

    Read the article

  • How to interpret situations where Math.Acos() reports invalid input?

    - by Sean Ochoa
    Hey all. I'm computing the angle between two vectors, and sometimes Math.Acos() returns NaN when its input is out of bounds for a cosine (input < -1 or input > 1). What does that mean, exactly? Would someone be able to explain what's happening? Any help is appreciated! Here's my method:

        public double AngleBetween(vector b)
        {
            var dotProd = this.Dot(b);
            var lenProd = this.Len * b.Len;
            var divOperation = dotProd / lenProd;
            // http://msdn.microsoft.com/en-us/library/system.math.acos.aspx
            return Math.Acos(divOperation) * (180.0 / Math.PI);
        }
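    For what it's worth, the usual culprit here is floating-point rounding: for nearly parallel vectors, dotProd/lenProd can land just outside [-1, 1], and Math.Acos returns NaN for any argument outside that range. A minimal sketch of the common fix, clamping the ratio before the call (reusing the question's vector type and its Dot/Len members):

        public double AngleBetween(vector b)
        {
            var divOperation = this.Dot(b) / (this.Len * b.Len);
            // rounding error can push the ratio slightly past +/-1; clamp it back
            if (divOperation > 1.0) divOperation = 1.0;
            else if (divOperation < -1.0) divOperation = -1.0;
            return Math.Acos(divOperation) * (180.0 / Math.PI);
        }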

    Read the article

  • Query about the service or technology behind the Gmail service

    - by user1726908
    I am a final-year computer science student, studying in Hyderabad, Andhra Pradesh, India. I have come to know that Gmail is a cloud service, and I am very interested in learning more about cloud computing. This technology has been puzzling me, tickling and increasing my curiosity, and I just want to learn as much as I can about it. Through experience, I have learnt that practical work improves our knowledge and our thirst to learn more. Thus, I would like to know: what security measures have been taken to keep a cloud service like Gmail secure and authentic? What is the architecture of the service? What technologies were used in building it? And what different levels of security are applied, in general, when building a private cloud?

    Read the article

  • Allied Telesis router: IP filtering for the LOCAL interface

    - by syneticon-dj
    Given an Allied Telesis router running AlliedWare OS (2.9.1), I would like to disable access to all management services of the router except from a number of subnets (or, alternatively, have the equivalent of the "management VLAN" found on other manufacturers' switch and router models).

    What I have tried so far:
    - Creating a new VLAN and an appropriate IP interface, setting the LOCAL IP into this subnet, creating an IP filter for the IP interface and specifying my exclusion subnets: it simply does not work as intended, as I can still access the LOCAL IP from any of the other VLAN interfaces. The traffic is apparently not going through my defined filter set at all.
    - Creating a new IP filter set and binding it to the LOCAL IP interface: this seems not to affect any kind of traffic at all; the counters for the filter set remain at zero packets.
    - Setting the Remote Security Officer Level IP address range: this only restricts the ability of a user with the Security Officer privilege level to log in from any but the specified address ranges/subnets. Unfortunately, it does not prevent service availability (and thus DoS exposure) or the ability to log in as a less privileged user (e.g. a "manager").
    - Calling technical support: unfortunately no solution so far.

    What I have not tried: creating a filter set for each and every IP interface defined on the router and excluding access to the router's management IP. I would like to avoid the overhead induced by IP filters, as the router already is CPU-constrained at times. Setting up filters for every IP interface would mean that each and every packet would have to pass the filters, consuming CPU cycles. If by any means possible, I would like to find a different solution.

    Read the article

  • Mathematica & J/Link: Memory Constraints?

    - by D-Bug
    I am doing a compute-intensive benchmark using Mathematica and its J/Link Java interface. The benchmark grinds to a halt once a memory footprint of about 320 MB is reached; this seems to be the limit, and the garbage collector needs more and more time and eventually fails. The Mathematica function ReinstallJava takes a CommandLine argument, and I tried ReinstallJava[CommandLine -> "java -Xmx2000m ..."], but Mathematica seems to ignore the -Xmx option completely. How can I set the -Xmx memory option for my Java program? And where does the limit of 320 MB come from? Any help would be greatly appreciated.

    Read the article

  • Why are Asynchronous processes not called Synchronous?

    - by Balk
    So I'm a little confused by this terminology. Everyone refers to "asynchronous" computing as running different processes on separate threads, which gives the illusion that these processes are running at the same time. But that is not the definition of the word asynchronous:

        a·syn·chro·nous (adjective)
        1. not occurring at the same time.
        2. (of a computer or other electrical machine) having each operation started only after the preceding operation is completed.

    What am I not understanding here?
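    One way to reconcile the two, for what it's worth: definition 1 fits the programming usage if "at the same time" refers to the caller and the operation. An asynchronous call returns before the work completes, so the work does not occur at the same time as the code that requested it. A tiny sketch in C# illustrating the difference:

        using System;
        using System.Threading.Tasks;

        class AsyncVsSync
        {
            static void Main()
            {
                // synchronous: the next line does not run until the work completes
                DoWork();

                // asynchronous: the call returns immediately; the work completes
                // at some later, unsynchronized point in time
                Task pending = Task.Factory.StartNew(DoWork);
                Console.WriteLine("caller continues without waiting");
                pending.Wait();   // explicitly re-synchronize at the end
            }

            static void DoWork()
            {
                Console.WriteLine("work done");
            }
        }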

    Read the article

  • Optimize Apache performance

    - by Phliplip
    I'm looking for ways to optimize our current web server, hosted in-house. I'm trying to supply as much relevant information as possible below; please let me know if you would require additional information in order to assist.

    The server runs a single website, an online pizza-ordering platform built on Zend Framework (version 1). Traffic stats from the last month show approximately 6,000 page loads per day, concentrated mainly around dinnertime, with peaks of around 1,500 loads/hour in that period. We recently upgraded from a 2/2 Mbit aDSL line to 100/100 Mbit fiber, and we still have performance issues at dinner time; we had assumed the 2 Mbit line was the issue. The website is pretty snappy in low-load periods.

    Hardware
        CPU: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz (3000.13-MHz K8-class CPU)
        Mem: 328M Active, 4427M Inact, 891M Wired, 244M Cache, 623M Buf, 33M Free
        Swap: 16G Total, 468K Used, 16G Free (6GB physical, 16GB swap)

        Filesystem    Type   Size   Used  Avail  Capacity  Mounted on
        /dev/ad7s1a   ufs    4.8G   768M  3.7G   17%       /
        devfs         devfs  1.0K   1.0K  0B     100%      /dev
        /dev/ad7s1g   ufs    176G   5.2G  157G   3%        /home
        /dev/ad7s1e   ufs    4.8G   2.8M  4.5G   0%        /tmp
        /dev/ad7s1f   ufs    19G    3.5G  14G    19%       /usr
        /dev/ad7s1d   ufs    4.8G   550M  3.9G   12%       /var

    Server OS
        FreeBSD 8.2-RELEASE

    Software
        apache-2.2.17, php5-5.3.8, mysql-server-5.5

    Apache footprint (example, taken from top)
        31140 www  1  45  0  377M  41588K  lockf  2  0:00  0.00% httpd
        31122 www  1  44  0  375M  35416K  lockf  2  0:00  0.00% httpd
        31109 www  1  44  0  375M  38188K  lockf  2  0:00  0.00% httpd
        31113 www  1  44  0  375M  35188K  lockf  2  0:00  0.00% httpd

    Apache uses the prefork MPM with APC (Alternative PHP Cache). The SSL module is loaded but not utilized (it doesn't actually work, so it isn't used). There is a file containing settings for the MPM modules, but as far as I can see it's not included in httpd.conf; the include line is commented out. Thus I would guess that the prefork MPM is running on default values too. Here are some other Apache values I found that are included in httpd.conf:

        Timeout 300
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 5
        UseCanonicalName Off
        HostnameLookups Off

    Read the article

  • Random HTTP 413 error on an Apache2/PHP/Joomla site

    - by jfab
    I have a Joomla site, and every once in a while when I submit something via a form, I get an HTTP 413 error: Request Entity Too Large ("The requested resource /index.php does not allow request data with POST requests, or the amount of data provided in the request exceeds the capacity limit."). In the error.log file I get: "Invalid Content-Length, referer: [site]/index.php". It doesn't seem this has anything to do with the actual size of the request, for the following reasons:
    a) I tinkered with the configuration of both Apache and PHP. In Apache I tried increasing LimitRequestBody, and in PHP post_max_size, max_input_vars, memory_limit, and even upload_max_filesize. Every value is far beyond what is sent in a typical request that generates the error.
    b) The error pops up quite randomly, and often just hitting refresh lets me get through.
    c) I checked the request in Fiddler to make sure everything is right with the Content-Length stated in the header, and with the content of the request itself. Everything appears to be in order. A curious thing is that when I resent the exact same request via Fiddler, I never got the error. It seems I can only recreate it through a browser.
    So I'm at my wit's end here. I don't even know where to look for the problem anymore. I don't know if it's Apache or PHP (though I can't find anything in the PHP error logs, so maybe that means Apache is the more likely culprit?), or PHP in general, or my Joomla site in particular (my bets were on Joomla until I recreated the error with a test script containing a very basic POST form, though it does pop up much more often on the Joomla site). If anyone can give any advice on where to even begin with this, I'll be very grateful!

    Read the article

  • Partially parse C++ for a domain-specific language

    - by PierreBdR
    I would like to create a domain-specific language as an augmented C++ language. I will need mostly two types of constructs:
    1. Top-level constructs for specialized types or declarations.
    2. In-code constructs, i.e. added primitives to make function calls or idioms easier.
    The language will be used for scientific computing purposes, and will ultimately be translated into plain C++. C++ has been chosen as it seems to offer a good compromise between ease of use, efficiency and the availability of a wide range of libraries. A previous attempt using flex and bison failed due to the complexity of the C++ syntax; the existing parser can still fail on some constructs. So we want to start over, on a better foundation. Do you know of similar projects? If you have attempted to do something like this, what tools would you use? What would be the main pitfalls? Would you have recommendations in terms of syntax?

    Read the article

  • Can I host multiple sites with one Amazon EC2 instance [duplicate]

    - by user22
    This question already has an answer here: Can you help me with my capacity planning? (2 answers)
    I currently have a VPS server, I pay around $75 per month, and I get: 40 GB HD, 2 GB RAM, 100 GB bandwidth, and a 6-core CPU (which I don't use much). I have only one live website running, and traffic is at most 100 user visits per day. I mostly do testing and run some of my internal sites for playing with code, but I do need one server. I am thinking of moving to Amazon EC2 if the price difference is not too big, because then I can learn some more stuff. I am thinking of getting the 3-year Heavy Utilization Reserved Instance because my server will be running day and night. I tried their online calculator: a Medium Instance, Heavy Utilization, reserved for 3 years, comes to $31 per month (effective price) for EC2, and with EBS and S3 I think it's at most $40 for everything else. So I would be at no loss compared to what I am getting at present. Am I correct, or did I miss something? Also, in my current VPS I have Apache for the PHP sites and mod_wsgi for the Python sites. I am not sure if I will be able to do all that in Amazon EC2. Can I host both Python and PHP sites on an Amazon EC2 instance using name-based virtual hosts and Nginx?

    Read the article

  • How much RAM is used by a Python dict or list?

    - by Who8MyLunch
    My problem: I am writing a simple Python tool to help me visualize my data as a function of many parameters. Each change in parameters involves a non-trivial amount of time, so I would like to cache each step's resulting imagery and supporting data in a dictionary. But then I worry that this dictionary could grow too large over time. Most of my data is in the form of Numpy arrays.
    My question: How would one go about computing the total number of bytes used by a Python dictionary? The dictionary itself may contain lists and other dictionaries, each of which contains data stored in Numpy arrays. Ideas?

    Read the article

  • Can't connect to wi-fi hotspot in Ubuntu 11.10

    - by ht3t
    I'm new to Ubuntu, and I'm having a wireless network problem in Ubuntu 11.10. I made a hotspot using Connectify on a computer running Windows 7. I can access it from Windows 7 but not from Ubuntu 11.10; every time I try, I get a "disconnected" message. I'm using an MSI FX400 notebook with an Intel Centrino Wireless-N 1000 wireless card. The Ubuntu version is 11.10 with the KDE desktop.

        $ sudo lshw -c network
        [sudo] password for ht3t:
          *-network
               description: Wireless interface
               product: Centrino Wireless-N 1000
               vendor: Intel Corporation
               physical id: 0
               bus info: pci@0000:06:00.0
               logical name: wlan0
               version: 00
               serial: 00:26:c7:56:b8:f0
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
               configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-12-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
               resources: irq:44 memory:e7400000-e7401fff
          *-network
               description: Ethernet interface
               product: RTL8111/8168B PCI Express Gigabit Ethernet controller
               vendor: Realtek Semiconductor Co., Ltd.
               physical id: 0
               bus info: pci@0000:07:00.0
               logical name: eth0
               version: 06
               serial: 40:61:86:b6:b1:a2
               size: 100Mbit/s
               capacity: 1Gbit/s
               width: 64 bits
               clock: 33MHz
               capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
               configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw IP=192.168.21.107 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
               resources: irq:41 ioport:9000(size=256) memory:e6004000-e6004fff memory:e6000000-e6003fff

    I can't do anything without an internet connection. How can I fix this?

    Read the article

  • Page pool memory

    - by legiwei
    I'm currently using Windows XP SP3 32-bit, with a C2D E6320 and 2 GB of RAM. When I play StarCraft 2, I encounter an error saying my system is running low on paged pool memory, even though StarCraft's graphics settings suggested high settings for me. I don't think it has to do with my graphics card, but rather with my RAM. I searched for ways to rectify the problem; apparently it has something to do with my virtual memory. I then tried the suggested solution, which is to edit the registry and set the paged pool limit to 384 MB. However, having done so, I still could not achieve that. I've seen screenshots of Windows XP settings with 2 GB of RAM showing 384 MB of paged pool memory. My default setting puts it at 195 MB, and when I try to increase the pool limit, it only goes to a max of 229 MB. I tried increasing my RAM to 3 GB, but the pool limit still remains. I would like to know how to increase my paged pool memory. I've searched for a solution, but to no avail, other than the one mentioned above (which didn't completely solve my problem).

    Read the article

  • Multi-petabyte scale-out storage solution [closed]

    - by Alex Yuriev
    Let's say that I have a need for a single-namespace, multi-petabyte object store with a file-system-like wrapper. What is currently out there that supports the following:
    - A single namespace that can take 1B files.
    - Support for multiple entry points using NFS.
    - At least node-level replication (preferably node- and file-level replication).
    - Online software upgrades.
    - No "magic sauce" on the storage layer.
    The following has been evaluated:
    - Gluster & Lustre: just ick; a fundamental lack of understanding of why online upgrades are mandatory.
    - OneFS: we have it. It is smelling more and more like it hides a dead body under the hood.
    Other than MapR and ZFS, am I missing anything?
    P.S. Oh yes, I keep forgetting that the forums are for people to discuss whether a 2TB drive actually stores 2TB of info. My bad.
    Seriously though: how can "meets the following requirements" be considered a "debate"?
    P.P.S. I did not throw an idiotic insult; I pointed out that this is actually an interesting question compared to a conversation about the storage capacity of a 2TB hard drive. It is not a question of what works better; it asks whether I have missed any currently existing products that fit the criteria, where the criteria are clearly outlined. I got one answer below which included something that I have not looked at in a long time, and it looks quite a bit more grown up now than when I briefly looked at it before.

    Read the article

  • Available parallel technologies in .NET

    - by David
    I am new to the .NET platform. I did a search and found that there are several ways to do parallel computing in .NET:
    1. Parallel tasks in the Task Parallel Library, which is .NET 3.5.
    2. PLINQ (.NET 4.0).
    3. Asynchronous programming (.NET 2.0). Async is mainly used for I/O-heavy tasks, and F# has a concise syntax supporting this. I list this because Mono seems to have no TPL or PLINQ, so if I need to write cross-platform parallel programs, I can use async.
    4. .NET threads. No version limitation.
    Could you give some short comments on these, or add more methods to this list? Thanks.
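    As a quick orientation only (a sketch, not a benchmark or a recommendation), here is how three of the four listed options look side by side in C#; the classic asynchronous programming model (BeginXxx/EndXxx pairs) is omitted since it is mostly used for I/O:

        using System;
        using System.Linq;
        using System.Threading;
        using System.Threading.Tasks;

        class ParallelOptionsDemo
        {
            static void Main()
            {
                // 1. A TPL task: queued to the thread pool, result retrieved later
                Task<int> sum = Task.Factory.StartNew(() => Enumerable.Range(1, 100).Sum());

                // 2. PLINQ: declarative data parallelism over a query
                int[] squares = Enumerable.Range(0, 10)
                                          .AsParallel()
                                          .Select(x => x * x)
                                          .ToArray();

                // 4. A raw thread: lowest-level option, manual lifetime management
                var worker = new Thread(() => Console.WriteLine("hello from a thread"));
                worker.Start();
                worker.Join();

                Console.WriteLine("task sum = {0}, plinq count = {1}", sum.Result, squares.Length);
            }
        }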

    Read the article

  • Are there any Application Server Frameworks for languages/platforms other than JavaEE and .NET?

    - by Jonas
    I'm a CS student and have little experience of the enterprise software industry. When I read about enterprise software platforms, I mostly read about these two:
    - Java Enterprise Edition (JavaEE)
    - .NET and Windows Communication Foundation (WCF)
    By "enterprise software platforms" I mean frameworks and application servers with support for the same characteristics that JavaEE and WCF have:
    "[JavaEE] provide[s] functionality to deploy fault-tolerant, distributed, multi-tier Java software, based largely on modular components running on an application server."
    "WCF is designed in accordance with service-oriented architecture principles to support distributed computing where services are consumed by consumers. Clients can consume multiple services and services can be consumed by multiple clients. Services are loosely coupled to each other."
    Are there any alternatives to these two "enterprise software platforms"? Aren't other programming languages used to any greater extent for this problem area? I.e. why aren't there any popular application servers for C++/Qt?

    Read the article

  • What's the fastest way to get the size of a directory and its subdirs on Unix using Perl?

    - by ivicas
    I am using the Perl stat() function to get the size of a directory and its subdirectories. I have a list of about 20 parent directories which have a few thousand recursive subdirs, and every subdir has a few hundred records. The main computing part of the script looks like this (note: the original ignored the directory passed into the recursive call and globbed the current working directory; the version below uses the passed directory):

        sub getDirSize {
            my ($dir) = @_;
            my $dirSize = 0;
            my @dirContent = glob("$dir/*");
            foreach my $dirContent (@dirContent) {
                if (-f $dirContent) {
                    my $size = (stat($dirContent))[7];
                    $dirSize += $size;
                }
                elsif (-d $dirContent) {
                    $dirSize += getDirSize($dirContent);
                }
            }
            return $dirSize;
        }

    The script has been executing for more than one hour, and I want to make it faster. I tried the shell du command, but its output (converted to bytes) is not accurate, and it is also quite time-consuming. I am working on HP-UX 11i v1.

    Read the article

  • Delivering a C++ application to the final customer

    - by Nebrass
    I am working on a C++ Windows application in Visual Studio 2010. I want to deliver my application to my customers so they can use it easily, without the obligation of installing the Visual C++ runtime separately, and so it can be executed everywhere. How do I set up the installer so that the customer does not need to separately install any required Visual Studio runtime libraries? I really want a solution for this problem, because my customers are far removed from computing; they just love the "next, next, install, finish" routine. Thank you for your help.

    Read the article

  • How can I use Amazon's API in PHP to search for books?

    - by TerranRich
    I'm working on a Facebook app for book sharing, reviewing, and recommendations. I've scoured the web, searching Google with every phrase I could think of, but I could not find any tutorials on how to access the Amazon.com API for book information. I signed up for an AWS account, but even the tutorials on their website didn't help me one bit; they're all geared toward using cloud computing for file storage and processing, and that's not what I want. I just want to access their API to search for info on books. Kind of like how http://openlibrary.org/ does it, where a simple URL call gets information on a book (but their database isn't nearly as populated as Amazon's). Why is it so hard to find the information I need on Amazon's AWS site? If anybody could help, I would greatly appreciate it.

    Read the article

  • Easiest way to keep SSRS child elements in the same relative position when the parent is re-positioned?

    - by Mac
    I am trying to revise the layout of an SSRS report where I have several textboxes that are child elements of a rectangle. When I reposition the parent rectangle down by x, all of the child textboxes maintain the same absolute position; their "Location" (defined relative to the parent) decreases by x, and I then need to reposition the child textboxes. Additionally, if any of these ever has a negative "Location", the parent rectangle is repositioned back up by x. What is the easiest way to move everything in unison? I can Control-click everything and then drag the items or use the arrow keys, but I want to position everything with precision, and the "Location" field in the Properties window disappears when more than one item is selected. Is there a way I can avoid individually computing and typing in every "Location" value every time I have a small layout change? I am using SSRS (11.0.3360.12) within the Visual Studio 2012 Shell. Thanks!

    Read the article

  • How many VPS do I need for my website? [duplicate]

    - by michael
    This question already has an answer here: How do you do load testing and capacity planning for web sites? (3 answers)
    I made a website which aims to simulate a trading market. There is a list of prices and corresponding volumes that people want to purchase, and users can purchase at any price at any time. My website retrieves the prices and volumes from my database every 2 seconds (I have to update the user's browser frequently so they can see the current market), and a user's INSERT query can be sent at any time when they purchase. I use Ajax to post data to or get data from my database (sometimes nested Ajax calls). So, every 2 seconds, each user sends or retrieves data using more than 20 database queries (in order to show users the current prices and volumes). Also, I may have 200 users at a time. I was not using a VPS before, and I got banned for using too much CPU on my shared host. Now I've purchased a VPS*2 plan from a hosting provider. I have:
    CPU speed: 2000 MHz
    Memory: 2048 MB
    Disk space: 20000 MB
    Bandwidth: 2000 GB
    Connection: 40 Mb/s
    Dedicated IPs: 2
    Is this enough for my 200 users? Also, which VPS OS is suitable for me? Thank you.
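    (For scale, a rough back-of-the-envelope using only the numbers stated in the question; actual load depends heavily on query cost and caching:

        200 users x 20 queries / 2 seconds = 2,000 queries per second

    which is the sustained rate the single database would need to absorb.)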

    Read the article

  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, etc. But it also brings with it more complexity.

    Why? Well, by its very nature there are more "moving parts" to monitor and manage, from physical, virtual and logical hosts, to fault detection and failover software, to redundant networking components; the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great.

    When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to administer properly.

    Over the past 9 months, monitoring and management has been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering, as you will receive automatic notification when the on-demand replay is available.

    In the webinar, the team will cover:
    - NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of pro-actively monitoring and optimizing database performance and availability.
    - MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one!
    - MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best-practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes.

    You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. The session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and on-line demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software).

    While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!

    Read the article

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#. In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week's post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>. Then in the following post we'll discuss the ConcurrentDictionary<T> and ConcurrentBag<T>. Finally, we shall close on the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here.

    A brief history of collections

    In the beginning was the .NET 1.0 Framework. And out of this framework emerged the System.Collections namespace, and it was good. It contained all the basic things a growing programming language needs, like the ArrayList and Hashtable collections. The main problem, of course, with these original collections is that they held items of type object, which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting.

    Then came .NET 2.0 and generics, and our world changed forever! With generics the C# language finally got an equivalent of the very powerful C++ templates. As such, System.Collections.Generic was born and we got type-safe versions of all our favorite collections. The List<T> succeeded the ArrayList, the Dictionary<TKey,TValue> succeeded the Hashtable, and so on. The new versions of the library were not only safer because they checked types at compile time; in many cases they were more performant as well. So much so that it's Microsoft's recommendation that the original System.Collections collections only be used for backwards compatibility.

    So we as developers came to know and love the generic collections and took them into our hearts and embraced them. The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons.

    Now, if you are only doing single-threaded development you may not care; after all, no locking is required. Even if you do have multiple threads, if a collection is "load-once, read-many" you don't need to do anything to protect that container from multi-threaded access, as illustrated below:

        public static class OrderTypeTranslator
        {
            // because this dictionary is loaded once before it is ever accessed, we don't need to
            // synchronize multi-threaded read access
            private static readonly Dictionary<string, char> _translator = new Dictionary<string, char>
                {
                    {"New", 'N'},
                    {"Update", 'U'},
                    {"Cancel", 'X'}
                };

            // the only public interface into the dictionary is for reading, so inherently thread-safe
            public static char? Translate(string orderType)
            {
                char charValue;
                if (_translator.TryGetValue(orderType, out charValue))
                {
                    return charValue;
                }

                return null;
            }
        }

    Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner. Looking at today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds (we've nearly reached the limits we can push through with today's technologies) but more because we're adding more cores to the boxes. With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing to achieve higher application speeds.

    So let's look at how to use collections in a thread-safe manner.

    Using historical collections in a concurrent fashion

    The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe. This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations. Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object:

        public class OrderAggregator
        {
            private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>();
            private static readonly object _orderLock = new object();

            public void Add(string accountNumber, Order newOrder)
            {
                List<Order> ordersForAccount;

                // a complex operation like this should all be protected
                lock (_orderLock)
                {
                    if (!_orders.TryGetValue(accountNumber, out ordersForAccount))
                    {
                        _orders.Add(accountNumber, ordersForAccount = new List<Order>());
                    }

                    ordersForAccount.Add(newOrder);
                }
            }
        }

    Notice how we're performing several operations on the dictionary under one lock. With the Synchronized() static methods of the early collections, you wouldn't be able to specify this level of locking (a more macro level). So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead, so that they could provide synchronization as needed.

    The need for better concurrent access to collections

    Here's the problem: it's relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, much less to make perform efficiently! For example, what if you have a Dictionary that has frequent reads but infrequent updates? Do you want to lock down the entire Dictionary for every access? This would be overkill and would prevent concurrent reads. In such cases you could use something like a ReaderWriterLockSlim, which allows multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell). This is all very complex stuff to consider.

    Fortunately, this is where the Concurrent Collections come in. The Parallel Computing Platform team at Microsoft went through great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general-case multi-threaded use. Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to mean that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application, whether it be single-threaded or multi-threaded.

    General points to consider with the concurrent collections

    The MSDN points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null. Thus you should not attempt to use these properties for synchronization purposes.

    Note that the concurrent collections may also have different operations than the traditional data structures you are used to. Now you may ask why, but it was done out of necessity to keep operations safe and atomic. For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack's IsEmpty property and then do the Pop(), another thread may have come in and made the stack empty! This is why some of the traditional operations have been changed to make them safe for concurrent use.

    In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models. I'll try to point these out as we talk about each collection so you can be aware of any potential performance impacts. Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration). Once again I'll highlight how thread-safe enumeration works for each collection.

    ConcurrentStack<T>: The thread-safe LIFO container

    The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container. If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit.

    The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations. This means that multi-threaded access to the stack requires no traditional locking and is very, very fast! For the most part, the ConcurrentStack<T> behaves like its Stack<T> counterpart, with a few differences:
    - Pop() was removed in favor of TryPop(), which returns true if an item existed and was popped, and false if empty.
    - PushRange() and TryPopRange() were added, allowing you to push multiple items and pop multiple items atomically.
    - Count takes a snapshot of the stack and then counts the items. This means it is an O(n) operation; if you just want to check for an empty stack, call IsEmpty instead, which is O(1).
    - ToArray() and GetEnumerator() both also take snapshots. This means that iteration over a stack will give you a static view at the time of the call and will not reflect updates.

    Pushing on a ConcurrentStack<T> works just like you'd expect, except for the aforementioned PushRange() method that was added to allow you to push a range of items concurrently:

        var stack = new ConcurrentStack<string>();

        // adding to stack is much the same as before
        stack.Push("First");

        // but you can also push multiple items in one atomic operation (no interleaves)
        stack.PushRange(new [] { "Second", "Third", "Fourth" });

    For looking at the top item of the stack (without removing it), the Peek() method has been removed in favor of a TryPeek(). This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed. Thus the TryPeek() was created to be an atomic check for empty, and then peek if not empty:

        // to look at top item of stack without removing it, can use TryPeek.
        // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
        string item;
        if (stack.TryPeek(out item))
        {
            Console.WriteLine("Top item was " + item);
        }
        else
        {
            Console.WriteLine("Stack was empty.");
        }

    Finally, to remove items from the stack, we have TryPop() for single items and TryPopRange() for multiple items. Just like TryPeek(), these operations replace Pop() since we need to ensure atomically that the stack is non-empty before we pop from it:

        // to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves)
        if (stack.TryPop(out item))
        {
            Console.WriteLine("Popped " + item);
        }

        // TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned.
        var poppedItems = new string[2];
        int numPopped = stack.TryPopRange(poppedItems);

        foreach (var theItem in poppedItems.Take(numPopped))
        {
            Console.WriteLine("Popped " + theItem);
        }

    Finally, note that as stated before, GetEnumerator() and ToArray() get a snapshot of the data at the time of the call. That means if you are enumerating the stack you will get a snapshot of the stack at the time of the call. This is illustrated below:

        var stack = new ConcurrentStack<string>();

        // adding to stack is much the same as before
        stack.Push("First");

        var results = stack.GetEnumerator();

        // but you can also push multiple items in one atomic operation (no interleaves)
        stack.PushRange(new [] { "Second", "Third", "Fourth" });

        while (results.MoveNext())
        {
            Console.WriteLine("Stack only has: " + results.Current);
        }

    The only item that will be printed out in the above code is "First", because the snapshot was taken before the other items were added. This may sound like an issue, but it's really for safety and is more correct. You don't want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all. In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it in the old collections would cause an exception.

    ConcurrentQueue<T>: The thread-safe FIFO container

    The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class. The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays. Once again, this allows us to do thread-safe operations without the need for heavy locks!

    The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from the non-concurrent counterpart. Most notably:
    - Dequeue() was removed in favor of TryDequeue(), which returns true if an item existed and was dequeued, and false if empty.
    - Count does not take a snapshot; it subtracts the head and tail index to get the count. This results overall in O(1) complexity, which is quite good. It's still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero.
    - ToArray() and GetEnumerator() both take snapshots. This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates.

    The Enqueue() method on the ConcurrentQueue<T> works much the same as on the generic Queue<T>:

        var queue = new ConcurrentQueue<string>();

        // adding to queue is much the same as before
        queue.Enqueue("First");
        queue.Enqueue("Second");
        queue.Enqueue("Third");

    For front-item access, the TryPeek() method must be used to attempt to see the first item of the queue. There is no Peek() method since, as you'll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty:

        // to look at first item in queue without removing it, can use TryPeek.
        // Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
        string item;
        if (queue.TryPeek(out item))
        {
            Console.WriteLine("First item was " + item);
        }
        else
        {
            Console.WriteLine("Queue was empty.");
        }

    Then, to remove items you use TryDequeue(). Once again this is for the same reason we have TryPeek() and not Peek():

        // to remove items, use TryDequeue. If queue is empty returns false.
        if (queue.TryDequeue(out item))
        {
            Console.WriteLine("Dequeued first item " + item);
        }

    Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator(), which means that subsequent updates to the queue will not be seen when you iterate over the results. Thus once again the code below will only show the first item, since the other items were added after the snapshot:

        var queue = new ConcurrentQueue<string>();

        // adding to queue is much the same as before
        queue.Enqueue("First");

        var iterator = queue.GetEnumerator();

        queue.Enqueue("Second");
        queue.Enqueue("Third");

        // only shows First
        while (iterator.MoveNext())
        {
            Console.WriteLine("Dequeued item " + iterator.Current);
        }

    Using collections concurrently

    You'll notice in the examples above I stuck to using single-threaded examples so as to make them deterministic and the results obvious. Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe and with no locking on your part required!

    For example, say you have an order processor that takes an IEnumerable<Order>, handles each order in a multi-threaded fashion, and then groups the responses together in a concurrent collection for aggregation. This can be done easily with the TPL's Parallel.ForEach():

        public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList)
        {
            var proxy = new OrderProxy();
            var results = new ConcurrentQueue<OrderResult>();

            // notice that we can process all these in parallel and put the results
            // into our concurrent collection without needing any external locking!
            Parallel.ForEach(orderList,
                order =>
                {
                    var result = proxy.PlaceOrder(order);

                    results.Enqueue(result);
                });

            return results;
        }

    Summary

    Obviously, if you do not need multi-threaded safety, you don't need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers, and the amazing way they achieve thread-safe access in an efficient manner, is wonderful to behold.

    Stay tuned next week, where we'll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they compare to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

    Read the article

  • C#, Java, Objective-C: need expert advice

    - by Kevino
    Which platform has the edge today, in 2012, with the rise of cloud computing, mobile development and the HTML5/JavaScript revolution: Java EE, the .NET framework, or iOS Objective-C? I want to start learning one language among Java, C# and Objective-C and get back into programming after 14 years, and I don't know which to choose; I need expert advice. I already know a little C++ and I remember my concepts, for example pointer arithmetic, classes, etc., so I tend to prefer learning C# and Objective-C. But I've been told by some experienced programmers that Windows 8 could flop and .NET could slowly fade away, since C++ and HTML5/JavaScript could be king in mobile. Is that true? Also, that C# is more advanced compared to Java, with LINQ/lambdas, but not truly as portable if we consider Android, etc. Then again, Java has a lot going for it too: Scala, Clojure, Groovy, JRuby, Jython, etc. So I am lost. Please help me, and don't close this right away; I really need expert advice. Thank you very much.
    ANSWER: ElYusubov: thanks for everything, please continue with the answers/explanations. I just did some native C++ in DOS mode in 1998, before the CLI and .NET. I don't know the STL, templates, Win32 or COM, but I remember a little of the concepts of memory management, OOP, etc. I already played around a little with C# 1.0 in 2002, but things have changed a lot with LINQ and lambdas. I am here because I talked with some experienced programmers and authors of best-selling programming books from Apress, Wrox and Deitel, and they told me a few things are likely to happen: .NET could be on its way out because the HTML5/JavaScript combo could kill XAML, and native C++ apps will outperform them by a lot in mobile. Secondly, iOS and Android are getting so popular that mobile dev is the future, so Objective-C is very hard to ignore; and why get tied down in Windows long term (.NET) compared to Java (Android)? But then again, Android is very fragmented. They also said Windows 8 RT will give you access to only a small part of the .NET framework. So that's what they think, and I don't know which direction to choose. I wanted to learn C# and .NET, but what if it dies off or Windows 8 flops? Windows Phone market share really can't compare to iOS, so I'd be stuck. That's why I worry: is Java safer long term, or more versatile, because of its support for Android?

    Read the article
