Search Results

Search found 2684 results on 108 pages for 'michael prescott'.


  • Oracle Commerce Best Practices for the Communications Industry

    - by Michael Seback
      In today’s volatile economy, Communications Service Providers are challenged to offer a complete, cross-channel commerce experience. With Oracle Commerce solutions, CSPs can get closer to customers and gain valuable insight to maximize ROI across all commerce activities. Join us for a live webcast on September 26th with featured speakers Raghavendra Ademane, Omni-Channel Commerce Consultant at Professional Access, and Brenna Johnson, Product Manager, Oracle, and learn how you can manage and deliver commerce experiences for Communications that engage customers and promote loyalty. The panelists will guide you through a number of topics, including: current Communications market trends, opportunities, and challenges; an introduction to the Oracle Commerce solution with case studies; and a demonstration of the solution for Communications with live Q&A. Register today and learn how Oracle's latest innovations for Communications can help you increase online sales and enhance cross-channel commerce interactions.

    Read the article

  • What technologies are used for game development nowadays?

    - by Monika Michael
    Whenever I ask a question about game development in an online forum, I always get suggestions like learning line-drawing algorithms, bit-level image manipulation, video decompression, and so on. However, looking at games like God of War 3, I find it hard to believe that these games could be developed using such low-level techniques. The sheer awesomeness of such games defies any (for me) comprehensible programming methodology. Besides, the gaming hardware is a real monster nowadays. So it stands to reason that the developers would work at a higher level of abstraction. What is the latest development methodology in the gaming industry? How is a team of 30-35 developers (most of which is management and marketing fluff) able to make such mind-boggling games?

    Read the article

  • Zero bytes on home partition

    - by Michael Z
    I decided to replace the hard drive on my machine running Ubuntu 12.04 LTS. After using the new hard drive for a few days, I noticed that it has bad sectors, so I decided to plug my old hard drive back in. First, I plugged both hard drives in and copied some data files from the new hard drive to the old one. After unplugging the new hard drive, I booted the computer with the old hard drive, and here I got a surprise: I see 0 bytes available on my /home partition! The df utility shows that the /home partition has 0 available bytes. I have tried to move some files, but I still have 0 bytes available on /home. However, GParted correctly shows that the available size is nearly 2 GB. UPDATE 1: To my surprise, System Monitor shows me that approximately 2 GB are free and 0 bytes are available on the /home partition. This slightly shocked me! Are "free" and "available" not the same? Any help is really appreciated!

    Read the article

  • Failed install 12.04 on Intel Hardware Raid with Large Partition (> 2TB)

    - by Michael Wiles
    I have Intel hardware RAID on the motherboard. I have ten 2 TB HDDs that I've configured as RAID 1+0 to be one big 8 TB drive. Now I'm trying to install Ubuntu 12.04 on it. After installing with the default desktop installation disk, I get a blank screen with a flashing cursor. If I try the alternate guided partitioning option, I get "error: out of disk." and the GRUB prompt. If I boot with the rescue disk or suchlike, I can drop into a shell and view the disk. Everything also installs without an issue. I don't know what to do...

    Read the article

  • C#/.NET Little Wonders: Interlocked CompareExchange()

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Two posts ago, I discussed the Interlocked Add(), Increment(), and Decrement() methods (here) for adding and subtracting values in a thread-safe, lightweight manner.  Then, last post I talked about the Interlocked Read() and Exchange() methods (here) for safely and efficiently reading and setting 32 or 64 bit values (or references).  This week, we’ll round out the discussion by talking about the Interlocked CompareExchange() method and how it can be put to use to exchange a value if the current value is what you expected it to be. Dirty reads can lead to bad results Many of the uses of Interlocked that we’ve explored so far have centered around either reading, setting, or adding values.  But what happens if you want to do something more complex such as setting a value based on the previous value in some manner? Perhaps you were creating an application that reads a current balance, applies a deposit, and then saves the new modified balance, where of course you’d want that to happen atomically.  If you read the balance, then go to save the new balance and between that time the previous balance has already changed, you’ll have an issue!  Think about it, if we read the current balance as $400, and we are applying a new deposit of $50.75, but meanwhile someone else deposits $200 and sets the total to $600, but then we write a total of $450.75 we’ve lost $200! Now, certainly for int and long values we can use Interlocked.Add() to handle these cases, and it works well for that.  But what if we want to work with doubles, for example?  Let’s say we wanted to add the numbers from 0 to 99,999 in parallel.  We could do this by spawning several parallel tasks to continuously add to a total: 1: double total = 0; 2:  3: Parallel.For(0, 100000, next => 4: { 5: total += next; 6: }); Were this run on one thread using a standard for loop, we’d expect an answer of 4,999,950,000 (the sum of all numbers from 0 to 99,999).  But when we run this in parallel as written above, we’ll likely get something far off.  The result of one of my runs, for example, was 1,281,880,740.  That is way off!  If this were banking software we’d be in big trouble with our clients.  So what happened?  The += operator is not atomic, it will read in the current value, add the result, then store it back into the total.  At any point in all of this another thread could read a “dirty” current total and accidentally “skip” our add.  So, to clean this up, we could use a lock to guarantee concurrency: 1: double total = 0.0; 2: object locker = new object(); 3:  4: Parallel.For(0, count, next => 5: { 6: lock (locker) 7: { 8: total += next; 9: } 10: }); Which will give us the correct result of 4,999,950,000.  One thing to note is that locking can be heavy, especially if the operation being locked over is trivial, or the life of the lock is a high percentage of the work being performed concurrently.  In the case above, the lock consumes pretty much all of the time of each parallel task – and the task being locked on is relatively trivial. Now, let me put in a disclaimer here before we go further: For most uses, lock is more than sufficient for your needs, and is often the simplest solution!  So, if lock is sufficient for most needs, why would we ever consider another solution?
The problem with locking is that it can suspend execution of your thread while it waits for the signal that the lock is free.  Moreover, if the operation being locked over is trivial, the lock can add a very high level of overhead.  This is why things like Interlocked.Increment() perform so well, instead of locking just to perform an increment, we perform the increment with an atomic, lockless method. As with all things performance related, it’s important to profile before jumping to the conclusion that you should optimize everything in your path.  If your profiling shows that locking is causing a high level of waiting in your application, then it’s time to consider lighter alternatives such as Interlocked. CompareExchange() – Exchange existing value if equal some value So let’s look at how we could use CompareExchange() to solve our problem above.  The general syntax of CompareExchange() is: T CompareExchange<T>(ref T location, T newValue, T expectedValue) If the value in location == expectedValue, then newValue is exchanged.  Either way, the value in location (before exchange) is returned. Actually, CompareExchange() is not one method, but a family of overloaded methods that can take int, long, float, double, pointers, or references.  It cannot take other value types (that is, can’t CompareExchange() two DateTime instances directly).  Also keep in mind that the version that takes any reference type (the generic overload) only checks for reference equality, it does not call any overridden Equals(). So how does this help us?  Well, we can grab the current total, and exchange the new value if total hasn’t changed.  This would look like this: 1: // grab the snapshot 2: double current = total; 3:  4: // if the total hasn’t changed since I grabbed the snapshot, then 5: // set it to the new total 6: Interlocked.CompareExchange(ref total, current + next, current); So what the code above says is: if the amount in total (1st arg) is the same as the amount in current (3rd arg), then set total to current + next (2nd arg).  This check and exchange pair is atomic (and thus thread-safe). This works if total is the same as our snapshot in current, but the problem, is what happens if they aren’t the same?  Well, we know that in either case we will get the previous value of total (before the exchange), back as a result.  Thus, we can test this against our snapshot to see if it was the value we expected: 1: // if the value returned is != current, then our snapshot must be out of date 2: // which means we didn't (and shouldn't) apply current + next 3: if (Interlocked.CompareExchange(ref total, current + next, current) != current) 4: { 5: // ooops, total was not equal to our snapshot in current, what should we do??? 6: } So what do we do if we fail?  That’s up to you and the problem you are trying to solve.  It’s possible you would decide to abort the whole transaction, or perhaps do a lightweight spin and try again.  Let’s try that: 1: double current = total; 2:  3: // make first attempt... 4: if (Interlocked.CompareExchange(ref total, current + i, current) != current) 5: { 6: // if we fail, go into a spin wait, spin, and try again until succeed 7: var spinner = new SpinWait(); 8:  9: do 10: { 11: spinner.SpinOnce(); 12: current = total; 13: } 14: while (Interlocked.CompareExchange(ref total, current + i, current) != current); 15: } 16:  This is not trivial code, but it illustrates a possible use of CompareExchange().  
What we are doing is first checking to see if we succeed on the first try, and if so great!  If not, we create a SpinWait and then repeat the process of SpinOnce(), grab a fresh snapshot, and repeat until CompareExchange() succeeds.  You may wonder why we don’t just use a simple do-while here; the reason is that it’s more efficient to create the SpinWait only once we absolutely know we need one. Though not as simple (or maintainable) as a simple lock, this will perform better in many situations.  Comparing an unlocked (and wrong) version, a version using lock, and the Interlocked version of the code, we get the following average times for multiple iterations of adding the sum of 100,000 numbers: 1: Unlocked money average time: 2.1 ms 2: Locked money average time: 5.1 ms 3: Interlocked money average time: 3 ms So the Interlocked.CompareExchange(), while heavier to code, came in lighter than the lock, offering a good compromise of safety and performance when we need to reduce contention. CompareExchange() - it’s not just for adding stuff… So that was one simple use of CompareExchange() in the context of adding double values -- which meant we couldn’t have used the simpler Interlocked.Add() -- but it has other uses as well. If you think about it, this really works anytime you want to create something new based on a current value without using a full lock.  For example, you could use it to create a simple lazy instantiation implementation.  In this case, we want to set the lazy instance only if the previous value was null: 1: public static class Lazy<T> where T : class, new() 2: { 3: private static T _instance; 4:  5: public static T Instance 6: { 7: get 8: { 9: // if current is null, we need to create new instance 10: if (_instance == null) 11: { 12: // attempt create, it will only set if previous was null 13: Interlocked.CompareExchange(ref _instance, new T(), (T)null); 14: } 15:  16: return _instance; 17: } 18: } 19: } So, if _instance == null, this will create a new T() and attempt to exchange it with _instance.  If _instance is not null, then it does nothing and we discard the new T() we created. This is a way to create lazy instances of a type where we are more concerned about locking overhead than creating an accidental duplicate which is not used.  In fact, the BCL implementation of Lazy<T> offers a similar thread-safety choice for Publication thread safety, where it will not guarantee only one instance was created, but it will guarantee that all readers get the same instance.  Another possible use would be in concurrent collections.  Let’s say, for example, that you are creating your own brand new super stack that uses a linked list paradigm and is “lock free”.  We could use Interlocked.CompareExchange() to be able to do a lockless Push() which could be more efficient in multi-threaded applications where several threads are pushing and popping on the stack concurrently. Yes, there are already concurrent collections in the BCL (in .NET 4.0 as part of the TPL), but it’s a fun exercise!
So let’s assume we have a node like this: 1: public sealed class Node<T> 2: { 3: // the data for this node 4: public T Data { get; set; } 5:  6: // the link to the next instance 7: internal Node<T> Next { get; set; } 8: } Then, perhaps, our stack’s Push() operation might look something like: 1: public sealed class SuperStack<T> 2: { 3: private volatile Node<T> _head; 4:  5: public void Push(T value) 6: { 7: var newNode = new Node<T> { Data = value, Next = _head }; 8:  9: if (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next) 10: { 11: var spinner = new SpinWait(); 12:  13: do 14: { 15: spinner.SpinOnce(); 16: newNode.Next = _head; 17: } 18: while (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next); 19: } 20: } 21:  22: // ... 23: } Notice a similar paradigm here as with adding our doubles before.  What we are doing is creating the new Node with the data to push, and with a Next value being the original node referenced by _head.  This will create our stack behavior (LIFO – Last In, First Out).  Now, we have to set _head to refer to the newNode, but we must first make sure it hasn’t changed! So we check to see if _head has the same value we saved in our snapshot as newNode.Next, and if so, we set _head to newNode.  This is all done atomically, and the result is _head’s original value. As long as the original value was what we assumed it was with newNode.Next, then we are good and we set it without a lock!  If not, we SpinWait and try again. Once again, this is much lighter than locking in highly parallelized code with lots of contention.  If I compare the method above with a similar class using lock, I get the following results for pushing 100,000 items: 1: Locked SuperStack average time: 6 ms 2: Interlocked SuperStack average time: 4.5 ms So, once again, we can get more efficient than a lock, though there is the cost of added code complexity.  Fortunately for you, most of the concurrent collections you’d ever need are already created for you in the System.Collections.Concurrent (here) namespace – for more information, see my Little Wonders – The Concurrent Collections Part 1 (here), Part 2 (here), and Part 3 (here). Summary We’ve seen before how the Interlocked class can be used to safely and efficiently add, increment, decrement, read, and exchange values in a multi-threaded environment.  In addition to these, Interlocked CompareExchange() can be used to perform more complex logic without the need of a lock when lock contention is a concern. The added efficiency, though, comes at the cost of more complex code.  As such, the standard lock is often sufficient for most thread-safety needs.  But if profiling indicates you spend a lot of time waiting for locks, or if you just need a lock for something simple such as an increment, decrement, read, exchange, etc., then consider using the Interlocked class’s methods to reduce wait. Technorati Tags: C#,CSharp,.NET,Little Wonders,Interlocked,CompareExchange,threading,concurrency
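    As a footnote to the post above, the add-a-double pattern it walks through can be consolidated into one self-contained helper. This is only a sketch assembled from the post's own snippets; the InterlockedEx and AddDouble names are mine, not part of the BCL.

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    public static class InterlockedEx
    {
        // Atomically adds 'amount' to 'target' using the CompareExchange + SpinWait
        // retry pattern from the post. Returns the new total.
        public static double AddDouble(ref double target, double amount)
        {
            double snapshot = target;
            double computed = snapshot + amount;

            // Fast path: succeed immediately if nothing changed since the snapshot.
            if (Interlocked.CompareExchange(ref target, computed, snapshot) == snapshot)
                return computed;

            // Contended path: spin, take a fresh snapshot, and retry until the exchange sticks.
            var spinner = new SpinWait();
            do
            {
                spinner.SpinOnce();
                snapshot = target;
                computed = snapshot + amount;
            }
            while (Interlocked.CompareExchange(ref target, computed, snapshot) != snapshot);

            return computed;
        }
    }

    public static class AddDoubleDemo
    {
        private static double _total;

        public static void Main()
        {
            Parallel.For(0, 100000, next => InterlockedEx.AddDouble(ref _total, next));
            Console.WriteLine(_total); // expected: 4999950000
        }
    }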

    Read the article

  • Combining javascript files

    - by Michael Freidgeim
    I’ve read Combining Client Scripts into a Composite Script and wanted to use it. Then I’ve read Julian Jelfs' concerns in ScriptManager.CompositeScript issues. However, the article Combining javascript files with Ajax toolkit library describes workarounds that make the solution workable. You can also use the Script Reference Profiler: http://aspnet.codeplex.com/releases/view/13356 Related posts: Using ScriptManager with other frameworks; MSDN documentation: CompositeScriptReference; the older implementation that has been superseded by the CompositeScript class: ToolkitScriptManager; Combining, Compressing, Minifying ASP.NET ScriptResource and HTML Markups

    Read the article

  • The next step for a graduate?

    - by Michael Hobbs
    I will complete my degree in Computer Science in July of 2013; however, I will have two years of military service left. I would like to get some hands-on C#/C++ programming experience in the meantime. I have been looking around the web at all the open source projects that are out there. There are literally tens of thousands. How would you go about picking one that is: A) good for a graduate-level programmer, B) will probably go somewhere, and C) willing to take a novice under their wing? I would even be happy to work for a company for free, remotely, as a debugger, tester, or what have you. As a side note, I would prefer to stick in the C# realm but understand I need to branch out at the same time.

    Read the article

  • The Ideal Platform for Oracle Database 12c In-Memory and In-Memory Applications

    - by Michael Palmeter (Engineered Systems Product Management)
    Oracle SuperCluster, Oracle's SPARC M6 and T5 servers, Oracle Solaris, Oracle VM Server for SPARC, and Oracle Enterprise Manager have been co-engineered with Oracle Database and Oracle applications to provide maximum In-Memory performance, scalability, efficiency and reliability for the most critical and demanding enterprise deployments. The In-Memory option for Oracle Database 12c, which has just been released, has been specifically optimized for SPARC servers running Oracle Solaris. The unique combination of Oracle's M6-32 Big Memory Machine, with 32 terabytes of memory, and Oracle Database 12c In-Memory demonstrates a 2X increase in OLTP performance and a 100X improvement in analytics response times, allowing complex analysis of incredibly large data sets at the speed of thought. Numerous unique enhancements, including the large cache on the SPARC M6 processor, massive 32 TB of memory, uniform memory access architecture, Oracle Solaris high-performance kernel, and Oracle Database SGA optimization, result in orders of magnitude better transaction processing speeds across a range of in-memory workloads. Oracle Database 12c In-Memory The Power of Oracle SuperCluster and In-Memory Applications (Video, 3:13) Oracle’s In-Memory applications Oracle E-Business Suite In-Memory Cost Management on the Oracle SuperCluster M6-32 (PDF) Oracle JD Edwards Enterprise One In-Memory Applications on Oracle SuperCluster M6-32 (PDF) Oracle JD Edwards Enterprise One In-Memory Sales Advisor on the SuperCluster M6-32 (PDF) Oracle JD Edwards Enterprise One Project Portfolio Management on the SuperCluster M6-32 (PDF)

    Read the article

  • Tips for XNA WP7 Developers

    - by Michael B. McLaughlin
    There are several things any XNA developer should know/consider when coming to the Windows Phone 7 platform. This post assumes you are familiar with the XNA Framework and with the changes between XNA 3.1 and XNA 4.0. It’s not exhaustive; it’s simply a list of things I’ve gathered over time. I may come back and add to it over time, and I’m happy to add anything anyone else has experienced or learned as well. Display · The screen is either 800x480 or 480x800. · But you aren’t required to use only those resolutions. · The hardware scaler on the phone will scale up from 240x240. · One dimension will be capped at 800 and the other at 480; which depends on your code, but you cannot have, e.g., an 800x600 back buffer – that will be created as 800x480. · The hardware scaler will not normally change aspect ratio, though, so no unintended stretching. · Any dimension (width, height, or both) below 240 will be adjusted to 240 (without any aspect ratio adjustment such that, e.g. 200x240 will be treated as 240x240). · Dimensions below 240 will be honored in terms of calculating whether to use portrait or landscape. · If dimensions are exactly equal or if height is greater than width then game will be in portrait. · If width is greater than height, the game will be in landscape. · Landscape games will automatically flip if the user turns the phone 180°; no code required. · Default landscape is top = left. In other words a user holding a phone who starts a landscape game will see the first image presented so that the “top” of the screen is along the right edge of his/her phone, such that the natural behavior would be to turn the phone 90° so that the top of the phone will be held in the user’s left hand and the bottom would be held in the user’s right hand. · The status bar (where the clock, battery power, etc., are found) is hidden when the Game-derived class sets GraphicsDeviceManager.IsFullScreen = true. It is shown when IsFullScreen = false. The default value is false (i.e. the status bar is shown). · You should have a good reason for hiding the status bar. Users find it helpful to know what time it is, how much charge their battery has left, and whether or not their phone is in service range. This is especially true for casual games that you expect someone to play for a few minutes at a time, e.g. while waiting for some event to start, for a phone call to come in, or for a train, bus, or subway to arrive. · In portrait mode, the status bar occupies 32 pixels of space. This means that a game with a back buffer of 480x800 will be scaled down to occupy approximately 461x768 screen pixels. Setting the back buffer to 480x768 (or some resolution with the same 0.625 aspect ratio) will avoid this scaling. · In landscape mode, the status bar occupies 72 pixels of space. This means that a game with a back buffer of 800x480 will be scaled down to occupy approximately 728x437 screen pixels. Setting the back buffer to 728x480 (or some resolution with the same 1.51666667 aspect ratio) will avoid this scaling. Input · Touch input is scaled with screen size. · So if your back buffer is 600x360, a tap in the bottom right corner will come in as (599,359). You don’t need to do anything special to get this automatic scaling of touch behavior. · If you do not use full area of the screen, any touch input outside the area you use will still register as a touch input. For example, if you set a portrait resolution of 240x240, it would be scaled up to occupy a 480x480 area, centered in the screen. 
If you touch anywhere above this area, you will get a touch input of (X,0) where X is a number from 0 to 239 (in accordance with your 240 pixel wide back buffer). Any touch below this area will give a touch input of (X,239). · If you keep the status bar visible, touches within its area will not be passed to your game. · In general, a screen measurement is the diagonal. So a 3.5” screen is 3.5” long from the bottom right corner to the top left corner. With an aspect ratio of 0.6 (480/800 = 0.6), this means that a phone with a 3.5” screen is only approximately 1.8” wide by 3” tall. So there are approximately 267 pixels in an inch on a 3.5” screen. · Again, this time in metric! 3.5 inches is approximately 8.89 cm. So an 8.89 cm screen is 8.89 cm long from the bottom right corner to the top left corner. With an aspect ratio of 0.6, this means that a phone with an 8.89 cm screen is only approximately 4.57 cm wide by 7.62 cm tall. So there are approximately 105 pixels in a centimeter on an 8.89 cm screen. · Think about the size of your finger tip. If you do not have large hands, think about the size of the fingertip of someone with large hands. Consider that when you are sizing your touch input. Especially consider that when you are spacing two touch targets near one another. You need to judge it for yourself, but items that are next to each other and are each 100x100 should be fine when it comes to selecting items individually. Smaller targets than that are ok provided that you leave space between them. · You want your users to have a pleasant experience. Making touch controls too small or too close to one another will make them nervous about whether they will touch the right target. Take this into account when you plan out your game initially. If possible, do some quick size mockups on an actual phone using colored rectangles that you position and size where you plan to have your game controls. Adjust as necessary. · People do not have transparent hands! Nor are their hands the size of a mouse pointer icon. Consider leaving a dedicated space for input rather than forcing the user to cover up to one-third of the screen with a finger just to play the game. · Another benefit of designing your controls to use a dedicated area is that you’re less likely to have players moving their finger(s) so frantically that they accidentally hit the back button, start button, or search button (many phones have one or more of these on the screen itself – it’s easy to hit one by accident and really annoying if you hit, e.g., the search button and then quickly tap back only to find out that the game didn’t save your progress such that you just wasted all the time you spent playing). · People do not like doing somersaults in order to move something forward with accelerometer-based controls. Test your accelerometer-based controls extensively and get a lot of feedback. Very well-known games from noted publishers have created really bad accelerometer controls and been virtually unplayable as a result. Also be wary of exceptions and other possible failures that the documentation warns about. · When done properly, the accelerometer can add a nice touch to your game (see, e.g. ilomilo where the accelerometer was used to move the background; it added a nice touch without frustrating the user; I also think CarniVale does direct accelerometer controls very well). However, if done poorly, it will make your game an abomination unto the Marketplace. 
Days, weeks, perhaps even months of development time that you will never get back. I won’t name names; you can search the marketplace for games with terrible reviews and you’ll find them. Graphics · The maximum frame rate is 30 frames per second. This was set as a compromise between battery life and quality. · At least one model of phone is known to have a screen refresh rate that is between 59 and 60 hertz. Because of this, using a fixed time step with a target frame rate of 30 will cause a slight internal delay to build up as the framework is forced to wait slightly for the next refresh. Eventually the delay will get to the point where a draw is skipped in order to recover from the delay. (See Nick's comment below for clarification.) · To deal with that delay, you can either stay with a fixed time step and set the frame rate slightly lower or else you can go to a variable time step and make sure to adjust all of your update data (e.g. player movement distance) to take into account the elapsed time from the last update. A variable time step makes your update logic slightly more complicated but will avoid frame skips entirely. · Currently there are no custom shaders. This might change in the future (there is no hardware limitation preventing it; it simply wasn’t a feature that could be implemented in the time available before launch). · There are five built-in shaders. You can create a lot of nice effects with the built-in shaders. · There is more power on the CPU than there is on the GPU so things you might typically off-load to the GPU will instead make sense to do on the CPU side. · This is a phone. It is not a PC. It is not an Xbox 360. The emulator runs on a PC and uses the full power of your PC. It is very good for testing your code for bugs and doing early prototyping and layout. You should not use it to measure performance. Use actual phone hardware instead. · There are many phone models, each of which has slightly different performance levels for I/O, screen blitting, CPU performance, etc. Do not take your game right to the performance limit on your phone since for some other phones you might be crossing their limits and leaving players with a bad experience. Leave a cushion to account for hardware differences. · Smaller screened phones will have slightly more dots per inch (dpi). Larger screened phones will have slightly less. Either way, the dpi will be much higher than the typical 96 found on most computer screens. Make sure that whoever is doing art for your game takes this into account. · Screens are only required to have 16 bit color (65,536 colors). This is common among smart phones. Using gradients on a 16 bit display can produce an ugly artifact known as banding. Banding is when, rather than a smooth transition from one color to another, you instead see distinct lines. Be careful to avoid this when possible. Banding can be avoided through careful art creation. Its effects can be minimized and even unnoticeable when the texture in question is always moving. You should be careful not to rely on “looks good on my phone” since some phones do have 32-bit displays and thus you’ll find yourself wondering why you’re getting bad reviews that complain about the graphics. Avoid gradients; if you can’t, make sure they are 16-bit safe. Audio · Never rely on sounds as your sole signal to the player that something is happening in the game. They might have the sound off. They might be playing somewhere loud. Etc. · You have to provide controls to disable sound & music. 
These should be separate. · On at least one model of phone, the volume control API currently has no effect. Players can adjust sound with their hardware volume buttons, but in game selectors simply won’t work. As such, it may not be worth the effort of providing anything beyond on/off switches for sound and music. · MediaPlayer.GameHasControl will return true when a game is hooked up to a PC running Zune. When Zune is running, any attempts to do anything (beyond check GameHasControl) with MediaPlayer will cause an exception to be thrown. If this exception is thrown, catch it and disable music. Exceptions take time to propagate; you don’t want one popping up in every single run of your game’s Update method. · Remember that players can already be listening to music or using the FM radio. In this case GameHasControl will be false and you should handle this appropriately. You can, alternately, ask the player for permission to stop their current music and play your music instead, but the (current) requirement that you restore their music when done is very hard (if not impossible) to deal with. · You can still play sound effects even when the game doesn’t have control of the music, but don’t think this is a backdoor to playing music. Your game will fail certification if your “sound effect” seems to be more like music in scope and length.

    Read the article

  • ejabberd starts slow and fails

    - by Michael Tanner
    I installed ejabberd and tried to set it up. On my server there are strict rules which allow only several services and deny everything else. I experienced startup problems with my iptables rules and found out that ejabberd works when I allow everything in iptables and restore my rules afterwards. Then it also works for connecting, registering, and everything else. What could be the problem in my rules? Attached here: http://fixee.org/paste/s5s744j/ It would be nice to get a correction straight on fixee if I made a mistake. Thanks in advance. Edit 1: In the log files there are no specific error messages during startup.

    Read the article

  • How are minimum system requirements determined?

    - by Michael McGowan
    We've all seen countless examples of software that ships with "minimum system requirements" like the following: Windows XP/Vista/7 1GB RAM 200 MB Storage How are these generally determined? Obviously sometimes there are specific constraints (if the program takes 200 MB on disk then that is a hard requirement). Aside from those situations, many times for things like RAM or processor it turns out that more/faster is better with no hard constraint. How are these determined? Do developers just make up numbers that seem reasonable? Does QA go through some rigorous process testing various requirements until they find the lowest settings with acceptable performance? My instinct says it should be the latter but is often the former in practice.

    Read the article

  • Message during Edit and Continue doesn't give an option to edit.

    - by Michael Freidgeim
    During my Edit and Continue session I received a message: --------------------------- Microsoft Visual Studio --------------------------- Modifying a catch handler around an active statement will prevent the debug session from continuing while Edit and Continue is enabled. --------------------------- OK --------------------------- I would expect Visual Studio to give me the option either to edit but stop Edit and Continue, or to cancel, but it only disallows the edit.

    Read the article

  • Use CompiledQuery.Compile to improve LINQ to SQL performance

    - by Michael Freidgeim
    After reading DLinq (Linq to SQL) Performance, and in particular Part 4, I had a few questions. If CompiledQuery.Compile gives so many benefits, why not do it for all LINQ to SQL queries? Are there any essential disadvantages to compiling all select queries? Under what conditions does compiling improve performance, and by how much? Would it be good to have a default at the application config level or at the DBML level to specify whether all select queries are to be compiled? And the same questions apply to the Entity Framework CompiledQuery class. However, in the comments I’ve found an answer from the author ricom 6 Jul 2007 3:08 AM Compiling the query makes it durable. There is no need for this, nor is there any desire, unless you intend to run that same query many times. SQL provides regular select statements, prepared select statements, and stored procedures for a reason.  Linq now has analogs. Also from 10 Tips to Improve your LINQ to SQL Application Performance: If you are using CompiledQuery make sure that you are using it more than once as it is more costly than normal querying for the first time. The resulting function coming as a CompiledQuery is an object, having the SQL statement and the delegate to apply it. And your delegate has the ability to replace the variables (or parameters) in the resulting query. However, I feel that many developers are not informed enough about the benefits of Compile. I think that tools like FxCop and ReSharper should check the queries and suggest when compiling is recommended. Related Articles for LINQ to SQL: MSDN How to: Store and Reuse Queries (LINQ to SQL) 10 Tips to Improve your LINQ to SQL Application Performance Related Articles for Entity Framework: MSDN: CompiledQuery Class Exploring the Performance of the ADO.NET Entity Framework - Part 1 Exploring the Performance of the ADO.NET Entity Framework – Part 2 ADO.NET Entity Framework 4.0: Making it fast through Compiled Query
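    For readers who have not used it, a compiled query is simply a delegate you build once and reuse. A minimal sketch follows; the NorthwindDataContext and Customer types are hypothetical placeholders for your own DataContext and entity classes.

    using System;
    using System.Data.Linq;
    using System.Linq;

    public static class CustomerQueries
    {
        // Compiled once (and cached in this static field), then reused for every call,
        // so the expression-tree-to-SQL translation cost is paid only on first use.
        public static readonly Func<NorthwindDataContext, string, IQueryable<Customer>> ByCity =
            CompiledQuery.Compile((NorthwindDataContext db, string city) =>
                db.Customers.Where(c => c.City == city));
    }

    // Usage (NorthwindDataContext is assumed to derive from DataContext):
    // using (var db = new NorthwindDataContext(connectionString))
    // {
    //     var londonCustomers = CustomerQueries.ByCity(db, "London").ToList();
    // }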

    Read the article

  • Does Unity support disabling the global application menu?

    - by Michael E
    I'm fairly excited for Unity, as it looks like a promising new direction for Ubuntu. However, I do have a concern - will it be possible to use Unity without the global menu? I have my window manager set to focus-follows-mouse/sloppy focus, and find the productivity gains to be immense. Sloppy focus is incompatible, however, with global menus, as it is possible for the focus to change while you move from window to menu. Will Unity support an option to use window menus while still using Unity?

    Read the article

  • Cleaning your BizTalk Build Server

    - by Michael Stephenson
    Just a little note for myself, this one. At one of my customers, where it is still BizTalk 2006, one of the build servers is intermittently getting issues, so I wanted to run a script periodically to clean things up a little. The below script is an example of how you can stop Cruise Control and all of the BizTalk services, clean the BizTalk databases, reset the backup process, and then kick everything off again. This should keep the server a little cleaner and reduce the number of builds that occasionally fail for ad hoc environmental issues.

    REM Server Clean Script
    REM ===================
    REM This script is run to move the build server back to a clean state
    echo Stop Cruise Control
    net stop CCService
    echo Stop IIS
    iisreset /stop
    echo Stop BizTalk Services
    net stop BTSSvc$<Name of BizTalk Host>
    <Repeat for other BizTalk services>
    echo Stop SSO
    net stop ENTSSO
    echo Stop SQL Job Agent
    net stop SQLSERVERAGENT
    echo Clean Message Box
    sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_CleanupMsgbox"
    sqlcmd -E -d BizTalkMsgBoxDB -Q "Exec bts_PurgeSubscriptions"
    echo Clean Tracking Database
    sqlcmd -E -d BizTalkDTADb -Q "Exec dtasp_CleanHMData"
    echo Reset TDDS Stream Status
    sqlcmd -E -d BizTalkDTADb -Q "Update TDDS_StreamStatus Set lastSeqNum = 0"
    echo Force Full Backup
    sqlcmd -E -d BizTalkMgmtDB -Q "Exec sp_ForceFullBackup"
    echo Clean Backup Directory
    del E:\BtsBackups\*.* /q
    echo Start SSO
    net start ENTSSO
    echo Start SQL Job Agent
    net start SQLSERVERAGENT
    echo Start BizTalk Services
    net start BTSSvc$<Name of BizTalk Host>
    <Repeat for other BizTalk services>
    echo Start IIS
    iisreset /start
    echo Start Cruise Control
    net start CCService

    Read the article

  • For a Javascript library, what is the best or standard way to support extensibility

    - by Michael Best
    Specifically, I want to support "plugins" that modify the behavior of parts of the library. I couldn't find much information on the web about this subject, but here are my ideas for how a library could be extensible. The library exports an object with both public and "protected" functions. A plugin can replace any of those functions, thus modifying the library's behavior. Advantages of this method are that it's simple and that the plugin's functions can have full access to the library's "protected" functions. Disadvantages are that the library may be harder to maintain with a larger set of exposed functions and it could be hard to debug if multiple plugins are involved (how to know which plugin modified which function?). The library provides an "add plugin" function that accepts an object with a specific interface. Internally, the library will use the plugin instead of its own code if appropriate. With this method, the internals of the library can be rearranged more freely as long as it still supports the same plugin interface. This could also support having different plugin interfaces to modify different parts of the library. A disadvantage of this method is that the plugins may have to re-implement code that is already part of the library, since the library's internal functions are not exported. The library provides a "set implementation" function that accepts an object inherited from a specific base object. The library's public API calls functions in the implementation object for any functionality that can be modified, and the base implementation object includes the core functionality, with both external (to the API) and internal functions. A plugin creates a new implementation object, which inherits from the base object and replaces any functions it wants to modify. This combines advantages and disadvantages of both the other methods.

    Read the article

  • C#/.NET Little Wonders: Getting Caller Information

    - by James Michael Hare
    Originally posted on: http://geekswithblogs.net/BlackRabbitCoder/archive/2013/07/25/c.net-little-wonders-getting-caller-information.aspx Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. There are times when it is desirable to know who called the method or property you are currently executing.  Some applications of this could include logging libraries, or possibly even something more advanced that may server up different objects depending on who called the method. In the past, we mostly relied on the System.Diagnostics namespace and its classes such as StackTrace and StackFrame to see who our caller was, but now in C# 5, we can also get much of this data at compile-time. Determining the caller using the stack One of the ways of doing this is to examine the call stack.  The classes that allow you to examine the call stack have been around for a long time and can give you a very deep view of the calling chain all the way back to the beginning for the thread that has called you. You can get caller information by either instantiating the StackTrace class (which will give you the complete stack trace, much like you see when an exception is generated), or by using StackFrame which gets a single frame of the stack trace.  Both involve examining the call stack, which is a non-trivial task, so care should be done not to do this in a performance-intensive situation. For our simple example let's say we are going to recreate the wheel and construct our own logging framework.  Perhaps we wish to create a simple method Log which will log the string-ified form of an object and some information about the caller.  We could easily do this as follows: 1: static void Log(object message) 2: { 3: // frame 1, true for source info 4: StackFrame frame = new StackFrame(1, true); 5: var method = frame.GetMethod(); 6: var fileName = frame.GetFileName(); 7: var lineNumber = frame.GetFileLineNumber(); 8: 9: // we'll just use a simple Console write for now 10: Console.WriteLine("{0}({1}):{2} - {3}", 11: fileName, lineNumber, method.Name, message); 12: } So, what we are doing here is grabbing the 2nd stack frame (the 1st is our current method) using a 2nd argument of true to specify we want source information (if available) and then taking the information from the frame.  This works fine, and if we tested it out by calling from a file such as this: 1: // File c:\projects\test\CallerInfo\CallerInfo.cs 2:  3: public class CallerInfo 4: { 5: Log("Hello Logger!"); 6: } We'd see this: 1: c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger! This works well, and in fact CallStack and StackFrame are still the best ways to examine deeper into the call stack.  But if you only want to get information on the caller of your method, there is another option… Determining the caller at compile-time In C# 5 (.NET 4.5) they added some attributes that can be supplied to optional parameters on a method to receive caller information.  These attributes can only be applied to methods with optional parameters with explicit defaults.  Then, as the compiler determines who is calling your method with these attributes, it will fill in the values at compile-time. 
These are the currently supported attributes available in the  System.Runtime.CompilerServices namespace": CallerFilePathAttribute – The path and name of the file that is calling your method. CallerLineNumberAttribute – The line number in the file where your method is being called. CallerMemberName – The member that is calling your method. So let’s take a look at how our Log method would look using these attributes instead: 1: static int Log(object message, 2: [CallerMemberName] string memberName = "", 3: [CallerFilePath] string fileName = "", 4: [CallerLineNumber] int lineNumber = 0) 5: { 6: // we'll just use a simple Console write for now 7: Console.WriteLine("{0}({1}):{2} - {3}", 8: fileName, lineNumber, memberName, message); 9: } Again, calling this from our sample Main would give us the same result: 1: c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger! However, though this seems the same, there are a few key differences. First of all, there are only 3 supported attributes (at this time) that give you the file path, line number, and calling member.  Thus, it does not give you as rich of detail as a StackFrame (which can give you the calling type as well and deeper frames, for example).  Also, these are supported through optional parameters, which means we could call our new Log method like this: 1: // They're defaults, why not fill 'em in 2: Log("My message.", "Some member", "Some file", -13); In addition, since these attributes require optional parameters, they cannot be used in properties, only in methods. These caveats aside, they do let you get similar information inside of methods at a much greater speed!  How much greater?  Well lets crank through 1,000,000 iterations of each.  instead of logging to console, I’ll return the formatted string length of each.  Doing this, we get: 1: Time for 1,000,000 iterations with StackTrace: 5096 ms 2: Time for 1,000,000 iterations with Attributes: 196 ms So you see, using the attributes is much, much faster!  Nearly 25x faster in fact.  Summary There are a few ways to get caller information for a method.  The StackFrame allows you to get a comprehensive set of information spanning the whole call stack, but at a heavier cost.  On the other hand, the attributes allow you to quickly get at caller information baked in at compile-time, but to do so you need to create optional parameters in your methods to support it. Technorati Tags: Little Wonders,CSharp,C#,.NET,StackFrame,CallStack,CallerFilePathAttribute,CallerLineNumberAttribute,CallerMemberName
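    A rough sketch of how such a timing comparison can be wired up with Stopwatch is shown below. The two logging methods mirror the ones in the post (the method names and the hard-coded message are mine, and the numbers you get will of course vary by machine):

    using System;
    using System.Diagnostics;
    using System.Runtime.CompilerServices;

    public static class CallerInfoBenchmark
    {
        private const int Iterations = 1000000;

        // StackFrame-based variant; walks the call stack at runtime.
        static int LogWithStackFrame(object message)
        {
            var frame = new StackFrame(1, true);
            return string.Format("{0}({1}):{2} - {3}",
                frame.GetFileName(), frame.GetFileLineNumber(),
                frame.GetMethod().Name, message).Length;
        }

        // Attribute-based variant; the compiler fills in the optional parameters at the call site.
        static int LogWithAttributes(object message,
            [CallerMemberName] string memberName = "",
            [CallerFilePath] string fileName = "",
            [CallerLineNumber] int lineNumber = 0)
        {
            return string.Format("{0}({1}):{2} - {3}",
                fileName, lineNumber, memberName, message).Length;
        }

        public static void Main()
        {
            var timer = Stopwatch.StartNew();
            for (int i = 0; i < Iterations; i++) LogWithStackFrame("Hello Logger!");
            Console.WriteLine("StackTrace: {0} ms", timer.ElapsedMilliseconds);

            timer = Stopwatch.StartNew();
            for (int i = 0; i < Iterations; i++) LogWithAttributes("Hello Logger!");
            Console.WriteLine("Attributes: {0} ms", timer.ElapsedMilliseconds);
        }
    }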

    Read the article

  • ArchBeat Link-o-Rama for October 17, 2013

    - by OTN ArchBeat
    Oracle Author Podcast: Danny Coward on "Java WebSocket Programming" In this Oracle Author Podcast Roger Brinkley talks with Java architect Danny Coward about his new book, Java WebSocket Programming, now available from Oracle Press. Webcast: Why Choose Oracle Linux for your Oracle Database 12c Deployments Sumanta Chatterjee, VP Database Engineering for Oracle discusses advantages of choosing Oracle Linux for Oracle Database, including key optimizations and features, and talks about tools to simplify and speed deployment of Oracle Database on Linux, including Oracle VM Templates, Oracle Validated Configurations, and pre-install RPM. Oracle BI Apps 11.1.1.7.1 – GoldenGate Integration - Part 1: Introduction | Michael Rainey Michael Rainey launches a series of posts that guide you through "the architecture and setup for using GoldenGate with OBIA 11.1.1.7.1." Should your team use a framework? | Sten Vesterli "Some developers have an aversion to frameworks, feeling that it will be faster to just write everything themselves," observes Oracle ACE Director Sten Vesterli. He explains why that's a very bad idea in this short post. Free Poster: Adaptive Case Management in Practice Thanks to Masons of SOA member Danilo Schmiedel for providing a hi-res copy of the Adaptive Case Management poster, now available for download from the OTN ArchBeat Blog. Oracle Internal Testing Overview: Understanding How Rigorous Oracle Testing Saves Time and Effort During Deployment Want to understand Oracle Engineering's internal product testing methodology? This white paper takes you behind the curtain. Thought for the Day "If I see an ending, I can work backward." — Arthur Miller, American playwright (October 17, 1915 – February 10, 2005) Source: brainyquote.com

    Read the article

  • The Nine Cs of Customer Engagement

    - by Michael Snow
    Avoid social media fatigue - learn the nine Cs of customer engagement. Have We Hit a Social-Media Plateau? In recent years, social media has evolved from a cool but unproven medium to become the foundation of pragmatic social business and a driver of business value. Yet, time is running out for businesses to make the most out of this channel. This isn’t a warning. It’s a fact. Join leading industry analyst R “Ray” Wang as he explains how to apply the nine Cs of engagement to strengthen customer relationships. Learn: how to overcome social-media fatigue and make the most of the medium; why engagement is the most critical factor in the age of overexposure; and the nine pillars of successful customer engagement. Register for the eighth Webcast in the Social Business Thought Leaders series today. Register Now: Thurs., Sept. 20, 2012, 10 a.m. PT / 1 p.m. ET. Presented by: R “Ray” Wang, Principal Analyst and CEO, Constellation Research, and Christian Finn, Senior Director, Product Management, Oracle.

    Read the article

  • How to turn off Libnotify notifications only when sound is in muted state?

    - by Michael Butler
    I have a multimedia keyboard that allows me to easily mute the sound (Ubuntu 12.04). It would be nice to "link" this to also turn off libnotify messages that pop-up in the top right corner (i.e. Pidgin messages). So when Ubuntu is muted, no libnotify messages would pop up. When not muted, messages show as normal. Is this possible with a script of some kind or would it require changing source code?

    Read the article

  • Problems with EmusicJ on Ubuntu 14.04

    - by Michael Dykes
    I am running Ubuntu 14.04 on my new machine and have a subscription to eMusic (which I used back in my Windows days because of the great selection of classical music). I have used it on my old machine (running older versions of Ubuntu) with some success, but am having difficulties doing so now. I am trying to use EmusicJ but do not/cannot remember how to do so. I have downloaded the program, unzipped it, and ran the following command in a terminal: ./emusic If I remember correctly, once I go to the eMusic site and download my music, the EmusicJ program should open and begin downloading the music through that program, but this time it is not, and I have already wiped my old box in order to give it to a friend. Any help is appreciated, as I would like to continue with eMusic, but if I cannot get this resolved I will likely have to cancel my subscription and find a better alternative. Thanks.

    Read the article

  • Microsoft Visual C# MVP 2012

    - by James Michael Hare
    I was informed on July 1st, 2012 that I was awarded a Microsoft Visual C# MVP recognition for 2012.  This is my second year now, and I'm doubly thankful to have been nominated and selected, and thankful that you guys all find my posts informative and useful! Even though life has thrown me some curve balls in this past last year, I look forward to continuing my posts (especially the Little Wonders) as much as possible!Thanks again!

    Read the article

  • Wordpress Queue like Tumblr?

    - by Michael Hopkins
    Hi. Is there a way to give Wordpress the queue functionality that Tumblr has? Tumblr's queue, for those who don't know, is a way to space posts out without assigning specific post dates. For example, a Tumblr queue might be set to post every four hours between 9am and 5pm. Tumblr would drop the front post in the queue at 9am, 1pm and 5pm every day. Posts are added to the queue by clicking "add to queue" instead of "publish." It's quite simple. How can this feature be added to Wordpress?

    Read the article

  • How can I create a symlink to the location that Ubuntu 10.10 mounts a CD?

    - by Michael Curran
    In Ubuntu 10.10, when I insert a CD or DVD into my optical drive, the system mounts the CD in a folder called /media/XYZ, where XYZ is the disk's label. This has caused problems with Wine: in order for an application to verify that its CD is present, Wine uses a symlink pointing to the mounted CD's folder. In this case, that folder must be /media/XYZ, but when using a different application, the folder would be different. I would like to know if there is a way to create a symlink that will always point to the mounted folder for a given /dev/cdrom* device, or how to force the system to always mount CDs to the same address (i.e. /media/cdrom).

    Read the article

  • C#/.NET Little Wonders: Interlocked Read() and Exchange()

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Last time we discussed the Interlocked class and its Add(), Increment(), and Decrement() methods which are all useful for updating a value atomically by adding (or subtracting).  However, this begs the question of how do we set and read those values atomically as well? Read() – Read a value atomically Let’s begin by examining the following code: 1: public class Incrementor 2: { 3: private long _value = 0; 4:  5: public long Value { get { return _value; } } 6:  7: public void Increment() 8: { 9: Interlocked.Increment(ref _value); 10: } 11: } 12:  It uses an interlocked increment, as we discuss in my previous post (here), so we know that the increment will be thread-safe.  But, to realize what’s potentially wrong we have to know a bit about how atomic reads are in 32 bit and 64 bit .NET environments. When you are dealing with an item smaller or equal to the system word size (such as an int on a 32 bit system or a long on a 64 bit system) then the read is generally atomic, because it can grab all of the bits needed at once.  However, when dealing with something larger than the system word size (reading a long on a 32 bit system for example), it cannot grab the whole value at once, which can lead to some problems since this read isn’t atomic. For example, this means that on a 32 bit system we may read one half of the long before another thread increments the value, and the other half of it after the increment.  To protect us from reading an invalid value in this manner, we can do an Interlocked.Read() to force the read to be atomic (of course, you’d want to make sure any writes or increments are atomic also): 1: public class Incrementor 2: { 3: private long _value = 0; 4:  5: public long Value 6: { 7: get { return Interlocked.Read(ref _value); } 8: } 9:  10: public void Increment() 11: { 12: Interlocked.Increment(ref _value); 13: } 14: } Now we are guaranteed that we will read the 64 bit value atomically on a 32 bit system, thus ensuring our thread safety (assuming all other reads, writes, increments, etc. are likewise protected).  Note that as stated before, and according to the MSDN (here), it isn’t strictly necessary to use Interlocked.Read() for reading 64 bit values on 64 bit systems, but for those still working in 32 bit environments, it comes in handy when dealing with long atomically. Exchange() – Exchanges two values atomically Exchange() lets us store a new value in the given location (the ref parameter) and return the old value as a result. So just as Read() allows us to read atomically, one use of Exchange() is to write values atomically.  For example, if we wanted to add a Reset() method to our Incrementor, we could do something like this: 1: public void Reset() 2: { 3: _value = 0; 4: } But the assignment wouldn’t be atomic on 32 bit systems, since the word size is 32 bits and the variable is a long (64 bits).  Thus our assignment could have only set half the value when a threaded read or increment happens, which would put us in a bad state. So instead, we could write Reset() like this: 1: public void Reset() 2: { 3: Interlocked.Exchange(ref _value, 0); 4: } And we’d be safe again on a 32 bit system. But this isn’t the only reason Exchange() is valuable.  
The key comes in realizing that Exchange() doesn’t just set a new value, it returns the old as well in an atomic step.  Hence the name “exchange”: you are swapping the value to set with the stored value. So why would we want to do this?  Well, anytime you want to set a value and take action based on the previous value.  An example of this might be a scheme where you have several tasks, and during every so often, each of the tasks may nominate themselves to do some administrative chore.  Perhaps you don’t want to make this thread dedicated for whatever reason, but want to be robust enough to let any of the threads that isn’t currently occupied nominate itself for the job.  An easy and lightweight way to do this would be to have a long representing whether someone has acquired the “election” or not.  So a 0 would indicate no one has been elected and 1 would indicate someone has been elected. We could then base our nomination strategy as follows: every so often, a thread will attempt an Interlocked.Exchange() on the long and with a value of 1.  The first thread to do so will set it to a 1 and return back the old value of 0.  We can use this to show that they were the first to nominate and be chosen are thus “in charge”.  Anyone who nominates after that will attempt the same Exchange() but will get back a value of 1, which indicates that someone already had set it to a 1 before them, thus they are not elected. Then, the only other step we need take is to remember to release the election flag once the elected thread accomplishes its task, which we’d do by setting the value back to 0.  In this way, the next thread to nominate with Exchange() will get back the 0 letting them know they are the new elected nominee. Such code might look like this: 1: public class Nominator 2: { 3: private long _nomination = 0; 4: public bool Elect() 5: { 6: return Interlocked.Exchange(ref _nomination, 1) == 0; 7: } 8: public bool Release() 9: { 10: return Interlocked.Exchange(ref _nomination, 0) == 1; 11: } 12: } There’s many ways to do this, of course, but you get the idea.  Running 5 threads doing some “sleep” work might look like this: 1: var nominator = new Nominator(); 2: var random = new Random(); 3: Parallel.For(0, 5, i => 4: { 5:  6: for (int j = 0; j < _iterations; ++j) 7: { 8: if (nominator.Elect()) 9: { 10: // elected 11: Console.WriteLine("Elected nominee " + i); 12: Thread.Sleep(random.Next(100, 5000)); 13: nominator.Release(); 14: } 15: else 16: { 17: // not elected 18: Console.WriteLine("Did not elect nominee " + i); 19: } 20: // sleep before check again 21: Thread.Sleep(1000); 22: } 23: }); And would spit out results like: 1: Elected nominee 0 2: Did not elect nominee 2 3: Did not elect nominee 1 4: Did not elect nominee 4 5: Did not elect nominee 3 6: Did not elect nominee 3 7: Did not elect nominee 1 8: Did not elect nominee 2 9: Did not elect nominee 4 10: Elected nominee 3 11: Did not elect nominee 2 12: Did not elect nominee 1 13: Did not elect nominee 4 14: Elected nominee 0 15: Did not elect nominee 2 16: Did not elect nominee 4 17: ... Another nice thing about the Interlocked.Exchange() is it can be used to thread-safely set pretty much anything 64 bits or less in size including references, pointers (in unsafe mode), floats, doubles, etc.  Summary So, now we’ve seen two more things we can do with Interlocked: reading and exchanging a value atomically.  Read() and Exchange() are especially valuable for reading/writing 64 bit values atomically in a 32 bit system.  
Exchange() has value even beyond simple atomic writes, since it reads and sets the value atomically, which allows you to build lightweight nomination systems. There are still a few more goodies in the Interlocked class which we’ll explore next time! Technorati Tags: C#,CSharp,.NET,Little Wonders,Interlocked
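    As a small illustration of the last point above, here is a sketch of a holder that reads and writes a double atomically even on a 32 bit system. The write uses Exchange() as described in the post; the read uses a CompareExchange no-op, which is a common idiom rather than something from the post, so treat it as an assumption.

    using System.Threading;

    // A minimal sketch: a double that can be read and written atomically on 32-bit systems.
    public class AtomicDouble
    {
        private double _value;

        public double Read()
        {
            // Comparing against 0 and "replacing" with 0 never changes the stored value,
            // but always returns the current value as a single atomic operation.
            return Interlocked.CompareExchange(ref _value, 0d, 0d);
        }

        public double Write(double newValue)
        {
            // Atomically stores newValue and returns the value that was previously stored.
            return Interlocked.Exchange(ref _value, newValue);
        }
    }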

    Read the article
