Search Results

Search found 6078 results on 244 pages for 'processing'.

Page 66/244 | < Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >

  • Parallelism in .NET – Part 4, Imperative Data Parallelism: Aggregation

    - by Reed
    In the article on simple data parallelism, I described how to perform an operation on an entire collection of elements in parallel.  Often, this is not adequate, as the parallel operation is going to be performing some form of aggregation.  Simple examples of this might include taking the sum of the results of processing a function on each element in the collection, or finding the minimum of the collection given some criteria.  This can be done using the techniques described in simple data parallelism; however, special care must be taken to synchronize the shared data appropriately.  The Task Parallel Library has tools to assist in this synchronization.  The main issue with aggregation when parallelizing a routine is that you need to handle synchronization of data, since multiple threads will need to write to a shared portion of data.  Suppose, for example, that we wanted to parallelize a simple loop that looked for the minimum value within a dataset:

        double min = double.MaxValue;
        foreach(var item in collection)
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        }

    This seems like a good candidate for parallelization, but there is a problem here.  If we just wrap this in a call to Parallel.ForEach, we’ll introduce a critical race condition and get the wrong answer.  Let’s look at what happens here:

        // Buggy code! Do not use!
        double min = double.MaxValue;
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            min = System.Math.Min(min, value);
        });

    This code has a fatal flaw: min will be checked, then set, by multiple threads simultaneously.  Two threads may perform the check at the same time, and set the wrong value for min.  Say we get a value of 1 in thread 1, and a value of 2 in thread 2, and these two elements are the first two to run.  If both hit the min check line at the same time, both will determine that min should change, to 1 and 2 respectively.  If element 1 happens to set the variable first, and element 2 then sets the min variable, we’ll detect a min value of 2 instead of 1.  This can lead to wrong answers.  Unfortunately, fixing this with the Parallel.ForEach call we’re using would require adding locking.  We would need to rewrite it like this:

        // Safe, but slow
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(collection, item =>
        {
            double value = item.PerformComputation();
            lock(syncObject)
                min = System.Math.Min(min, value);
        });

    This will potentially add a huge amount of overhead to our calculation.  Since we can potentially block while waiting on the lock for every single iteration, we will most likely slow this down to the point where it is actually quite a bit slower than our serial implementation.  The problem is the lock statement – any time you use lock(object), you’re almost guaranteeing reduced performance in a parallel situation.
    This leads to two observations I’ll make: when parallelizing a routine, try to avoid locks; that being said, always add any and all required synchronization to avoid race conditions.  These two observations tend to be opposing forces – we often need to synchronize our algorithms, but we also want to avoid the synchronization when possible.  Looking at our routine, there is no way to directly avoid this lock, since each element is potentially being run on a separate thread, and this lock is necessary in order for our routine to function correctly every time.  However, this isn’t the only way to design this routine to implement this algorithm.  Realize that, although our collection may have thousands or even millions of elements, we have a limited number of Processing Elements (PE).  Processing Element is the standard term for a hardware element which can process and execute instructions.  This is typically a core in your processor, but many modern systems have multiple hardware execution threads per core.  The Task Parallel Library will not execute the work for each item in the collection as a separate work item.  Instead, when Parallel.ForEach executes, it will partition the collection into larger “chunks” which get processed on different threads via the ThreadPool.  This helps reduce the threading overhead and helps the overall speed.  In general, the Parallel class will only use one thread per PE in the system.  Given that there are typically fewer threads than work items, we can rethink our algorithm design.  We can parallelize our algorithm more effectively by approaching it differently.  Because the basic aggregation we are doing here (Min) is commutative, we do not need to perform it in a given order.  We knew this to be true already – otherwise, we wouldn’t have been able to parallelize this routine in the first place.  With this in mind, we can treat each thread’s work independently, allowing each thread to serially process many elements with no locking, and then, after all the threads are complete, “merge” the results together.  This can be accomplished via a different set of overloads in the Parallel class: Parallel.ForEach<TSource,TLocal>.  The idea behind these overloads is to allow each thread to begin by initializing some local state (TLocal).  The thread will then process an entire set of items in the source collection, providing that state to the delegate which processes an individual item.  Finally, at the end, a separate delegate is run which allows you to handle merging that local state into your final results.  To rewrite our routine using Parallel.ForEach<TSource,TLocal>, we need to provide three delegates instead of one.  The most basic version of this function is declared as:

        public static ParallelLoopResult ForEach<TSource, TLocal>(
            IEnumerable<TSource> source,
            Func<TLocal> localInit,
            Func<TSource, ParallelLoopState, TLocal, TLocal> body,
            Action<TLocal> localFinally
        )

    The first delegate (the localInit argument) is defined as Func<TLocal>.  This delegate initializes our local state.  It should return some object we can use to track the results of a single thread’s operations.  The second delegate (the body argument) is where our main processing occurs, although now, instead of being an Action<T>, we actually provide a Func<TSource, ParallelLoopState, TLocal, TLocal> delegate.  This delegate will receive three arguments: our original element from the collection (TSource), a ParallelLoopState which we can use for early termination, and the instance of our local state we created (TLocal).  It should do whatever processing you wish to occur per element, then return the value of the local state after processing is completed.  The third delegate (the localFinally argument) is defined as Action<TLocal>.  This delegate is passed our local state after it’s been processed by all of the elements this thread will handle.  This is where you can merge your final results together.  This may require synchronization, but now, instead of synchronizing once per element (potentially millions of times), you’ll only have to synchronize once per thread, which is an ideal situation.  Now that I’ve explained how this works, let’s look at the code:

        // Safe, and fast!
        double min = double.MaxValue;
        // Make a "lock" object
        object syncObject = new object();
        Parallel.ForEach(
            collection,
            // First, we provide a local state initialization delegate.
            () => double.MaxValue,
            // Next, we supply the body, which takes the original item, loop state,
            // and local state, and returns a new local state
            (item, loopState, localState) =>
            {
                double value = item.PerformComputation();
                return System.Math.Min(localState, value);
            },
            // Finally, we provide an Action<TLocal>, to "merge" results together
            localState =>
            {
                // This requires locking, but it's only once per used thread
                lock(syncObject)
                    min = System.Math.Min(min, localState);
            }
        );

    Although this is a bit more complicated than the previous version, it is now both thread-safe and has minimal locking.  This same approach can be used with Parallel.For, although there the overload is Parallel.For<TLocal>.  When working with Parallel.For<TLocal>, you use the same triplet of delegates, with the same purpose and results.  Also, many times, you can completely avoid locking by using a method of the Interlocked class to perform the final aggregation in an atomic operation.  The MSDN example demonstrating this same technique with Parallel.For uses the Interlocked class instead of a lock, since it performs a sum operation on a long variable, which is possible via Interlocked.Add.  By taking advantage of local state, we can use the Parallel class methods to parallelize algorithms such as aggregation, which, at first, may seem like poor candidates for parallelization.  Doing so requires careful consideration, and often a slight redesign of the algorithm, but the performance gains can be significant if handled in a way that avoids excessive synchronization.
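    For comparison, here is a minimal sketch (not from the original article) of the Interlocked-based aggregation mentioned above, summing a hypothetical long[] named values with Parallel.For<TLocal>:

        // A minimal sketch, assuming a hypothetical long[] named "values".
        // Requires the System.Threading and System.Threading.Tasks namespaces.
        long total = 0;
        Parallel.For<long>(0, values.Length,
            // localInit: each thread starts with its own subtotal of zero
            () => 0L,
            // body: add the current element to the thread-local subtotal
            (i, loopState, subtotal) => subtotal + values[i],
            // localFinally: merge each thread's subtotal atomically; no lock needed
            subtotal => Interlocked.Add(ref total, subtotal)
        );

    Because the merge is a single atomic Interlocked.Add per thread, the localFinally delegate needs no lock object at all.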

    Read the article

  • Messed up installation of mysql-server - cannot complete installation or deinstallation

    - by Christian Engel
    apt-get got stuck while installing mysql-server. I don't know why but it just stopped working and never continued. I had to reboot the machine in the middle of the setup process. Now, if I try to install or purge the mysql-server package, apt-get tries to configure mysql-server first (it tells me it's not installed before that) and cancels with an error message: Sub-process /usr/bin/dpkg returned an error code(1) apt-get also tells me that two packages have not been successfully installed or removed. This is the complete console output: christian@devbox:~$ sudo apt-get install mysql-server [sudo] password for christian: Reading package lists... Done Building dependency tree Reading state information... Done mysql-server is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 17 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up mysql-server-5.5 (5.5.32-0ubuntu7) ... start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-server-5.5 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) christian@devbox:~$

    Read the article

  • How do I install 32-bit perl-tgicl 2.1.1?

    - by Daniel Fernandez
    I'm trying to install a .deb and I need some packages, but they are not in Synaptic. How can I install these packages: lib32z1 and libc6-i386? TGICL$ sudo dpkg -i perl-tgicl_2.1-1_all.deb Selecting previously deselected package perl-tgicl. (Reading database ... 168515 files and directories currently installed.) Unpacking perl-tgicl (from perl-tgicl_2.1-1_all.deb) ... dpkg: dependency problems prevent configuration of perl-tgicl: perl-tgicl depends on lib32z1 (>= 1:1.1.4); however: Package lib32z1 is not installed. perl-tgicl depends on libc6-i386 (>= 2.3); however: Package libc6-i386 is not installed. perl-tgicl depends on libfile-homedir-perl (>= 0.10); however: Package libfile-homedir-perl is not installed. perl-tgicl depends on libfile-spec-perl (>= 0.10); however: Package libfile-spec-perl is not installed. dpkg: error processing perl-tgicl (--install): dependency problems - leaving unconfigured Processing triggers for man-db ... Errors were encountered while processing: perl-tgicl My OS: $ uname -a Linux 3.0.0-12-generic-pae #20-Ubuntu SMP Fri Oct 7 16:37:17 UTC 2011 i686 i686 i386 GNU/Linux

    Read the article

  • How to fix "apt-get upgrade" errors?

    - by mohamad farid bin abdullah
    I get these errors when I try to upgrade the packages installed on my Ubuntu system: m@m-desktop ~ $ sudo apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2 not fully installed or removed. After this operation, 0B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up drbd8-source (2:8.3.7-1ubuntu2.3) ... Removing old drbd8-8.3.7 DKMS files... ------------------------------ Deleting module version: 8.3.7 completely from the DKMS tree. ------------------------------ Done. Loading new drbd8-8.3.7 DKMS files... First Installation: checking all kernels... Building only for 2.6.35-22-generic Building for architecture i386 Building initial module for 2.6.35-22-generic Error! Bad return status for module build on kernel: 2.6.35-22-generic (i386) Consult the make.log in the build directory /var/lib/dkms/drbd8/8.3.7/build/ for more information. dpkg: error processing drbd8-source (--configure): subprocess installed post-installation script returned error exit status 10 dpkg: dependency problems prevent configuration of drbd8-utils: drbd8-utils depends on drbd8-source; however: Package drbd8-source is not configured yet. dpkg: error processing drbd8-utils (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: drbd8-source drbd8-utils E: Sub-process /usr/bin/dpkg returned an error code (1) m@m-desktop ~ $

    Read the article

  • unmet dependencies and broken count>0 problem

    - by Simon
    I tried installing fbreader, following all the steps, but ended up with unmet dependencies. I also think a file is referenced in two locations at once, which is what is killing it. Any ideas how I can fix it? I've done a lot of research and tried: simon@simon-Studio-1558:~$ sudo apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following packages were automatically installed and are no longer required: dkms patch Use 'apt-get autoremove' to remove them. The following extra packages will be installed: libzlcore0.12 The following NEW packages will be installed: libzlcore0.12 0 upgraded, 1 newly installed, 0 to remove and 61 not upgraded. 6 not fully installed or removed. Need to get 0 B/270 kB of archives. After this operation, 811 kB of additional disk space will be used. Do you want to continue [Y/n]? y (Reading database ... 179860 files and directories currently installed.) Unpacking libzlcore0.12 (from .../libzlcore0.12_0.12.10dfsg-4_i386.deb) ... dpkg: error processing /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb (--unpack): trying to overwrite '/usr/lib/libzlcore.so.0.12.10', which is also in package libzlcore 0.12.10-1 No apport report written because MaxReports is reached already dpkg-deb: error: subprocess paste was killed by signal (Broken pipe) Errors were encountered while processing: /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb E: Sub-process /usr/bin/dpkg returned an error code (1) Sorry for the formatting, but it basically isn't liking: dpkg: error processing /var/cache/apt/archives/libzlcore0.12_0.12.10dfsg-4_i386.deb (--unpack): trying to overwrite '/usr/lib/libzlcore.so.0.12.10', which is also in package libzlcore 0.12.10-1 Any ideas? Also I don't care about keeping the program, but the error is stopping sudo apt-get remove fbreader from working too.

    Read the article

  • Big Data for Retail

    - by David Dorf
    Right up there with mobile, social, and cloud is the term "big data," which seems to be popping up lots in the press these days.  Companies like Google, Yahoo, and Facebook have popularized a new class of data technologies meant to solve the problem of processing large amounts of data quickly.  I first mentioned this in a posting back in March 2009.  Put simply, big data implies datasets so large they can't normally be processed using a standard transactional database.  The term "noSQL" is often used in this context as well. Actually, using parallel processing within the Oracle database combined with Exadata can achieve impressive results.  Look for more from Oracle at OpenWorld as hinted by Jean-Pierre Dijcks. McKinsey recently released a report on big data in which retail was specifically mentioned as an industry that can benefit from the new technologies.  I won't rehash that report because my friend Rama already did such a good job in his posting, Impact of "Big Data" on Retail. The presentation below does a pretty good job of framing the problem, although it doesn't really get into the available technologies (e.g. Exadata, Hadoop, Cassandra, etc.) and isn't retail specific. Determine the Right Analytic Database: A Survey of New Data Technologies So when a retailer asks me about big data, here's what I say:  Big data refers to a set of technologies for processing large volumes of structured and unstructured data.  Imagine collecting everything uttered by your customers on Facebook and Twitter and combining it with all the data you can find about the products you sell (e.g. reviews, images, demonstration videos), including competitive data.  Assuming you could process all that data, you could then personalize offers to specific customers based on their tastes, ensure prices are competitive, and implement better local assortments.  It's really not that far off.

    Read the article

  • How to install VLC when I get this error?

    - by YumYumYum
    How can I install VLC when it shows the following error? root@sun-desktop:/var/tmp# apt-get install vlc Reading package lists... Done Building dependency tree Reading state information... Done vlc is already the newest version. The following packages were automatically installed and are no longer required: liblash3 libreoffice-l10n-common libgsf-1-common libcutter-dev pocketsphinx-hmm-wsj1 libfluidsynth1 libftgl2 projectm-data libprojectm-qt1 libgnomevfs2-extra libbml0 libprojectm2 libpocketsphinx1 libsphinxbase1 buzztard-data libbabl-0.0-0 libgegl-0.0-0 libhal1 libgsf-1-114 libsidplay1 pocketsphinx-utils liboil0.3 pocketsphinx-lm-wsj libcutter0 cutter-testing-framework-bin Use 'apt-get autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 239 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up vlc-nox (1.1.9-1ubuntu1.3) ... /var/lib/dpkg/info/vlc-nox.postinst: 10: /usr/lib/vlc/vlc-cache-gen: not found dpkg: error processing vlc-nox (--configure): subprocess installed post-installation script returned error exit status 127 dpkg: dependency problems prevent configuration of vlc: vlc depends on vlc-nox (= 1.1.9-1ubuntu1.3); however: Package vlc-nox is not configured yet. dpkg: error processing vlc (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: vlc-nox vlc E: Sub-process /usr/bin/dpkg returned an error code (1) # sudo apt-get autoremove vlc vlc-nox Reading package lists... Done Building dependency tree Reading state information... Done Package vlc is not installed, so not removed Package vlc-nox is not installed, so not removed 0 upgraded, 0 newly installed, 0 to remove and 237 not upgraded.

    Read the article

  • not able to upgrade maas to 1.4?

    - by SaM
    I am running Ubuntu 13.04, and the maas version running is maas 1.3+bzr1470+dfsg-0+1474+175~ppa0~ubuntu13.04.1. I'm trying to upgrade it to maas 1.4 but it's failing: sam@xsmaas01:~$ sudo apt-get install maas [sudo] password for sam: Reading package lists... Done Building dependency tree Reading state information... Done The following packages will be upgraded: maas 1 upgraded, 0 newly installed, 0 to remove and 87 not upgraded. 2 not fully installed or removed. Need to get 0 B/1,912 B of archives. After this operation, 1,024 B of additional disk space will be used. (Reading database ... 85268 files and directories currently installed.) Preparing to replace maas 1.3+bzr1470+dfsg-0+1474+175~ppa0~ubuntu13.04.1 (using .../maas_1.4+bzr1693+dfsg-0ubuntu2~ctools0_all.deb) ... Unpacking replacement maas ... Setting up maas-cluster-controller (1.4+bzr1693+dfsg-0ubuntu2~ctools0) ... ERROR: Module version does not exist! dpkg: error processing maas-cluster-controller (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of maas: maas depends on maas-cluster-controller; however: Package maas-cluster-controller is not configured yet. dpkg: error processing maas (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: maas-cluster-controller maas E: Sub-process /usr/bin/dpkg returned an error code (1) sam@maas01:~$ Can anyone help me with this?

    Read the article

  • Problem after mysql-server installation: I can't install anything in Ubuntu 12.04.1 now

    - by mohammed ezzi
    I'm not an advanced user of Linux. I wanted to work with a database, so I installed mysql-server. I think I did something wrong and got into trouble, and now I can't install anything. This is what I get when I use apt-get -f install: root@me:~# apt-get -f install Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: mysql-server mysql-server-5.5 Suggested packages: tinyca mailx The following packages will be upgraded: mysql-server mysql-server-5.5 2 upgraded, 0 newly installed, 0 to remove and 194 not upgraded. 2 not fully installed or removed. Need to get 0 B/8,737 kB of archives. After this operation, 15.4 kB of additional disk space will be used. Do you want to continue [Y/n]? y dpkg: dependency problems prevent configuration of mysql-server-5.5: mysql-server-5.5 depends on mysql-server-core-5.5 (= 5.5.24-0ubuntu0.12.04.1); however: Version of mysql-server-core-5.5 on system is 5.5.28-0ubuntu0.12.04.3. dpkg: error processing mysql-server-5.5 (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: mysql-server-5.5 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1) I tried to remove mysql-server but nothing happened.

    Read the article

  • Microsoft&rsquo;s new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year.  Microsoft literally buys computers by the truckload.  From what I understand, it’s a typical practice amongst large software vendors.  You plug a few wires in, you test it, and you instantly have mega tera tera flops (don’t hold me to that number).  Microsoft has been trying to plug away at their cloud services (named Azure).  Which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly. With this in mind, it doesn’t surprise me that I was recently sent an executive email concerning Microsoft’s new technical computing initiative.  I find it to be a great marketing idea with actual substance behind their real work.  From the programmer academic perspective, in college we dreamed about this type of processing power.  This has decades of computer science theory behind it. A copy of the email received.  (note that I almost deleted this email, thinking it was spam due to it’s length) We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly. Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But, they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And, to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges. We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves. 
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries. Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run, will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems. Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980's, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group. Enabling more people to make better predictions We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers: Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day. Aerospace engineering firm, a.i. solutions, Inc., needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. 
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars. Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections. Our Technical Computing direction Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas: Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high- performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed-reliably, consistently and quickly. Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and trouble shoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud. Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology. Thinking bigger There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible: Better predictions to help improve the understanding of pandemics, contagion and global health trends. 
Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates. More accurate prediction of natural disasters and their impact to develop more effective emergency response plans. With an ambitious charter in hand, this new team is ready to build on our progress to-date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better. Bob

    Read the article

  • Cannot uninstall myspell-he

    - by Yuval Rabinovich
    I tried to install a Hebrew spell checker for LibreOffice. I downloaded a package named myspell-he_1.1-1_all.deb and tried to install it through the Software Center. Installation failed, and I can neither complete the installation nor remove it. Since then, whenever I run Update Manager I get an error message: The package system is broken. In details: The following packages have unmet dependencies: dictionaries-common: When I try to run the Software Center I get a pop-up window: Items cannot be installed or removed until the package catalog is repaired. A Repair option is suggested. If I click it, the following message appears: installArchives() failed: dpkg: dependency problems prevent configuration of myspell-he: dictionaries-common (1.12.1ubuntu2) breaks myspell-he (<= 1.1-1) and is installed. Version of myspell-he to be configured is 1.1-1. dpkg: error processing myspell-he (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: myspell-he Error in function: dpkg: dependency problems prevent configuration of myspell-he: dictionaries-common (1.12.1ubuntu2) breaks myspell-he (<= 1.1-1) and is installed. Version of myspell-he to be configured is 1.1-1. dpkg: error processing myspell-he (--configure): dependency problems - leaving unconfigured How can I clear this mess?

    Read the article

  • Error when upgrading initscripts using dist-upgrade on a cd image using uck

    - by InkBlend
    I was using UCK to customize an Ubuntu 11.10 image, and ran a dist-upgrade on it (from the console) to try to update all of the packages on it. The upgrade worked successfully for all but two packages, so I tried again and got the same error message. This is what happened: # sudo apt-get dist-upgrade Reading package lists... Done Building dependency tree Reading state information... Done Calculating upgrade... Done 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up initscripts (2.88dsf-13.10ubuntu4.1) ... guest environment detected: Linking /run/shm to /dev/shm rmdir: failed to remove `/run/shm': Device or resource busy Can't symlink /run/shm to /dev/shm; please fix manually. dpkg: error processing initscripts (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already dpkg: dependency problems prevent configuration of ifupdown: ifupdown depends on initscripts (>= 2.88dsf-13.3); however: Package initscripts is not configured yet. dpkg: error processing ifupdown (--configure): dependency problems - leaving unconfigured No apport report written because MaxReports is reached already Errors were encountered while processing: initscripts ifupdown E: Sub-process /usr/bin/dpkg returned an error code (1) # How can I fix this? Is the problem with the ISO, dpkg, or can I just not upgrade some packages with UCK? I am using Ubuntu 11.10 and UCK 2.4.5 to customize an Ubuntu 11.10 image.

    Read the article

  • Want to book hotel stay using Bitcoin? Book using Expedia.com

    - by Gopinath
    The online travel booking leader Expedia announced that it has started accepting Bitcoins for booking hotels on its website. For those who are new to Bitcoin, it is a digital currency in which transactions can be performed without the need for a central bank – it's more like an internet of currency. At the moment Expedia is accepting Bitcoin payments only for hotel bookings, and in the future it may allow flight and vacation package bookings. When Expedia customers want to pay for a hotel using Bitcoin, they are transferred to Coinbase, a third-party Bitcoin payment processor, to make the payment and are then redirected back to Expedia.com to complete the booking. This simple process would definitely drive mainstream adoption of Bitcoin – a win-win situation for digital currency users as well as for the travel company, which saves a lot on card processing fees. Online retailers pay around 3% of the transaction amount in fees to credit card processing companies like Visa & MasterCard when they accept cards, but Coinbase charges a fee of just 1 percent for processing Bitcoins. Irrespective of customer adoption of Bitcoin-based payments on Expedia, as well as the savings on transaction fees, this move gives Expedia bragging rights as the first e-commerce giant to accept digital currency! Image credit: Jonathan Caves

    Read the article

  • Unable to install mysql in ubuntu

    - by Anand
    I purged my existing mysql-server from Ubuntu and re-installed it. This was because there was an upgrade of my Ubuntu and I was unable to start my MySQL server. After cleaning up and taking a backup of my data, I re-installed mysql. During installation I received the following popup message: Unable to set password for the MySQL "root" user An error occurred while setting the password for the MySQL administrative user. This may have happened because the account already has a password, or because of a communication problem with the MySQL server. You should check the account's password after the package installation. Please read the /usr/share/doc/mysql-server-5.5/README.Debian file for more information. After the installation failed, the following errors were received: 121109 20:36:18 InnoDB: Initializing buffer pool, size = 128.0M 121109 20:36:18 InnoDB: Completed initialization of buffer pool 121109 20:36:18 InnoDB: highest supported file format is Barracuda. 121109 20:36:18 InnoDB: Waiting for the background threads to start 121109 20:36:19 InnoDB: 1.1.8 started; log sequence number 2395841 ERROR: 130 Incorrect file format 'user' 121109 20:36:19 [ERROR] Aborting 121109 20:36:19 InnoDB: Starting shutdown... 121109 20:36:20 InnoDB: Shutdown completed; log sequence number 2395841 121109 20:36:20 [Note] /usr/sbin/mysqld: Shutdown complete start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-5.5; however: Package mysql-server-5.5 is not configured yet. dpkg: error processing mysql-server (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates its a followup error from a previous failure. Errors were encountered while processing: mysql-server-5.5 mysql-server E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Items cannot be installed or removed

    - by Gyanendra Kumar Gyan
    installArchives() failed: (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 136187 files and directories currently installed.) Removing pidgin-ppa ... gpg: key "67265EB522BDD6B1C69E66ED7FB8BEE0A1F196A8" not found: eof gpg: 67265EB522BDD6B1C69E66ED7FB8BEE0A1F196A8: delete key failed: eof dpkg: error processing pidgin-ppa (--remove): subprocess installed post-removal script returned error exit status 2 No apport report written because MaxReports is reached already Processing triggers for ureadahead ... ureadahead will be reprofiled on next reboot Errors were encountered while processing: pidgin-ppa Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • apt-get failed install of libg15, all package management is failing

    - by Stifle
    I was trying to get my Logitech G510 keyboard's back-lights working so I went into the Synaptic Package Manager and marked LibG15, G15daemon, and all the other associated packages. Synaptic reported a failed install. Now all Package management is failing due to libg15 being "halfway installed." Some commands I have tried to fix the problem follow. . . root@bt:~# apt-get upgrade Reading package lists... Done Building dependency tree Reading state information... Done E: The package libg15 needs to be reinstalled, but I can't find an archive for it. root@bt:~# sudo apt-get autoremove Reading package lists... Done Building dependency tree Reading state information... Done E: The package libg15 needs to be reinstalled, but I can't find an archive for it. root@bt:~# sudo apt-get -f purge libg15 Reading package lists... Done Building dependency tree Reading state information... Done E: The package libg15 needs to be reinstalled, but I can't find an archive for it. root@bt:~# sudo dpkg --configure -a dpkg: dependency problems prevent configuration of g15macro: g15macro depends on g15daemon; however: Package g15daemon is not configured yet. dpkg: error processing g15macro (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of g15stats: g15stats depends on g15daemon; however: Package g15daemon is not configured yet. dpkg: error processing g15stats (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: g15macro g15stats I'm not too computer savvy. Any help would be much appreciated!!! Note: I'm using Ubuntu 10.04 under Backtrack 5 R3.

    Read the article

  • Unable to access any ubuntu shares from android/windows clients

    - by dan
    I am running Ubuntu 11.04 and can't seem to access any of my shares. Here is the output from testparm -s: Load smb config files from /etc/samba/smb.conf rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384) Processing section "[printers]" Processing section "[CanonMG2100AIO]" Processing section "[FreeAgent Drive]" Loaded services file OK. WARNING: You have some share names that are longer than 12 characters. These may not be accessible to some older clients. (Eg. Windows9x, WindowsMe, and smbclient prior to Samba 3.0.) Server role: ROLE_STANDALONE [global] server string = %h server (Samba, Ubuntu) encrypt passwords = No obey pam restrictions = Yes pam password change = Yes passwd program = /usr/bin/passwd %u passwd chat = Enter\snew\s\spassword:* %n\n Retype\snew\s\spassword:* %n\n password\supdated\ssuccessfully . username map = /etc/samba/smbusers unix password sync = Yes syslog = 0 log file = /var/log/samba/log.%m max log size = 1000 name resolve order = wins lmhosts host bcast dns proxy = No wins support = Yes usershare allow guests = Yes panic action = /usr/share/samba/panic-action %d [printers] comment = All Printers path = /var/spool/samba create mask = 0700 guest ok = Yes printable = Yes browseable = No [CanonMG2100AIO] comment = Printer Drivers path = /var/lib/samba/printers read only = No guest ok = Yes [FreeAgent Drive] path = /media/FreeAgent Drive read only = No guest ok = Yes smbtree: Server requested plaintext password but 'client plaintext auth' is disabled anonymous failed session setup with NT_STATUS_INVALID_PARAMETER Server requested plaintext password but 'client plaintext auth' is disabled anonymous failed session setup with NT_STATUS_INVALID_PARAMETER and hostname: dekstop. I know the spelling of desktop is incorrect; it was a duh moment. Any help would be greatly appreciated.

    Read the article

  • dpkg error when using apt-get install

    - by V-T
    I upgraded to Ubuntu 14.04 from 12.04, and every time I use apt-get install for any package it ends with a bunch of errors about processing some of my LaTeX packages; a snippet is included below: Sometimes, not accepting conffile updates in /etc/texmf/updmap.d causes updmap-sys to fail. Please check for files with extension .dpkg-dist or .ucf-dist in this directory dpkg: error processing package tex-common (--configure): subprocess installed post-installation script returned error exit status 1 dpkg: dependency problems prevent configuration of lmodern: lmodern depends on tex-common (>= 3); however: Package tex-common is not configured yet. This is reproduced by using sudo dpkg --configure -a, and a full list of packages with this error is included here: Errors were encountered while processing: tex-common texlive-publishers tex-gyre texlive-latex-extra-doc texlive-fonts-extra-doc texlive-lang-english texlive-luatex texlive-generic-recommended texlive-pstricks-doc texlive-fonts-recommended latex2html latex-xcolor texlive-pictures texlive-fonts-extra texlive-pictures-doc asymptote texlive-bibtex-extra texlive-latex-recommended-doc texlive-latex-recommended doxygen-latex texlive-pstricks tipa texlive-latex-base texlive-fonts-recommended-doc latex-beamer texlive-font-utils texlive-latex-base-doc texlive-latex-extra texlive-extra-utils texlive texlive-publishers-doc lmodern Any ideas on how to fix this?

    Read the article

  • Lifecycle of an ASP.NET MVC 5 Application

    Here you can download a PDF document that charts the lifecycle of every ASP.NET MVC 5 application, from receiving the HTTP request to sending the HTTP response back to the client. It is designed both as an educational tool for those who are new to ASP.NET MVC and also as a reference for those who need to drill into specific aspects of the application. The PDF document has the following features: Relevant HttpApplication stages to help you understand where MVC integrates into the ASP.NET application lifecycle. A high-level view of the MVC application lifecycle, where you can understand the major stages that every MVC application passes through in the request processing pipeline. A detail view that drills down into the details of the request processing pipeline. You can compare the high-level view and the detail view to see how the lifecycle's details are collected into the various stages. Placement and purpose of all overridable methods on the Controller object in the request processing pipeline. You may or may not need to override any one method, but it is important for you to understand their role in the application lifecycle so that you can write code at the appropriate lifecycle stage for the effect you intend. Blown-up diagrams showing how each of the filter types (authentication, authorization, action, and result) is invoked. A link to a useful article or blog from each point of interest in the detail view.
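    As a rough illustration of what overriding those Controller methods looks like in practice (this sketch is not from the PDF; the controller and the Trace logging are assumed purely for the example):

        using System.Diagnostics;
        using System.Web.Mvc;

        // Hypothetical controller showing where two of the overridable methods
        // sit relative to an action in the request processing pipeline.
        public class ProductsController : Controller
        {
            // Runs just before the action method is invoked.
            protected override void OnActionExecuting(ActionExecutingContext filterContext)
            {
                Trace.WriteLine("Executing action: " + filterContext.ActionDescriptor.ActionName);
                base.OnActionExecuting(filterContext);
            }

            // Runs after the action result (for example, the view) has finished executing.
            protected override void OnResultExecuted(ResultExecutedContext filterContext)
            {
                Trace.WriteLine("Result executed for: " + filterContext.RouteData.Values["action"]);
                base.OnResultExecuted(filterContext);
            }

            public ActionResult Index()
            {
                return View();
            }
        }

    Placing logging or other cross-cutting code in overrides like these, rather than inside each action, is exactly the kind of decision the lifecycle document is meant to inform.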

    Read the article

  • Sharing business logic between server-side and client-side of web application?

    - by thoughtpunch
    Quick question concerning shared code/logic in the back and front ends of a web application. I have a web application (Rails + heavy JS) that parses metadata from HTML pages fetched via a user-supplied URL (think Pinterest or Instapaper). Currently this processing takes place exclusively on the client side. The code that fetches the URL and parses the DOM is in a fairly large set of JS scripts in our Rails app. Occasionally we want to do this processing on the server side of the app. For example, what if a user supplied a URL but they have JS disabled, or have a non-standards-compliant browser, etc.? Ideally I'd like to be able to process these URLs in Ruby on the back end (in asynchronous background jobs perhaps) using the same logic that our JS parsers use WITHOUT porting the JS to Ruby. I've looked at systems that allow you to execute JS scripts in the backend like execjs, as well as Ruby-to-JavaScript compilers like OpalRB that would hopefully allow "write-once, execute many", but I'm not sure that either is the right decision. What's the best way to avoid business logic duplication for apps that need to do both client-side and server-side processing of similar data?

    Read the article

  • "sha256sum mismatch jdk-7u3-linux-x64.tar.gz " error when trying to install Oracle Java

    - by Fawkes5
    I recently tried installing Java 7 on Ubuntu 12.04 and I think I screwed something up. I followed the instructions given here. First you need to remove openjdk; for this run the following command from your terminal: sudo apt-get purge openjdk* Now you can install Java 7 by adding the following repository: sudo add-apt-repository ppa:eugenesan/java sudo apt-get update sudo apt-get install oracle-java7-installer Now every time I install a new program I get the following error: Download done. sha256sum mismatch jdk-7u3-linux-x64.tar.gz Oracle JDK 7 is NOT installed. dpkg: error processing oracle-java7-installer (--configure): subprocess installed post-installation script returned error exit status 1 Setting up python-central (0.6.17ubuntu1) ... Setting up python-eggtrayicon (2.25.3-11) ... Setting up gmail-notify (1.6.1.1-1ubuntu1) ... Processing triggers for python-central ... Errors were encountered while processing: oracle-java7-installer Error in function: However, the program seems to install and work just fine, so it doesn't seem to be a problem preventing me from doing anything. So then I reinstalled openjdk by running: sudo apt-get install openjdk* But I still get the same error. Running sudo apt-get install oracle-java7-installer gives me the same error. What is going on? Please let me know if this is clear or not and I'll try to explain my issue better.

    Read the article

  • Are long methods always bad?

    - by wobbily_col
    So looking around earlier I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. I have my code written so that it deals with the parameters - sorting / filtering the queryset, then bit by bit does some processing on the objects my query has returned. So the processing is mainly conditional aggregation that has complex enough rules it can't easily be done in the database, so I have some variables declared outside the main loop that get altered during the loop:

        variable_1 = 0
        variable_2 = 0
        for object in queryset:
            if object.condition_condition_a and variable_2 > 0:
                variable_1 += 1
            # ..... more conditions to alter the variables
        return queryset, context

    So according to the theory I should factor out all the code into smaller methods, so that the view method is at most one page long. However, having worked on various code bases in the past, I sometimes find this makes the code less readable, when you need to constantly jump from one method to the next to figure out all the parts of it, while keeping the outermost method in your head. I find that with a long method that is well formatted, you can see the logic more easily, as it isn't getting hidden away in inner methods. I could factor out the code into smaller methods, but often there is an inner loop being used for two or three things, so it would result in more complex code, or methods that don't do one thing but two or three (alternatively I could repeat inner loops for each task, but then there will be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing methods when they will only be used in one place?

    Read the article

  • What to do when 'dpkg --configure -a' fails with too many errors?

    - by rudivonstaden
    During an upgrade from lucid (10.04) to precise (12.04), the X session froze, and I have been trying to recover the upgrade to get a stable system. I have performed the following steps: Used ssh to log in to the stalled system over the network. Checked the contents of the /var/log/dist-upgrade directory. There was no activity on main.log, apt.log or term.log. top showed that process 'precise' was using about 3% CPU, but I could find no evidence that the upgrade process was still doing anything. 'dpkg' did not show up in top, but it came up with pgrep dpkg | xargs ps Killed the 'dpkg' and 'precise' processes Tried to recover the upgrade by running sudo fuser -vki /var/lib/dpkg/lock;sudo dpkg --configure -a. This was partially successful (some packages were configured), but failed with the message Processing was halted because there were too many errors. I ran the same command a few times, and each time some packages were configured but others failed. Tried running sudo apt-get -f install. It fails with similar errors to dpkg. The current situation is that dpkg --configure -a and sudo apt-get -f install fails with two kinds of error: Dependency issues, e.g.: dpkg: dependency problems prevent configuration of cifs-utils: cifs-utils depends on samba-common; however: Package samba-common is not configured yet. dpkg: error processing cifs-utils (--configure): dependency problems - leaving unconfigured Resource conflict, e.g.: debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable Additionally, it seems there's reference to potential boot problems, so I'm not keen to reboot without fixing the install first: dpkg: too many errors, stopping Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.2.0-25-generic cryptsetup: WARNING: failed to detect canonical device of /dev/sda1 cryptsetup: WARNING: could not determine root device from /etc/fstab So my question is, how to get a working install when dpkg --configure -a fails?

    Read the article

  • Why am I getting this error while installing gnome?

    - by Sreehari Rajendran
    I have Raring Ringtail. I installed GNOME a couple of days ago. I try to install extensions, but the site says I don't have the latest version. I type the command sudo apt-get install gnome-shell and I get this error: Reading package lists... Done Building dependency tree Reading state information... Done gnome-shell is already the newest version. 0 upgraded, 0 newly installed, 0 to remove and 138 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? y Setting up initramfs-tools (0.103ubuntu0.7) ... update-initramfs: deferring update (trigger activated) Processing triggers for initramfs-tools ... update-initramfs: Generating /boot/initrd.img-3.8.0-25-generic cp: cannot stat ‘/module-files.d/libpango1.0-0.modules’: No such file or directory cp: cannot stat ‘/modules/pango-basic-fc.so’: No such file or directory E: /usr/share/initramfs-tools/hooks/plymouth failed with return 1. update-initramfs: failed for /boot/initrd.img-3.8.0-25-generic with 1. dpkg: error processing initramfs-tools (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: initramfs-tools E: Sub-process /usr/bin/dpkg returned an error code (1) Why?

    Read the article

  • State Changes in a Component Based Architecture [closed]

    - by Maxem
    I'm currently working on a game and using the naive component based architecture thingie (entities are a bag of components, entity.Update() calls Update on each updateable component). While the addition of new features is really simple, it makes a few things really difficult: a) multithreading / concurrency, b) networking, c) unit testing. Multithreading / concurrency is difficult because I basically have to do poor man's concurrency (running the entity updates in separate threads while locking only stuff that crashes (like lists) and ignoring the staleness of read state (some states are already updated, others aren't)). Networking: there are no explicit state changes that I could efficiently push over the net. Unit testing: all updates may or may not conflict, so automated testing is at least awkward. I was thinking about these issues a bit and would like your input on these changes / ideas: Switch from the naive CBA to a CBA with sub-systems that work on lists of components. Make all state changes explicit. Combine 1 and 2 :p Example world update:

        statePostProcessing.Wait()  // ensure that post processing has finished
        Apply(postProcessedState)
        state = new StateBag()
        Concurrently(
            () => LifeCycleSubSystem.Update(state), // populates the state bag
            () => MovementSubSystem.Update(state),  // populates the state bag
            ....
        )
        statePostProcessing = Future(() => PostProcess(state))
        statePostProcessing.Start()
        // Tick is finished, the post processing happens in the background

    So basically the changes are (consistently) based on the data for the last tick; the post processing can a) generate network packages and b) fix conflicts / remove useless changes (example: entity has been destroyed - ignore movement etc.). EDIT: To clarify the granularity of the state changes: if I save these post-processed state bags and apply them to an empty world, I see exactly what has happened in the game these state bags originated from - "free" replay capability. EDIT2: I guess I should have used the term Event instead of State Change, and point out that I kind of want to use the Event Sourcing pattern.
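    For what it's worth, here is a minimal C# sketch of the sub-system-per-tick idea described above (StateBag and the sub-system/component names follow the question; everything else is assumed purely for illustration):

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        // One explicit, pushable/replayable change produced during a tick.
        record StateChange(int EntityId, string Field, float NewValue);

        // Thread-safe bag of changes that sub-systems populate instead of
        // mutating entities in place.
        class StateBag
        {
            public ConcurrentBag<StateChange> Changes { get; } = new();
        }

        class MovementComponent
        {
            public int EntityId;
            public float X, VelX;
        }

        // A sub-system works on the list of components it owns.
        class MovementSubSystem
        {
            public List<MovementComponent> Components { get; } = new();

            public void Update(StateBag state)
            {
                // Reads only last tick's data; all writes go into the bag.
                foreach (var c in Components)
                    state.Changes.Add(new StateChange(c.EntityId, "X", c.X + c.VelX));
            }
        }

        class World
        {
            readonly MovementSubSystem movement = new();
            Task postProcessing = Task.CompletedTask;

            public void Tick()
            {
                postProcessing.Wait();           // previous tick's post processing must finish
                // Applying the post-processed changes back to the components
                // (the Apply step in the question) is omitted here for brevity.
                var state = new StateBag();
                Parallel.Invoke(                 // run the sub-systems concurrently
                    () => movement.Update(state)
                    /* ...other sub-systems... */);
                postProcessing = Task.Run(() =>  // post process in the background
                    Console.WriteLine(state.Changes.Count + " changes this tick"));
            }
        }

    Because each tick's output is just a bag of StateChange records, the same bag could be serialized for networking or replayed against an empty world, which is the "free" replay property mentioned in the EDIT.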

    Read the article

< Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >