Search Results

Search found 908 results on 37 pages for 'optimistic concurrency'.

Page 22/37 | < Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >

  • Two Phase Commit with MongoDB

    - by mattcodes
    Here's what I'm thinking. Do you see any issues with this workaround to emulate two-phase commit when using something like MongoDB, where each operation is atomic and there is no support for transactions beyond that?

    transaction_scope:
    1. Read message from service bus - UpdateCustomerAddress.
    2. Get the customer aggregate from the docdb, replaying events where committed = 1.
    3. Call customer.updateAddress, which validates and creates a customer-address-updated event; apply the event and store it in the event store as uncommitted.
    4. Do an optimistic concurrency update against the docdb, pushing the uncommitted events (a single op, to ensure consistency) - see the sketch below.
    5. Publish the event to the service bus.
    6. Update the docdb, setting the events just published to committed = 1 (again one op - at least in MongoDB).
    transaction_complete
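
    A minimal sketch of the optimistic-concurrency step (4), assuming a "version" field on the aggregate document; it uses the current MongoDB C# driver's builder API, which postdates this question, and the collection layout and field names are illustrative, not the poster's:

        using System.Collections.Generic;
        using MongoDB.Bson;
        using MongoDB.Driver;

        public static class EventStoreOps
        {
            // Append uncommitted events to the aggregate document only if the
            // expected version still matches -- one atomic operation in MongoDB.
            // Returns false when another writer got there first: reload the
            // aggregate, replay, and retry (the optimistic-concurrency conflict path).
            public static bool TryAppendEvents(
                IMongoCollection<BsonDocument> customers,
                ObjectId customerId,
                int expectedVersion,
                IEnumerable<BsonValue> uncommittedEvents)
            {
                var filter = Builders<BsonDocument>.Filter.Eq("_id", customerId)
                           & Builders<BsonDocument>.Filter.Eq("version", expectedVersion);

                var update = Builders<BsonDocument>.Update
                    .PushEach("events", uncommittedEvents)
                    .Inc("version", 1);

                return customers.UpdateOne(filter, update).ModifiedCount == 1;
            }
        }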

    Read the article

  • How to use db4o IObjectContainer in a web application? (Container lifetime?)

    - by driis
    I am evaluating db4o for persistence for an ASP.NET MVC project. I am wondering how I should use the IObjectContainer in a web context with regard to object lifetime. As I see it, I can do one of the following:
    1. Create the IObjectContainer at application startup and keep the same instance for the entire application lifetime.
    2. Create one IObjectContainer per request.
    3. Start a server, and get a client IObjectContainer for each database interaction.
    What are the implications of these options, in terms of performance and concurrency? Since the database is locked when an IObjectContainer is opened, I am pretty sure that option 2 would get me some problems with concurrency - would this also be the case for option 1? As I understand it, if I retrieve an object from an IObjectContainer, it must be saved by the same IObjectContainer instance in order for db4o to identify it as being the same object. Therefore, if I choose option 3, I would have to retrieve the original object, make the necessary changes (copy the data from the modified object), and then store it using the same IObjectContainer. Is this true?
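
    For illustration, a rough sketch of option 3 using the classic db4o .NET API (Db4oFactory, native queries); treat the exact API names as assumptions that depend on the db4o version in use:

        using System.Collections.Generic;
        using Db4objects.Db4o;

        public static class Db4oSession
        {
            // Port 0 gives an in-process ("embedded") server: no networking, but
            // every OpenClient() call gets its own transaction and reference
            // cache, rather than one shared file-locking container.
            private static readonly IObjectServer Server =
                Db4oFactory.OpenServer("app.db4o", 0);

            public static void RenameCustomer(string oldName, string newName)
            {
                IObjectContainer client = Server.OpenClient();
                try
                {
                    // Retrieve and store through the SAME container so db4o
                    // recognizes the instance as the same object.
                    IList<Customer> matches =
                        client.Query<Customer>(c => c.Name == oldName);
                    if (matches.Count == 0) return;

                    Customer customer = matches[0];
                    customer.Name = newName;
                    client.Store(customer);
                    client.Commit();
                }
                finally
                {
                    client.Close();
                }
            }
        }

        public class Customer { public string Name; }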

    Read the article

  • What's the easy way to test for 304 responses?

    - by John Mee
    I need to test whether my 304 responses are working, but my development environment is pretty hard-set on forcing no-cache. Is there an easy way to modify the max-age value of the Cache-Control header before it goes out? I'm perhaps a bit optimistic in hoping Chrome (or an extension, or Firefox) has a console command letting me alter the If-Modified-Since header and then send the request. Maybe paste something into a telnet connection?
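
    One low-tech alternative to a browser console is a conditional GET from code. A minimal sketch using HttpWebRequest (the URL is a placeholder; note that .NET surfaces a 304 as a WebException):

        using System;
        using System.Net;

        class ConditionalGetProbe
        {
            static void Main()
            {
                var request = (HttpWebRequest)WebRequest.Create("http://localhost:8000/page");
                // Pretend our cached copy is brand new, so a correctly behaving
                // server should answer 304 Not Modified with an empty body.
                request.IfModifiedSince = DateTime.UtcNow;

                try
                {
                    using (var response = (HttpWebResponse)request.GetResponse())
                        Console.WriteLine("Got {0} -- the 304 path was NOT taken", response.StatusCode);
                }
                catch (WebException ex)
                {
                    var r = ex.Response as HttpWebResponse;
                    if (r != null && r.StatusCode == HttpStatusCode.NotModified)
                        Console.WriteLine("304 Not Modified -- conditional GET works");
                    else
                        throw;
                }
            }
        }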

    Read the article

  • Noise with multi-threaded raytracer

    - by herber88
    This is my first multi-threaded implementation, so it's probably a beginner's mistake. The threads handle the rendering of every second row of pixels (so all rendering is handled within each thread). The problem persists if the threads instead render the upper and lower halves of the screen respectively. Both threads read from the same variables; can this cause any problems? From what I've understood, only writing can cause concurrency problems... Can calling the same functions cause any concurrency problems? Again, from what I've understood this shouldn't be a problem... The only time both threads write to the same variable is when saving the calculated pixel color. This is stored in an array, but they never write to the same indices in that array. Can this cause a problem? Multi-threaded rendered image (spam prevention stops me from posting images directly..) PS: I use exactly the same implementation in both cases; the ONLY difference is a single vs. two threads created for the rendering.
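
    For reference, a stripped-down C# model of the arrangement described (two threads, disjoint rows, one shared buffer); this illustrates the question's setup, not the asker's actual code:

        using System.Threading;

        class InterleavedRender
        {
            const int Width = 640, Height = 480;
            static readonly int[] Pixels = new int[Width * Height];

            // Reads of shared, immutable scene data are safe, and so are writes
            // to disjoint indices -- provided nothing mutates the scene and each
            // index really is written by exactly one thread.
            static void RenderRows(int firstRow)
            {
                for (int y = firstRow; y < Height; y += 2)   // every second row
                    for (int x = 0; x < Width; x++)
                        Pixels[y * Width + x] = TracePixel(x, y);
            }

            static int TracePixel(int x, int y) { return (x ^ y) & 0xFF; } // stand-in

            static void Main()
            {
                var t0 = new Thread(() => RenderRows(0));
                var t1 = new Thread(() => RenderRows(1));
                t0.Start(); t1.Start();
                t0.Join(); t1.Join();
            }
        }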

    Read the article

  • Postgresql has broken apt-get on Ubuntu

    - by Raphie Palefsky-Smith
    On Ubuntu 12.04, whenever I try to install a package using apt-get I'm greeted by:

        The following packages have unmet dependencies:
         postgresql-9.1 : Depends: postgresql-client-9.1 but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    apt-get install postgresql-client-9.1 generates:

        The following packages have unmet dependencies:
         postgresql-client-9.1 : Breaks: postgresql-9.1 (< 9.1.6-0ubuntu12.04.1) but 9.1.3-2 is to be installed

    apt-get -f install and apt-get remove postgresql-9.1 both give:

        Removing postgresql-9.1 ...
         * Stopping PostgreSQL 9.1 database server
         * Error: /var/lib/postgresql/9.1/main is not accessible or does not exist
           ...fail!
        invoke-rc.d: initscript postgresql, action "stop" failed.
        dpkg: error processing postgresql-9.1 (--remove):
         subprocess installed pre-removal script returned error exit status 1
        Errors were encountered while processing:
         postgresql-9.1
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    So, apt-get is crippled, and I can't find a way out. Is there any way to resolve this without a re-install?

    EDIT: apt-cache show postgresql-9.1 returns three stanzas, for versions 9.1.6-0ubuntu12.04.1, 9.1.5-0ubuntu12.04, and the installed 9.1.3-2; apart from the version-specific fields, file names, sizes, and checksums, they are identical. The first is:

        Package: postgresql-9.1
        Priority: optional
        Section: database
        Installed-Size: 11164
        Maintainer: Ubuntu Developers <[email protected]>
        Original-Maintainer: Martin Pitt <[email protected]>
        Architecture: amd64
        Version: 9.1.6-0ubuntu12.04.1
        Replaces: postgresql-contrib-9.1 (<< 9.1~beta1-3~), postgresql-plpython-9.1 (<< 9.1.6-0ubuntu12.04.1)
        Depends: libc6 (>= 2.15), libcomerr2 (>= 1.01), libgssapi-krb5-2 (>= 1.8+dfsg), libkrb5-3 (>= 1.6.dfsg.2), libldap-2.4-2 (>= 2.4.7), libpam0g (>= 0.99.7.1), libpq5 (>= 9.1~), libssl1.0.0 (>= 1.0.0), libxml2 (>= 2.7.4), postgresql-client-9.1, postgresql-common (>= 115~), tzdata, ssl-cert, locales
        Suggests: oidentd | ident-server, locales-all
        Conflicts: postgresql (<< 7.5)
        Breaks: postgresql-plpython-9.1 (<< 9.1.6-0ubuntu12.04.1)
        Filename: pool/main/p/postgresql-9.1/postgresql-9.1_9.1.6-0ubuntu12.04.1_amd64.deb
        Size: 4298270
        MD5sum: 9ee2ab5f25f949121f736ad80d735d57
        SHA1: 5eac1cca8d00c4aec4fb55c46fc2a013bc401642
        SHA256: 4e6c24c251a01f1b6a340c96d24fdbb92b5e2f8a2f4a8b6b08a0df0fe4cf62ab
        Description-en: object-relational SQL database, version 9.1 server
         PostgreSQL is a fully featured object-relational database management system.
         It supports a large part of the SQL standard and is designed to be extensible
         by users in many aspects. Some of the features are: ACID transactions,
         foreign keys, views, sequences, subqueries, triggers, user-defined types and
         functions, outer joins, multiversion concurrency control. Graphical user
         interfaces and bindings for many programming languages are available as well.
         .
         This package provides the database server for PostgreSQL 9.1. Servers for
         other major release versions can be installed simultaneously and are
         coordinated by the postgresql-common package. A package providing
         ident-server is needed if you want to authenticate remote connections with identd.
        Homepage: http://www.postgresql.org/
        Description-md5: c487fe4e86f0eac09ed9847282436059
        Bugs: https://bugs.launchpad.net/ubuntu/+filebug
        Origin: Ubuntu
        Supported: 5y
        Task: postgresql-server

    Read the article

  • It was a figure of speech!

    - by Ratman21
    Yesterday I posted the following as an attention getter / advertisement (as well as my feelings) in the groups I am in on the social networking site LinkedIn, and boy did I get responses.

    "I am fighting mad about (a figure of speech, really) not having a job! Look, just because I am over 55 and have gray hair, it does not mean my brain is dead, or that I can no longer troubleshoot a router or circuit or LAN issue, or that I cannot do 'IT' work at all. And I could prove this if someone would give me a job. Come on, try me for 90 days at min. wage. I know you will end up keeping me (hopefully at normal pay) around. Is anyone hearing me… come on, take up the challenge!"

    These were the responses I got:

    "I hear you. We just need to retrain and get our skills up to speed is all. That is what I am doing. I have not given up. Just got to stay on top of the game. Experience is on our side; if we have the credentials and we are reasonable about our salaries, this should not be an issue."

    Already on it: I am going back to school and have three certifications (CompTIA A+, Security+, and Network+). I am now studying for my Cisco CCNA certification. As to my salary, I am willing to work at a very reasonable rate.

    "You need to re-brand yourself like a product, market and sell yourself. You need to smarten up, look and feel a million dollars, re-energize yourself, regain your confidence. Either start your own business, or re-write your CV so it stands out from the rest; get the template off the internet. Contact every recruitment agent in your town, state, country and overseas, and on the web. Apply to every job you think you could do; you may not get it, but you will make a contact for your network, which may lead to a job at the end of the tunnel. Get in touch with everyone you know from past jobs. Do charity work. I maintain the IT network, stage electrical and the telecom equipment in my church."

    Again, already on it. I have emailed the world, it seems, with my resume and cover letters. So far, I have rewritten my resume and cover letters (or had them rewritten) over seven times. Re-energize? I never lost my energy level or my self-confidence in my work (now if I could get some HR personnel to see the same). I also volunteer at my church; I created and maintain the church website.

    "I share your frustration. Sucks being over 50 and looking for work. Please don't sell yourself short at min wage, because the employer will think that's your worth. Keep trying!!"

    I never stop trying, and min wage is only for 90 days, if someone takes up the challenge. Some posts asked if I am keeping up with technology.

    "Do you keep up with the latest technology and can you speak the language fluidly?"

    Yep to that, and as to speaking it, also a yep! I am a geek, you know. I heard from others over the 50-year mark, and younger too.

    "I'm with you! I keep getting told that I don't have enough experience because I just recently completed a Masters-level course in Microsoft SQL Server, which gave me a project-intensive equivalent of between 2 and 3 years of experience. On top of that training, I have 19 years as an applications programmer and database administrator. I can normalize rings around experienced DBAs and churn out effective code with the best of them. But my 19 years is worthless as far as most recruiters and HR people are concerned, because it is not the specific experience for which they're looking. HR AND RECRUITERS TAKE NOTE: Experience, whatever the language, translates across platforms and technology! By the way, I'm also over 55 and still have 'got it'!"

    I never lost it, and I also can work rings around younger techs.

    "I'm 52 and female and seem to be having the same issues. I have over 10 years' experience in tech support (with a BS in CIS) and can't get hired either."

    Ow, I only have an AS in computer science, along with my certifications.

    "Keep the faith. I have been unemployed since August of 2008. I agree with you... I am willing to return to the beginning of my retail career and work myself back through the ranks, if someone will look past the grey and realize the knowledge I would bring to the table."

    I also would like someone to look past the gray.

    "Interesting approach, volunteering to work for minimum wage for 90 days. I'm in the same situation as you, being 55 & balding w/white hair, so I know where you're coming from. I've been out of work now for a year. I'm in Michigan, where the unemployment rate is estimated to be 15% (the worst in the nation) & even though I've got 30+ years of IT experience ranging from mainframe to PC desktop support, it's difficult to even get a face-to-face interview. I had one prospective employer tell me flat out that I 'didn't have the energy required for this position'. Mostly I never get any feedback. All I can say is good luck & try to remain optimistic."

    He said WHAT! Yes, remaining optimistic is key, along with faith in God. Then there was this (for lack of a better word) jerk:

    "Give it up already. You were too old to work in high tech 10 years ago. Scratch that, 20 years ago! Try selling hot dogs in front of Fry's Electronics. At least you would get a chance to eat lunch with your previous colleagues...."

    You know, the funny thing about this person is that I checked out his profile. He is older than I am.

    Read the article

  • Parallelism in .NET – Part 13, Introducing the Task class

    - by Reed
    Once we’ve used a task-based decomposition to decompose a problem, we need a clean abstraction usable to implement the resulting decomposition. Given that task decomposition is founded upon defining discrete tasks, .NET 4 has introduced a new API for dealing with task-related issues, the aptly named Task class.

    The Task class is a wrapper for a delegate representing a single, discrete task within your decomposition. We will go into various methods of construction for tasks later, but, when reduced to its fundamentals, an instance of a Task is nothing more than a wrapper around a delegate with some utility functionality added. In order to fully understand the Task class within the new Task Parallel Library, it is important to realize that a task really is just a delegate – nothing more. In particular, note that I never mentioned threading or parallelism in my description of a Task. Although the Task class exists in the new System.Threading.Tasks namespace: Tasks are not directly related to threads or multithreading. Of course, Task instances will typically be used in our implementation of concurrency within an application, but the Task class itself does not provide the concurrency used. The Task API supports using Tasks in an entirely single-threaded, synchronous manner.

    Tasks are very much like standard delegates. You can execute a task synchronously via Task.RunSynchronously(), or you can use Task.Start() to schedule a task to run, typically asynchronously. This is very similar to using delegate.Invoke to execute a delegate synchronously, or using delegate.BeginInvoke to execute it asynchronously. The Task class adds some nice functionality on top of a standard delegate which improves usability in both synchronous and multithreaded environments.

    The first addition provided by Task is a means of handling cancellation via the new unified cancellation mechanism of .NET 4. If the wrapped delegate within a Task raises an OperationCanceledException during its operation, which is typically generated via calling ThrowIfCancellationRequested on a CancellationToken, or if the CancellationToken used to construct a Task instance is flagged as canceled, the Task’s IsCanceled property will be set to true automatically. This provides a clean way to determine whether a Task has been canceled, often without requiring specific exception handling.

    Tasks also provide a clean API which can be used for waiting on a task. Although the Task class explicitly implements IAsyncResult, Tasks provide a nicer usage model than the traditional .NET Asynchronous Programming Model. Instead of needing to track an IAsyncResult handle, you can just directly call Task.Wait() to block until a Task has completed. Overloads exist for providing a timeout, a CancellationToken, or both to prevent waiting indefinitely. In addition, the Task class provides static methods for waiting on multiple tasks – Task.WaitAll and Task.WaitAny, again with overloads providing timeout options. This provides a very simple, clean API for waiting on single or multiple tasks.

    Finally, Tasks provide a much nicer model for exception handling. If the delegate wrapped within a Task raises an exception, the exception will automatically get wrapped into an AggregateException and exposed via the Task.Exception property. This exception is stored with the Task directly, and does not tear down the application.
    Later, when Task.Wait() (or Task.WaitAll or Task.WaitAny) is called on this task, an AggregateException will be raised at that point if any of the tasks raised an exception. For example, suppose we have the following code:

        Task taskOne = new Task(() =>
        {
            throw new ApplicationException("Random Exception!");
        });
        Task taskTwo = new Task(() =>
        {
            throw new ArgumentException("Different exception here");
        });

        // Start the tasks
        taskOne.Start();
        taskTwo.Start();

        try
        {
            Task.WaitAll(new[] { taskOne, taskTwo });
        }
        catch (AggregateException e)
        {
            Console.WriteLine(e.InnerExceptions.Count);
            foreach (var inner in e.InnerExceptions)
                Console.WriteLine(inner.Message);
        }

    Here, our routine will print:

        2
        Different exception here
        Random Exception!

    Note that we had two separate tasks, each of which raised a distinctly different type of exception. We can handle this cleanly, with very little code, in a much nicer manner than with the Asynchronous Programming API. We no longer need to handle TargetInvocationException or worry about implementing the Event-based Asynchronous Pattern properly by setting the AsyncCompletedEventArgs.Error property. Instead, we just raise our exception as normal, and handle AggregateException in a single location in our calling code.
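
    The cancellation behavior described earlier can be sketched in the same style; a minimal example (mine, not from the article; the busy-loop body is purely illustrative):

        using System;
        using System.Threading;
        using System.Threading.Tasks;

        class CancellationSketch
        {
            static void Main()
            {
                var cts = new CancellationTokenSource();
                Task task = new Task(() =>
                {
                    while (true)
                    {
                        // Throws OperationCanceledException once Cancel() is
                        // called; because the exception carries the same token
                        // the Task was constructed with, the Task transitions
                        // to the Canceled state rather than Faulted.
                        cts.Token.ThrowIfCancellationRequested();
                        Thread.Sleep(10); // simulated work
                    }
                }, cts.Token);

                task.Start();
                cts.Cancel();

                try { task.Wait(); }
                catch (AggregateException) { /* contains a TaskCanceledException */ }

                Console.WriteLine(task.IsCanceled); // True
            }
        }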

    Read the article

  • New on the Java Channel: Low-Latency Applications, JavaFX on Raspberry PI, and more

    - by terrencebarr
    If you haven’t checked out the Java YouTube channel lately… here is some of the stuff you’re missing:
    - Understanding the JVM and Low Latency Applications
    - JavaFX on the Raspberry Pi
    - 55 New Java 7 Features: Part 3 – Concurrency
    - Properties and Binding with JavaFX 2 Intro
    And something fun & cool: Java @ Maker Faire 2012. Much more on the Java Channel. Enjoy! Cheers, – Terrence
    Filed under: Mobile & Embedded. Tagged: Embedded Java, Java 7, Java Channel, Java Embedded, JavaFX, Maker Faire, Raspberry Pi, video, webcast, YouTube

    Read the article

  • Google I/O 2010 - Go Programming

    Google I/O 2010 - Go Programming (Tech Talk) - Rob Pike, Russ Cox. The Go Programming Language was released as an open source project in late 2009. This session will illustrate how programming in Go differs from other languages through a set of examples demonstrating features particular to Go. These include concurrency, embedded types, methods on any type, and program construction using interfaces. Very little time will be spent waiting for compilation. For all I/O 2010 sessions, please go to code.google.com. From: GoogleDevelopers | Views: 12 | 0 ratings | Time: 56:11 | More in Science & Technology

    Read the article

  • Windows Azure Use Case: Hybrid Applications

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

    Description: Organizations see the need for computing infrastructures that they can “rent” or pay for only when they need them. They also understand the benefits of distributed computing, but do not want to create this infrastructure themselves. However, they may have considerations that prevent them from moving all of their current IT investment to a distributed environment: private data (they do not want to send or store sensitive data off-site); a high-dollar investment in current infrastructure; applications currently running well, but which may need additional periodic capacity; and current applications not designed in a stateless fashion. In these situations, a “hybrid” approach works best. In fact, with Windows Azure, a hybrid approach is an optimal way to implement distributed computing even when the stipulations above do not apply. Keeping a majority of the computing function in an organization local while exploring and expanding that footprint into Windows and SQL Azure is a good migration or expansion strategy. A “hybrid” architecture merely means that part of a computing cycle is shared between two architectures. For instance, some level of computing might be done in a Windows Azure web-based application, while the data is stored locally at the organization.

    Implementation: There are multiple methods for implementing a hybrid architecture, in a spectrum from very little interaction from the local infrastructure to Windows or SQL Azure. The patterns fall into two broad schemas, and even these can be mixed.

    1. Client-Centric Hybrid Patterns. In this pattern, programs are coded such that the client system sends queries or compute requests to multiple systems. The “client” in this case might be a web-based codeset actually stored on another system (which acts as a client, the user’s device serving as the presentation layer) or a compiled program. In either case, the code on the client requestor carries the burden of defining the layout of the requests. While this pattern is often the easiest to code, it’s the most brittle. Any change in the architecture must be reflected on each client, but this can be mitigated by using a centralized system as the client, such as in the web scenario.

    2. System-Centric Hybrid Patterns. Another approach is to create a distributed architecture by turning on-site systems into “services” that can be called from Windows Azure using the Service Bus or the Access Control Services (ACS) capabilities; code is then called from a series of in-process client applications. In this pattern you move the “client” interface into the server application logic. If you do not wish to change the application itself, you can “layer” the results of the code return using a product (such as Microsoft BizTalk) that exposes a Web Services Definition Language (WSDL) endpoint to Windows Azure using the Application Fabric. In effect, this is similar to creating a Service Oriented Architecture (SOA) environment, and it has the advantage of de-coupling your computing architecture. If each system offers a “service” of the results of some software processing, the operating system or platform becomes immaterial, assuming it adheres to a service contract.
    There are important considerations when you federate a system, whether to Windows or SQL Azure or any other distributed architecture. While these considerations are consistent with coding any application for distributed computing, they are especially important for a hybrid application.

    Connection resiliency - Applications on-premise normally have low latency and good connection properties, something you're not always guaranteed in a distributed and hybrid application. Whether the client is centralized or distributed, the code should be able to handle extended retry logic (a minimal sketch follows at the end of this excerpt).

    Authorization and access - In a single authorization environment like an Active Directory domain, security is handled at a user-password level. In a distributed computing environment, you have more options. You can mitigate this by using the Windows Azure Application Fabric feature of ACS to make the Azure application aware of the App Fabric as an ADFS provider. However, a claims-based authentication structure is often a superior choice.

    Consistency and concurrency - When you have a Relational Database Management System (RDBMS), consistency and concurrency are part of the design. In a service architecture, you need to plan for sequential message handling and lifecycle.

    Resources: How to Build a Hybrid On-Premise/In Cloud Application: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/11/09/how-to-build-a-hybrid-on-premise-in-cloud-application.aspx | General architecture guidance: http://blogs.msdn.com/b/buckwoody/archive/2010/12/21/windows-azure-learning-plan-architecture.aspx
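
    The retry sketch mentioned under "Connection resiliency", as a minimal hand-rolled C# helper; the attempt count and backoff values are placeholders, not recommendations:

        using System;
        using System.Threading;

        static class ResilientCall
        {
            // Retry a remote call a few times with exponential backoff before
            // giving up, as hybrid calls crossing the on-premise/cloud boundary
            // often need.
            public static T WithRetry<T>(Func<T> call, int maxAttempts = 5)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return call();
                    }
                    catch (Exception)
                    {
                        if (attempt >= maxAttempts)
                            throw; // exhausted: let the caller handle it

                        // Back off longer on each failed attempt.
                        Thread.Sleep(TimeSpan.FromMilliseconds(200 * Math.Pow(2, attempt)));
                    }
                }
            }
        }

    A call across the hybrid boundary then becomes ResilientCall.WithRetry(() => proxy.GetCustomer(id)), where proxy and GetCustomer stand in for whatever remote call you wrap.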

    Read the article

  • The way I think about Diagnostic tools

    - by Daniel Moth
    Every piece of software has issues, or as we like to call them, "bugs". That is not a discussion point, just a mere fact. It follows that an important skill for developers is to be able to diagnose issues in their code. Of course we need to advance our tools and techniques so we can prevent bugs getting into the code (e.g. unit testing), but beyond designing great software, diagnosing bugs is an equally important skill. To diagnose issues, the most important assets are good techniques, skill, experience, and maybe talent. What also helps is having good diagnostic tools, and what helps further is knowing all the features that they offer and how to use them.

    The following classification is how I like to think of diagnostics. Note that, as with any attempt to bucketize anything, you run into overlapping areas and blurry lines. Nevertheless, I will continue sharing my generalizations ;-)

    It is important to identify at the outset if you are dealing with a performance or a correctness issue. If you have a performance issue, use a profiler. I hear people saying "I am using the debugger to debug a performance issue", and that is fine, but do know that a dedicated profiler is the tool for that job. Just because you don't need them all the time, and typically they cost more, and you are not as familiar with them as you are with the debugger, doesn't mean you shouldn't invest in one and instead try to exclusively use the wrong tool for the job. Visual Studio has a profiler and a concurrency visualizer (for profiling multi-threaded apps). If you have a correctness issue, then you have several options - that's next :-)

    This is how I think of identifying a correctness issue:

    Do you want a tool to find the issue for you at design time? The compiler is such a tool - it gives you an exact list of errors. Compilers now also offer warnings, which is their way of saying "this may be an error, but I am not smart enough to know for sure". There are also static analysis tools, which go a step further than the compiler in identifying issues in your code, sometimes with the aid of code annotations and other times just by pointing them at your raw source. An example is FxCop, and much more in Visual Studio 11 Code Analysis.

    Do you want a tool to find the issue for you with code execution? Just like static tools, there are also dynamic analysis tools that, instead of statically analyzing your code, analyze what your code does dynamically at runtime. Whether you have to set up some unit tests to invoke your code at runtime, or have to manually run your app (and interact with it) under the tool, or have to use a script to execute your binary under the tool… that varies. The result is still a list of issues for you to address after the analysis is complete, or a pause of the execution when the first issue is encountered. If a code path was not taken, no analysis for it will exist, obviously. An example is the GPU race detection tool that I'll be talking about on the C++ AMP team blog. Another example is the MSR concurrency CHESS tool.

    Do you want to find the issue yourself at design time, using a tool? Perform a code walkthrough on your own or with colleagues. There are code review tools that go beyond just diffing sources, and they help you with that aspect too. For example, there is a new one in Visual Studio 11, and searching with my favorite search engine yielded this article based on the Developer Preview.

    Do you want to find the issue yourself with code execution? Use a debugger - let's break this down further next.
    This is how I think of debugging:

    There is post-mortem debugging. That means your code has executed and you did something in order to examine what happened during its execution. This can vary from manual printf and other tracing statements, to trace events (e.g. ETW), to taking dumps. In all cases, you are left with some artifact that you examine after the fact (after code execution) to discern what took place, hoping it will help you find the bug. Learn how to debug dump files in Visual Studio.

    There is live debugging. I will elaborate on this in a separate post, but this is where you inspect the state of your program during its execution and try to find what the problem is. More from me in a separate post on live debugging.

    There is a hybrid of live plus post-mortem debugging. This is, for example, what tools like IntelliTrace offer.

    If you are a tools vendor interested in the diagnostics space, it helps to understand where in the above classification your tool excels, where its primary strength is, so you can market it as such. Then it helps to see which of the other areas above your tool touches on, and how you can make it even better there. Finally, see what areas your tool doesn't help at all with, and evaluate whether it should or whether it should continue to stay clear. Even though the classification helps us think about this space, the reality is that the best tools are either extremely excellent in only one of these areas, or, more often, very good across a number of them. Another approach is to offer a toolset covering all areas, with appropriate integration and hand-off points from one to the other.

    Anyway, with that brain dump out of the way, in follow-up posts I will dive into live debugging, and specifically live debugging in Visual Studio - stay tuned if that interests you. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Google I/O 2011: Large-scale Data Analysis Using the App Engine Pipeline API

    Google I/O 2011: Large-scale Data Analysis Using the App Engine Pipeline API - Brett Slatkin. The Pipeline API makes it easy to analyze complex data using App Engine. This talk will cover how to build multi-phase MapReduce workflows; how to merge multiple large data sources with "join" operations; and how to build reusable analysis components. It will also cover the API's concurrency model, how to debug in production, and built-in testing facilities. From: GoogleDevelopers | Views: 3320 | 17 ratings | Time: 51:39 | More in Science & Technology

    Read the article

  • What is the meaning of the sentence "we wanted it to be compiled so it’s not burning CPU doing the wrong stuff."

    - by user2434
    I was reading this article. It has the following paragraph:

    "And did Scala turn out to be fast? Well, what's your definition of fast? About as fast as Java. It doesn't have to be as fast as C or Assembly. Python is not significantly faster than Ruby. We wanted to do more with fewer machines, taking better advantage of concurrency; we wanted it to be compiled so it's not burning CPU doing the wrong stuff."

    I am looking for the meaning of the last sentence. How would an interpreted language make the CPU do the "wrong" stuff?

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you’ll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder.

    Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB - Calvin Sun, Oracle. InnoDB is the default storage engine for Oracle’s MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability.

    Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning - Dimitri Kravtchuk, Oracle. This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no “silver bullet.” But understanding the configuration settings' impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others - the discussion is open, just moderated by a speaker.

    Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations - Calvin Sun, Oracle. Many top Web properties rely on Oracle's MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database - that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column.

    Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster - Andrew Morgan and John Duncan, Oracle. Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js.

    Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning - Inaam Rana, Oracle. The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance.

    Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP - Nizameddin Ordulu, Facebook, and Inaam Rana, Oracle. Data compression is an important capability of the InnoDB storage engine for Oracle's MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level.

    Not registered yet? You can still save US$300 over the on-site fee - Register Now!

    Read the article

  • PPL and TPL sessions on channel9

    - by Daniel Moth
    Back in June there was an internal conference in Redmond ("Engineering Forum") aimed at Microsoft engineers, and delivered by Microsoft engineers. I was asked to put together a track on Multi-Core development, so I picked 6 parallelism experts and we created 6 awesome sessions (we won the top spot in the Top 10 :-)). Two of the speakers kept the content fairly external-friendly, so we received permission to publish their recordings publicly. Enjoy (best to download the High Quality WMV):
    - Don McCrady - Parallelism in C++ Using the Concurrency Runtime
    - Stephen Toub - Implementing Parallel Patterns using .NET 4
    To get notified on future videos on parallelism (or to browse the archive) stay tuned on this channel9 parallel computing feed. Comments about this post welcome at the original blog.

    Read the article

  • Links from UK TechDays 2010 sessions on Entity Framework, Parallel Programming and Azure

    - by Eric Nelson
    [I will do some longer posts around my sessions when I get back from holiday next week]
    Big thanks to all those who attended my 3 sessions at TechDays this week (April 13th and 14th, 2010). I really enjoyed both days and watched some great sessions - my personal fave being the Silverlight/Expression session by my friend and colleague Mike Taulty. The following links should help get you up and running on each of the technologies.
    Entity Framework 4:
    - Entity Framework 4 Resources http://bit.ly/ef4resources
    - Entity Framework Team Blog http://blogs.msdn.com/adonet
    - Entity Framework Design Blog http://blogs.msdn.com/efdesign/
    Parallel Programming:
    - Parallel Computing Developer Center http://msdn.com/concurrency
    - Code samples http://code.msdn.microsoft.com/ParExtSamples
    - Managed Team Blog http://blogs.msdn.com/pfxteam
    - Tools Team Blog http://blogs.msdn.com/visualizeparallel
    - My code samples http://gist.github.com/364522
    - And PDC 2009 session recordings to watch
    Windows Azure Platform:
    - UK Site http://bit.ly/landazure
    - UK Community http://bit.ly/ukazure (http://ukazure.ning.com)
    - Feedback www.mygreatwindowsazureidea.com
    - Azure Diagnostics Manager - a client for Windows Azure Diagnostics
    - Cloud Storage Studio - a client for Windows Azure Storage
    - SQL Azure Migration Wizard http://sqlazuremw.codeplex.com

    Read the article

  • Links and code from session on Entity Framework 4, Parallel and C# 4.0 new features

    - by Eric Nelson
    Last week (12th May 2010) I did a session in the city on a lot of the new .NET 4.0 stuff. My demo code and links below.
    Code:
    - Parallel demos http://gist.github.com/364522
    - C# 4.0 new features http://gist.github.com/403826
    EF4 links:
    - Entity Framework 4 Resources http://bit.ly/ef4resources
    - Entity Framework Team Blog http://blogs.msdn.com/adonet
    - Entity Framework Design Blog http://blogs.msdn.com/efdesign/
    Parallel links:
    - Parallel Computing Dev Center http://msdn.com/concurrency
    - Code samples http://code.msdn.microsoft.com/ParExtSamples
    - Managed blog http://blogs.msdn.com/pfxteam
    - Tools blog http://blogs.msdn.com/visualizeparallel
    C# 4.0:
    - New features http://bit.ly/baq3aU
    - New in .NET 4.0 Coevolution http://bit.ly/axglst
    - New in C# 4.0 http://bit.ly/bG1U2Y

    Read the article

  • Web services, J2EE, Spring, DB integration project ideas - maybe data mining related?

    - by saral jain
    I am a graduate Computer Science student (Data Mining and Machine Learning) and have good exposure to core Java (3 years). I have read up on a bunch of stuff on the following topics: design patterns; J2EE; Web services (SOAP and REST); Spring and Hibernate; Java Concurrency - advanced features like Task and Executors. I would now like to do a project combining this stuff - over my free time, of course - to get a better understanding of these things and to kind of make an end-to-end piece of software (to learn the best design principles etc., plus SVN and Maven). Any good project ideas would be really appreciated. I just want to build this stuff to learn, so I don't really mind re-inventing the wheel. Also, anything related to data mining would be an added bonus, as it fits with my research, but it is absolutely not necessary, since this project is more about learning to do large-scale software development.

    Read the article

  • Host And Expose Application to local small network

    - by tartak
    I developed a little application (a web application) using Java EE + MySQL. I use it to keep some data and, from time to time, to get some reports on that data. My problem is that I have to access this application from 4-5 computers in the office. They are connected through a switch. It's a typical small office network, nothing fancy. I need some advice on how to do this. I mean, for a small application with no external communication, is it mandatory to use an Apache machine? I'd use a simple Tomcat container on the "server machine" (which is my computer, a Windows machine) and, basically, I would like to permit access to my colleagues as well. I don't have any knowledge about concurrency (I know MySQL permits concurrent access), so I would like some configuration tips as well.

    Read the article

  • Five new junior developers and lots of complex tasks. What now?

    - by mxe
    Our company has hired five new junior developers to help me develop our product. Unfortunately, the new features and incoming bug fixes usually require deeper knowledge than a recently graduated developer has (threading/concurrency, debugging performance bottlenecks in a complex system, etc.). Delegating (and planning) tasks which they can (probably) solve, answering their questions, mentoring/managing them, and reviewing their code use up all of my time, and I often feel that I could solve the issues in less time than the whole delegating process takes (counting only my time). In addition, I don't have time to solve the tasks which require deeper system knowledge/more advanced skills, and it does not seem that this will change in the near future. So, what now? What should I do to use their time and mine effectively?

    Read the article

  • Cooperator Framework

    - by csharp-source.net
    Cooperator Framework is a base class library for high-performance Object Relational Mapping (ORM), and a code generation tool that aids agile application development for Microsoft .NET Framework 2.0/3.0. The main features are:
    * Uses business entities.
    * Fully typed model (data layer and entities).
    * Maintains persistence across the layers by passing specific types (.NET 2.0/3.0 generics).
    * Business objects can bind to controls in Windows Forms and Web Forms, taking advantage of Visual Studio 2005 data binding.
    * Supports any primary key defined on tables, with no need to modify it or to create a unique field.
    * Uses stored procedures for data access.
    * Supports concurrency.
    * Generates code both for stored procedures and projects, in C# or Visual Basic.
    * Maintains the model in a repository, which can be modified at any stage of the development cycle, regenerating the model on demand.

    Read the article

  • .NET 4: “Slim”-style performance boost!

    - by Vitus
    RTM versions of .NET 4 and Visual Studio 2010 are available, and now we can do some tests with them. Parallel Extensions is one of the most valuable parts of .NET 4.0. It’s a set of good tools for easily consuming multicore hardware power. And it also contains some “upgraded” sync primitives - Slim versions. For example, it includes an updated variant of the widely known ManualResetEvent. For people who don’t know about it: you can synchronize concurrent execution of some pieces of code with this sync primitive. An instance of ManualResetEvent can be in 2 states: signaled and non-signaled. Transitions between them are possible via Set() and Reset() calls. A short explanation:

        Thread 1                     | Thread 2          | Time
        mre.Reset(); mre.WaitOne();  | //code execution  | 0
        //waiting                    | //code execution  | 1
        //waiting                    | //code execution  | 2
        //waiting                    | //code execution  | 3
        //waiting                    | mre.Set();        | 4
        //code execution             | //…               | 5

    The upgraded version of this primitive is ManualResetEventSlim. The idea is to decrease the performance cost in the case when only one thread uses it. The main concept is a “hybrid sync schema”, which can be implemented as follows:

        internal sealed class SimpleHybridLock : IDisposable
        {
            private Int32 m_waiters = 0;
            private AutoResetEvent m_waiterLock = new AutoResetEvent(false);

            public void Enter()
            {
                if (Interlocked.Increment(ref m_waiters) == 1) return;
                m_waiterLock.WaitOne();
            }

            public void Leave()
            {
                if (Interlocked.Decrement(ref m_waiters) == 0) return;
                m_waiterLock.Set();
            }

            public void Dispose()
            {
                m_waiterLock.Dispose();
            }
        }

    It’s a sample from Jeffrey Richter’s book “CLR via C#”, 3rd edition. The SimpleHybridLock primitive has two public methods: Enter() and Leave(). You can put your concurrency-critical code between calls to these methods, and it will be executed by only one thread at a time. The code is really simple: the first thread that calls Enter() increments the counter and returns immediately. A second thread also increments the counter, and then suspends until m_waiterLock becomes signaled. So, if there is no concurrent access to our lock, the “heavy” methods WaitOne() and Set() are never called. That can give some performance bonus. ManualResetEventSlim uses a similar idea. Of course, it has more “smart” techniques inside, like checking for recursive calls, and so on. I wanted to know the real difference between the classic ManualResetEvent implementation and the new Slim one.
    I wrote a simple “benchmark”:

        class Program
        {
            static void Main(string[] args)
            {
                ManualResetEventSlim mres = new ManualResetEventSlim(false);
                ManualResetEventSlim mres2 = new ManualResetEventSlim(false);
                ManualResetEvent mre = new ManualResetEvent(false);

                long total = 0;
                int COUNT = 50;

                for (int i = 0; i < COUNT; i++)
                {
                    mres2.Reset();
                    Stopwatch sw = Stopwatch.StartNew();

                    ThreadPool.QueueUserWorkItem((obj) =>
                    {
                        //Method(mres, true);
                        Method2(mre, true);
                        mres2.Set();
                    });
                    //Method(mres, false);
                    Method2(mre, false);

                    mres2.Wait();
                    sw.Stop();

                    Console.WriteLine("Pass {0}: {1} ms", i, sw.ElapsedMilliseconds);
                    total += sw.ElapsedMilliseconds;
                }

                Console.WriteLine();
                Console.WriteLine("===============================");
                Console.WriteLine("Done in average=" + total / (double)COUNT);
                Console.ReadLine();
            }

            private static void Method(ManualResetEventSlim mre, bool value)
            {
                for (int i = 0; i < 9000000; i++)
                {
                    if (value) { mre.Set(); } else { mre.Reset(); }
                }
            }

            private static void Method2(ManualResetEvent mre, bool value)
            {
                for (int i = 0; i < 9000000; i++)
                {
                    if (value) { mre.Set(); } else { mre.Reset(); }
                }
            }
        }

    I use two concurrent threads (the main thread and one from the thread pool) for setting and resetting the events, run the test COUNT times, and calculate the average execution time. Here are the results, obtained on my dual-core notebook with a T7250 CPU and Windows 7 x64: [timing screenshots for ManualResetEvent and ManualResetEventSlim] The difference is obvious and serious - about 10 times! So, I think the preferable way is to use ManualResetEventSlim, because calling Set() and Reset() will not always invoke the “heavy” methods that work with Windows kernel-mode objects. It’s a small and nice improvement! ;)

    Read the article

  • Learning PostgreSql: old versions of rows are stored right in the table

    - by Alexander Kuznetsov
    PostgreSQL features multi-version concurrency control, aka MVCC. To implement MVCC, old versions of rows are stored right in the same table. This is very different from what SQL Server does, and it leads to some very interesting consequences. Let us play with this thing a little bit, but first we need to set up some test data. Setting up. First of all, let us create a numbers table. Any production database must have one anyway: CREATE TABLE Numbers ( i INTEGER ); INSERT INTO Numbers ( i ) VALUES...(read more)

    Read the article

  • Recent JSR Updates-JSR 356, 357, 355, 349, 236

    - by heathervc
    JSR 357, Social Media API, was not approved by the SE/EE EC to continue development in the JCP program.
    JSR 356, Java API for WebSocket, was approved by the SE/EE EC to continue development in the JCP program.
    JSR 355, JCP Executive Committee Merge, published an Early Draft Review; this review closes 27 April. You can read more about JSR 355 here.
    JSR 349, Bean Validation 1.1, published an Early Draft Review; this review closes 27 April.
    JSR 236, Concurrency Utilities for Java EE, has updated the JSR page and moved to JCP version 2.8.

    Read the article

  • Web services, Java EE, Spring, DB integration project ideas - maybe data mining related?

    - by saral jain
    I am a graduate Computer Science student (Data Mining and Machine Learning) and have good exposure to core Java (3 years). I have read up on a bunch of stuff on the following topics: design patterns; Java EE; Web services (SOAP and REST); Spring and Hibernate; Java Concurrency - advanced features like Task and Executors. I would now like to do a project combining this stuff - over my free time, of course - to get a better understanding of these things and to kind of make an end-to-end piece of software (to learn the best design principles etc., plus SVN and Maven). Any good project ideas would be really appreciated. I just want to build this stuff to learn, so I don't really mind re-inventing the wheel. Also, anything related to data mining would be an added bonus, as it fits with my research, but it is absolutely not necessary, since this project is more about learning to do large-scale software development.

    Read the article

< Previous Page | 18 19 20 21 22 23 24 25 26 27 28 29  | Next Page >