Search Results

Search found 11197 results on 448 pages for 'related'.


  • CQRS - Benefits

    - by Dylan Smith
    Thanks to all the comments and feedback from the last post I think I have a better understanding now of the benefits of CQRS (separate from the benefits of Event Sourcing). I’m going to try and sum it up here, and point out some areas where I could still use some advice.
    CQRS Benefits
    It sounds like the primary benefit of CQRS as an architecture is that it allows you to create a simpler domain model by sucking out everything related to queries. I can definitely see the benefit of this: in general the domain logic related to commands is the high-value behavior in the software, but the logic required to service the queries would add a lot of low-value “noise” to the domain model that would dilute the high-value (command) behavior – sorting, paging, filtering, pre-fetch paths, etc. Also, the most appropriate domain structure for implementing commands might not be the most optimal for implementing queries. To paraphrase Greg, this usually results in a domain model that is mediocre at both, piss-poor at one, or more likely piss-poor at both commands and queries.
    Not only will you be able to simplify your domain model by pulling out all the query logic, but at least a handful of commands in most systems will probably be “pass-through” type commands with little to no logic that just generate events. If these can be implemented directly in the command handler and never touch the domain model, this allows you to slim down the domain model even more. Also, if you were to do event sourcing without CQRS, you would no longer have a database containing the current state (only the domain model would), which makes it difficult (or impossible) to support the ad-hoc querying and/or reporting that is common in most business software.
    Of course CQRS provides some great scalability benefits, and not only scalability: I have to assume that it provides extremely low latency for most operations, especially if you have an asynchronous event bus. I know Greg says that you get a 3x scaling (Commands, Queries, Client) of your ability to perform parallel development, but IMHO it seems like it only provides 1.5x scaling, since even without CQRS you’re going to have your client loosely coupled to your domain – which is still a great benefit to be able to realize.
    Questions / Concerns
    If all the queries against an aggregate get pulled out to the Query layer, what if the only commands for that aggregate can be handled in a “pass-through” manner, with the command handler directly generating events? Is it possible to have an aggregate that isn’t modeled in the domain model? Are there any issues or downsides to this?
    I know in the feedback from my previous posts it was suggested that having one domain model handling both commands and queries requires implementing a lot of traversals between objects that wouldn’t be necessary if it was only servicing commands. My question is, do you include traversals in your domain model based on the needs of the code, or based on the conceptual domain model? If none of my Commands require a Customer.Orders traversal, but the conceptual domain includes the concept of a set of orders belonging to a customer – should I model that in my domain model or not?
    I like the idea of using the Query side of the architecture as a place to put junior devs where the risk of them screwing something up has minimal impact. But I’m not sold on the idea that you can actually outsource it. Like I said in one of my comments on my previous post, the code to handle a query and generate DTOs is going to be dead simple, but the code to process events and apply them to the tables on the query side is going to require a significant amount of domain knowledge to know which events to listen for to update each of the de-normalized tables (and what changes need to be made when each event is processed). I don’t know about everybody else, but having Indian/Russian/whatever outsourced developers do anything that requires significant domain knowledge has never been successful in my experience. And if you need to spec out for each new query which events to listen to and what to do with each one, well, that’s probably going to be just as much work to document as it would be to just implement it.
    Greg made the point in a comment that doing an aggregate query like “Total Sales By Customer” is going to be inefficient if you use event sourcing but not CQRS. I don’t understand why that would be the case. I imagine in that case you’d simply have a method/property on the Customer object that calculated total sales for that customer by enumerating over the Orders collection. Then the application services layer would generate DTOs off of the Customers collection that included, say, the CustomerID, CustomerName, TotalSales, or whatever the case may be. As long as you use a snapshotting implementation, I don’t see why that would be any more inefficient in a DDD + Event Sourcing implementation than in a typical DDD implementation.
    Like I mentioned in my last post I still have some questions about query logic that haven’t been answered yet, but before I start asking those I want to make sure I have a strong grasp on what benefits CQRS provides. My main concern with the query logic was that I know I could just toss it all into the query side, but I was concerned that I would be losing the benefits of using CQRS in the first place if I did that. I want to elaborate more on this with some example situations in an upcoming post.
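    To make the command/query split described above concrete, here is a minimal, hypothetical C# sketch (the type and handler names are illustrative and not from the original post): the write side loads an aggregate and publishes events, while the read side never touches the domain model and serves DTOs from a denormalized store where all the sorting, paging and filtering lives.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // --- Write side: the only code that touches the domain model ---
        public class ShipOrderCommand { public Guid OrderId { get; set; } }

        public class OrderShipped { public Guid OrderId { get; set; } }   // a domain event

        public class Order
        {
            public Guid Id { get; set; }
            public IEnumerable<object> Ship() { yield return new OrderShipped { OrderId = Id }; }
        }

        public interface IOrderRepository { Order Get(Guid id); }
        public interface IEventBus { void Publish(IEnumerable<object> events); }

        public class ShipOrderHandler
        {
            private readonly IOrderRepository _orders;
            private readonly IEventBus _bus;

            public ShipOrderHandler(IOrderRepository orders, IEventBus bus) { _orders = orders; _bus = bus; }

            public void Handle(ShipOrderCommand cmd)
            {
                var order = _orders.Get(cmd.OrderId);
                _bus.Publish(order.Ship());   // events feed the read side, which updates its own tables
            }
        }

        // --- Read side: no domain model, just DTOs from a denormalized store ---
        public class OrderSummaryDto { public Guid OrderId { get; set; } public string Status { get; set; } }

        public class OrderSummaryQueries
        {
            private readonly IList<OrderSummaryDto> _summaries;   // stands in for a denormalized table

            public OrderSummaryQueries(IList<OrderSummaryDto> summaries) { _summaries = summaries; }

            // sorting, paging and filtering live here instead of polluting the domain model
            public IEnumerable<OrderSummaryDto> GetOpenOrders(int page, int pageSize)
            {
                return _summaries.Where(s => s.Status == "Open")
                                 .OrderBy(s => s.OrderId)
                                 .Skip(page * pageSize)
                                 .Take(pageSize);
            }
        }

    A "pass-through" command of the kind mentioned above would simply skip the Order aggregate and publish the event straight from the handler.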

    Read the article

  • Refresh bounded taskflows across regions using Contextual Events

    - by raghu.yadav
    Use cases:
    1) A data change in an inputText field in the left region is reflected in the right region using a contextual event. Example by Frank Nimphius: Value change event refresh across regions using Contextual Events.
    2) Selecting a tree node in the left region refreshes the dependent detail form in the right region using dynamic regions and Contextual Events. Example by Frank Nimphius: Example6-RangeCtx.unzip.
    More related examples:
    http://thepeninsulasedge.com/frank_nimphius/2008/02/07/adf-faces-rc-refreshing-a-table-ui-from-a-contextual-event/
    http://www.oracle.com/technology/products/jdev/tips/fnimphius/generictreeselectionlistener/index.html
    http://www.oracle.com/technology/products/jdev/tips/fnimphius/syncheditformwithtree/index.html
    http://biemond.blogspot.com/2009/01/passing-adf-events-between-task-flow.html
    http://www.oracle.com/technology/products/jdev/tips/fnimphius/opentaskflowintab/index.html
    http://lucbors.blogspot.com/2010/03/adf-11g-contextual-event-framework.html
    http://thepeninsulasedge.com/blog/?cat=2
    http://www.ora600.be/news/adf-contextual-events-11g-r1-ps1

    Read the article

  • Upgrading to latest stable Mono

    - by Oli
    Mono 2.8 was recently released, boasting a couple of large performance improvements. It's far too late for it to make it into Maverick and I'm fairly impatient. I don't use Mono for anything mission-critical (just playing music and sorting photos) and if it breaks everything related to Mono, I can probably either live with it or fix it. I'm aware of how much I stand to lose if I mess things up. So with that acknowledged, does anybody here know how to build Mono in a way where it could be dropped in to replace the current Mono (2.6.7)? By this I mean ideally mirroring the packages that Ubuntu uses, so that if the worst does happen, I can just downgrade the packages. Or is there a PPA that does all this for me?

    Read the article

  • Concurrent Business Events

    - by Manoj Madhusoodanan
    This blog describes the various business events related to concurrent requests. In the concurrent program definition screen we can see the various business events that are attached to concurrent processing, along with the actual definitions of those events and the parameters each event carries. Create subscriptions to the above business events. Before testing, set the profile option 'Concurrent: Business Intelligence Integration Enable' to Yes.
    Example: I have created a scenario in which, whenever my concurrent request completes normally, I want the output file sent to my mail as an attachment. I created the following components:
    1) A host file deployed under $XXCUST_TOP/bin to send mail. It accepts mail ids, a subject and an output file. (Code here)
    2) A concurrent program to send mail, which points to the above host file.
    3) A subscription package for oracle.apps.fnd.concurrent.request.completed. (Code here)
    Choose a concurrent program whose output file you want sent as an attachment, check the Request Completed check box and submit the program. If it completes normally, the business event subscription program will send the output file as an attachment to the specified mail id.

    Read the article

  • Local Events | Azure Bootcamp

    - by Jeff Julian
    Coming to Kansas City April 8th and 9th is the Microsoft Azure Bootcamp. This event looks very promising for those developers who are looking into Azure for themselves or their companies. It covers the wide range of topics required to understand what Azure really is and is not. Space is limited, so if you are considering Azure, register for this event today.
    Agenda:
    Module 1: Introduction to cloud computing and Azure (How it works; Key Scenarios; The development environment and SDK)
    Module 2: Using Web Roles (Basic ASP.NET; Basic configuration)
    Module 3: Blobs: File Storage in the cloud
    Module 4: Tables: Scalable hierarchical storage
    Module 5: Queues: Decoupling your systems
    Module 6: Basic Worker Roles (Executing backend processes; Consuming a queue; Leveraging local storage)
    Module 7: Advanced Worker Roles (External Endpoints; Inter-role communication)
    Module 8: Building a business with Azure (Using Azure as an ISV or a partner; Advantages to delivering value; BPOS; Pricing)
    Module 9: SQL Azure (Setting it up; SQL Azure firewall; Remote management; Migrating data)
    Module 10: AppFabric (Service Bus; Access Control System; Identity in the cloud)
    Module 11: Cloud Scenarios (App migration strategies; Disposable computing; Dynamic scale; Shunting; Prototyping; Multitenant applications)
    (This is my second attempt at this post after MacJournal decided to crash and not save my work. Authoring tools all need auto-save features by now; that is a requirement set in stone by Microsoft Word 97.) Related Tags: Azure, Microsoft, Kansas City

    Read the article

  • Default Save Directory for gnome-screenshot?

    - by trent
    Are there any configuration options for specifying the default save location for gnome-screenshot, or is this hard-coded into the source code? It used to be ~/Desktop, which seems to have changed to ~/Pictures (in 12.04). The only possible solution I've seen is one about setting the default name (since it now includes time stamp information instead of simply Screenshot#), but that solution doesn't really seem ideal to me. Also, this post suggested that the last save location is remembered the next time you take a screenshot, but in my experience this doesn't seem to be the case. And in any case, following on from that, the entry in gconf-editor doesn't even seem to accurately reflect the last location, so it is more than likely an entry related to an older version of gnome-screenshot.

    Read the article

  • What to do if you're burnt out?

    - by rsteckly
    Hi, I'm starting to get really frustrated with ASP.NET. It seems as if much of my time is spent learning abstractions over problems, then having to kick into overdrive when those abstractions (surprise!) have unexpected behavior. Because those issues are usually UI related and therefore require integration testing, I spend so much time fixing them that I spend very little time programming any kind of meaningful logic. I don't know if it is ASP.NET or if it is programming. I just feel as if I'm getting paid and doing the work, but really wasting time. On the other hand, I have fantasies about console programs and actually using algorithms. What is wrong with me? Why can't I force myself to churn through this ASP.NET stuff more?

    Read the article

  • Kinect Turns DaVinci Physics Application Super Cool

    - by Gopinath
    The guys at RazorFish, who are well known for their impressive Microsoft Surface work, have ported their DaVinci application over to the Kinect device. The end result is a super cool gesture-based application. Check out the embedded video demonstration below. If you are wondering what the DaVinci application is all about, here are a few lines from RazorFish: "DaVinci is a Microsoft Surface application that blurs the lines between the physical and virtual world by combining object recognition, real-world physics simulation and gestural interface design." Related: Kinect + Windows 7 = Control PC With Hand Gestures. This article, titled Kinect Turns DaVinci Physics Application Super Cool, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Microsoft Technical Computing

    In the past I have described the team I belong to here at Microsoft (Parallel Computing Platform) in terms of contributing to Visual Studio and related products, e.g. .NET Framework. To be more precise, our team is part of the Technical Computing group, which is still part of the Developer Division. This was officially announced externally earlier this month in an exec email (from Bob Muglia, the president of STB, to which DevDiv belongs). Here is an extract: " As we build the Technical...

    Read the article

  • Online games programming basics

    - by Renkon
    I am writing with regard to an issue I am having at the moment. I have come up with an interesting idea for an online game in C#, yet I do not have the knowledge to work with more than one player. I have already done basic games like TIC-TAC-TOE and SNAKE, and I would now like to make a simple, but online, game. Would you mind giving me some tutorials or guides related to that? I would really like to learn how to work online with the client/server structure (though I do know the basics of that structure). I look forward to reading from you. Yours faithfully, Renkon.
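    Not from the original question, but as a minimal illustration of the client/server structure it asks about, here is a hypothetical C# sketch using TcpListener and TcpClient from System.Net.Sockets. The server accepts one connection and acknowledges the client's (made-up) game message; a real game would keep the connection open and exchange many such messages.

        using System;
        using System.IO;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading.Tasks;

        class MinimalGameServer
        {
            static async Task Main()
            {
                var listener = new TcpListener(IPAddress.Loopback, 5000);
                listener.Start();
                Console.WriteLine("Server listening on port 5000...");

                // Run a tiny client in the same process just to demonstrate the round trip.
                var clientTask = Task.Run(async () =>
                {
                    using (var client = new TcpClient())
                    {
                        await client.ConnectAsync(IPAddress.Loopback, 5000);
                        using (var stream = client.GetStream())
                        using (var writer = new StreamWriter(stream) { AutoFlush = true })
                        using (var reader = new StreamReader(stream))
                        {
                            await writer.WriteLineAsync("MOVE 3,4");               // a made-up game message
                            Console.WriteLine("Client got: " + await reader.ReadLineAsync());
                        }
                    }
                });

                using (var server = await listener.AcceptTcpClientAsync())          // one connection, one message
                using (var stream = server.GetStream())
                using (var reader = new StreamReader(stream))
                using (var writer = new StreamWriter(stream) { AutoFlush = true })
                {
                    string message = await reader.ReadLineAsync();                  // read the client's move
                    await writer.WriteLineAsync("ACK " + message);                  // acknowledge it
                }

                await clientTask;
                listener.Stop();
            }
        }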

    Read the article

  • References about Game Engine Architecture in AAA Games

    - by sharethis
    Over the last few weeks I focused on game engine architecture and learned a lot about different approaches such as component-based, data-driven, and so on. I used them in test applications and understand their intention, but none of them looks like the holy grail. So I wonder how major games in the industry ("AAA games") solve different architecture problems. I noticed, however, that there are barely any references about game engine architecture out there. Do you know of any resources on the game engine architecture of major game titles like Battlefield, Call of Duty, Crysis, Skyrim, and so on? It doesn't matter whether it is an article by a game developer, a wiki page or an entire book. I read this related popular question: Good resources for learning about game architecture? But it is focused on learning books rather than on approaches used in the industry. Hopefully the breadth of our community can bring together some useful information. Thanks a lot! Edit: This question is focused on, but not restricted to, first-person games.
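    For readers who haven't seen the component-based approach mentioned above, here is a minimal, hypothetical C# sketch (the names are illustrative and not taken from any of the engines listed): an entity is just a bag of components, and a system updates every entity that carries the components it cares about.

        using System;
        using System.Collections.Generic;

        // A component is plain data (or behavior) attached to an entity.
        interface IComponent { }

        class Position : IComponent { public float X, Y; }
        class Velocity : IComponent { public float Dx, Dy; }

        // An entity is nothing more than a container of components.
        class Entity
        {
            private readonly Dictionary<Type, IComponent> _components = new Dictionary<Type, IComponent>();

            public void Add<T>(T component) where T : IComponent
            {
                _components[typeof(T)] = component;
            }

            public T Get<T>() where T : class, IComponent
            {
                IComponent c;
                return _components.TryGetValue(typeof(T), out c) ? (T)c : null;
            }
        }

        // A "system" operates on every entity that has the components it needs.
        class MovementSystem
        {
            public void Update(IEnumerable<Entity> entities, float dt)
            {
                foreach (var e in entities)
                {
                    var pos = e.Get<Position>();
                    var vel = e.Get<Velocity>();
                    if (pos == null || vel == null) continue;   // skip entities missing either component
                    pos.X += vel.Dx * dt;
                    pos.Y += vel.Dy * dt;
                }
            }
        }

    Data-driven engines typically build on the same idea by loading the list of components for each entity from data files rather than hard-coding it.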

    Read the article

  • Spotlight on Claims: Serving Customers Under Extreme Conditions

    - by [email protected]
    Oracle Insurance's director of marketing for EMEA, John Sinclair, recently attended the CII Spotlight on Claims event in London. Bad weather and its implications for the insurance industry have become very topical as the frequency and diversity of natural disasters - including rain, wind and snow - has surged across Europe this winter. On England's wettest day on record, the county of Cumbria was flooded with 12 inches of rain within 24 hours. Freezing temperatures wreaked havoc on European travel, causing high-speed TGV trains to break down and stranding hundreds of passengers in the Channel Tunnel all night long without heat or electricity. A storm named Xynthia thrashed France and surrounding countries with hurricane force, flooding ports and killing 51 people. After the spring equinox, insurers may have thought the worst had passed. Then came Eyjafjallajökull, spewing out vast quantities of volcanic ash in what is turning out to be one of the most costly natural disasters in history. Such extreme events challenge insurance companies' ability to serve their customers just when customers need their help most. When you add economic downturn and competitive pressures to the mix, insurers are stretched further and required to continually learn and innovate to meet high customer expectations with reduced budgets. These and other issues were hot topics of discussion at the recent "Spotlight on Claims" seminar in London, focused on how weather is affecting claims and the insurance industry. The event was organized by the CII (Chartered Insurance Institute), a group with 90,000 members. CII has been at the forefront of setting professional standards for the insurance industry for over a century. Insurers came to the conference to hear how they could better serve their customers under extreme weather conditions, learn from the experience of their peers, and hear about technological breakthroughs in climate modeling, geographic intelligence and IT. Customer case studies at the conference highlighted the importance of effective and constant communication in handling the overflow of catastrophe-related claims. First and foremost is the need to rapidly establish initial communication with claimants to build their confidence in a positive outcome. Ongoing communication then needs to be continued throughout the claims cycle to manage expectations and maintain ownership of the process from start to finish. Strong internal communication to support frontline staff was also deemed critical to successful crisis management, as was communication with the broader insurance ecosystem to tap into extended resources and business intelligence. Advances in technology - such as web-based systems to access policies and enter first notice of loss in the field - as well as customer-focused self-service portals and multichannel alerts, are instrumental in improving customer satisfaction and helping insurers to deal with the claims surge, which can often reach four or more times normal workloads. Dynamic models of the global climate system can now be used to better understand weather-related risks, and as these models mature it is hoped that they will soon become more accurate in predicting the timing of catastrophic events. Geographic intelligence is also being used within a claims environment to better assess loss reserves and detect fraud.
    Despite these advances in dealing with catastrophes and predicting their occurrence, there will never be a substitute for qualified front-line staff to deal with customers. In light of pressures to streamline efficiency, there was debate as to whether outsourcing was the solution, or whether it was better to build on the people you have. In the final analysis, nearly everybody agreed that in the future insurance companies would have to work better and smarter to keep on top. An appeal was also made for greater collaboration amongst industry participants in dealing with the extreme conditions and systematic stress brought on by natural disasters. It was pointed out that the public oftentimes judges the industry as a whole rather than the individual carriers when it comes to freakish events, and that all would benefit at such times from the pooling of limited resources and professional skills rather than competing in silos for competitive advantage - especially the end customer. One case study that stood out was on how The Motorists Insurance Group was able to power through one of the most devastating catastrophes in recent years - Hurricane Ike. The keys to Motorists' success were superior people, processes and technology. They did a lot of upfront planning and invested in their people, creating a healthy team environment that delivered "max service" even when they were experiencing the same level of devastation as the rest of the population. Processes were rapidly adapted to meet the challenge of the catastrophe and continually adjusted to Ike's specific conditions as they evolved. Technology was fundamental to the execution of their strategy, enabling anywhere access, on-the-fly reassignment of resources and rapid training to augment the work force. You can learn more about the Motorists experience by watching this video. John Sinclair is marketing director for Oracle Insurance in EMEA. He has more than 20 years of experience in insurance and financial services.

    Read the article

  • Windows Azure guidance from the Patterns and Practices team

    - by Eric Nelson
    The P&P team have started to share guidance on the Windows Azure Platform. They plan to group their efforts around:
    1. Moving to the Cloud
    2. Integrating with the Cloud
    3. Leveraging the Cloud
    First up is a document which explains the capabilities and limitations of Enterprise Library 5.0 Beta 2 in terms of use within .NET applications designed to run with the Windows Azure platform. You can download it here.
    Related Links:
    UK Azure Online Community – join today
    UK Windows Azure Site
    Start working with Windows Azure

    Read the article

  • DLLs needed to run ASP.NET MVC 3 RC on Windows Azure

    - by DigiMortal
    This weekend I made one of my new apps run on Windows Azure. I am building this application using ASP.NET MVC 3 RC and the Razor view engine. In this posting I will list the DLLs you need to have as local copies to get ASP.NET MVC 3 RC running on a Windows Azure web role. Besides the assemblies that are already referenced, you may need to add references to some more assemblies. The list of assemblies is:
    Microsoft.Web.Infrastructure
    System.Web.Helpers
    System.Web.Mvc
    System.Web.Razor
    System.Web.WebPages
    System.Web.WebPages.Razor
    WebMatrix.Data
    You can find the Razor and ASP.NET Web Pages related assemblies in the folder: C:\Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies\
    NB! If your project is using dynamically loaded assemblies that are not referenced from any of your projects, make sure you include them as project items located in the bin folder. This way these DLLs are also put into the deployment package and you don’t have to create code-level references to them.

    Read the article

  • Data Warehouse Best Practices

    - by jean-pierre.dijcks
    In our quest to share our endless wisdom (ahem…) one of the things we figured might be handy is recording some of the best practices for data warehousing. And so we did. And, we did some more… We now have recreated our websites on Oracle Technology Network and have a separate page for best practices, parallelism and other cool topics related to data warehousing. But the main topic of this post is the set of recorded best practices. Here is what is available (and it is a series that ties together but can be read independently), applicable for almost any database version:
    Partitioning
    3NF schema design for a data warehouse
    Star schema design
    Data Loading
    Parallel Execution
    Optimizer and Stats management
    The best practices page has a lot of other useful information so have a look here.

    Read the article

  • Is the new Windows 8 SDK usable with Visual C++ Express 2010 on Windows 7?

    - by JohnB
    This is inspired by and related to "Is the June 2010 DX SDK really the latest?" asked recently, but it's a different question. I won't likely be purchasing the full Visual Studio 2012 for C++; I intend to use the free Visual C++ Express 2012 that targets desktop applications when it is released, so for now I'm using Visual C++ Express 2010 running on Windows 7. The latest DirectX 11 SDK is the one included in the Windows 8 SDK now; it's not a separate release any more. So my question is: can I use the Windows 8 SDK to build DirectX 11 programs that work on Windows 7, using Visual C++ Express 2010 running on Windows 7? Or do I need to stick to the final DirectX SDK release for now?

    Read the article

  • SQL SERVER – Planned and Unplanned Availability Group Failovers – Notes from the Field #031

    - by Pinal Dave
    [Note from Pinal]: This is a new episode of the Notes from the Field series. AlwaysOn is a very complex subject and not everyone knows much about it. The fact of the matter is that there is very little information available on this subject online, and not everyone knows everything about it. This is why, when a very common question related to AlwaysOn comes up, people get confused. In this episode of the Notes from the Field series, database expert John Sterrett (Group Principal at Linchpin People) explains a very common issue DBAs and Developers face in their careers, related to Planned and Unplanned Availability Group Failovers. Linchpin People are database coaches and wellness experts for a data-driven world. Read the experience of John in his own words.
    Whenever a disaster occurs it will be a stressful scenario, regardless of how small or big the disaster is. This gets multiplied when it is your first time working with newer technology or the first time you are going through a disaster without a proper run book. Today, we're going to help you establish a run book for creating a planned failover with availability groups. To make today's session simple, we're going to have two instances of SQL Server 2012 included in an availability group and walk through the steps of doing an unplanned failover. We will focus on using the user interface and T-SQL to complete the failovers. We are going to use a two-replica Availability Group where each replica is in another location. Therefore, we will be covering Asynchronous commit mode (non-automatic failover); the following is a breakdown of the availability group utilized today.
    Seeing the following screen might be scary the first time you come across an unplanned failover. It looks like our test database used in this Availability Group is not functional, and it currently isn't. The database status is not synchronizing, which makes sense because the primary replica went down so it couldn't synchronize. With that said, we can still fail over and make it functional while we troubleshoot why we lost our primary replica. To start, we are going to right-click on the availability group that needs to be restarted and select Failover. This will bring up the following wizard, which will walk you through several steps needed to complete the failover using the graphical user interface provided with SQL Server Management Studio (SSMS). You are going to see warning messages simply because we are in Asynchronous commit mode and cannot guarantee 'no data loss' when we do the failover. Just in case you missed it, you get another screen warning you about potential data loss because we are in Asynchronous mode. Next we get to connect to the specific replica we want to become the primary replica after the failover occurs. In our case, we only have two replicas, so this is trivial. In order to fail over, it's required to connect to the replica that will become primary. The following screen shows that the connection has been made successfully. Next, you will see the final summary screen. Once again, this reminds you that the failover action can cause data loss as we're using Asynchronous commit mode due to the distance between the instances used for disaster recovery. Finally, once the failover is completed you will see the following screen.
    If you followed along this far you might be wondering what T-SQL scripts are generated for clicking through all the sections of the wizard. If you have used Database Mirroring in the past you might be surprised. It's not too different, which makes sense because the data is being replicated via SQL Server endpoints just like good old database mirroring. Now we're going to take a look at how to do a failover with just T-SQL. First, we're going to need to open a new query window and run our query in SQLCMD mode. Just in case you haven't used SQLCMD mode before, we will show you how to enable it below. Now you can run the following statement. Notice that we connect to the replica we want to become primary after the failover and specify force failover to allow data loss. We can use the following script to fail back when our primary instance comes back online.
    -- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.
    :Connect SQL2012PROD1
    ALTER AVAILABILITY GROUP [AGSQL2] FORCE_FAILOVER_ALLOW_DATA_LOSS;
    GO
    Are your servers running at optimal speed or are you facing any SQL Server performance problems? If you want to get started with the help of experts, read more over here: Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • WCF Error when using “Match Data” function in MDS Excel AddIn

    - by Davide Mauri
    If you're using MDS and DQS with the Excel integration, you may get an error when trying to use the "Match Data" feature, which uses DQS to help identify duplicate data in your data set. The error is quite obscure and you have to enable WCF error reporting to get the error details; you'll discover that they are related to some missing permissions in the MDS and DQS_STAGING_DATA databases. To fix the problem you just have to grant the needed permissions, as the following script does:
    use MDS
    go
    GRANT SELECT ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
    GRANT INSERT ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
    GRANT DELETE ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
    GRANT UPDATE ON mdm.tblDataQualityOperationsState TO [VMSRV02\mdsweb]
    USE [DQS_STAGING_DATA]
    GO
    ALTER AUTHORIZATION ON SCHEMA::[db_datareader] TO [VMSRV02\mdsweb]
    ALTER AUTHORIZATION ON SCHEMA::[db_datawriter] TO [VMSRV02\mdsweb]
    ALTER AUTHORIZATION ON SCHEMA::[db_ddladmin] TO [VMSRV02\mdsweb]
    GO
    Here "VMSRV02\mdsweb" is the user you configured for MDS service execution. If you don't remember it, you can just check which account has been assigned to the IIS application pool that your MDS website is using.

    Read the article

  • What Did You Do? is a Bad Question

    - by Ajarn Mark Caldwell
    Brian Moran (blog | Twitter) did a great presentation today for the PASS Professional Development Virtual Chapter on The Art of Questions.  One of the points that Brian made was that there are good questions and bad (or at least not-as-good) questions.  Good questions tend to open-up the conversation and engender positive reactions (perhaps even trust and respect) between the participants; and bad questions tend to close-down a conversation either through the narrow list of possible responses (e.g. strictly Yes/No) or through the negative reactions they can produce.  And this explains why I so frequently had problems troubleshooting real-time problems with users in the past.  I’ll explain that in more detail below, but before we go on, let me recommend that you watch the recording of Brian’s presentation to learn why the question Why is often problematic in the U.S. and yet we so often resort to it. For a short portion (3 years) of my career, I taught basic computer skills and Office applications in an adult vocational school, and this gave me ample opportunity to do live troubleshooting of user challenges with computers.  And like many people who ended up in computer related jobs, I also have had numerous times where I was called upon by less computer-savvy individuals to help them with some challenge they were having, whether it was part of my job or not.  One of the things that I noticed, especially during my time as a teacher, was that when I was helping somebody, typically the first question I would ask them was, “What did you do?”  This seemed to me like a good way to start my detective work trying to figure out what happened, what went wrong, how to fix it, and how to help the person avoid it again in the future.  I always asked it in a polite tone of voice as I was just trying to gather the facts before diving in deeper.  However; 99.999% of the time, I always got the same answer, “Nothing!”  For a long time this frustrated me because (remember I’m in detective mode at that point) I knew it could not possibly be true.  They HAD to have done SOMETHING…just tell me what were the last actions you took before this problem presented itself.  But no, they always stuck with “Nothing”.  At which point, with frustration growing, and not a little bit of disdain for their lack of helpfulness, I would usually ask them to move aside while I took over their machine and got them out of whatever they had gotten themselves into.  After a while I just grew used to the fact that this was the answer I would usually receive, but I always kept asking because for the .001% of the people who would actually tell me, I could then help them understand what went wrong and how to avoid it in the future. Now, after hearing Brian’s talk, I understand what the problem was.  Even though I meant to just be in an information gathering mode, the words I was using, “What did YOU do?” have such a strong negative connotation that people would instinctively go into defense-mode and stop sharing information that might make them look bad.  Many of them probably were not even consciously aware that they had gone on the defensive, but the self-preservation instinct, especially self-preservation of the ego, is so strong that people would end up there without even realizing it. So, if “What did you do” is a bad question, what would have been better?  
    Well, one suggestion that Brian makes in his talk is something along the lines of, "Can you tell me what led up to this?" or "What was happening on the computer right before this came up?" It's subtle, but the point is to take the focus off the person and their behavior; instead you depersonalize it and talk about events from more of a third-party observer's point of view. With this approach, people will be more likely to talk about what the computer did and what they did in response to it without feeling the interrogation spotlight is on them. They are also more likely to mention other events that occurred around the same time that may or may not be related, but which could certainly help you troubleshoot a larger problem if it is not just user actions. And that is the ultimate goal of your asking the questions. So yes, it does matter how you ask the question; and there are such things as good questions and bad questions. Excellent topic, Brian! Thanks for getting the thinking gears churning! (Cross-posted to the Professional Development Virtual Chapter blog.)

    Read the article

  • How to create a "shutdown user" or "shutdown account"

    - by pcapademic
    Red Hat had a feature that is useful to me at the present time. There was an account, generally called "shutdown", and when you logged in with that account, the system shut down. In my specific case, I have Ubuntu Server running in a VM on my local system. The VM is running a web app, and when I'm done doing work, I want to shut down the VM. Unfortunately, I can't install VMware Tools to get the "power button" based shutdown. Currently I log in, then run sudo shutdown -h now, then type my password again, and things shut down. Really, all that waiting around and typing is getting annoying. How do I replicate the "shutdown account" functionality in Ubuntu? A related question: were there any security gotchas that motivated people to stop using this kind of account?

    Read the article

  • Genetic algorithms with large chromosomes

    - by Howie
    I'm trying to solve the graph partitioning problem on large graphs (between a billion and a trillion elements) using a GA. The problem is that even one chromosome will take several gigs of memory. Are there any general compression techniques for chromosome encoding? Or should I look into distributed GA? NOTE: using some sort of evolutionary algorithm for this problem is a must! A GA seems to be the best fit (although not for such large chromosomes). EDIT: I'm looking for state-of-the-art methods that other authors have used to solve the problem of large chromosomes. Note that I'm looking for either a more general solution or a solution particular to graph partitioning. Basically I'm looking for related work, as I, too, am attempting to use a GA for the problem of graph partitioning. So far I haven't found anyone who has had this problem of large chromosomes, nor anyone who has tried to solve it.
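    Not part of the original question, but one common way to shrink a chromosome for two-way graph partitioning is to pack the partition assignment into one bit per vertex (a k-way partition needs ceil(log2 k) bits per vertex). A hypothetical C# sketch, with illustrative names only:

        using System;
        using System.Collections;

        // One bit per vertex: bit i says which of the two partitions vertex i belongs to.
        // A graph with a billion vertices then costs roughly 125 MB per chromosome,
        // instead of gigabytes of per-vertex objects or integers. Beyond ~2 billion
        // vertices a single BitArray no longer fits and a chunked ulong[] would be needed.
        class PartitionChromosome
        {
            private readonly BitArray _genes;
            private static readonly Random Rng = new Random();

            public PartitionChromosome(int vertexCount)
            {
                _genes = new BitArray(vertexCount);
                for (int i = 0; i < vertexCount; i++)
                    _genes[i] = Rng.Next(2) == 1;            // random initial assignment
            }

            // Which side of the cut this vertex is currently on.
            public bool this[int vertex]
            {
                get { return _genes[vertex]; }
            }

            // Bit-flip mutation (and single-point crossover) work directly on the packed bits.
            public void MutateBit(int vertex)
            {
                _genes[vertex] = !_genes[vertex];
            }
        }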

    Read the article

  • JavaOne pictures and Community Commentary on JCP Awards

    - by heathervc
    We posted some pictures from JCP related events at JavaOne 2012 on the JCP Facebook page today. The 2012 JCP Program Award winners and some of the nominees responded to the community recognition of their achievements during some of the JCP events last week.
    "Our job on the EC is to balance the need for innovation – so we don't standardize too early, or too late. We try to find that sweet spot that makes innovation and standardization work together, and not against each other." - Ben Evans, CEO of jClarity and Executive Committee (EC) representative of the London Java Community, 2012 JCP Member/Participant of the Year Winner
    "SouJava has been evangelizing the Java platform, promoting the Java ecosystem in Brazil, and contributing to JSRs for several years. It's very gratifying to have our work recognized, on behalf of many developers and Java User Groups around the world. This really is the work of a large group of people, represented by the few that can be here tonight." - Michael Santos, representative of SouJava, 2012 JCP Member/Participant of the Year Winner
    "In recent years Credit Suisse has contributed to the development of Java EE specifications through participation in many customer advisory boards, through statements of requirements for extensions to the core Java related products in use, and through active participation in JSRs. Winning the JCP Outstanding Spec Lead Award 2012 is very encouraging for our engagement and also demonstrates the level of expertise and commitment to drive the evolution of Java. Victor Grazi is happy and honored to receive this award." - Susanne Cech Previtali, Executive Committee (EC) representative of Credit Suisse, accepting the award for the 2012 JCP Outstanding Spec Lead Winner
    "Managing a JSR is difficult. There are so many decisions to be made and so many good and varied opinions, you never really know if you have decided correctly. The key to success is transparency and collaboration. I am truly humbled by receiving this award; there are so many other active JSRs." Victor added that going forward in the JCP EC, they would like to simplify and open up the process of participation – something being addressed in the JCP.Next initiative of the JCP EC. "We would also like to encourage the engagement of universities, professors and students – as an important part of the Java community. While innovation is the lifeblood of our community and industry, without strong standards and compatibility requirements we all end up in a maze of technology where everything is slightly different and doesn't quite work with everything else." - Victor Grazi, Executive Committee (EC) representative of Credit Suisse, 2012 JCP Outstanding Spec Lead Winner
    "I am very pleased, of course, to accept this award, but the credit really should go to all of those who have participated in the work of the JCP, while pushing for changes in the way it operates. JCP.Next represents three JSRs. The first two are done, but the final step, JSR 358, is the complicated one, and it will bring in the lawyers. Just to give you an idea of what we're dealing with, it affects licensing, intellectual property, patents, implementations not based on the Reference Implementation (RI), the role of the RI, compatibility policy, possible changes to the Technical Compatibility Kit (TCK), transparency, where individuals fit in, open source, and more." - Patrick Curran, JCP Chair, Spec Lead on the JCP.Next JSRs (JSR 348, JSR 355 and JSR 358), 2012 JCP Most Significant JSR Winner
    "I'm especially glad to see the JCP community recognize JCP.Next for its importance. The governance work it represents is KEY to moving the Java platform forward and the success of the technology." - John Rizzo, Executive Committee (EC) representative of Aplix Corporation, JSR Expert Group Member
    "I am deeply honored to be nominated. I had the privilege to receive two awards on behalf of Expert Groups and Spec Leads two years ago. But this time, I am nominated personally, which values my own contribution to the JCP, and of course, participation in JSRs and the EC work. I'm a fan of Agile principles and values. Working as an Agile Coach and Consultant, I use them for some of the biggest EC Member companies and projects. It fuels my ability to help the JCP become more agile, lean and transparent as part of the JCP.Next effort." - Werner Keil, Individual Executive Committee (EC) Member, a 2012 JCP Member/Participant of the Year Nominee, JSR Expert Group Member
    "The JCP has always been some kind of institution for me," Markus said. "If in technical doubt, I go there, look for the specifications of the implementation I work with at the moment and verify what I had observed. Since the beginning of my Java journey more than 12 years back now, I have always had a strong relationship with the JCP. Shaping the future of a technology by joining the JCP – giving feedback and contributing to the road ahead through individual JSRs – that brings you to a whole new level." Calling himself "the new kid on the block," he explained that for years he was afraid to join the JCP and contribute. But in reality, "Every single one of the big names I meet from the different Expert Groups is a nice person. People you can actually work with," he says. "And nobody blames you for things you don't know. As long as you are committed and bring what is worth the most: passion, experience and the desire to make a difference." - Markus Eisele, a 2012 JCP Member of the Year Nominee, JSR Expert Group Member
    Congratulations again to all of the nominees and winners of the JCP Program Awards. Next year, we will add another award for the group of JUG members (not an entire JUG) that makes the best contribution to the Adopt-a-JSR program. Let us know if you have other suggestions or improvements.

    Read the article

  • Found a good tool for jQuery Coding – jQueryPad

    - by Shaun
    Just found what looks like a good tool for jQuery coding and debugging from appinn.com (Chinese), named jQueryPad, by Paul Stovell. With it we don't need to switch between Visual Studio and the browser when coding and debugging. There's only one main screen where we can type the HTML and jQuery code and just press F5 to see the result in the bottom frame. .NET Framework 3.5 is required. Hope this helps. Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • File added to project doesn't get added to packages

    - by lorin
    I'm creating customized binary versions of the OpenStack nova packages (lp:nova) using their packaging scripts (lp:~openstack-ubuntu-packagers/ubuntu/natty/nova/ubuntu). I create binaries by doing:
    dpkg-buildpackage -b -rfakeroot -tc -uc -D
    This creates a set of packages (python-nova, nova-common, nova-compute, ...). In our customized version of the code (lp:~usc-isi/nova/hpc-trunk), we recently merged in some changes from another branch, and there's now a new file in our repository that wasn't in upstream: nova/virt/cpuinfo.xml.template. This file isn't getting added to any of the packages, when it should be added to python-nova. Why wouldn't dpkg-buildpackage be including this file? A more basic question: how does dpkg-buildpackage determine which files go in which packages? Is it related at all to the debian/watch file? That file contains some URLs pointing to the upstream project:
    version=3
    http://launchpad.net/nova/+download http://launchpad.net/nova/.*/nova-(.*)\.tar\.gz
    http://nova.openstack.org/tarballs/ nova-(.*).tar.gz

    Read the article

  • Rendering a CV template with XeLaTeX

    - by jacob
    1) Installed Kubuntu on Thursday.
    2) Installed LaTeX on my Kubuntu machine, using the full install.
    3) Compiled an old document and it worked fine.
    4) Downloaded a CV template from http://www.latextemplates.com/template/two-column-one-page-cv
    5) Compiled it and got the error:
    Fatal fontspec error: "cannot-use-pdftex". The fontspec package requires either XeTeX or LuaTeX to function. You must change your typesetting engine to, e.g., "xelatex" or "lualatex" instead of plain "latex" or "pdflatex". See the fontspec documentation for further information. For immediate help type H <return>.
    6) Installed XeLaTeX using this guide: http://ledgersmb.org/faq/xelatex
    7) Installed texlive-xetex, which includes xelatex:
    apt-get install texlive-xetex
    apt-get install liblatex-{driver,encode,table}-perl
    apt-get install libtemplate-plugin-latex-perl
    8) Compiled the CV template again; it did not work.
    Related: No Xelatex in texlive 2012
    Excuse me if my question is not clear enough, I'm new to Linux.

    Read the article
