Search Results

Search found 5885 results on 236 pages for 'finally'.

  • HPC Server Dynamic Job Scheduling: when jobs spawn jobs

    - by JoshReuben
    HPC Job Types HPC has 3 types of jobs http://technet.microsoft.com/en-us/library/cc972750(v=ws.10).aspx
    · Task Flow – vanilla sequence
    · Parametric Sweep – concurrently run multiple instances of the same program, each with a different work unit input
    · MPI – message passing between master & slave tasks
    But when you try to go outside the box – job tasks that spawn jobs, blocking the parent task – you run the risk of resource starvation, deadlocks, and recursive, non-converging or exponential blow-up. The solution to this is to write some performance monitoring and job scheduling code. You can do this in 2 ways: manually control scheduling – allocate/de-allocate resources, change job priorities, pause & resume tasks, restrict long-running tasks to specific compute clusters; or semi-automatically – set threshold params for scheduling.
    How – Control Job Scheduling In order to manage the tasks and resources that are associated with a job, you will need to access the ISchedulerJob interface - http://msdn.microsoft.com/en-us/library/microsoft.hpc.scheduler.ischedulerjob_members(v=vs.85).aspx This really allows you to control how a job is run – you can access & tweak the following features: max/min resource values; whether job resources can grow/shrink and whether jobs can be pre-empted; whether the job is exclusive per node; the creator process id & the job pool; timestamp of job creation & completion; job priority, hold time & run time limit; re-queue count; job progress; max/min number of cores, nodes, sockets, RAM; dynamic task list – you can add/cancel jobs on the fly; job counters.
    When – Poll Perf Counters Tweaking the job scheduler should be done on the basis of resource utilization according to PerfMon counters – HPC exposes 2 Perf objects: Compute Clusters and Compute Nodes http://technet.microsoft.com/en-us/library/cc720058(v=ws.10).aspx You can monitor running jobs according to dynamic thresholds – use your own discretion: percentage processor time; number of running jobs; number of running tasks; total number of processors; number of processors in use; number of processors idle; number of serial tasks; number of parallel tasks.
    Design Your Algorithms Correctly Finally, don’t assume you have unlimited compute resources in your cluster – design your algorithms with the following factors in mind:
    · Branching factor - http://en.wikipedia.org/wiki/Branching_factor - dynamically optimize the number of children per node
    · Cutoffs to prevent explosions - http://en.wikipedia.org/wiki/Limit_of_a_sequence - not all functions converge after n attempts. You also need a threshold of “good enough”, of diminishing returns
    · Heuristic shortcuts - http://en.wikipedia.org/wiki/Heuristic - sometimes an exhaustive search is impractical and shortcuts are suitable
    · Pruning - http://en.wikipedia.org/wiki/Pruning_(algorithm) - remove / de-prioritize unnecessary tree branches
    · Avoid local minima / maxima - http://en.wikipedia.org/wiki/Local_minima - sometimes an algorithm can’t converge because it gets stuck in a local saddle – try simulated annealing, hill climbing or genetic algorithms to get out of these ruts
    · Watch out for rounding errors - http://en.wikipedia.org/wiki/Round-off_error - multiple iterations in parallel can quickly amplify & blow up your algo! Use an epsilon; avoid floating point errors, truncations, approximations
    Happy Coding!
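
    To make the “control job scheduling” idea concrete, here is a minimal C# sketch using the HPC Pack scheduler API discussed above. The head node name, job id, and the 85% CPU threshold are assumptions for illustration only, and the sketch polls a local PerfMon counter where a real implementation would read the Compute Clusters / Compute Nodes counters mentioned in the post:

    using System.Diagnostics;
    using System.Threading;
    using Microsoft.Hpc.Scheduler;
    using Microsoft.Hpc.Scheduler.Properties;

    class JobThrottler
    {
        static void Main()
        {
            // Connect to the cluster head node (name is hypothetical)
            IScheduler scheduler = new Scheduler();
            scheduler.Connect("MYHEADNODE");

            // Sample a PerfMon counter; the first NextValue() call always returns 0,
            // so sample once, wait, then read the real value
            var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
            cpu.NextValue();
            Thread.Sleep(1000);
            bool busy = cpu.NextValue() > 85f; // threshold is an arbitrary example

            // Tweak a running job via ISchedulerJob (job id 42 is hypothetical)
            ISchedulerJob job = scheduler.OpenJob(42);
            if (busy)
            {
                job.Priority = JobPriority.BelowNormal; // de-prioritize under load
                job.CanGrow = false;                    // stop acquiring more resources
                job.MaximumNumberOfCores = 8;           // cap the job's core usage
            }
            job.Commit(); // push the property changes back to the scheduler
        }
    }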

    Read the article

  • JRockit R28/JRockit Mission Control 4.0 is out!

    - by Marcus Hirt
    The next major release of JRockit is finally out! Here are some highlights:
    · Includes the all-new JRockit Flight Recorder – supersedes the old JRockit Runtime Analyser. The new flight recorder is inspired by the “black box” in airplanes. It uses a highly efficient recording engine and thread-local buffers to capture data about the runtime and the application running in the JVM. It can be configured to always be on, so that whenever anything “interesting” happens, data can be dumped for some time back. Think of it as your own personal profiling time machine.
    · Automatic shortest-path calculation in Memleak – no longer any need for running around in circles when trying to find your way back to a thread root from an instance. Memleak can now show class loader related information and split graphs on a per class loader basis.
    · More easily configured JMX agent – the default port for both the RMI Registry and the RMI Server can be configured, and is by default the same, allowing easier configuration of firewalls.
    · Up to 64 GB (was 4 GB) compressed references.
    · Per-thread allocation profiling in the Management Console.
    · Native Memory Tracking – it is now possible to track native memory allocations with very high resolution. The information can be accessed either using JRCMD or via the dedicated Native Memory Tracking experimental plug-in for the Management Console (alas, only available for the upcoming 4.0.1 release).
    · JRockit can now produce heap dumps in HPROF format.
    · Cooperative suspension – JRockit no longer uses system signals for stopping threads, which could lead to hangs if signals were lost or blocked (for example, on bad NFS shares). Now threads check periodically to see if they are suspended.
    · VPAT/Section 508 compliant JRMC – greatly improved keyboard navigation and screen reader support.
    See New and Noteworthy for more information. JRockit Mission Control 4.0.0 can be downloaded from here: http://www.oracle.com/technology/software/products/jrockit/index.html
    <shameless ad> There is even a book to go with JRMC 4.0.0/JRockit R28! http://www.packtpub.com/oracle-jrockit-the-definitive-guide/book/ </shameless ad>

    Read the article

  • How to (un)dock IBM Thinkpad X41 from X4 Dock(ing station) successfully?

    - by nutty about natty
    I'd like to start using my docking station again; however, it still doesn't work as it should – see the following bug descriptions (with special focus on the Thinkpad X41 & the X4 Dock). Given that it still doesn't work (as of April 2012), my hope is fading that it will start working all of a sudden with Precise Pangolin at the end of the month. This issue is VERY important to me and I would be MOST grateful to anyone able to sift through the following links (some of which are actually quite recent) and translate their meaning into reliable, concrete, simple (?) steps. I've read briefly about hal and udev, and can imagine that they are somewhat related to this, see the links below. I don't want to fire at random. I don't want to tinker around with bash scripts if avoidable...
    Problem description (more or less ;-): Pressing the undock button on a "ThinkPad X4 Dock" with a ThinkPad X40 does not cause any udev events. And the lights on the dock never change to indicate it is safe to undock.
    and IBM Thinkpad X41 & docking station no joy :-( ... when pressing the blue undock button on the docking station: - The screen goes blank (with backlight remaining on), - with some SSD/HDD activity; - ctrl alt del causes a shut down after ... seconds, indicating that the system itself hasn't "crashed" but is still (somewhat ?) responsive.
    and With recent distributions, docking and undocking should function out of the box. You can monitor this by running # udevadm monitor and when you dock or press the undock button you should see a flurry of events. There are some issues though: No event on undock. - In some cases you may not get any events on undock. This is due to the ACPI dock drivers only registering the first logical dock port they encounter, and in some rare cases there may be more than one, such as on a ThinkPad X40 with ThinkPad X4 Dock. Patches are available, and are merged in 2.6.34.
    Now, if patches are available and merged into 2.6.34 – why isn't (un)docking simply working / fully supported in the latest version of Natty (which, to my humble understanding, surpassed kernel version 2.6.34 a while ago)?
    More relevant links: ThinkPad X41 Docking Station issues and [HOWTO] Run scripts for laptop lid open/close and dock/undock events and finally Symptoms corrected by the latest BIOS Update - ThinkPad X41 - (Fix) USB devices connected to UltraBase X4 or ThinkPad X4 Dock may not be recognized in Boot Menu by pressing F12 during POST. Thanks!!

    Read the article

  • Oracle's Australian Graduate Recruitment Program

    - by david.talamelli
    I have been with Oracle for 5 years now and one thing I have found there is never a shortage of here is Variety. Over the last 5 years I have had the opportunity to work on projects across various countries, across various technologies and skill-sets, and also across various levels of seniority. No two days are the same. One of the projects I was fortunate to be involved in occurred last year and it is one of the ones that is closest to me. Last year I was able to take responsibility for our 2011 Graduate Recruitment drive in Australia. Two weeks ago I went to Sydney to meet our Graduates who started with us in February 2011 and it was great to see them come to the end (or beginning, actually) of our journey together. I am excited at the potential of what our Graduates' careers will develop into here with us. I remember at our interviews last year trying to explain life at Oracle; it is great to see those same Graduates with us now, learning and developing life and business skills that I hope they will take with them in their professional careers. I was talking to one of my colleagues this week who mentioned that the excitement and energy our new Graduates bring is infectious, and I agree it really is. Our Graduates have a big learning curve ahead of them and they are about to start going on rotations into some of our Business Groups - but I think it is a great experience to see how a global company operates and pulls together to achieve results. Here is a picture we took the other week of this year's Oracle Graduates (if any of our Graduates are reading this blog - it was great seeing you in NSW and I do wish you all the success here at Oracle).
    Once again Oracle's Graduate Program will be running in 2011 in Australia (Graduates will start in Jan/Feb 2012). The Oracle Australia Graduate Development Program is a one-year program consisting of orientation, formal training, project rotations in one core line of business and finally job placement. The formal training is a combination of structured development programs on soft skills and functional competencies via various delivery formats. Graduates are also expected to work in a team environment and complete multiple projects addressing real business challenges, while at the same time gaining a broad business understanding. For our Australia program we are hiring in our North Ryde and Melbourne offices. Resume submissions are being accepted now. First round interviews will take place in June 2011 with final round interviews in July 2011. The Australia Graduate Program is open to Australian residents and citizens who are either in the final year of their studies or graduated the previous year. For more details on Oracle and our Graduate Program visit our Campus website. To express your interest, mail your resume to [email protected]

    Read the article

  • Why it may be good to be confused: Mary LoVerde’s Motivational Discussion at Oracle

    - by user769227
    Why it may be good to be confused: Mary LoVerde’s Motivational Discussion at Oracle, by Olivia O'Connell
    Last week, we were treated to a call with Mary LoVerde, a renowned life-balance and motivational speaker. This was one of many events organized by Oracle Women’s Leadership (OWL). Mary made some major changes to her life when she decided to free herself of material possessions and take each day as it came. Her life-balance strategies have led her from working with NASA to appearing on Oprah. Mary’s MO is “cold turkey is better than dead duck!” – in other words, knowing when to quit. It is a surprising concept that flies in the face of the “winners don’t quit” notion and focuses on how we limit our capabilities and satisfaction levels by doing something that we don’t feel passionately about. Her arguments about quitting were based on the idea that ‘“it” is in the way of you getting what you really want’ and that ‘quitting makes things easier in the long run’. Of course, it is often difficult to quit, and though we know that things would be better if we did quit certain negative things in our lives, we are often ashamed to do so.
    A second topic centred on the concept of Confusion Endurance. Confusion Endurance is based around the idea that it is often good not to know exactly what you are doing and that it is okay to admit you don’t know something when others ask you; essentially, that humility can be a good thing. This concept is said to trace back to Leonardo Da Vinci, who apparently found liberation in not knowing. Mary says this allows us to “thrive in the tension of not knowing to unleash our creative potential”. An anecdote about an interviewee at NASA was used to portray how admitting you don’t know can be a positive thing. When NASA asked the candidate a question with no obvious answer and he replied “I don’t know”, the candidate thought he had failed the interview; actually, the interviewers were impressed with his ability to admit he did not know. If the interviewee had guessed the answer in a real-life situation, it could have cost the lives of fellow astronauts.
    The highlight of the webinar for me? Mary told how she had a conversation with Capt. Chesley B. "Sully" Sullenberger, who recalled the US Airways Flight 1549 / Miracle on the Hudson incident. After making its descent and finally coming to rest in the Hudson after falling 3,060 feet in 90 seconds, Sully and his co-pilot turned to each other and said “well...that wasn’t as bad as we thought”. Confusion Endurance at its finest! Her discussion certainly gave food for thought, although personally, I was inclined to take some of it with a pinch of salt.
    Mary LoVerde is the author of The Invitation, and you can visit her website and view her other publications at www.maryloverde.com. For details on the Professional Business Women of California visit: http://www.pbwc.org/

    Read the article

  • Is Nick Clegg a man or a mouse?

    - by BizTalk Visionary
    Well, we got the hung election so many of us wanted! I believe it really is time for electoral change. Why? Consider: the ConMen under Cameroon have polled 36% of the great British voting public – well, those that got to vote!! That means 64% of us don’t want him as PM. So what gives him the right to govern? Well, an ancient voting system ideal for two-party politics. But for the last 30 years we’ve had multi-party politics, and going forward we may see 4 or 5 parties stepping up. We have to set in place a system that makes this work! So what does that mean today: Nick has a golden chance to push forward the case – and in fact the absolute right – for change. He needs to keep this in mind when he discusses coalition with both Labour and the ConMen.
    So the mouse approach: decide it is only fair to side with the ‘biggest’ vote and team up with the ConMen. Chances of electoral change? Big fat zero. Chance of achieving any of his other targets? Big fat zero. Why? Simple (as the Meerkat would say). Cameroon needs to become PM by hook or crook. Once PM, he holds the whip hand. Labour will dump Brown and head off into Leadership race land, Clegg will be knocking on Number 10, having meaningless meetings and seeing no reward. Finally, while Labour is at sixes and sevens, the ‘new’ PM will call a new election, gain the majority they need and dump luckless Nick!!
    So the man approach: team up with Labour. As one of the conditions – Brown to go. Run a referendum for PR. Get PR through, then force Labour to have a new election under PR. Nick is now a hero and should be in a much better place following a PR election!! The man bit is standing up to the media attack for supporting Labour. Come on Nick – be a man for a better Britain!!

    Read the article

  • SnagIt Live Writer Plug-in updated

    - by Rick Strahl
    I've updated my free SnagIt Live Writer plug-in again, as there have been a few issues with the new release of SnagIt 11. It appears that TechSmith has trimmed the COM object and removed a bunch of redundant functionality, which broke the older plug-in. I also updated the layout and added SnagIt's latest icons to the form. Finally, I've moved the source code to GitHub for easier browsing and downloading for anybody interested, and easier updating for me. This plug-in is not new - I created it a number of years back - but I use the hell out of it, both for my blogging and for a few internal apps that use the MetaWebLog API to update online content. The plug-in makes it super easy to add captured image content directly into a post and upload it to the server.
    What does it do? Once installed, the plug-in shows up in the list of plug-ins. When you click it, it launches a SnagIt Capture dialog. Typically you set the capture settings once, and then save your settings. After that a single click or ENTER press gets you off capturing. If you choose the Show in SnagIt preview window option, the image you capture is displayed in the preview editor where you can mark up images, which is one of SnagIt's great strengths IMHO. The image editor has a bunch of really nice effects for framing, marking up and highlighting images that is really sweet. Here's a capture from a previous image in the SnagIt editor where I applied the saw tooth cutout effect.
    Images are saved to disk and can optionally be deleted immediately, since Live Writer creates copies of original images in its own folders before uploading the files. No need to keep the originals around, typically. The plug-in works with SnagIt versions 7 and later. It's a simple thing of course - nothing magic here - but incredibly useful, at least to me. If you're using Live Writer and you own a copy of SnagIt, do yourself a favor and grab this and install it as a plug-in.
    Resources: SnagIt Windows Live Writer Plug-in Installer Source Code on GitHub Buy SnagIt
    © Rick Strahl, West Wind Technologies, 2005-2012

    Read the article

  • Back home :-)

    - by Mike Dietrich
    Wrote this entry last night in the ICE from Stuttgart to Munich but the connection broke: a 28.5 hour journey - and close by now. Actually I would have been even closer if our TGV hadn't had brake problems as soon as we entered German territory. And you don't want a train which goes up to a speed of 200 mph having issues with its brakes, right? So we missed the connection in Stuttgart but I caught the last train that night towards Munich. Distance: approx 1900 km altogether. Usually it takes 2.5 hours with a direct flight with Aer Lingus from Munich, or a bit more when you go through Zurich or Frankfurt. But at least you meet more people and see a bit more of the landscapes passing by :-)
    Except for the brake problem everything worked out well so far (I'm now there, finally!). I had 4 hours to change in Paris from Gare du Nord to Gare de l'Est and one thing I really have to point out: the people working for SNCF, the French National Railways, were so organized and helpful, purely amazing. I asked the man at the counter where I had to pick up my prepaid tickets for directions to Gare de l'Est - and after we had a chat about Marlene Dietrich he just grabbed his iPhone, started Google Earth and showed me the way to walk. I'm pretty sure it's a stupid stereotype that people in Paris or France are unfriendly to foreigners who don't speak French. In my past 3 stays or travels to Paris in the past 2 years I have had only great experiences.
    And another thing I really enjoy when being in France: the food!!! The sandwich I had at the train station was packed with yummy goat cheese. And there's always Paul. You might ask yourself: who the heck is Paul? That's Paul - or actually their website. And at Paul's they usually serve excellent fruit tartes - and this time a nice Gâteau au Chocolat. And very good Café Crème as well :-)
    That's actually the positive part of traveling this way: the food you'll get is much better than the airline food - if your airline still serves something called food ...

    Read the article

  • On Writing Blogs

    - by Tony Davis
    Why are so many blogs about IT so difficult to read? Over at SQLServerCentral.com, we do a special subscription-only newsletter called Database Weekly. Every other week, it is my turn to look through all the blogs, news and events that might be of relevance to people working with databases. We provide the title, with the link, and a short abstract of what you can expect to read. It is a popular service with close to a million subscribers. You might think that this is a happy and fascinating task. Sometimes, yes. If a blog comes to the point quickly, and says something both interesting and original, then it has our immediate attention. If it backs up what it says with supporting material, then it is more-or-less home and dry, featured in DBW's list. If it also takes trouble over the formatting and presentation, maybe with an illustration or two and any code well-formatted, then we are agog with joy and it is marked as a must-visit destination in our blog roll. More often, however, a task that should be fun becomes a routine chore, and the effort of trawling so many badly-written blogs is enough to make any conscientious Health & Safety officer whistle through their teeth at the risk to the editor's spiritual and psychological well-being. And yet, frustratingly, most blogs could be improved very easily. There is, I believe, a simple formula for a successful blog. First, choose a single topic that is reasonably fresh and interesting. Second, get to the point quickly; explain in the first paragraph exactly what the blog is about, and then stay on topic. In writing the first paragraph, you must picture yourself as a pilot, hearing the smooth roar of the engines as your plane gracefully takes air. Too often, however, the accompanying sound is that of the engine stuttering before the plane veers off the runway into a field, and a wheel falls off. The author meanders around the topic without getting to the point, and takes frequent off-radar diversions to talk about themselves, or the weather, or which friends have recently tagged them. This might work if you're J.D Salinger, or James Joyce, but it doesn't help a technical blog. Sometimes, the writing is so convoluted that we are entirely defeated in our quest to shoehorn its meaning into a simple summary sentence. Finally, write simply, in plain English, and in a conversational way such that you can read it out loud, and sound natural. That's it! If you could also avoid any references to The Matrix then this is a bonus but is purely personal preference. Cheers, Tony.

    Read the article

  • Finalists for the Microsoft Accelerator for Windows Azure

    - by ScottGu
    Today, I am pleased to announce the ten finalists for the Microsoft Accelerator for Windows Azure powered by TechStars. These startups are about to launch into a three-month program where they will develop new products and businesses using Windows Azure. The response to the program has been fantastic - we received nearly 600 applications from entrepreneurs in 69 countries around the world, spanning a host of industries including retail, travel, entertainment, banking, real estate and more.  There were so many innovative ideas and amazing teams that it really made the selection process hard.  We finally landed on 10 finalists, based on their experience, qualifications, and innovative business ideas built on the cloud. This fall’s Windows Azure class includes:
    · Advertory – Berlin, Germany. Advertory helps local businesses increase revenue and build customer loyalty.
    · Appetas – Seattle, WA. Appetas' mission is to make restaurants look as beautiful online as they do on the plate!
    · BagsUp – Sydney, Australia. Find great places from people you trust.
    · Embarke – San Diego, CA. Embarke allows developers and companies the ability to integrate with any human communication channel (Facebook, Email, Text Message, Twitter) without having to learn the specifics, write code, or spend time on any of them.
    · Fanzo – Seattle, WA. Fanzo puts sports fans in the spotlight. Find other fans, show off your fanswagger and get rewarded for your passion.
    · MetricsHub – Bellevue, WA. A service providing cloud monitoring with incident detection and prebuilt workflows for remedying common problems.
    · Mobilligy – Bellevue, WA. Mobilligy revolutionizes how people pay their bills by bringing convenient, secure, and instant bill payment support to mobile devices.
    · Realty Mogul – Los Angeles, CA. Realty Mogul is a crowdfunding platform for real estate where accredited investors pool capital and invest in properties that are acquired, managed and eventually resold by professional private real estate companies and their management teams.
    · Staq – San Francisco, CA. Back-end as a service for APIs.
    · Socedo – Bellevue, WA. A simple and effective web application for lead generation and relationship management on Twitter.
    Each startup will be hosted in Seattle and mentored by entrepreneurs and venture capitalists as well as leaders from Windows Azure and other Microsoft organizations. The teams will spend the first month ideating and refining their business concepts with input and advice from their mentors as well as Microsoft customers, followed by two months of design and development. They will present their results to investors and Microsoft partners at an event in mid-January. We are really looking forward to seeing how their businesses evolve.  These teams have demonstrated incredible energy, passion, and innovative capabilities – and they are ready to show the world what’s possible with Windows Azure. Thanks, Scott P.S. And if you are new to Twitter you can also optionally follow me: @scottgu

    Read the article

  • SPSiteDataQuery Returns Only One List Type At A Time

    - by Brian Jackett
    The SPSiteDataQuery class in SharePoint 2007 is very powerful, but it has a few limitations.  One of these limitations, which I ran into this morning (and which caused hours of frustration), is that you can only return results from one list type at a time.  For example, if you are trying to query items from an out-of-the-box custom list (list type = 100) and a document library (list type = 101), you will only get items from the custom list (SPSiteDataQuery defaults to list type = 100).  In my situation I was attempting to query multiple lists (created from custom list templates 10001 and 10002), each with their own content types.
    Solution
    Since I am only able to return results from one list type at a time, I was forced to run my query twice, each time setting the ServerTemplate (which translates to ListTemplateId if you are defining custom list templates) before executing the query.  Below is a snippet of the code to accomplish this.
    SPSiteDataQuery spDataQuery = new SPSiteDataQuery();
    spDataQuery.Lists = "<Lists ServerTemplate='10001' />";
    // ... set rest of properties for spDataQuery
    var results = SPContext.Current.Web.GetSiteData(spDataQuery).AsEnumerable();
    // only change to SPSiteDataQuery is Lists property for ServerTemplate attribute
    spDataQuery.Lists = "<Lists ServerTemplate='10002' />";
    // re-execute query and concatenate results to existing entity
    results = results.Concat(SPContext.Current.Web.GetSiteData(spDataQuery).AsEnumerable());
    Conclusion
    Overall this isn’t an elegant solution, but it’s a workaround for a limitation of the SPSiteDataQuery.  I am now able to return data from multiple lists spread across various list templates.  I’d like to thank those who commented on this MSDN page for finally pointing out the limitation to me.  Also thanks to Mark Rackley for “name dropping” me in his latest article (though I humbly insist I don’t belong in such company) as well as encouraging me to write up a quick post on this issue despite my busy schedule.  Hopefully this post saves some of you from the frustrations I experienced this morning using the SPSiteDataQuery.  Until next time, Happy SharePoint’ing all.
    -Frog Out
    Links: MSDN Article for SPSiteDataQuery http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spsitedataquery.lists.aspx
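
    As a small generalization of the workaround above, here is a hedged sketch that runs the same query once per list template id and concatenates the results. The helper class/method names and the ToList() materialization are my own additions for illustration, not from the original post:

    using System.Collections.Generic;
    using System.Data;
    using System.Linq;
    using Microsoft.SharePoint;

    public static class SiteDataQueryHelper
    {
        // Hypothetical helper: re-runs one SPSiteDataQuery per ServerTemplate id.
        public static IEnumerable<DataRow> QueryAcrossTemplates(
            SPWeb web, SPSiteDataQuery query, params int[] serverTemplates)
        {
            IEnumerable<DataRow> results = Enumerable.Empty<DataRow>();
            foreach (int template in serverTemplates)
            {
                query.Lists = string.Format("<Lists ServerTemplate='{0}' />", template);
                // ToList() snapshots each result set before the query object is mutated again
                results = results.Concat(web.GetSiteData(query).AsEnumerable().ToList());
            }
            return results;
        }
    }

    // Usage, mirroring the post's templates 10001 and 10002:
    // var rows = SiteDataQueryHelper.QueryAcrossTemplates(SPContext.Current.Web, spDataQuery, 10001, 10002);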

    Read the article

  • ASP.NET MVC 2 Model Binding for a Collection

    - by nmarun
    Yes, yet another post from me on Model Binding (my previous one is here), but this one uses features presented in MVC 2. How did I get to writing this blog? Well, I’m on a project where we’re doing some MVC things for a shopping cart. Let me show you what I was working with. Below are my model classes:
    1: public class Product
    2: {
    3:     public int Id { get; set; }
    4:     public string Name { get; set; }
    5:     public int Quantity { get; set; }
    6:     public decimal UnitPrice { get; set; }
    7: }
    8: 
    9: public class Totals
    10: {
    11:     public decimal SubTotal { get; set; }
    12:     public decimal Tax { get; set; }
    13:     public decimal Total { get; set; }
    14: }
    15: 
    16: public class Basket
    17: {
    18:     public List<Product> Products { get; set; }
    19:     public Totals Totals { get; set; }
    20: }
    The view looks as below:
    1: <h2>Shopping Cart</h2>
    2: 
    3: <% using(Html.BeginForm()) { %>
    4: 
    5:     <h3>Products</h3>
    6:     <% for (int i = 0; i < Model.Products.Count; i++)
    7:        { %>
    8:     <div style="width: 100px;float:left;">Id</div>
    9:     <div style="width: 100px;float:left;">
    10:         <%= Html.TextBox("ID", Model.Products[i].Id) %>
    11:     </div>
    12:     <div style="clear:both;"></div>
    13:     <div style="width: 100px;float:left;">Name</div>
    14:     <div style="width: 100px;float:left;">
    15:         <%= Html.TextBox("Name", Model.Products[i].Name) %>
    16:     </div>
    17:     <div style="clear:both;"></div>
    18:     <div style="width: 100px;float:left;">Quantity</div>
    19:     <div style="width: 100px;float:left;">
    20:         <%= Html.TextBox("Quantity", Model.Products[i].Quantity)%>
    21:     </div>
    22:     <div style="clear:both;"></div>
    23:     <div style="width: 100px;float:left;">Unit Price</div>
    24:     <div style="width: 100px;float:left;">
    25:         <%= Html.TextBox("UnitPrice", Model.Products[i].UnitPrice)%>
    26:     </div>
    27:     <div style="clear:both;"><hr /></div>
    28:     <% } %>
    29: 
    30:     <h3>Totals</h3>
    31:     <div style="width: 100px;float:left;">Sub Total</div>
    32:     <div style="width: 100px;float:left;">
    33:         <%= Html.TextBox("SubTotal", Model.Totals.SubTotal)%>
    34:     </div>
    35:     <div style="clear:both;"></div>
    36:     <div style="width: 100px;float:left;">Tax</div>
    37:     <div style="width: 100px;float:left;">
    38:         <%= Html.TextBox("Tax", Model.Totals.Tax)%>
    39:     </div>
    40:     <div style="clear:both;"></div>
    41:     <div style="width: 100px;float:left;">Total</div>
    42:     <div style="width: 100px;float:left;">
    43:         <%= Html.TextBox("Total", Model.Totals.Total)%>
    44:     </div>
    45:     <div style="clear:both;"></div>
    46:     <p />
    47:     <input type="submit" name="Submit" value="Submit" />
    48: <% } %>
    Nothing fancy, just a bunch of div’s containing textboxes and a submit button. Just note that the textboxes have the same name as the property they are going to display. Yea, yea, I know. I’m displaying unit price as a textbox instead of a label, but that’s beside the point (and trust me, this will not be how it’ll look on the production site!!). The way my controller works is that initially two dummy products are added to the basket object and the Totals are calculated based on what products were added in what quantities and at what unit price. The page then loads in edit mode, where the user can change the quantity and hit the submit button. In the ‘post’ version of the action method, the Totals get recalculated and the new total will be displayed on the screen. Here’s the code:
    1: public ActionResult Index()
    2: {
    3:     Product product1 = new Product
    4:     {
    5:         Id = 1,
    6:         Name = "Product 1",
    7:         Quantity = 2,
    8:         UnitPrice = 200m
    9:     };
    10: 
    11:     Product product2 = new Product
    12:     {
    13:         Id = 2,
    14:         Name = "Product 2",
    15:         Quantity = 1,
    16:         UnitPrice = 150m
    17:     };
    18: 
    19:     List<Product> products = new List<Product> { product1, product2 };
    20: 
    21:     Basket basket = new Basket
    22:     {
    23:         Products = products,
    24:         Totals = ComputeTotals(products)
    25:     };
    26:     return View(basket);
    27: }
    28: 
    29: [HttpPost]
    30: public ActionResult Index(Basket basket)
    31: {
    32:     basket.Totals = ComputeTotals(basket.Products);
    33:     return View(basket);
    34: }
    That’s that. Now I run the app and I see two products with the totals section below them. I look at the view source and I see that the input controls have the right ID, the right name and the right value as well.
    1: <input id="ID" name="ID" type="text" value="1" />
    2: <input id="Name" name="Name" type="text" value="Product 1" />
    3: ...
    4: <input id="ID" name="ID" type="text" value="2" />
    5: <input id="Name" name="Name" type="text" value="Product 2" />
    So just as a regular user would do, I change the quantity value of one of the products and hit the submit button.
    The ‘post’ version of the Index method gets called and I had put a break-point on line 32 in the above snippet. When I hovered my mouse over the ‘basket’ object, happily assuming that the object would be all bound and ready for use, I was surprised to see that both basket.Products and basket.Totals were null. Huh? A little research and I found out that the reason the DefaultModelBinder could not do its job was a naming mismatch on the input controls. What I mean is that when you have to bind to a custom .NET type, you need more than just the property name. You need to pass a qualified name to the name property of the input control. I modified my view and the emitted code looked as below:
    1: <input id="Product_Name" name="Product.Name" type="text" value="Product 1" />
    2: ...
    3: <input id="Product_Name" name="Product.Name" type="text" value="Product 2" />
    4: ...
    5: <input id="Totals_SubTotal" name="Totals.SubTotal" type="text" value="550" />
    Now I update the quantity and hit the submit button, and I see that the Totals object is populated, but the Products list is still null. Once again I went: ‘Hmm.. time for more research’. I found out that the way to do this is to provide the name as:
    1: <%= Html.TextBox(string.Format("Products[{0}].ID", i), Model.Products[i].Id) %>
    2: <!-- this will be rendered as -->
    3: <input id="Products_0__ID" name="Products[0].ID" type="text" value="1" />
    It was only now that I was able to see both the products and the totals being properly bound in the ‘post’ action method. Somehow, I feel this is a kinda ‘clunky’ way of doing things. It seems the people at MS felt a similar way and offered us a much cleaner way to solve this issue. The simple solution is that instead of using a TextBox, we can use either a TextBoxFor or an EditorFor helper method. This one directly spits out the name of the input property as ‘Products[0].ID’ and so on. Cool, right? I totally fell for this and changed my UI to use the EditorFor helper method. At this point, I ran the application, changed the quantity field and pressed the submit button. Of course my basket object parameter in my action method was correctly bound after these changes. I let the app complete the rest of the lines in the action method.
    When the page finally rendered, I did see that the quantity was changed to what I entered before the post. But, wait a minute, the totals section did not reflect the changes and showed the old values. My status: COMPLETELY PUZZLED! Just to recap, this is what my ‘post’ Index method looked like:
    1: [HttpPost]
    2: public ActionResult Index(Basket basket)
    3: {
    4:     basket.Totals = ComputeTotals(basket.Products);
    5:     return View(basket);
    6: }
    A careful debug session confirmed that basket.Products[0].Quantity showed the updated value and that the ComputeTotals() method also returned the correct totals. But still, when I passed this basket object to the view, it ended up showing only the old totals values. I began playing a bit with the code and my first guess was that the input controls got their values from the ModelState object. For those who don’t know, the ModelState is a temporary storage area that ASP.NET MVC uses to retain incoming attempted values plus binding and validation errors. Also relevant is the fact that input controls populate their values using data taken from:
    1. Previously attempted values recorded in ModelState["name"].Value.AttemptedValue
    2. Explicitly provided values (<%= Html.TextBox("name", "Some value") %>)
    3. ViewData, by calling ViewData.Eval("name")
    FYI: the ViewData dictionary takes precedence over ViewData's Model properties – read more here. These two indicators led to my guess. It took me quite some time, but finally I hit this post where Brad brilliantly explains why this is the preferred behavior. My guess was right and I accordingly modified my code as follows:
    1: [HttpPost]
    2: public ActionResult Index(Basket basket)
    3: {
    4:     // read the following posts to see why the ModelState
    5:     // needs to be cleared before passing it to the view
    6:     // http://forums.asp.net/t/1535846.aspx
    7:     // http://forums.asp.net/p/1527149/3687407.aspx
    8:     if (ModelState.IsValid)
    9:     {
    10:         ModelState.Clear();
    11:     }
    12: 
    13:     basket.Totals = ComputeTotals(basket.Products);
    14:     return View(basket);
    15: }
    What this does is that in the case where your ModelState IS valid, it clears the dictionary. This enables the values to be read directly from the model and not from the ModelState.
    So the verdict is this: if you need to pass other parameters (like html attributes and the like) to your input control, use
    1: <%= Html.TextBox(string.Format("Products[{0}].ID", i), Model.Products[i].Id) %>
    since with EditorFor there is no direct and simple way of passing this information to the input control. If you don’t have to pass any such ‘extra’ piece of information to the control, then go the EditorFor way. The code used in the post can be found here.
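
    For completeness, here is a minimal sketch of what the strongly-typed alternative mentioned above might look like in the view, assuming the view is typed as ViewPage<Basket>; the exact markup is my illustration, not taken from the original post:

    <% for (int i = 0; i < Model.Products.Count; i++) { %>
        <%-- TextBoxFor emits name="Products[0].Id", name="Products[0].Quantity", etc. --
             exactly the qualified naming the DefaultModelBinder needs to rebuild the list --%>
        <%= Html.TextBoxFor(m => m.Products[i].Id) %>
        <%= Html.TextBoxFor(m => m.Products[i].Name) %>
        <%= Html.TextBoxFor(m => m.Products[i].Quantity) %>
        <%= Html.TextBoxFor(m => m.Products[i].UnitPrice) %>
    <% } %>
    <%= Html.TextBoxFor(m => m.Totals.SubTotal) %>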

    Read the article

  • “Cloud Integration in Minutes” – True or False?

    - by Bruce Tierney
    The short answer is “yes”. Connecting on-premise and cloud applications “in minutes” is true…provided you only consider the connectivity subset of integration and have a small number of cloud integration touch points. At the recent Gartner AADI conference, 230 attendees filled up the Oracle session to get a more comprehensive answer to this question. During the session, titled “Simplifying Integration – The Cloud & Mobile Pre-requisite”, Oracle’s Tim Hall described cloud connectivity and then, equally importantly, the other essential and sometimes overlooked aspects of integration required to ensure a long term application and service integration strategy. To understand the challenges and opportunities faced by cloud integration, the session started off with a slide that describes how connectivity can quickly transition from simplicity to complexity as the number of applications and service vendor instances grows: increased complexity puts increased demand on the integration platform.
    As companies expand from on-premise applications into a hybrid on-premise/cloud infrastructure with support for mobile, cloud, and social, there is a new sense of urgency to implement a unified and comprehensive service integration platform. Without getting this unified platform in place, companies face increased complexity and cost managing a growing patchwork of niche integration toolsets as well as the disparate standards mandated by each SaaS vendor: incomplete and overlapping offerings from a patchwork of niche vendors.
    Also at Gartner AADI, Oracle SOA Suite customer Geeta Pyne, Director of Middleware at BMC, presented their successful strategy on how BMC efficiently manages their cloud integration despite disparate requirements from each vendor. From one of Geeta’s slides:
    · Interfaces are dictated by SaaS vendors; wide variety (SOAP, REST, Socket, HTTP/POX, SFTP); flexibility of Oracle Service Bus/SOA Suite helps to support
    · Every vendor has their own way to handle security: WS-Security, custom headers; support in Oracle Service Bus helps to adhere to disparate requirements
    At BMC, the flexibility of Oracle Service Bus and Oracle SOA Suite allowed them to support the wide variation in the functional requirements mandated by their SaaS vendors. In contrast to the patchwork platform approach of escalating complexity from overlapping SaaS toolkits, Oracle’s strategy is to provide a unified platform to support disparate requirements from your SaaS vendors, on-premise apps, legacy apps, and more. Furthermore, Oracle SOA Suite includes the many aspects of comprehensive integration beyond basic connectivity, including orchestration, analytics (BAM, events…), service virtualization and more, in a single unified interface: Oracle SOA Suite – unified and comprehensive.
    To summarize, yes, you can achieve “cloud integration in minutes” when considering the connectivity subset of integration, but be sure to look for ways to simplify as you consider a more comprehensive view of integration beyond basic connectivity, such as service virtualization, management, event processing and more. And finally, be sure your integration platform has the deep flexibility to handle the requirements of all your future SaaS applications…many of which are unknown to you now.

    Read the article

  • Oracle WebCenter: The Best of the Best

    - by kellsey.ruppel(at)oracle.com
    You may remember that the key goals of the new release of WebCenter are providing a Modern User Experience, unparalleled Application Integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. Last week, we provided an overview of Oracle WebCenter, and this week, we'll focus on Convergence and how the new release of Oracle WebCenter is the Best of the Best. Our development team has been working very hard to bring all the best capabilities from each of the existing portal products into one modern user experience platform that provides a robust foundation for moving customers into the future. Each of the development teams still maintains its existing products to support current customers, but they've been tasked with converging their unique best-of-breed features into the new WebCenter release so that it will meet the broadest set of use cases possible. For example, we've taken the fastest and most scalable portlet engine in the industry with Oracle WebLogic Portal, integrated it within WebCenter, and improved performance further, to deliver even more performance for our customers. In addition, we've focused on extending the reach of all the different user experience resources so that customers can deliver robust capabilities into their existing portals, applications, composite applications, dashboards, mobile applications – really, any channel that requires information. And finally, we've combined a whole set of community and multi-site capabilities, leveraging the pioneering capabilities of the Plumtree portal, directly into the new WebCenter release. While building and delivering the new WebCenter release, we've also provided new feature releases of all the existing products. In this way, customers can continue to gain value out of their existing investments while at the same time having the smoothest path to upgrading to the new WebCenter release.
    With the new WebCenter release, we are delivering a converged platform to address all the portal requirements that have been delivered by different point products in our portal portfolio in the past. Additionally, this release delivers the most modern user experience, one that goes well beyond the experience the other portal products provided. This is because the new WebCenter release has been built from the ground up with modern technologies around rich clients, SOA, and customizations, compared with other portal products whose architecture has been adapted to add capabilities like AJAX, personalization, and social computing. The new WebCenter release addresses the broadest set of use cases using a single product set and a single architecture, spanning extranet sites to social communities. It helps customers manage, maintain and develop one technology set, but leverage it throughout their organization, whether it's embedded in an application or a new destination for improved customer and employee productivity. Additionally, the new release of WebCenter leverages the best and most performant features of all the existing portfolio products to deliver the fastest and most scalable portal platform. Most importantly, it supports the broadest set of development models, spanning from J2EE/Java to HTML/REST to .NET. Keep checking back this week as we provide additional resources and information on how the new release of Oracle WebCenter is the Best of the Best – converging all the best capabilities from each of the existing portal products into one modern user experience platform.

    Read the article

  • Bind Variable and SQL error during statement preparation

    - by Abhishek Dwivedi
    I was getting the following exception at run-time:
    JBO-27122: SQL error during statement preparation. Statement: SELECT AxEO.A_ID, AxEO.B_ID, AxEO.C_ID, ByEO.A_ID, ByEO.B_ID, ByEO.C_ID, Cz.A_ID, Cz.B_ID, Cz.C_ID FROM ABC_x AxEO, ABC_y ByEO, ABC_z CzEO WHERE AxEO.A_ID = ByEO.A_ID AND CzEO.A_ID = :Bind_PId
    I copied and pasted the query into a SQL worksheet, replaced :Bind_PId with a valid id, and executed the query. The query worked alright, implying the query itself was fine. I tried connecting to different DBs but the issue persisted, meaning it was not a DB issue either. Finally, the root cause was found to be in the concerned VO: one of the bind variables (say Bind_TId) was marked "Required". De-selecting the Required check-box resolved the issue.
    In retrospect, the issue looks rather straightforward. However, the error message is not very helpful, if not misleading. Besides, it's counter-intuitive to think that a bind variable which is not being used in a query can cause an error during statement preparation. The other bind variable - Bind_TId - was being used in other view criteria, not the view criteria involved in the given query. Still, it was marked required.

    Read the article

  • The Incremental Architect's Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspxThe easier something to measure the more likely it will be produced. Deviations between what is and what should be can be readily detected. That´s what automated acceptance tests are for. That´s what sprint reviews in Scrum are for. It´s no small wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it´s lacking traits which cannot easily be measured. Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation if fast enough, that can be checked very easily. But whether Evolvability is high or low, that cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function or Cyclomatic Complexity or test coverage. But there is no threshold value signalling “evolvability too low”; also Evolvability is hardly tangible for the customer. Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it´s needed like any other requirement. Or even more. Because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is build. Such fundamental importance is in stark contrast with its immeasurability. To compensate this, Evolvability must be put at the very center of software development. It must become the hub around everything else revolves. Since we cannot measure Evolvability, though, we cannot start watching it more. Instead we need to establish practices to keep it high (enough) at all times. Chefs have known that for long. That´s why everybody in a restaurant kitchen is constantly seeing after cleanliness. Hygiene is important as is to have clean tools at standardized locations. Only then the health of the patrons can be guaranteed and production efficiency is constantly high. Still a kitchen´s level of cleanliness is easier to measure than software Evolvability. That´s why important practices like reviews, pair programming, or TDD are not enough, I guess. What we need to keep Evolvability in focus and high is… to continually evolve. Change must not be something to avoid but too embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum´s sprints of 4, 2 even 1 week are too long. Kanban´s flow of user stories across is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several resasons for such a fixed and short cycle time for each increment: Clear expectations Absolute estimates (“This will take X days to complete.”) are near impossible in software development as explained previously. Too much unplanned research and engineering work lurk in every feature. And then pervasive interruptions of work by peers and management. However, the smaller the scope the better our absolute estimates become. That´s because we understand better what really are the requirements and what the solution should look like. 
    But maybe more importantly, the shorter the timespan, the more we can control how we use our time. So much can happen over the course of a week or longer. But if push comes to shove I can block out all distractions and interruptions for a day or possibly two. That's why I believe we can give rough absolute estimates on three levels: Noon, Tonight, Tomorrow. Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take to implement a user story or bug fix, you can say, "It'll be fixed by noon," or "I can manage to implement it by tonight before I leave," or "You'll get it by tomorrow night at the latest." Yes, I believe all else would be naive. If you're not confident you can get something done by tomorrow night (some 34 hours from now), you just cannot reliably commit to any timeframe. That means you should not promise anything; you should not even start working on the issue. So when estimating, use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don't do deep estimates. Don't estimate dozens of issues; don't think ahead ("Issue A is a Tonight, then B will be a Tomorrow, after that it's C as a Noon, finally D is a Tonight - that's what I'll do this week."). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow.
    Trust through reliability: Our trade is lacking trust. Customers don't trust software companies/departments much. Managers don't trust developers much. I find that perfectly understandable in the light of what we're trying to accomplish: delivering software in the face of uncertainty, judged by the standards of material goods production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that's a fundamental misunderstanding. Software development is development. It's basically research. As software developers we're constantly executing experiments to find out what really provides value to users. We don't know what they need; we just have mediated hypotheses. That's why we cannot reliably deliver on preposterous demands - so trust goes out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust, because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I'd say: reliability over scope. It's more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don't wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don't need to learn that the hard way. Just start with small batches in three different sizes.
    Fast feedback: What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end?
    Why let the mental model of the issue and its solution dissipate? If you get final feedback only after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It's boring; it's frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It's even more important than planning new issues, because as long as work started is not released (accepted), it's potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it's a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue.
    Flexibility: As if reliability/trust and fast feedback for less waste weren't enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it's neither D nor E, but G. And after G it's D. With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost, because what got accepted is of value: it provides incremental value to the customer/user, or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It's unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you're able to change your course at any time.
    Premature completion: Customers/management are used to premeditating budgets. They want to know exactly how much to pay for a certain amount of requirements. That's understandable, but it does not match the nature of software development. We should know that by now. Maybe there's somewhere in the world some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven't seen such a team yet. Which does not mean it's impossible; but I think it's nothing I can recommend striving for. Rather I'd say: don't try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment.
    With Spinning, every 32 hours a feature can be declared finished - even though it might not be completed according to the initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn't it more important to constantly move forward? Step by step. We're not running sprints, we're not running marathons, not even ultra-marathons. We're in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance of saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4-32 hour increment the Product Owner can run an experiment (or invite users to one) to see whether a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect. In the end, requirements A, B, C could then be finished at just 70%, 80%, and 50%. What the heck? It's good enough - for now. 33% of the money saved. Wouldn't that be splendid? Isn't that a stunning argument for any budget-sensitive customer? You can save money and still get what you need.
    Pull on practices: So far, in addition to more trust, more flexibility, and less money spent, Spinning has led to "doing less", which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning's short acceptance cycles have one more effect: they exert pull on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn't 90% of the developer community practicing automated tests consistently? I think the answer is simple: because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good-enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews, etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking, sometimes up to the point where the existence of a project/company is in danger. With Spinning that's different. If you're practicing Spinning you cannot avoid all those practices; you very quickly realize you cannot deliver reliably even on your 32-hour promises. Spinning thus pulls on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do so, because first the Product Owner and then management will notice an increasing difficulty in delivering value within 32 hours.
    There, finally, emerges a way to measure Evolvability: the more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is. Don't count the "WTF!"s; count the "No way!" utterances.
    In closing: For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development "under pressure": software needs to be changed more often, in smaller increments, each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment; it's sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from it whatever they find valuable. Sales and marketing need to realize they should work with what's there, not what might be possible in the future. But I digress… In my view a Spinning cycle - which is not easy to reach, and which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that's the challenge we need to accept if we're serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning: customers/management will like the increased flexibility and "getting more bang for the buck".

    Read the article

  • Oracle VM Deep Dives

    - by rickramsey
    "With IT staff now tasked to deliver on-demand services, datacenter virtualization requirements have gone beyond simple consolidation and cost reduction. Simply provisioning and delivering an operating environment falls short. IT organizations must rapidly deliver services, such as infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). Virtualization solutions need to be application-driven and enable:" "Easier deployment and management of business critical applications" "Rapid and automated provisioning of the entire application stack inside the virtual machine" "Integrated management of the complete stack including the VM and the applications running inside the VM." Application Driven Virtualization, an Oracle white paper That was published in August of 2011. The new release of Oracle VM Server delivers significant virtual networking performance improvements, among other things. If you're not sure how virtual networks work or how to use them, these two articles by Greg King and friends might help. Looking Under the Hood at Virtual Networking by Greg King Oracle VM Server for x86 lets you create logical networks out of physical Ethernet ports, bonded ports, VLAN segments, virtual MAC addresses (VNICs), and network channels. You can then assign channels (or "roles") to each logical network so that it handles the type of traffic you want it to. Greg King explains how you go about doing this, and how Oracle VM Server for x86 implements the network infrastructure you configured. He also describes how the VM interacts with paravirtualized guest operating systems, hardware virtualized operating systems, and VLANs. Finally, he provides an example that shows you how it all looks from the VM Manager view, the logical view, and the command line view of Oracle VM Server for x86. Fundamental Concepts of VLAN Networks by Greg King and Don Smerker Oracle VM Server for x86 supports a wide range of options in network design, varying in complexity from a single network to configurations that include network bonds, VLANS, bridges, and multiple networks connecting the Oracle VM servers and guests. You can create separate networks to isolate traffic, or you can configure a single network for multiple roles. Network design depends on many factors, including the number and type of network interfaces, reliability and performance goals, the number of Oracle VM servers and guests, and the anticipated workload. The Oracle VM Manager GUI presents four different ways to create an Oracle VM network: Bonds and ports VLANs Both bond/ports and VLANS A local network This article focuses the second option, designing a complex Oracle VM network infrastructure using only VLANs, and it steps through the concepts needed to create a robust network infrastructure for your Oracle VM servers and guests. More Resources Virtual Networking for Dummies Download Oracle VM Server for x86 Find technical resources for Oracle VM Server for x86 -Rick Follow me on: Blog | Facebook | Twitter | Personal Twitter | YouTube | The Great Peruvian Novel

    Read the article

  • Shutdown Hangs for 5 Minutes on Kubuntu 14.04

    - by Augustinus
    I've had persistent problems with a 5-minute hang at shutdown for the last three versions of Kubuntu (13.04, 13.10, and now 14.04). I suspect this is not a KDE-specific problem. Recently, I performed a fresh installation of Kubuntu 14.04 from a live USB, and shutdown worked normally for about a week. The hang-up is now happening again, and I can't figure out why. A brief description of the problem: the hang-up occurs with all methods of initiating a normal shutdown - clicking the shutdown or restart button in KDE, sudo shutdown -h now, or sudo reboot. The shutdown splash screen appears. Using the down-arrow to access verbose messages, I see "Asking all remaining processes to terminate." This message remains for 5 minutes with no disk activity. Finally, a rapid series of messages flashes across the screen:
      * All processes ended within 300 seconds... [ OK ]
      nm-dispatcher.action: Caught signal 15, shutting down...
      ModemManager[852]: <warn> Could not acquire the 'org.freedesktop.ModemManager1' service name
      ModemManager[852]: <info> ModemManager is shut down
      * Deactivating swap... [ OK ]
      * Unmounting local filesystems... [ OK ]
      * Will now restart
    Possible sources of the problem: Before the problem re-appeared I had mainly been doing routine computing. I have kept the system up to date using apt-get upgrade and apt-get dist-upgrade. The only other notable incident was a power failure. I do not have the computer connected to a UPS, so the power failure resulted in an immediate shutdown. Could this have corrupted an important file which must be accessed at shutdown? Is there any way that could cause a 5-minute hang-up? Here is a list of packages that were updated before the problem appeared: bash iotop dpkg dpkg-dev python3-software-properties libdpkg-perl software-properties-kde software-properties-common akonadi-backend-mysql libakonadiprotocolinternals1 akonadi-server firefox-locale-en firefox flashplugin-installer libqapt2 libqapt2-runtime thunderbird openjdk-7-jre-headless thunderbird-locale-en kubuntu-driver-manager qapt-deb-installer openjdk-7-jre qapt-batch icedtea-7-jre-jamvm libelf1 dpkg dpkg-dev libdpkg-perl libjbig0 gettext-base libgettextpo-dev libssl1.0.0 libgettextpo0 libasprintf-dev linux-headers-3.13.0-24 gettext libasprintf0c2 linux-headers-3.13.0-24-generic openssl linux-libc-dev gstreamer0.10-qapt kubuntu-desktop linux-image-extra-3.13.0-24-generic linux-image-3.13.0-24-generic I would appreciate any help with this.
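    Update: in case it helps, here is a minimal sketch of the diagnostics I plan to run around the next slow shutdown (stock 14.04 paths; the 300-second figure is taken from the message above, and I am not certain sendsigs logs anything to syslog - better suggestions welcome):
      # After rebooting, look for traces of what the shutdown scripts waited on:
      grep -iE 'sendsigs|terminate|killall' /var/log/syslog | tail -n 40
      # Just before shutting down, list processes stuck in uninterruptible
      # I/O sleep (state D) -- plausible culprits for a fixed kill timeout:
      ps -eo pid,stat,comm | awk '$2 ~ /^D/'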

    Read the article

  • A Plea for Plain English

    - by Tony Davis
    The English language has, within a lifetime, emerged as the ubiquitous 'international language' of scientific, political and technical communication. On the one hand, learning a single, common language, International English, has made it much easier to participate in and adopt new technologies; on the other hand, it must be exasperating to have to use English at international conferences, or on community sites, when your own language has a long tradition of scientific and technical usage. It is also hard to master the subtleties of using a foreign language to explain advanced ideas. This requires English speakers to be more considerate in their writing. Even if you're used to speaking English, you may be brought up short by this sort of verbiage… "Business Intelligence delivering actionable insights is becoming more critical in the enterprise, and these insights require large data volumes for trending and forecasting" It takes some imagination to appreciate the added hassle of working out what it means when English is a language you only use at work. Try, just to get a vague feel for it, using Google Translate to translate it from English to Chinese and back again. "Providing actionable business intelligence point of view is becoming more and more and more business critical, and requires that these insights and projected trends in large amounts of data" Not easy, eh? If you normally use a different language, you will need to pause for thought before finally working out that it really means… "Every Business Intelligence solution must be able to help companies to make decisions. In order to detect current trends, and accurately predict future ones, we need to analyze large volumes of data" Surely, it is simple politeness for English speakers to stop peppering their writing with a twisted vocabulary that renders it inaccessible to everyone else. It isn't just a problem of writers using long words to give added dignity to their prose; it is the use of Colloquial English. This changes and evolves at a dizzying rate, adding new terms and idioms almost daily; it is almost a new and separate language. By contrast, 'International English' is gradually evolving separately, at its own, more sedate, pace. As such, all native English speakers need to make an effort to learn it, and use it, switching from casual colloquial patter into a simpler form of communication that can be widely understood by different cultures, even if it gives you less credibility on the street. Simple-Talk is based, at least in part, on the idea that technical articles can be written simply and clearly in a form of English that can be easily understood internationally, and that they can be written, with a little editorial help, by anyone, and read by anyone, regardless of their native language. Cheers, Tony.

    Read the article

  • Book Review: Inside Windows Communication Foundation by Justin Smith

    - by Sam Abraham
    In gearing up for a new major project, I have taken it upon myself to research and review various aspects of our Microsoft stack of choice, seeking creative new ways to leverage it in our upcoming state-of-the-art solution, which is projected to position us ahead of the competition. While I am a big supporter of search engines and online articles as a quick and usually reliable source of information, I have opted in my investigative quest to actually "hit the books". I have also made it a habit to provide quick reviews of the material I go over, hoping they can be of help to someone looking for references others have had success with. I started a few months ago by investigating better ways of implementing, profiling and troubleshooting SQL Server 2008. My reference of choice was Itzik Ben-Gan et al's "Inside Microsoft SQL Server 2008" series. While it has been a month since my last book review, this by no means meant that I had been sitting idle. It has been pretty challenging to balance research with the continuous flow of projects and deadlines, all while balancing that with my family duties, which, of course, always come first. In this post, I will be providing a quick review of my latest reading: Inside Windows Communication Foundation by Justin Smith. This book has been on my reading list for a very long time and I am proud to have finally tackled it. Justin's book presents great coverage of WCF internals. His simple, concise and well-worded style has simplified the relatively complex internals of WCF and made them comprehensible. Justin opted to organize the book into three parts: an introduction to WCF, coverage of the Channel Layer, and a look at WCF internals at the ServiceModel layer. Part I introduces the concepts and makes the case for WCF while covering a simplified version of WCF's message patterns, endpoints and contracts. In Part II, Justin provides a thorough coverage of the internals of Messages, Channels and Channel Managers. Part III concludes this nice read with coverage of Bindings, Contracts, Dispatchers and Clients. While one would not likely need to extend WCF at that low level of the API, an understanding of the inner workings of WCF is a must to avoid pitfalls mainly caused by misinformation or erroneous assumptions. Problems can quickly arise in high-traffic hosted solutions, but most can be easily avoided with some minimal time investment and education. My next goal is to take a closer look at WCF from the programmer's API perspective now that I have acquired a better understanding of its inner workings. Many thanks to the O'Reilly User Group Program and its support of our West Palm Beach Developers' Group. Stay tuned for more… All the best, --Sam
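    For readers who have not yet touched WCF, here is a minimal, self-contained sketch (mine, not from the book) of the address/binding/contract triple that Part I builds on; the service name, port, and path are made up:
      using System;
      using System.ServiceModel;

      [ServiceContract]
      public interface IGreeter
      {
          [OperationContract]
          string Greet(string name);
      }

      public class Greeter : IGreeter
      {
          public string Greet(string name) { return "Hello, " + name; }
      }

      class Program
      {
          static void Main()
          {
              // Endpoint = Address + Binding + Contract, the triple Part I covers.
              using (var host = new ServiceHost(typeof(Greeter),
                                                new Uri("http://localhost:8080/greeter")))
              {
                  host.AddServiceEndpoint(typeof(IGreeter), new BasicHttpBinding(), "");
                  host.Open();
                  Console.WriteLine("Service running; press Enter to stop.");
                  Console.ReadLine();
              }
          }
      }
    Everything the book's Parts II and III cover - channels, dispatchers, encoders - sits beneath those few lines.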

    Read the article

  • SQL SERVER – 2011 – Introduction to SEQUENCE – Simple Example of SEQUENCE

    - by pinaldave
    SQL Server 2011 will contain a very interesting feature called SEQUENCE. I have waited for this feature for a really long time. I am glad it is finally here. SEQUENCE allows you to define a single repository where SQL Server will maintain an in-memory counter.
      USE AdventureWorks2008R2
      GO
      CREATE SEQUENCE [Seq]
      AS [int]
      START WITH 1
      INCREMENT BY 1
      MAXVALUE 20000
      GO
    SEQUENCE is a very interesting concept and I will write a few blog posts on this subject in the future. Today we will see only a working example. Let us create a sequence. We can specify various values like the start value and increment value, as well as the max value.
      -- First Run
      SELECT NEXT VALUE FOR Seq, c.CustomerID
      FROM Sales.Customer c
      GO
      -- Second Run
      SELECT NEXT VALUE FOR Seq, c.AccountNumber
      FROM Sales.Customer c
      GO
    Once the sequence is defined, its values can be fetched as above. Every single time, a new incremental value is provided, irrespective of sessions. The sequence will generate values up to the max value specified. Once the max value is reached, the query will stop and return an error message:
      Msg 11728, Level 16, State 1, Line 2
      The sequence object 'Seq' has reached its minimum or maximum value. Restart the sequence object to allow new values to be generated.
    We can restart the sequence from any particular value and it will work fine.
      -- Restart the Sequence
      ALTER SEQUENCE [Seq] RESTART WITH 1
      GO
      -- Sequence Restarted
      SELECT NEXT VALUE FOR Seq, c.CustomerID
      FROM Sales.Customer c
      GO
    Let us do a final clean-up.
      -- Clean Up
      DROP SEQUENCE [Seq]
      GO
    There are lots of useful things one can find about this feature. We will see those in future posts. Here is the complete code for easy reference.
      USE AdventureWorks2008R2
      GO
      CREATE SEQUENCE [Seq]
      AS [int]
      START WITH 1
      INCREMENT BY 1
      MAXVALUE 20000
      GO
      -- First Run
      SELECT NEXT VALUE FOR Seq, c.CustomerID
      FROM Sales.Customer c
      GO
      -- Second Run
      SELECT NEXT VALUE FOR Seq, c.AccountNumber
      FROM Sales.Customer c
      GO
      -- Restart the Sequence
      ALTER SEQUENCE [Seq] RESTART WITH 1
      GO
      -- Sequence Restarted
      SELECT NEXT VALUE FOR Seq, c.CustomerID
      FROM Sales.Customer c
      GO
      -- Clean Up
      DROP SEQUENCE [Seq]
      GO
    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
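    One addendum, as a hedged sketch of my own (the table and names are made up, and I am going from the feature's documentation rather than this CTP): a sequence can also feed a column DEFAULT, which lets several tables share one number series.
      -- Sketch: sequence as a column default (illustrative names)
      CREATE SEQUENCE OrderSeq AS int START WITH 1 INCREMENT BY 1;
      CREATE TABLE Orders
      (
          OrderID int NOT NULL
              CONSTRAINT DF_Orders_ID DEFAULT (NEXT VALUE FOR OrderSeq)
              PRIMARY KEY,
          OrderDate datetime2 NOT NULL DEFAULT (SYSUTCDATETIME())
      );
      INSERT INTO Orders (OrderDate) VALUES (DEFAULT); -- OrderID = 1
      INSERT INTO Orders (OrderDate) VALUES (DEFAULT); -- OrderID = 2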

    Read the article

  • 2D Barcode Addendum

    - by Tim Dexter
    Having finally got my external drive back (long story) today from Oklahoma (thank you so much, Sammy), I'm back with a full complement of Oracle and blogging tools at my disposal. I have missed JDeveloper this past week, which, I have found, I immensely prefer over Eclipse (let the flaming commence :0). I use Zoundry Raven for writing articles, and it's not installed locally but on my external drive, so I have been soldiering on with the blog server's pain-in-the-backside UI for writing. Now that I have my favorite editor back and things are calming down work-wise, I will start to get the Excel template posts out. Today though, a note about 2D barcode support - or, more specifically, any barcode that needs some data manipulation before the barcode font is applied. I wrote about these fonts a long time back and laid out the Java class you would need to write if you had an algorithm from the font manufacturer to use. I missed out a valuable point, and James at Luminex fell into the trap. He wanted to use the datamatrix font from IDAutomation and had built the Java class to be called from the RTF template, but it was not encoding - or at least did not appear to be. New debugging feature to the rescue. Kan over at the bipconsultng blog documented the feature a while back. Just adding <?xdo-debug-level:'STATEMENT'?> to my test template generated all the debug files in my c:\temp directory. No messing with files, just a simple command ... at last! Kan has documented the feature here. With the log in hand I spotted a Java error stack referencing a missing code128a method. Huh? Looking at James' class, he had the following snippet:
      ENCODERS.put("code128a", mUtility.getClass().getMethod("code128a", clazz));
      ENCODERS.put("code128b", mUtility.getClass().getMethod("code128b", clazz));
      ENCODERS.put("code128c", mUtility.getClass().getMethod("code128c", clazz));
      ENCODERS.put("pdf417", mUtility.getClass().getMethod("pdf417", clazz));
      ENCODERS.put("datamatrix", mUtility.getClass().getMethod("datamatrix", clazz));
    His class did not include the other code128 and pdf417 methods, and BIP was expecting them. An easy fix: just comment them out, rebuild and deploy, and the encoding started working. If you are hitting similar problems, check that class and ensure all of the referenced methods are available; if not, delete them or get commenting. James now has purdy labels popping out that his hardware can read. Sweet!
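    For anyone hitting the same wall, here is a sketch of what the corrected registration block looks like - my reconstruction from the snippet above, keeping only the method the utility class actually implements:
      // Register only encoder methods that exist on the vendor's utility class;
      // getClass().getMethod(...) throws NoSuchMethodException for the rest.
      // ENCODERS.put("code128a", mUtility.getClass().getMethod("code128a", clazz));
      // ENCODERS.put("code128b", mUtility.getClass().getMethod("code128b", clazz));
      // ENCODERS.put("code128c", mUtility.getClass().getMethod("code128c", clazz));
      // ENCODERS.put("pdf417", mUtility.getClass().getMethod("pdf417", clazz));
      ENCODERS.put("datamatrix", mUtility.getClass().getMethod("datamatrix", clazz));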

    Read the article

  • Running Windows Phone Developers Tools CTP under VMWare Player - Yes you can! - But do you want to?

    - by Liam Westley
    This blog is the result of a quick investigation into running the Windows Phone Developer Tools CTP under VMWare Player. The release notes for the Windows Phone Developer Tools CTP mention that it is not supported under VirtualPC or Hyper-V. Some developers have policies where 'no non-production code' can be installed on their development workstation, so the only way they can use a CTP like this is in a virtual machine. The dilemma here is that the emulator for Windows Phone is itself a virtual machine, and running a virtual machine within another virtual machine is normally frowned upon. Even worse, previous Windows Mobile emulators detected they were in a virtual machine and refused to run. Why VMWare? I selected VMWare as a possible solution because it is possible to run VMWare ESXi under VMWare Workstation by manually setting configuration options in the VMX configuration file so that it does not detect the presence of a virtual environment. I actually found that I could use VMWare Player (the free version, which can now create VM images) and that there was no need for any editing of the configuration file (I tried various switches, none of which made any difference to performance). So you can run the CTP under VMWare Player - that's the good news. The bad news is that it is incredibly slow, bordering on unusable. However, if it's the only way you can use the CTP, at least this is an option. VMWare Player configuration: I used the latest VMWare Player, 3.0, running under Windows x64 on my HP 6910p laptop with an Intel T7500 dual-core CPU running at 2.2 GHz, 4 GB of memory, and a separate drive for the virtual machines. I created a machine in VMWare Player with a single CPU and 1536 MB of memory, and installed Windows 7 x64 from an ISO image. I then performed a Windows Update, installed VMWare Tools, and finally the Windows Phone Developer Tools CTP. After a few warnings about performance, I configured Windows 7 to run with the Windows 7 Basic theme rather than use Aero (which is available under VMWare Player as it has a WDDM driver). Timings: As a test I first launched Microsoft Visual Studio 2010 Express for Windows Phone and created a default Windows Phone Application project. I then clicked the run button, which starts the emulator and then loads the default application onto the emulator. For the second test I left the emulator running, stopped the default application, added a single button to change the page title, and redeployed to the already-running emulator by clicking the run button.
                            Test 1 (1st run)    Test 2 (emulator already running)
      VMWare Player         10 minutes          1 minute
      Windows x64 native    1 minute            < 10 seconds
    Conclusion: You can run the Windows Phone Developer Tools CTP under VMWare Player, but it's really, really slow and you would have to have very good reasons to try this approach. If you need to keep a development system free of non-production code, and the two systems aren't required to run simultaneously, then I'd consider a boot-from-VHD option. Then you can completely isolate the Windows Phone Developer Tools CTP and development environment in a single VHD, separate from your main development system.
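    For the curious, this is the kind of VMX tweak the ESXi-under-Workstation trick relies on - quoted from memory of community posts rather than VMware documentation, and, as noted above, it made no difference for the phone emulator:
      # Added to the guest's .vmx file; hides the VMware "backdoor" channel so
      # software inside the guest is less able to detect the virtual environment.
      monitor_control.restrict_backdoor = "TRUE"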

    Read the article

  • SQL SERVER – Manage Help Settings – CTRL + ALT + F1

    - by pinaldave
    It is a miracle that curiosity survives formal education. ~ Albert Einstein. I have a 3-year-old daughter and she never misses a chance to play with the system. I have multiple computers, and I always make sure that if I am working with a production server I never leave it open; but when I am doing some experiment I often leave my computer open. My daughter loves the moments when I have left the computer open and am not observing her. Recently I had the same scenario: I got an urgent call and moved away from my computer, and when I returned she was playing with the SSMS instance left open on my computer. Here is the screen that was visible at the time. For a moment, I could not figure out what this screen was and what was about to get updated. I tried to ask her what keys she had pressed; the reaction was "I wanted – eya eya o". Well, what more can I expect from a 3-year-old? She is no computer genius – she has just learned to use Notepad and Paint on my machine. Finally, when I looked at the above screen in detail, I realized that it was the Help settings screen and something had been updated. I have been using SQL Server for a long time, but I had never updated Help from this screen. When I need to search for something, if I remember I have written about it earlier I will go to http://search.sqlauthority.com and search there, or I will search on Google. As this computer was already updated, I fired up a virtual machine and tried to recreate how my daughter had reached the above screen. Here are the steps to reach it: go to SSMS >> Toolbar >> Help >> Manage Help Settings (or type CTRL+ALT+F1) and click it. That click brought up the following screen. Clicking Check for Updates Online brought up the next screen. When I clicked Update, it brought me back to the original screen my daughter had brought up earlier. I find it interesting that a screen I had never come across in my career took me only 2-3 minutes to figure out - and I learned it from curiosity, just like my daughter. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

< Previous Page | 104 105 106 107 108 109 110 111 112 113 114 115  | Next Page >