Search Results

Search found 74197 results on 2968 pages for 'part time'.


  • In the Cloud, Everything Costs Money

    - by BuckWoody
    I’ve been teaching my daughter about budgeting. I’ve explained that most of the time the money coming in is from only one or two sources – and you can only change that from time to time. The money going out, however, is to many locations, and it changes all the time. She’s made a simple debits and credits spreadsheet, and I’m having her research each part of the budget. Her eyes grow wide when she finds out everything has a cost – the house, gas for the lawnmower, dishes, water for showers, food, electricity to run the fridge, a new fridge when that one breaks, everything has a cost. She asked me “how do you pay for all this?” It’s a sentiment many adults have looking at their own budgets – and one reason that some folks don’t even make a budget. It’s hard to face up to the realities of how much it costs to do what we want to do. When we design a computing solution, it’s interesting to set up a similar budget, because we don’t always consider all of the costs associated with it. I’ve seen design sessions where the new software or servers are considered, but the “sunk” costs of personnel, networking, maintenance, increased storage, new sizes for backups and offsite storage and so on are not added in. They are already on premises, so they are assumed to be paid for already. When you move to a distributed architecture, you'll see more costs directly reflected. Store something, pay for that storage. If the system is deployed and no one is using it, you’re still paying for it. As you watch those costs rise, you might be tempted to think that a distributed architecture costs more than an on-premises one. And you might be right – for some solutions. I’ve worked with a few clients where moving to a distributed architecture doesn’t make financial sense – so we didn’t implement it. I still designed the system in a distributed fashion, however, so that when it does make sense there isn’t much re-architecting to do. In other cases, however, if you consider all of the on-premises costs and compare those accurately to operating a system in the cloud, the distributed system is much cheaper. Again, I never recommend that you take a “here-or-there-only” mentality – I think a hybrid distributed system is usually best – but each solution is different. There simply is no “one size fits all” to architecting a solution. As you design your solution, cost out each element. You might find that using a hybrid approach saves you money in one design and not in another. It’s a brave new world indeed. So yes, in the cloud, everything costs money. But an on-premises solution also costs money – it’s just that “dad” (the company) is paying for it and we don’t always see it. When we go out on our own in the cloud, we need to ensure that we consider all of the costs.
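    As a rough illustration of "costing out each element," here is a minimal Python sketch of the kind of side-by-side comparison described above. Every figure and cost category in it is a made-up placeholder, not real pricing; the point is only that both columns must include the "sunk" items (personnel, backups, maintenance) before the comparison means anything.

        # Hypothetical cost model comparing an on-premises deployment with a
        # distributed/cloud one. All figures are placeholders for illustration only.

        ON_PREM_MONTHLY = {
            "hardware_amortization": 1200.0,   # servers spread over their lifetime
            "personnel": 3500.0,               # share of admin/ops salaries
            "power_cooling_space": 400.0,
            "storage_and_backups": 600.0,
            "networking_maintenance": 300.0,
        }

        CLOUD_MONTHLY = {
            "compute_instances": 2200.0,
            "storage": 500.0,                  # you pay for every byte stored
            "egress_bandwidth": 250.0,
            "managed_services": 700.0,
            "personnel": 1500.0,               # ops effort rarely drops to zero
        }

        def total(costs: dict[str, float]) -> float:
            """Sum a cost breakdown into a single monthly figure."""
            return sum(costs.values())

        if __name__ == "__main__":
            on_prem, cloud = total(ON_PREM_MONTHLY), total(CLOUD_MONTHLY)
            print(f"On-premises (all-in): ${on_prem:,.2f}/month")
            print(f"Cloud/distributed:    ${cloud:,.2f}/month")
            cheaper = "cloud" if cloud < on_prem else "on-premises"
            print(f"For these made-up numbers, {cheaper} wins; your inputs will differ.")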

    Read the article

  • Is having several desktop environments on one account bad?

    - by Joseph_carp
    I have GNOME Classic, Cinnamon, Unity, GNOME 3, and KDE installed on my only user account because I enjoy a little change from time to time (although my favorite is GNOME Classic). I heard from a friend that having all of these desktop environments installed could potentially cause some problems. I was also told that it would be okay if I created a separate account for each environment, but I don't want to if I don't have to. Any help is much appreciated, thank you.

    Read the article

  • WebCenter Customer Spotlight: spectrumK Holding GmbH

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    spectrumK Holding GmbH was founded in 2007 by various German health insurance funds and national insurance associations and is a service provider for the healthcare market, covering patient care management, financial management, and information management, as well as payment services and legal counseling. spectrumK Holding GmbH's business objective was to implement innovative new Web-based services and solution systems for health insurance funds by integrating a multitude of isolated solutions from different organizations. Using Oracle WebCenter Portal, Oracle WebCenter Content, and Site Studio, the customer created a multiple-portal environment and deployed the first three applications for patient receipt, a medication navigator, and disability information. spectrumK Holding GmbH accelerated time-to-market for new features by reducing development time, achieved 40% development and cost savings using standard modules, and realized 80% overall savings using the Oracle multiple-portal environment, as compared to individual installations.

    Company Overview
    spectrumK Holding GmbH was founded in 2007 by various company health insurance funds and national insurance associations. A service provider for the healthcare market, spectrumK consists of one holding company and four operative subsidiaries. Its broad product portfolio for compulsory health funds covers patient care management, financial management, and information management, as well as payment services and legal counseling.

    Business Challenges
    spectrumK Holding GmbH's business objectives were to implement innovative new Web-based services and solution systems for the health insurance funds by integrating a multitude of isolated solutions from different organizations. Specifically, spectrumK was looking to:
    - Establish a portal-based environment to provide health coverage information services to the insured, with the option to integrate a multitude of isolated solutions from different organizations
    - Implement innovative new Web-based spectrumK service products and solution systems for health insurance funds
    - Lower costs while improving services for the health fund’s clients
    - Find an infrastructure that supports the small development team in efficient implementation and operation of the solution
    - Reuse standard modules while enabling easy, inexpensive adaptations to customer-specific corporate requirements

    Solution Deployed
    spectrumK Holding GmbH created a multiple-portal environment, called “KundenCenter+“, which is based on the integration of Oracle WebCenter Portal, Oracle WebCenter Content, and Site Studio. The company initiated and launched the first three of its KundenCenter+ Oracle-based modules, for patient receipt, a medication navigator, and disability information, with numerous successful deployments and individual customer environment adaptations.

    Business Results
    spectrumK Holding GmbH accelerated time-to-market for new features by reducing development time, achieved 40% development and cost savings using standard modules, and realized 80% overall savings using the Oracle multiple-portal environment, as compared to individual installations.

    Additional Information
    - spectrumK Holding GmbH Snapshot
    - Oracle WebCenter Suite
    - Oracle Customer Support
    - Oracle Consulting
    - Oracle WebCenter Content

    Read the article

  • Why can't we capture the design of software more effectively?

    - by Ira Baxter
    As engineers, we all "design" artifacts (buildings, programs, circuits, molecules...). That's an activity (design-the-verb) that produces some kind of result (design-the-noun). I think we all agree that design-the-noun is a different entity than the artifact itself. A key activity in the software business (indeed, in any business where the resulting product artifact needs to be enhanced) is to understand the "design (the-noun)". Yet we seem, as a community, to be pretty much complete failures at recording it, as evidenced by the amount of effort people put into rediscovering facts about their code base. Ask somebody to show you the design of their code and see what you get. I think of a design for software as having:
    - An explicit specification for what the software is supposed to do and how well it does it
    - An explicit version of the code (this part is easy, everybody has it)
    - An explanation for how each part of the code serves to achieve the specification
    - A rationale as to why the code is the way it is (e.g., why a particular choice rather than another)
    What is NOT a design is a particular perspective on the code. For example [not to pick specifically on] UML diagrams are not designs. Rather, they are properties you can derive from the code, or arguably, properties you wish you could derive from the code. But as a general rule, you can't derive the code from UML. Why is it that, after 50+ years of building software, we don't have regular ways to express this? My personal opinion is that we don't have good ways to express this. Even if we do, most of the community seems so focused on getting "code" that design-the-noun gets lost anyway. (IMHO, until design becomes the purpose of engineering, with the artifact extracted from the design, we're not going to get around this). What have you seen as means for recording designs (in the sense I have described it)? Explicit references to papers would be good. Why do you think specific and general means have not been successful? How can we change this?
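    To make the four ingredients listed above concrete, here is a small, hypothetical Python sketch of what a machine-readable "design record" along those lines might look like. The field names and the example strings are invented purely for illustration; they are not a proposal from the original post, just one way the spec/code/explanation/rationale quartet could be captured alongside the code.

        # Hypothetical structure for recording a design alongside the code it explains.
        # Field names and example content are illustrative only.
        from dataclasses import dataclass, field

        @dataclass
        class DesignRecord:
            specification: str              # what the software must do, and how well
            code_reference: str             # where the implementation lives (repo/path/revision)
            explanation: str                # how this code achieves the specification
            rationale: str                  # why this approach was chosen over alternatives
            rejected_alternatives: list[str] = field(default_factory=list)

        record = DesignRecord(
            specification="Serve lookups in under 5 ms at the 99th percentile.",
            code_reference="cache/lru_store.py @ a1b2c3d",
            explanation="An in-memory LRU cache fronts the database to keep hot keys local.",
            rationale="Read-heavy workload; the latency budget rules out a network hop per read.",
            rejected_alternatives=["Query the database directly", "Use a remote distributed cache"],
        )

        print(record.rationale)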

    Read the article

  • Overload Avoidance

    - by mikef
    A little under a year ago, Matt Simmons wrote a rather reflective article about his terrifying brush with stress-induced ill health. SysAdmins and DBAs have always been prime victims of work-related stress, but I wonder if that predilection is perhaps getting worse, despite the best efforts of Matt and his trusty side-kick, HR. The constant pressure from share-holders and CFOs to 'streamline' the workforce is partially to blame, but the more recent culprit is technology itself. I can't deny that the rise of technologies like virtualization, PowerCLI, PowerShell, and a host of others has been a tremendous boon. As a result, individual IT professionals are now able to handle more and more tasks and manage increasingly large and complex environments. But, without a doubt, this is a two-edged sword; the reward for competence is invariably more work. Unfortunately, SysAdmins play such a pivotal role in modern business that it's easy to see how they can very quickly become swamped in conflicting demands coming from different directions. However, that doesn't justify the ridiculous hours many are asked (or volunteer) to devote to their work. Admirable though their commitment is, it isn't healthy for them, it sets a dangerous expectation, and eventually something will snap. There are times when everyone needs to step up to the plate outside of 'normal' work hours, but that time isn't all the time. Naturally, with all that lovely technology, you can automate more and more of those tricky tasks to keep on top of the workload, but you are still only human. Clever though you may be, there is a very real limit to how far technology can take you. I'm not suggesting that you avoid these technologies, or deliberately aim for mediocrity; I'm just saying that you need to be more than just technically skilled (and Wesley Nonapeptide riffs on and around this topic in his excellent 'Telepathic Robot Drones' blog post). You need to be able to manage expectations, not just Exchange. Specifically, that means your own expectations of what you are capable of, because those come before everyone else's. After all, how can you keep your work-life balance under control, if you're the one setting the bar way too high? Talking to your manager, or discussing issues with your users, is only going to be productive if you have some facts to work with. "Know Thyself" is the first law of managing work overload, and this is obviously a skill which people develop over time; the fact that veteran SysAdmins exist at all is testament to this. I'd just love to know how you get to that point. Personally, I'm using RescueTime to keep myself honest, but I'm open to recommendations for better methods. Do you track your own time, do you have an intuitive sense of what is possible, or do you just rely on someone else to handle that all for you? Cheers, Michael
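    In the spirit of the "Know Thyself" point, here is a tiny, hypothetical Python sketch of tracking your own time by hand before negotiating expectations. It is only an illustration of the idea, not a substitute for a tool like RescueTime; the task names, hours, and the 45-hour warning threshold are all invented.

        # Minimal personal time log: record what you worked on and for how long,
        # then total it by task. Purely illustrative; entries are invented.
        from collections import defaultdict

        entries = [
            ("patching servers", 2.5),
            ("ticket queue", 3.0),
            ("out-of-hours incident", 1.5),
            ("automation scripting", 2.0),
        ]

        totals = defaultdict(float)
        for task, hours in entries:
            totals[task] += hours

        week_total = sum(totals.values())
        print(f"Logged {week_total:.1f} hours")
        for task, hours in sorted(totals.items(), key=lambda kv: -kv[1]):
            print(f"  {task:<25} {hours:>4.1f} h")
        if week_total > 45:  # arbitrary threshold, for illustration only
            print("Warning: heading toward overload; time to renegotiate expectations.")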

    Read the article

  • Hot fix published for TFS2010 upgrade issues

    - by jehan
    Microsoft has released a hot fix for issues that were identified after the migration of TFS2005/TFS2008 servers to TFS2010. The issues are related to merging and labels:
    - Labels that were created before the upgrade are entirely empty. Labels could also have incorrect contents.
    - The merge wizard in Visual Studio does not display all valid merge targets for a given source path/branch.
    - During merging, merge candidates are shown for changes that were already merged prior to the upgrade.
    If you have not yet upgraded to TFS 2010, the hotfix is now available, and it is highly recommended that you apply it before configuring your team project collections. Because this hotfix applies to the upgrade of version control content, it must be applied after TFS 2010 setup is complete, but before configuration is started. At the end of the setup experience, the Success screen is shown indicating the completion of the installation. Normally, users would continue on to the configuration part, but in this case the user needs to cancel the configuration part by un-checking the “Launch Team Foundation Server Configuration Tool” box, which will enable the Cancel button. After exiting setup, the hotfix executable can be run to update the upgrade steps. Once the hotfix is installed, the TFS Configuration Wizard will need to be re-launched from the Start Menu to complete the upgrade process. The hotfix has been published on MSDN Code Gallery – you can find it here: http://code.msdn.microsoft.com/KB2135068 If you have upgraded to TFS2010 and are facing any of the above issues, then check out this KB for the resolution: http://support.microsoft.com/kb/2193796/en-us

    Read the article

  • How can I retrieve the details of the file from an outbound operation in BPEL 11g

    - by [email protected]
    Several times, we come across requirements where we need to capture the details of the file that got written out as a part of a BPEL process invoking a File/Ftp Adapter. Consider a case where we're using FileNamingConvention as "PurchaseOrder_%SEQ%.txt" and we need to do some post-processing based on the filename (please remember that we wouldn't know the filename until the adapter invocation completes). In order to achieve this, we need to manually tweak the WSDL so that the File/Ftp Adapter can return the metadata of the file that was written out. In general, the File/Ftp Write/Put WSDL operations are one-way. The File/Ftp Adapters are designed to return the metadata back if this WSDL is tweaked into a two-way WSDL. In addition, the <wsdl:output/> must import the fileread.xsd schema. You will need to copy fileread.xsd from here into the xsd folder of your composite. Finally, we will need to tweak the WSDL itself, and the BPEL <invoke> will then return the file metadata as a part of the BPEL output variable.

    Read the article

  • How In-Memory Database Objects Affect Database Design: The Conceptual Model

    - by drsql
    After a rather long break in the action to get through some heavy tech editing work (paid work before blogging, I always say!) it is time to start working on this presentation about In-Memory Databases. I have been trying to decide on the scope of the demo code in the back of my head, and I have added more and taken away bits and pieces over time trying to find the balance of "enough" complexity to show data integrity issues and joins, but not so much that we get lost in the process of trying to...(read more)

    Read the article

  • How common is it to submit papers to journals or conferences outside of academia?

    - by Furry
    I worked in academia a few years, but more on the D-side of R&D. The race for papers never appealed to me and I'm a practical not theoretical type, but I do like reading papers on certain topics (e.g. Google Papers, NLP, FB papers, ...) a lot. How common is it that normally working developers submit papers to conferences or even journals? It seems to be somewhat common in certain companies (it's not common or encouraged in mine). Do journals or conferences even take papers by an academic nobody (BSc) under consideration? I ask, because I have a few rough ideas and I would just like to bring them into form, one way or the other. Bonus question: Is there a list of CS (in the widest sense) conferences/journals with short descriptions? PS (Four out of five researchers I met published quite some fluffy stuff for my taste. I am no expert, but those people told me sometimes themselves, that the implementation does not matter, just the idea and the presentation. I always wondered about that. I probably could write about ideas all day long (not instantly but with a bit of preparation), but the implementation and the practical part is the really hard part, that academia just does not like to be concerned with. Also many papers actually scream: I was written so the publication list of my author gets longer - which is a waste of time for everyone, and often a waste of tax money, too. When I think of CS-ish papers, I think of running implementations or actual data, like e.g. Google's Map Reduce, Serving Large-scale Batch Computed Data with Project Voldemort or the like.)

    Read the article

  • Good Video Game User Interface Design Books/Websites?

    - by Tucker Morgan
    I have been programming games for some time, but while my teachers say that my code is good and advanced, my friends say that the interface is hard to understand and not the easiest to navigate. I want to learn how to design good user interfaces so that I can program better games and people will have an easier time getting around. Does anyone know of any good books or websites about designing video game interfaces?

    Read the article

  • Oracle Partners Delivering Business Transformations With Oracle WebCenter

    - by Brian Dirking
    This week we’ve been discussing a new online event, “Transform Your Business by Connecting People, Processes, and Content.” This event will include a number of Oracle partners presenting on their successes with transforming their customers by connecting people, processes, and content:
    - Deloitte - Collaboration and Web 2.0 Technologies in Supporting Healthcare, delivered by Mike Matthews, the Canadian Healthcare partner and mandate partner on Canadian Partnership Against Cancer at Deloitte
    - Infosys - Leverage Enterprise 2.0 and SOA Paradigms in Building the Next Generation Business Platforms, delivered by Rizwan MK, who heads Infosys' Oracle technology delivery business unit, defining and delivering strategic business and technology solutions to Infosys clients involving Oracle applications
    - Capgemini - Simplifying the workflow process for work order management in the utility market, delivered by Léon Smiers, a Solution Architect for Capgemini
    - Wipro - Oracle BPM in Banking and Financial Services - Wipro's Technology and Implementation Expertise, delivered by Gopalakrishna Bylahalli, who is responsible for the Transformation Practice in Wipro, which includes Business Process Transformation, Application Transformation and Information connect
    In Mike Matthews’ session, one thing he will explore is how CPAC has brought together an informational website and a community. CPAC has implemented Oracle WebCenter, and as part of that implementation, is providing a community where people can make connections and share their stories. This community is part of the CPAC website, which provides information of all types on cancer. This makes CPAC a one-stop shop for the most up-to-date information in Canada.

    Read the article

  • Thank you Geeks With Blogs for letting me join your community!

    - by GreeNTUG
    First, a link to the blog I can no longer edit because Office Live blew away my digital identity and so I can no longer log into it (the source of a loooong blog about protecting your digital identity sometime when I have more time and after it has played out to the end): http://greentug.spaces.live.com/
    The following are the communities I participate in:
    - Green & Sustainability. I run a virtual user group on Green and Sustainability as it relates to developers and software architects. It was located at greentug.groups.live.com, and we will need to find a new digital location for it, because I am locked out of that site as well.
    - BizSpark Tampa Bay: I run a BizSpark group for Microsoft technologists (meetup.com, search for BizSpark Tampa Bay) and speak at Code Camps about "No Better Time to Start Your Own Tech Business". The meetup group facilitates a balanced presentation that is respectful to anyone wanting to start their own business, whether part-time or full-time, whether micro (just you), sustainable (grow to 2-25-ish, self-funded), high growth (get venture capital or other funding, grow it, sell it within 5 years, do it again), or hybrid (the new model going forward). It is an "action" group, with assignments and homework if you want to get the most out of it. At the end of a year you will either have your business on the path to where you want it to be, or you will know the steps you need to do to get it there.
    - Women in Technology: I have been participating in the Women in Technology community since 2008. My main interests in this area are mentoring women in the workplace to have them believe they can become geeks and double their income, and mentoring them with respect to starting and running their own business.
    - Access 2010/SharePoint 2010. This is a game-changer with respect to the Access community (the app both devs and IT Pros love to hate, the other a-word that's not a fruit). I conducted Lunch n Learns and Brunch n Learns around this topic before the Office 2010/SharePoint 2010 launch, and spoke on the topic at SharePoint Saturday Tampa in Nov 2009.
    Interested in learning more about:
    - Using Silverlight HD Streaming out in the non-technical world (horses and equestrian sport)
    - Migrating to Access Web Services and VB .Net from VBA (see the Access 2010/SharePoint 2010 interest above)
    - Windows Phone 7! Exciting opportunities both for Green and Sustainability and for my "day job" of Environmental, Health & Safety (EHS)
    My day job is Environmental, Health & Safety (EHS) consulting and software solutions. Where that interfaces with the developer world is with respect to opportunities around Green and Sustainability; the SmartGrid and Juval Lowy's EnergyNet, both of which will require a lot of technology and software to make them work; the new Microsoft Partner competency for "Digital Home"; and the Y2K kind of deadline around how managing chemicals in ERP systems is changing because of Global Harmonization, which hits the EU with a hard deadline on 11/30/10 (yes, this year), and hits the USA about 15 months later.
    Hope you enjoy my contributions to the digital geek community, and feel free to email me, [email protected] (the email leftover after my digital identity was blown away), and [email protected] (this one could go away at some future point).
    Best, Kathy Malone

    Read the article

  • Free training at Northwest Cadence

    - by Martin Hinshelwood
    Even though I have only been at Northwest Cadence for a short time, I have already done so much. What I really wanted to do was let you guys know about a bunch of FREE training that NWC offers. These sessions are at a fantastic time for the UK, as 9am PST (Seattle time) is around 5pm GMT. It's a fantastic way to finish off your Fridays, and with the lack of love for developers in the UK set to continue, I would love some of you guys to get some from the US instead. There are really two offerings. The first is something called Coffee Talks that take you through an hour's worth of detail in a specific category.
    Coffee Talks
    These coffee talks have some superb topics and you can get excellent interaction with the presenter as they are kind of informal.
    - 01/04/11 (Tuesday), 8:30AM - 9:30AM PST: Real World Business and Technical Benefits of ALM with TFS 2010 (register: 150656)
    - 01/28/11 (Friday), 9:00AM - 10:00AM PST: The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 (register: 152810)
    - 02/11/11 (Friday), 9:00AM - 10:00AM PST: Visual Source Safe to Team Foundation Server (register: 152844)
    - 02/25/11 (Friday), 2:00PM - 3:00PM PST: The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 (register: 152816)
    - 03/11/11 (Friday), 9:00AM - 10:00AM PST: Lab Manager: The Ultimate “No More No Repro” Tool (register: 152809)
    - 03/25/11 (Friday), 9:00AM - 10:00AM PST: The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 (register: 152838)
    - 04/08/11 (Friday), 9:00AM - 10:00AM PST: Visual Source Safe to Team Foundation Server (register: 152846)
    - 04/22/11 (Friday), 9:00AM - 10:00AM PST: The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 (register: 152839)
    - 05/06/11 (Friday), 2:00PM - 3:00PM PST: Real World Business and Technical Benefits of ALM with TFS 2010 (register: 150657)
    - 05/20/11 (Friday), 9:00AM - 10:00AM PST: The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 (register: 152842)
    - 06/03/11 (Friday), 9:00AM - 10:00AM PST: Visual Source Safe to Team Foundation Server (register: 152847)
    - 06/17/11 (Friday), 9:00AM - 10:00AM PST: The Full Testing Experience: Professional Quality Assurance with Visual Studio 2010 (register: 152843)
    ALM Training Engagement Program
    Microsoft has released a new program to bring free Visual Studio 2010 Training Sessions to select customers, covering Microsoft Visual Studio products and how Application Lifecycle Management (ALM) solutions can help drive greater business impact. To get started, send an email to us; this training is paid for by Microsoft and you would need to commit to 4 sessions in order to get accepted into the program. So these have more hoops to jump through to get them, but the content is much more formal and centres around adoption.

    Read the article

  • More Changes...

    - by MOSSLover
    Stuff has changed drastically for me in the past two to three years. I moved over 1000 miles from Saint Louis. I go outside and I get up in front of crowds with less issues. Now I'm changing jobs again. I'm not really sure what to say here. I was obviously unhappy and I needed to do something different. So I quit two days ago, and I guess it worked out that I end with B&R this Friday, then head to TEC and SPS Huntsville, and a week from this Monday I start my new job at Gig Werks. I'm not sure what to expect or where I'm heading, but I think it's a step in the right direction. I won't really know what kind of impact this will have on my life for at least another 6 months to a year. For some reason I can't sleep tonight and I think it's really a reflection of my last day. Tomorrow is an ending and a beginning at the same time. So it's both kind of sad and exciting. I don't know why, but I'm really excited to go to Disney Land for the second time ever in my lifetime. I get to ride the Teacups. For the longest time when I was a kid I wanted to go to Disney Land. I wanted to ride the teacups. In 2007, at the age of 25, I rode the teacups for my first ever visit to LA. That was the start of finally syncing up with my childhood goals. I wanted to live near a major city. I wanted to visit all the major cities in the world. I wanted to see everything and meet everyone. This job change will probably turn into something great; I just don't know it yet. I'm walking again outside my comfort zone and stepping into uncharted territory. In 2-3 years I'll probably write another blog post about how this week led to something great. It just stinks when you have to leave behind something you know and love. I will miss all my current colleagues, but I'm sure I'll gain some new ones and keep in touch with the old. To 2010 being a great year for change, and hopefully by the end of the year I can say I went to Europe. To reaching my goals and my dreams. Don't let anyone stop you from getting what you want in life (unless you are an axe murderer; please don't kill anyone, that's just wrong). Have a good weekend everyone!

    Read the article

  • Big Visible Charts

    - by Robert May
    An important part of Agile is the concept of transparency and visibility. In proper functioning teams, stakeholders can look at any team at any time in the iteration or release and see how that team is doing by simply looking at what we call Big Visible Charts. If you’ve done Scrum, you’ve seen these charts. However, interpreting these charts can often be an art form. There are several different charts that can be useful. In this newsletter, I’ll focus on the Iteration Burndown and Cumulative Flow charts. I’ve included a copy of the spreadsheet that I used to create the charts, and if you don’t have a tool that creates them for you, you can use this spreadsheet to do so. Our preferred tool for managing Scrum projects is Rally. Rally creates all of these charts for you, saving you quite a bit of time.
    The Iteration Burndown and Cumulative Flow Charts
    This is the main chart that teams use. Although less useful to stakeholders, this chart is critical to the team and provides quite a bit of information to the team about how their iteration is going. Most charts are a combination of the charts below, so you may need to combine aspects of each section to understand what is happening in your iterations.
    Ideal
    Ah, isn’t that a pretty picture? Unfortunately, it’s also very unrealistic. I’ve seen iterations that come close to ideal, but never one that matches perfectly. If your iteration matches perfectly, chances are, someone is playing with the numbers. Reality is just too messy to produce a burndown chart that matches this exactly.
    Late Planning
    The iteration started, but the team didn’t. You can tell this by the fact that the real number of estimated hours didn’t appear until day two. In the cumulative flow, you can also see that nothing was defined in days one and two. You want to avoid situations like this. You’ll note that the team had to burn faster than is ideal to meet the iteration because of the late planning. This often results in long weeks and days.
    Testing Starved
    Determining whether or not testing is starved is difficult without the cumulative flow. The pattern in the burndown could be nothing more than developers not completing stories early enough, or could be caused by stories being too big. With the cumulative flow, however, you see that only small bites are in progress and stories were completed early, but the testers didn’t start testing until the end of the iteration, and didn’t complete testing all stories in the iteration. When this happens, question whether or not your testing resources are sufficient for your team and whether or not acceptance is adequately defined.
    No Testing
    With this one, both graphs show the same thing; the team needs testers and testing! Without testing, what was completed cannot be verified to make sure that it is acceptable to the business. If you find yourself in this situation, review your testing practices and acceptance testing process and make changes today.
    Late Development
    With this situation, both graphs tell a story. In the top graph, you can see that the hours failed to burn down as quickly as the team expected. This could be caused by the team not correctly estimating their hours, or the team could have had illness or some other issue that affected them. Often, when teams are tackling something that is more unknown, they’ll run into technical barriers that cause the burn down to happen slower than expected. In the cumulative flow graph, you can see that not much was completed in the first few days. This could be because of illness or technical barriers or simply poor estimation. Testing was able to keep up with everything that was completed, however.
    No Tool Updating
    When you see graphs that look like this, you can be assured that it’s because the team is not updating the tool that generates the graphs. Review your policy for when they are to update. On the teams that I run, I require that each team member updates the tool at least once daily. You should also check to see how well the team is breaking down stories into tasks. If they’re creating few large tasks, graphs can look similar to this. As a general rule, I never allow tasks, other than Unit Testing and Uncertainty, to be greater than eight hours in duration.
    Scope Increase
    I always encourage team members to enter in however much time they think they have left on a task, even if that means increasing the total amount of time left to do. You get a much better and more realistic picture this way. Increasing time remaining could explain the burndown graph, but by looking at the cumulative flow graph, we can see that stories were added to the iteration and scope was increased. Since planning should consume all of the hours in the iteration, this is almost always a bad thing. If the scope change happened late in the iteration and the hours remaining were well below the ideal burn, then increasing scope is probably o.k., but estimation needs to get better. However, with the charts above, that’s clearly not what happened and the team was required to do extra work to make the iteration. If you find this happening, your product owner and ScrumMasters need training. The team also needs to learn to say no.
    Scope Decrease
    Scope decreases are just as bad as scope increases. Usually, graphs like the ones above show that the team did a poor job of estimating their stories and part way through had to reduce scope to save the iteration. This will happen once in a while, but if you find it’s a pattern on your team, you need to re-evaluate planning. Some teams are hopelessly optimistic. In those cases, I’ll introduce a task I call “Uncertainty.” With Uncertainty, the team estimates how many hours they might need if things don’t go well with the tasks they’ve defined. They try to estimate things that could go poorly and increase the time appropriately. Having an Uncertainty task allows them to have a low and high estimate. Uncertainty should not just be an arbitrary buffer. It must correlate to real uncertainty in the tasks that have been defined.
    Stories Are Too Big
    Often, we see graphs like the ones above. Note that the burndown looks fairly good, other than the chunky acceptance of stories. However, when you look at cumulative flow, you can see that at one point, everything is in progress. This is a bad thing. When you see graphs like this, you’re in one of two states. You may just have a very small team and can only handle one or two stories in your iteration. If you have more than one or two people, then the most likely problem is that your stories are far too big. To combat this, break large high-hour stories into smaller pieces that can be completed and accepted independently. If you don’t, you’ll likely be requiring your testers to do heroic things to complete testing on the last day of the iteration, and you’re much more likely to have the entire iteration fail because of the limited amount of things that can be completed.
    Summary
    There are other charts that can be useful when doing Scrum. If you don’t have any big visible charts, you really need to evaluate your process and change. These charts can provide the team a wealth of information and help you write better software. If you have any questions about charts that you’re seeing on your team, contact me with a screen capture of the charts and I’ll tell you what I’m seeing in those charts. I always want this information to be useful, so please let me know if you have other questions. Technorati Tags: Agile
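    If you do not have a tool like Rally or the spreadsheet mentioned above handy, the following rough Python sketch shows how an iteration burndown can be derived from nothing more than the hours remaining at the end of each day. The daily figures are invented sample data, and matplotlib is assumed to be available; it is only an illustration of the chart, not the author's spreadsheet.

        # Sketch of an iteration burndown: ideal line vs. actual hours remaining.
        # The daily figures below are invented sample data.
        import matplotlib.pyplot as plt

        iteration_days = 10
        total_estimated_hours = 200.0

        # Hours left at the end of each day (day 0 = start of the iteration).
        actual_remaining = [200, 195, 180, 170, 168, 150, 120, 90, 45, 20, 0]

        # Ideal burndown: a straight line from the total estimate down to zero.
        ideal_remaining = [
            total_estimated_hours * (1 - day / iteration_days)
            for day in range(iteration_days + 1)
        ]

        days = list(range(iteration_days + 1))
        plt.plot(days, ideal_remaining, linestyle="--", label="Ideal")
        plt.plot(days, actual_remaining, marker="o", label="Actual")
        plt.xlabel("Day of iteration")
        plt.ylabel("Hours remaining")
        plt.title("Iteration burndown (sample data)")
        plt.legend()
        plt.show()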

    Read the article

  • SharePoint 2010 Search - not searching additional content sources

    - by Chris W
    I've got SP 2010 crawling a secondary intranet system that we'll run in parallel as part of a long-running migration to SharePoint when it releases. Whilst it's crawling the pages without problem, I can't see how to get the results to appear as part of the Quick Search results if the user does a search from the little search dialog box on the home page. Searches completed within My Sites pages list results from both the SharePoint installation and the external content source. Searches from the main search dialog only list results for SharePoint items. I tried adding the drop-down option to select the site to search, but this list only includes the name of the current site and doesn't offer an 'All Sites' scope option, which I think would include the content. What am I doing wrong?

    Read the article

  • How can I avoid team burnout?

    - by Shawn Dalma
    I work for a small web company that deals with a lot of projects. A few at any given time are development-heavy for us (400-1500 hours or more), and I've been noticing developers get extremely burnt out on a project after 150 hours or so. I've been toying around with the idea of working in some form of rotation/rest so that when someone reaches the threshold, they at least get some time off of working on that project. Is there an industry standard approach?
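    As a rough sketch of the threshold-based rotation idea described above, here is a small hypothetical Python example that flags developers once their logged hours on a project cross the burnout threshold. The names and hours are invented; only the 150-hour cutoff is taken from the question, and even that is just an illustrative input.

        # Illustrative only: flag developers for rotation once their hours on a
        # single project pass a burnout threshold (150 h, per the question above).
        BURNOUT_THRESHOLD_HOURS = 150

        hours_on_project = {
            "alice": 162,   # hypothetical logged hours
            "bob": 148,
            "carol": 95,
        }

        def rotation_candidates(hours: dict[str, int], threshold: int) -> list[str]:
            """Return developers who should be rotated off (or rested) first."""
            over = {dev: h for dev, h in hours.items() if h >= threshold}
            return sorted(over, key=over.get, reverse=True)

        for dev in rotation_candidates(hours_on_project, BURNOUT_THRESHOLD_HOURS):
            print(f"{dev} has {hours_on_project[dev]} h on this project; schedule a rotation/rest")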

    Read the article

  • HTML5 Development for Dummies

    - by Geertjan
    What's HTML5 all about and what does it actually mean, concretely, to develop HTML5 applications? NetBeans IDE 7.3 provides something called "Project Easel", which is a bundling of HTML5-related tools into a coherent toolset. Within a matter of hours, you'll know everything you need to know about what all this is about if you follow the steps below.
    1. Get A Solid Overview. Start by viewing this screencast from JavaOne 2012 (click the media link on the right side once you've clicked the link below; a downloadable MP4 file is also available there): https://oracleus.activeevents.com/connect/sessionDetail.ww?SESSION_ID=4038 That is an awesome way to get you in the right frame of mind for what HTML5 is and how it fits into the programming world, together with a very cool and entertaining demo, presented by JB Brock. He starts with about three slides and then does a super awesome demo that puts you into the picture very quickly.
    2. Understand How HTML5 Relates To Java EE. Now here's a very cool follow-up to the above, again demo-driven (click the media links on the right side once you've clicked the link below): https://oracleus.activeevents.com/connect/sessionDetail.ww?SESSION_ID=4737 David Konecny takes the Affable Bean project created via the NetBeans E-commerce Tutorial and creates an HTML5 front end for it! I.e., you are shown how HTML5 can provide a different front end, as an alternative to JSF. Why would you do that? Well, that's explained in David's session, as well as in JB Brock's session, i.e., choose the right technology for the right situation. Sometimes HTML5 might make sense, other times JSF might make sense.
    3. Follow The NetBeans Screencasts. To revise and firm up everything you've learned from the above two JavaOne sessions, watch two screencasts by Ken Ganfield: part 1, Getting Started with HTML5, and part 2, Working with JavaScript in HTML5 Applications. In particular, you'll learn how NetBeans IDE provides tools to thoroughly cover the needs of HTML5 developers.
    Having taken the above three steps, you now have a thorough background, together with an understanding of the tools and procedures needed for creating your own HTML5 applications.

    Read the article

  • Java Certification Exams and Their Move to Pearson VUE

    - by Harold Green
    You may be aware that Oracle recently migrated all Sun-branded certification exams from Prometric to Pearson VUE. Below are answers to some frequently asked questions that we've been getting recently:
    What changes to the exams should I be aware of?
    Only minor changes were made to the exams during the transition to Pearson VUE:
    - Renumbering of all exams to the Oracle exam numbering structure (i.e. 1Z0...).
    - Most exam score reports were enhanced to provide more detailed feedback. Score reports now list every exam objective for which a question (or questions) were answered incorrectly. The previous format provided only section-level performance feedback.
    - For three Java exams, some lengthy (time-consuming) questions were removed and replaced with shorter (less time-consuming) questions. This was done in order to shorten the required exam time (to 150 minutes).
    - Some interactive question types were removed from several Java and Solaris exams (including "matching" and "drag-and-drop" questions).
    - The passing scores (for the exams that were revised) were statistically adjusted to make them equal to their prior passing scores, thus ensuring that the exams maintained the same level of difficulty as before.
    The exam objectives and the exam questions themselves did not change. Candidates should study the same material and objectives.
    Are there also new testing practices I should be aware of?
    Oracle follows a common industry practice of placing occasional un-scored questions on our certification exams. Candidates will not know which questions are unscored. At the time of this blog post, only one of the migrated exams (1Z0-898) contains unscored questions.
    I started the Master certification path through Prometric, and now I need to complete the requirements through Pearson VUE. Where can I get guidance on this process?
    Visit our Vendor Transition FAQs to find comprehensive instructions. Oracle has created several specific paths to accommodate candidates who were at varying stages of completion of their master path when the transition occurred. Make sure to follow the specific path designed for your case, as you will need to know which exam number to select in order to submit/re-submit your requirements.
    QUICK LINKS
    - Oracle Certification Blog Post: Java, Oracle Solaris, MySQL and Other Former Sun Certification Exams Now Being Delivered At Pearson VUE
    - Oracle Certification Website: Vendor Transition Announcement

    Read the article

  • Building a Flash Platformer

    - by Jonathan O
    I am basically making a game where the whole game is run in the onEnterFrame method. This is causing a delay in my code that makes debugging and testing difficult. Should programming an entire platformer in this method be efficient enough for me to run hundreds of lines of code? Also, do variables in flash get updated immediately? Are there just lots of threads listening at the same time? Here is the code...

        stage.addEventListener(Event.ENTER_FRAME, onEnter);

        function onEnter(e:Event):void
        {
            //Jumping
            if (Yoshi.y > groundBaseLevel)
            {
                dy = 0;
                canJump = true;
                onGround = true; //This line is not updated in time
            }
            if (Key.isDown(Key.UP) && canJump)
            {
                dy = -10;
                canJump = false;
                onGround = false; //This line is not updated in time
            }
            if (!onGround)
            {
                dy += gravity;
                Yoshi.y += dy;
            }

            //limit screen boundaries
            //character movement
            if (! Yoshi.hitTestObject(Platform)) //no collision detected
            {
                if (Key.isDown(Key.RIGHT))
                {
                    speed += 4;
                    speed *= friction;
                    Yoshi.x = Yoshi.x + movementIncrement + speed;
                    Yoshi.scaleX = 1;
                    Yoshi.gotoAndStop('Walking');
                }
                else if (Key.isDown(Key.LEFT))
                {
                    speed -= 4;
                    speed *= friction;
                    Yoshi.x = Yoshi.x - movementIncrement + speed;
                    Yoshi.scaleX = -1;
                    Yoshi.gotoAndStop('Walking');
                }
                else
                {
                    speed *= friction;
                    Yoshi.x = Yoshi.x + speed;
                    Yoshi.gotoAndStop('Still');
                }
            }
            else //bounce back collision detected
            {
                if (Yoshi.hitTestPoint(Platform.x - Platform.width/2, Platform.y - Platform.height/2, false))
                {
                    trace('collision left');
                    Yoshi.x -= 20;
                }
                if (Yoshi.hitTestPoint(Platform.x, Platform.y - Platform.height/2, false))
                {
                    trace('collision top');
                    onGround = true; //This update is not happening in time
                    speed = 0;
                }
            }
        }

    Read the article

  • basic beginning emacs questions - install latest version and pick appropriate UI

    - by MountainX
    I'm running the latest Kubuntu (12.04 beta 2) and I would like to run the latest emacs (currently v24). The repos are one version behind. What's the best way to install v24 or later (and avoid future version conflicts)? Also, is there any reason not to always use the GUI version of emacs if X is running? For example, could I set the GUI emacs version as the default text editor and use it to edit cron jobs (crontab -e)? I'm assuming the answer is yes, but since I haven't done that yet (my default editor is nano), I want to check if there are reasons I should leave nano as the default editor. Usually when I'm working on the command line I end up using nano. Now that I think about it, I have no idea why I keep doing that. Is there any downside to calling a GUI editor when working in an X terminal?
    EDIT: I briefly tested these two versions:
    - GNU Emacs 24.0.94.1 (x86_64-pc-linux-gnu, GTK+ Version 3.3.20)
    - GNU Emacs 23.3.1 (x86_64-pc-linux-gnu), installed by default in Kubuntu
    This post explains some of the differences between versions. Unfortunately (for me) the default installed version (23.3.1, 23.3+1-1ubuntu9) is the nox version:
    Package: emacs23-nox
    Status: install ok installed
    Version: 23.3+1-1ubuntu9
    Replaces: emacs23, emacs23-gtk, emacs23-lucid
    The package with version 24 opens in GUI mode by default. That's what I prefer. Some of the version 24 changes that interest me are listed in the references below, but there appear to be a multitude of different packages and versions I could install.
    References:
    - What’s New In Emacs 24 (part 1) | Mastering Emacs: http://www.masteringemacs.org/articles/2011/12/06/what-is-new-in-emacs-24-part-1/ - "shell-mode uses pcomplete rules, with the standard completion UI. Yowzah! There’s a lot of cool, new functionality hidden away in this gem of a change."
    - EmacsWiki: Recent Changes: http://www.emacswiki.org/emacs/?action=rc;showedit=0

    Read the article

  • Configure custom SSL certificate for RDP on Windows Server 2012 in Remote Administration mode?

    - by Ryan Bolger
    So the release of Windows Server 2012 has removed a lot of the old Remote Desktop related configuration utilities. In particular, there is no more Remote Desktop Session Host Configuration utility that gave you access to the RDP-Tcp properties dialog that let you configure a custom certificate for the RDSH to use. In its place is a nice new consolidated GUI that is part of the overall "edit deployment properties" workflow in the new Server Manager. The catch is that you only get access to that workflow if you have the Remote Desktop Services role installed (as far as I can tell). This seems like a bit of an oversight on Microsoft's part. How can we configure a custom SSL certificate for RDP on Windows Server 2012 when it's running in the default Remote Administration mode without needlessly installing the Remote Desktop Services role?

    Read the article

  • How to Use and Customize Google Chrome Web Apps

    - by The Geek
    Google announced their new Chrome Web Store today, with loads of web sites and games that can be installed as applications in your browser, synced across every PC, and customized to launch the way you want them to. Here’s how it all works. Note: this guide really isn’t aimed at expert geeks, though you’re more than welcome to leave your thoughts in the comments.
    What Are Chrome Web Apps Again?
    The new Chrome Web Apps are really nothing more than regular web sites, optimized for Chrome and then wrapped up with a pretty icon and installed in your browser. Some of these sites, especially web-based games, can also be purchased through the Chrome Web Store for a small fee, though the majority of services are free.

    Read the article

  • Leadership does not see value in standard process for machine configuration and new developer orientation

    - by opensourcechris
    About 3 months ago our lead web developer and designer (same person) left the company; greener pastures was the reason for leaving. Good for them, I say. My problem is that his department was completely undocumented. Things have been tough since the lead left: there is a lot of knowledge we have lost as a result of his departure, both the theoretical knowledge we use to quote new projects and the technical/implementation knowledge of our existing products. My normal role is as a product manager (for our products themselves) and as a business analyst for some of our project-based consulting work. I've taught myself to code over the past year, and in an effort to continue moving forward I've taken on the task of setting my laptop up as a development machine, with hopes of implementing some of the easier feature requests and fixing some of the no-brainer bugs that get submitted into our ticketing system. But no one knows how to take a fresh Windows machine and configure it to work seamlessly with our production apps. I have requested that my boss, who is still in contact with the developer who left, ask them to document and create a process to onboard a new developer: software installation, required packages, the process to deploy to the production application servers, etc. None of this exists, and I'm spinning my wheels trying to get my computer working as a functional development machine. But she does not seem to understand the need for such a process to exist. Apparently the new developer who replaced the one who left has been using a machine that was pre-configured for our environment, so even the new developer could not set up a new machine if we added another developer. My question is two-part: Am I wrong in assuming that a process to onboard and configure a new computer to be part of our development ecosystem should exist? Am I being a whiny baby, and should I just figure the process out and create a document on my own?

    Read the article
