Search Results

Search found 5953 results on 239 pages for 'customer stories'.

Page 64/239 | < Previous Page | 60 61 62 63 64 65 66 67 68 69 70 71  | Next Page >

  • Does the order of the columns in a SELECT statement make a difference?

    - by Frank Computer
    This question was inspired by a previous question posted on SO, "Does the order of the WHERE clause make a difference?". Would it improve a SELECT statement's performance if the columns used in the WHERE clause are placed at the beginning of the SELECT list? Example:

        SELECT customer.id, transaction.id, transaction.efective_date, transaction.a, [...]
        FROM customer, transaction
        WHERE customer.id = transaction.id;

    I do know that limiting the list of columns in a SELECT statement to only the needed ones improves performance compared to SELECT *, because the column list is smaller and less data is returned.
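    For what it's worth, a minimal sketch reusing the question's own tables (the [...] elision left out): in mainstream engines the optimizer builds the plan from the FROM and WHERE clauses, not from the order of the SELECT list, so reordering the selected columns should not change performance. Spelling the same query as an explicit join makes that easier to see:

        -- Same query with an explicit join; putting transaction.efective_date
        -- first in the SELECT list would produce the same execution plan.
        SELECT customer.id,
               transaction.id,
               transaction.efective_date,
               transaction.a
        FROM customer
        INNER JOIN transaction
                ON customer.id = transaction.id;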

    Read the article

  • Why does my WCF service return an ARRAY instead of a List<T>?

    - by user193189
    In the web service I say:

        public List<Customer> GetCustomers()
        {
            PR1Entities dc = new PR1Entities();
            var q = (from x in dc.Customers select x).ToList();
            return q;
        }

    (Customer is an entity object.) Then I generate the proxy when I add the service reference, and in Reference.cs it says:

        public wcf1.ServiceReference1.Customer[] GetCustomers()
        {
            return base.Channel.GetCustomers();
        }

    WHY IS IT AN ARRAY? I asked for a List. Help.
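    The short answer, sketched here rather than taken from the original thread: the WSDL only describes a repeated sequence of Customer elements, so whether the client materializes that as Customer[] or List<Customer> is purely a proxy-generation setting. In Visual Studio, right-click the service reference, choose "Configure Service Reference...", and set Collection type to System.Collections.Generic.List. The svcutil equivalent is roughly the following (the service URL is a placeholder):

        rem Regenerate the client proxy asking for List<T> instead of arrays
        svcutil http://localhost/PR1Service.svc?wsdl /collectionType:System.Collections.Generic.List`1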

    Read the article

  • left join without duplicate values using MIN()

    - by Clipper87
    I have a table_1:

        id  custno
        1   1
        2   2
        3   3

    and a table_2:

        id  custno  qty  descr
        1   1       10   a
        2   1       7    b
        3   2       4    c
        4   3       7    d
        5   1       5    e
        6   1       5    f

    When I run this query to show the minimum order quantities from every customer:

        SELECT DISTINCT table_1.custno, table_2.qty, table_2.descr
        FROM table_1
        LEFT OUTER JOIN table_2
               ON table_1.custno = table_2.custno
              AND qty = (SELECT MIN(qty)
                         FROM table_2
                         WHERE table_2.custno = table_1.custno)

    Then I get this result:

        custno  qty  descr
        1       5    e
        1       5    f
        2       4    c
        3       7    d

    Customer 1 appears twice, each time with the same minimum qty (and a different description), but I only want to see customer 1 appear once. I don't care if that is the record with 'e' as a description or 'f' as a description. How could I do this? Thx!
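    One common fix, sketched against the tables above: keep the correlated MIN(qty) join and collapse the remaining ties with GROUP BY, arbitrarily keeping the alphabetically lowest description.

        SELECT table_1.custno,
               table_2.qty,
               MIN(table_2.descr) AS descr  -- arbitrary pick among tied rows
        FROM table_1
        LEFT OUTER JOIN table_2
               ON table_1.custno = table_2.custno
              AND table_2.qty = (SELECT MIN(qty)
                                 FROM table_2
                                 WHERE table_2.custno = table_1.custno)
        GROUP BY table_1.custno, table_2.qty;

    On engines with window functions, ROW_NUMBER() OVER (PARTITION BY custno ORDER BY qty) filtered to row 1 achieves the same in one pass.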

    Read the article

  • Ubuntu 10.04 with HD flash

    - by Brad Robertson
    Just noticed that 10.04 is out. My media server has been packed away for a few months, but I might dust it off and give 10.04 a shot, so I thought I'd see if anyone has any success stories with HD flash in either Chrome or Firefox. I'm currently running Ubuntu 9.10, and it was a large enough pain to get VDPAU working with my Zotac Ion-ITX-C board (I eventually found an mplayer PPA that had it compiled in). From reading the 10.04 docs it looks like this is standard now, but I'm wondering about streaming HD from, say, flash or DivX. I've never been able to get HD flash to play without it being extremely choppy, and I chalk this up to the lack of hardware-assisted decoding like VDPAU (a guess). My board certainly isn't a competitor in CPU power or memory, which is why I've needed the HW accelerated decoding for HD vids in the past. Just wondering if anyone has had any success stories playing HD vid online (flash, DivX or what have you).

    Read the article

  • Redmine + Backlogs not working on Turnkey Linux (Ubuntu)

    - by Riddler
    I'm trying to get Redmine + Backlogs to work, so for starters I took a virtual appliance with Redmine from Turnkey Linux (http://www.turnkeylinux.org/redmine) and installed Backlogs on top of it, following the installation instructions (http://www.redminebacklogs.net/en/installation/ - used method #2). It seems to have installed OK, but when I go to the "Backlogs" tab and attempt to create some stories, this is what I get: the first story shows some kind of error/warning icon, and the others continue to display an "in progress" icon indefinitely (can't post a screenshot, unfortunately, but you can take a look at it here: http://www.redmine.org/attachments/5329/Backlogs.jpg). None of the stories actually get created - leaving this tab and returning to it shows empty backlogs. So... what am I doing wrong, and how do I fix this?

    Read the article

  • What's the weirdest thing you've ever seen a non-techie do to a computing device?

    - by googletorp
    I thought it would be fun to bring out the stories on what people have done throughout the ages with computers due to their sometimes, to put it politely, less than perfect grasp of technology. I'll start out with a little story from my high-school days. After graduation, a classmate of mine, who took IT and subsequently scored an A in the subject, was sent an e-mail by a friend. She replied to his mail: "I'm sorry, but I can't reply your mail, since I don't have your email address." To this day I still can't understand why she gave such a weird reply, as she was a bright girl, although not a techie. Now, let's hear those other battle stories from the real world...

    Read the article

  • How do you measure the value of your software?

    - by Mike
    Hi, One of the principles of agile is that you should measure working software: "Working software is the primary measure of progress" - 12 Principles of Agile. The thing is, while I can measure my software in terms of stories done, bugs squashed or the volume of defect reports decreasing, I'm stuck on how to measure the value of my software. If I use Mike Cohn as an example and his helping SalesForce.com deliver 500% more value to its customers compared to the previous year* - how do I measure that increase? How do I measure where I am right now? Other metrics he uses are the number of features and the number of features per developer. This is something I could work out if my backlog was in good order and the stories were cut up by 'feature', but we're just starting out with Agile, so I need some way of working out what the value is we deliver now, then use a similar metric in, say, six months, to see if we've increased our output. I've heard about measuring the value of software by an uptick in revenue, or an increase in customer satisfaction (how would you measure that, though?), but those increases could be attributed to anything in the company (sales, accounting, support) and not directly to the work my department is doing. So, how do you guys measure the value of your software, and how did you start? Thanks, Mike *Succeeding With Agile - Mike Cohn

    Read the article

  • Why’s (Poignant) Guide to Ruby

    - by Ben Griswold
    You’re familiar with O’Reilly’s brilliant Head First Series, right?  Great.  Then you know how every book begins with an explanation of the Head First teaching style, and you know the teaching format which Kathy Sierra and Bert Bates developed is based on research in cognitive science, neurobiology and educational psychology, and it’s all about making learning visual and conversational and attractive and emotional, and it’s highly effective.  Anyway, it’s a great series and you should read every last one of the books. Moving on… I’ve been wanting to learn more about Ruby, and Why’s (Poignant) Guide to Ruby has been on my reading list for a while, and there was talk about cartoon foxes and other silliness, and I figured Why’s (Poignant) Guide to Ruby probably takes the same unorthodox teaching style as the Head First books – and that’s great – so I read the book piecemeal over the last couple of weeks and, well, I figured wrong. Now having read the book, here’s my take on Why’s (Poignant) Guide – it’s very creative and clever, and it does a darn good job of introducing one to Ruby.  If you’re interested in Ruby or simply interested, the online book is worth your time.  If you’re thinking (like me) that cartoon foxes will be doing the teaching, that’s simply not the case.  However, the cartoons and the random stories in the sidebar may serve a purpose. Unlike the Head First books, where images and captions are used to further explain the teachings, the cartoons and stories in Why’s Guide serve as intermission and offer your brain a brief moment of rest before the next Ruby concept is explained.  It’s not a bad strategy, but definitely not as effective as the Head First techniques.

    Read the article

  • Google bots are severely affecting site performance

    - by Lynn
    I have an aggregate site on a Linux server that pulls in feeds from a universe of about 2,000 blogs. It's in WordPress 3.4.2, and I have a cron job that is staggered to run five times an hour on another server to pull in the stories and then publish them to the front page of this site. This is so I didn't put too much pressure all on one server. However, the Google bots, which visit a few times every hour, bring the server to its knees in the mornings and evenings when there is an increase in traffic on the site. The bots have something like 30,000 links to follow at this point. How do I throttle the bots to simply grab the new stories off the front page and stop there? EDIT - Details of my server configuration: The way we have this set up is that the server that handles all the publishing is an unmanaged instance via AWS. It mounts the NFS server and connects to the RDS to update content, etc. You get to this publishing instance via a plugin that detects the wp-admin link and then redirects you there. The front end app server also mounts the NFS and requests data from the RDS. It is the only one that has WP Super Cache on it. The OS is Ubuntu on the app server and the NFS runs CentOS. The front end is Nginx and the publishing server is Apache.
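    There is no robots.txt directive that means "front page only", but two standard mitigations fit this setup (the paths below are hypothetical - substitute whatever your WordPress permalink structure actually generates): disallow the crawl-heavy archive and pagination URLs in robots.txt, and turn down Googlebot's crawl rate in Google Webmaster Tools, since Googlebot ignores the Crawl-delay directive.

        User-agent: *
        # Hypothetical WordPress archive/pagination paths - adjust to your site
        Disallow: /page/
        Disallow: /tag/
        Disallow: /feed/
        Disallow: /2012/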

    Read the article

  • Dealing with selfish team member(s)

    - by thegreendroid
    My team is facing a difficult quandary: a couple of team members are essentially selfish (not to be confused with dominant!) and are cherry-picking stories/tasks that will give them the most recognition within the company (at sprint reviews etc., when all the stakeholders are present). These team members are very good at what they do and are fully aware of what they are doing. When we first started using agile about a year ago, I can say I was quite selfish too (coming from a very individual-focused past). I took ownership of certain stories and didn't involve anyone else in them, which in hindsight wasn't the right thing to do, and I learnt from that experience almost immediately. We are a young team of very ambitious twenty-somethings, so I can understand the selfishness to some extent (after all, everyone should be ambitious!). But the level this selfishness has reached of late has started to bother me and a few others within my team. The way I see it, agile/scrum is all about the team and not individuals. We should be looking out for each other and helping each other improve. I made this quite clear during our last retrospective: that we should be fair and give everyone a chance. I'll wait and see what comes out of it in the next few sprints. In the meantime, what are some of the troubles that you have faced with selfish members, and how did you overcome them?

    Read the article

  • Developer Preview of Java SE 8 for ARM Now Available

    - by Tori Wieldt
    A Developer Preview of Java SE 8 including JavaFX (JDK 8) on Linux for ARM processors is now available for immediate download from Java.net. As Java Evangelist Stephen Chin says, "This is a great platform for doing small embedded projects, a low cost computing system for teaching, and great fun for hobbyists." This Developer Preview is provided to the community so that you can provide us with valuable feedback on the ongoing progress of the project. We wanted to get this release out to you as quickly as we can so you can start using this build of Java SE 8 on an ARM device, such as the Raspberry Pi (http://raspberrypi.org/). Download JDK 8 for ARM. Read the documentation for this early access release. Let Us Know What You Think! Use the Forums to share your stories, comments and questions: Java SE Snapshots: Project Feedback Forum; JavaFX Forum. We are interested in both problems and success stories. If something does not work or behaves differently than what you expect, please check the list of known issues, and if yours is not listed there, then report a bug at the JIRA Bug Tracking System. More Resources: JavaFX on Raspberry Pi – 3 Easy Steps by Stephen Chin; OTN Tech Article: Getting Started with Java SE Embedded on the Raspberry Pi by Bill Courington and Gary Collins; Java Magazine Article: Getting Started with Java SE for Embedded Devices on Raspberry Pi (free subscription required); Video: Quickie Guide Getting Java Embedded Running on Raspberry Pi by Hinkmond Wong

    Read the article

  • Agile development challenges

    - by Bob
    With Scrum / user story / agile development, how does one handle scheduling out-of-sync tasks that are part of a user story? We are a small gaming company working with a few remote consultants who do graphics and audio work. Typically, graphics work should be done at least a week (sometimes 2 weeks) in advance of the code so that it's ready for integration. However, since Scrum is supposed to focus on user stories, how should I split the stories across iterations so that they still follow the user story model? Ideally, a user story should be completed by all the team members in the same iteration; I feel that splitting them in any way violates the core principle of user story driven development. Also, one front-end developer can work at 2X the pace of the backend developers. However, that throws the scheduling out of sync as well, because he is either constantly ahead of them, or what we have done is to have him work on tasks that are not specific to this iteration just to keep busy. Either way, it's the same issue as above: splitting up user story tasks. If someone can recommend an active agile development Google group that discusses these and other issues, that'll be great. Also, if you know of a free alternative to Pivotal Labs, let me know as well. I'm looking now at Agilo.

    Read the article

  • Tips for achieving "continual" delivery

    - by Ben
    A team is experiencing difficulty releasing software on a frequent basis (once every week). What follows is a typical release timeline.

    During the iteration: Developers work on stories from the backlog on short-lived (this is enthusiastically enforced) feature branches based on the master branch. Developers frequently pull their feature branches into the integration branch, which is continually built and tested (as far as the test coverage goes) automatically. The testers have the ability to auto-deploy integration to a staging environment, and this occurs multiple times per week, enabling continual running of their test suites.

    Every Monday: There is a release planning meeting to determine which stories are "known good" (based on the testers' work), and hence will be in the release. If there is a known issue with a story, the source branch is pulled out of integration. No new code (only bug fixes requested by the testers) may be pulled into integration on this Monday, to ensure the testers have a stable codebase to cut a release from.

    Every Tuesday: The testers have tested the integration branch as much as they possibly can in the time available, and there are no known bugs, so a release is cut and pushed out to the production nodes slowly.

    This sounds OK in theory, but we have found that it is incredibly difficult to achieve. The team sees the following symptoms: "subtle" bugs are found on production that were not identified on the staging environment; last-minute hot-fixes continue into the Tuesday; problems on the production environment require roll-backs, which block continued development until a successful live deployment is achieved and the master branch can be updated (and hence branched from). I think test coverage, code quality, ability to regression test quickly, last-minute changes and environmental differences are at play here. Can anyone offer any advice regarding how best to achieve "continual" delivery?
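    For concreteness, if the branching model described is Git (the post does not say, so treat the commands and branch names as assumptions), the weekly cycle maps onto something like:

        # Short-lived feature branch off master
        git checkout -b feature/story-123 master
        # ...commit work, then feed the continuously built integration branch...
        git checkout integration
        git merge --no-ff feature/story-123
        # Monday: cut the release candidate from the vetted integration branch
        git checkout -b release/week-40 integration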

    Read the article

  • Real life example of an agile game development process outputs

    - by Ken
    I'm trying to learn about applying agile methodologies to game development, but it seems to be impossible to find real life examples. There seems to be plenty of material discussing how 'in principle' agile is applied to a game, but that is NOT what I am looking for. I have the Keith book. What I AM looking for are real EXAMPLES of things like: initial user stories; final user stories (complete, covering the entire game requirements); acceptance criteria; task lists; sprint backlogs (before and after each sprint). The agile books seem to have some limited examples, many of which seem contrived or limited. In this era of open source software, there must be a publicly available documented example of the process applied to a real game. I am asking specifically about games because they are so different from normal applications. Regular applications are built to allow users to complete specific tasks in order to get stuff done (book a room, print a report, etc.). People play games for much less tangible reasons, so I think the process is significantly different. [It doesn't have to be Scrum; it could be any process, it just needs to be a real life example game and be reasonably complete.]

    Read the article

  • SQL Source Control Contest

    - by Ajarn Mark Caldwell
    If you’re a regular reader of this blog, you know that I have written several posts about how important I think it is to protect your source code, to version it, and in particular, all the aspects I like about Red Gate’s SQL Source Control product.  But for a moment, let’s take a break from my writing: I want to hear your stories.  What nightmare situation are you in, or can you imagine, where source control for your database would save the world?  Or maybe your life is not so dramatic, but you do see a challenge that, if you just had a good tool like SQL Source Control, would go much smoother.  What’s your pain?  You have read my writings; now tell me your story, and be in the running for a free copy of SQL Source Control from Red Gate. Yes, that’s right.  Although I am just a fan of Red Gate, they have authorized me to give out a handful of licenses to blog readers who are willing to share their story by posting a comment to this blog entry.  Simply add your comment below (be sure to include a valid email address in the box that asks for that) to be entered.  The contest starts immediately, and over the next few days the best stories will win.

    Read the article

  • Twitter Tuesday - Top 10 @ArchBeat Tweets - June 3-9, 2014

    - by OTN ArchBeat
    The Top 10 tweets from @OTNArchBeat for the last seven days. RT @DBAKevlar: #EM12c rel4 is out! Woohoo!! Jun 3, 2014 at 10:36 AM Top 10 Arch Community Articles for May 2014 >> props to @markrittman @kevin_mcginley @porushh et al Jun 4, 2014 at 12:52 PM Architecture of Analytics: @markrittman @kevin_mcginley >> Free OTN Virtual Tech Summit - July 9 Jun 4, 2014 at 09:13 AM My Top 10 Tweets - May 27 - June 2 #ADF #Essbase #FusionApps #Goldengate #Kscope14 #WebLogic. Jun 3, 2014 at 10:27 AM Starting and Stopping a #JavaEE Environment when using Oracle #WebLogic | Rene van Wijk #oracleace Jun 5, 2014 at 11:00 AM Video: #KScope14 Preview: @DebraLilley never stops moving, never stops learning. Jun 3, 2014 at 11:19 AM The OTNArchBeat Daily is out! Stories via @oraclebase Jun 9, 2014 at 01:47 PM Where did my MDB concurrency go? | Eric Gross #weblogic Jun 9, 2014 at 08:48 AM Exalogic Tech tips and code samples from A-Team architect Andrew Hopkinson Jun 6, 2014 at 11:47 AM The OTNArchBeat Daily is out! Stories via @KentGraziano @DBAKevlar @dbasolved Jun 3, 2014 at 01:48 PM

    Read the article

  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy, because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth with Scrum, but you can find out more about Scrum by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW’s Rules to Better Scrum using TFS, which have been developed during our own Scrum implementations.

    Acknowledgements:
    Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010.
    Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010.
    Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance.
    Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

    Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will have double the price to pay in order to recover costs. The work is approved by the guys with budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy, and there are lots of clients on RTM 1.0 calling out for patches. As I keep getting Reverse Integration and Forward Integration mixed up and Bill keeps slapping my wrists, I thought I should have a reminder: You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch). - one of my many slaps on the wrist from Bill Heys.   As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100%, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished enough to pass the build and the tests. You do use builds, don’t you? Sadly, this is a very common scenario, and I have had people argue that branching merely adds complexity. Then again I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch by doing a cost benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc. 
    - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft. The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others, they get out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well, although I would not recommend that unless you have a massive SQL cluster to house your source code. We tried the “add environment” back in South Africa and, while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   You can read about SSW’s Rules to better Source Control for some additional information on what Source Control to use and how to use it. There are also a number of branching Anti-Patterns that should be avoided at all costs. You know you are on the wrong track if you experience one or more of the following symptoms in your development environment:
    Merge Paranoia—avoiding merging at all cost, usually because of a fear of the consequences.
    Merge Mania—spending too much time merging software assets instead of developing them.
    Big Bang Merge—deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously.
    Never-Ending Merge—continuous merging activity because there is always more to merge.
    Wrong-Way Merge—merging a software asset version with an earlier version.
    Branch Mania—creating many branches for no apparent reason.
    Cascading Branches—branching but never merging back to the main line.
    Mysterious Branches—branching for no apparent reason.
    Temporary Branches—branching for changing reasons, so the branch becomes a permanent temporary workspace.
    Volatile Branches—branching with unstable software assets shared by other branches or merged into another branch. Note: Branches are volatile most of the time while they exist as independent branches. That is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state.
    Development Freeze—stopping all development activities while branching, merging, and building new base lines.
    Berlin Wall—using branches to divide the development team members, instead of dividing the work they are performing.
    - Branching and Merging Primer by Chris Birmele, Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia   In fact, this can result in a merge exercise no-one wants to be involved in, merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium. - Willy-Peter Schaub on Merge Paranoia   Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual. 
    Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Main line - this is where your stable code lives, where any build has known entities, always passes, and has a happy test that passes as well. Many development projects consist of a single “Main” line of source and artifacts. This is good; at least there is source control. There are, however, a couple of issues that need to be considered. What happens if:
    you and your team are working on a new set of features and the customer wants a change to his current version?
    you are working on two features and the customer decides to abandon one of them?
    you have two teams working on different feature sets and their changes start interfering with each other?
    I just use labels instead of branches?
    That's a lot of “what ifs”, but there is a simple way of preventing this. Branching… In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here. - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft   Figure: Creating a single feature branch means you can isolate the development work on that branch.   It’s standard practice for large projects with lots of developers to use Feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers for other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The main branch has its permissions changed so contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or checkouts of the “Main” line that would contaminate the code. The developers continue to develop on sprint one until the completion of the sprint. Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.   Figure: Once the sprint is complete, the Sprint 1 code can then be merged back into the Main line. There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but this builds good muscle memory into your developers’ work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product. The process of completing your sprint development:
    1. The Team completes their work according to their definition of done.
    2. Merge from “Main” into “Sprint1” (Forward Integration).
    3. Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches (we will talk about release branches later).
    4. Merge from “Sprint1” into “Main” to commit your changes (Reverse Integration).
    5. Check in.
    6. Delete the Sprint1 branch. Note: The Sprint 1 branch is no longer required, as its useful life has been concluded.
    7. Check in.
    Done. But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet; we only have finished code.   Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing. In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80s brainwashing that we only ship when we reach the agreed quality and business feature bar. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer, and you generally use a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: If you find a deviation from the expected result, you fix it on the Release branch. If during your final testing or your “Test Please” you find there are issues or bugs, then you should fix them on the release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time after your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches. Even once you have stabilized and released, you should not delete the release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it. Note: Don't get forced by the business into adding features to a Release branch; that indicates the unspoken requirement is that they are asking for a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product, and you create a whole new backlog to work from.   Figure: When the Team decides it is happy with the product, you can create an RTM branch. Once you have fixed all the bugs you can, added any you can’t to the Product Backlog, and your Team is happy with the result, you can create a Release. This would consist of doing the final Build and packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an audit trail branch that is optional, but is good practice. You could use a Label, but Labels are not auditable, and if a dispute was raised by the customer you want to be able to produce a verifiable version of the source code for an independent party to check. 
    Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.   Figure: If you have made any changes in the Release, you will need to merge back up to Main in order to finalise the changes. Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main: you should do a forward merge before a reverse merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment. Note: In order to really achieve protection for both you and your client, you would add Automated Builds, Automated Tests, Automated Acceptance tests, Acceptance test tracking, Unit Tests, Load tests, Web tests and all the other good engineering practices that help produce reliable software.     Figure: After the Sprint Planning meeting, the process begins again. Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new Branch to develop in. How do we handle a bug(s) in production that can’t wait? Although in Scrum the only work done should be on the backlog, there should be a little buffer added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities: In reality Sprint 2 starts when sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of sprint 1? It would introduce FI’s from main to sprint 2, I guess. Your “Figure: Merging Sprint 2 back into Main.” covers what I tend to believe to be reality in most cases. - Willy-Peter Schaub, VS ALM Ranger, Microsoft   I agree, and if you have a single Scrum team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, package and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box, then resource needs to be pulled from Sprint 2. The Product Owner and the Team have four choices (in order of disruption/cost):
    1. Backlog: Add the bug to the backlog and fix it in the next Sprint.
    2. Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly.
    3. Make Time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: The Team must agree that it can still meet the Sprint Goal.
    4. Cancel Sprint: Cancel the sprint and concentrate all resource on fixing the bug(s). Note: This can be very costly if the current sprint has already had a lot of work completed, as it will be lost.
    The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as the bugs are uncomplicated but severe. Figure: Real world issue where a bug needs to be fixed in the current release. 
    If the bug(s) is urgent enough, then your only option is to fix it in place. You can edit the release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?   Figure: After you have fixed the bug, you need to ship again. You then need to again create an RTM branch to hold, in escrow, the version of the code you released.   Figure: Main is now out of sync with your Release. We now need to get these new changes back up into the Main branch. Do a forward and then a reverse merge again to get the new code into Main. But what about the branch? Are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take doing branching. But would this scenario even have been possible without branching?   Figure: Getting the changes in Main into Sprint 2 is very important. The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue, so they do not get further out of sync with Main. Note: Avoid the “Big bang merge” at all costs.   Figure: Merging Sprint 2 back into Main, the Reverse Integration, and R0 terminates. Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.   Figure: The logical conclusion. This then allows the creation of the next release. By now you should be getting the big picture, and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real world experience really help harden my understanding.  Branching is a tool; it is not a silver bullet. Don’t overuse it, and avoid “Anti-Patterns” where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.   Technorati Tags: Branching,Scrum,VS ALM,TFS 2010,VS2010
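    For reference, the branch / Forward Integration / Reverse Integration cycle described above maps onto a handful of tf.exe version control commands. This is a sketch only: the $/Product paths are illustrative, and each pended operation still needs its own check-in.

        rem Create the sprint branch from Main
        tf branch $/Product/Main $/Product/Sprint1
        tf checkin /comment:"Branch Sprint1 from Main"
        rem Forward Integration: merge Main into Sprint1, then stabilize
        tf merge $/Product/Main $/Product/Sprint1 /recursive
        tf checkin /comment:"FI: Main into Sprint1"
        rem Reverse Integration: merge Sprint1 back into Main
        tf merge $/Product/Sprint1 $/Product/Main /recursive
        tf checkin /comment:"RI: Sprint1 into Main"
        rem Retire the sprint branch once its useful life is over
        tf delete $/Product/Sprint1
        tf checkin /comment:"Delete Sprint1"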

    Read the article

  • Bye Bye Year of the Dragon, Hello BPM

    - by Michelle Kimihira
    As CNN asks you to vote for the most intriguing person of the year, what technologies do you think were most intriguing in 2012? Was it Social, Mobile, BPM, or were you most captivated by Customer Experience? Well, we too observed these technology trends on the upswing and foresee that these will remain in the limelight for 2013. What if we told you that there is a solution that brings these technologies together and helps not only to create efficient business processes but also an engaging customer experience? As we transition into 2013, let’s take a look at some of the top trending topics in BPM. Ajay Khanna discusses these trends on the OracleBPM blog: Bye Bye Year of the Dragon, Hello BPM. Additional Information: Product Information on Oracle.com: Oracle Fusion Middleware. Follow us on Twitter, Facebook and YouTube. Subscribe to our regular Fusion Middleware Newsletter.

    Read the article

  • Learn Cloud Computing – It’s Time

    - by Ben Griswold
    Last week, I gave an in-house presentation on cloud computing.  I walked through an overview of cloud computing: characteristics (on demand, elastic, fully managed by the provider), why we are interested (virtualization, distributed computing, increased access to high-speed internet, weak economy), various types (public, private, virtual private cloud) and service models (IaaS, PaaS, SaaS).  Though numerous providers have emerged in the cloud computing space, the presentation focused on Amazon, Google and Microsoft offerings and provided an overview of their platforms, costs, data tier technologies, management and security.  One of the biggest talking points was why developers should consider the cloud as part of their deployment strategy: you only have to pay for what you consume; you will be well-positioned for one-time event provisioning; you will reap the benefits of automated growth and scalable technologies.  For the record: having deployed dozens of applications on various platforms over the years, pricing tends to be the biggest customer concern.  Yes, scalability is a customer consideration, too, but it comes in a distant second.  Boy do I hope you’re still reading… You may be thinking, “Cloud computing is well and good and it sounds catchy, but should I bother?  After all, it’s just another technology bundle which I’m supposed to ramp up on because it’s the latest thing, right?”  Well, my clients used to be 100% reliant upon me to find adequate hosting for them.  Now I find they are often aware of cloud services, and some come to me with the “possibility” that deploying to the cloud is the best solution for them.  It’s like the patient who walks into the doctor’s office with their diagnosis and treatment already in mind thanks to the handful of Internet searches they performed earlier that day.  You know what?  The customer may be correct about the cloud. It may be a perfect fit for their app.  But maybe not…  I don’t think there’s a need to learn about every technical thing under the sun, but if you are responsible for identifying hosting solutions for your customers, it is time to get up to speed on cloud computing and the various offerings (if you haven’t already).  Here are a few references to get you going: DZone Refcardz #82 Getting Started with Cloud Computing by Daniel Rubio; Wikipedia Cloud Computing – What is it?; Amazon Machine Images (AMI); Google App Engine SDK; Azure SDK; EC2 Spot Pricing; Google App Engine Team Blog; Amazon EC2 Team Blog; Microsoft Azure Team Blog; Amazon EC2 – Cost Calculator; Google App Engine – Cost and Billing Resources; Microsoft Azure – Cost Calculator. Larry Ellison has stated that cloud computing has been defined as "everything that we currently do" and that it will have no effect except to "change the wording on some of our ads". Oracle launches worldwide cloud-computing tour. NoSQL Movement.

    Read the article

  • Authorize.Net, Silent Posts, and URL Rewriting Don't Mix

    The too long, didn't read synopsis: If you use Authorize.Net and its silent post feature and it stops working, make sure that if your website uses URL rewriting to strip or add a www to the domain name, the URL you specify for the silent post matches the URL rewriting rule, because Authorize.Net's silent post feature won't resubmit the post request to the URL specified via the redirect response. I have a client that uses Authorize.Net to manage and bill customers. Like many payment gateways, Authorize.Net supports recurring payments. For example, a website may charge members a monthly fee to access their services. With Authorize.Net you can provide the billing amount and schedule, and at each interval Authorize.Net will automatically charge the customer's credit card and deposit the funds to your account. You may want to do something whenever Authorize.Net performs a recurring payment. For instance, if the recurring payment charge was a success you would extend the customer's service; if the transaction was denied then you would cancel their service (or whatever). To accommodate this, Authorize.Net offers a silent post feature. Properly configured, Authorize.Net will send an HTTP request that contains details of the recurring payment transaction to a URL that you specify. This URL could be an ASP.NET page on your server that then parses the data from Authorize.Net and updates the specified customer's account accordingly. (Of course, you can always view the history of recurring payments through the reporting interface on Authorize.Net's website; the silent post feature gives you a way to programmatically respond to a recurring payment.) Recently, this client of mine that uses Authorize.Net informed me that several paying customers were telling him that their access to the site had been cut off even though their credit cards had been recently billed. Looking through our logs, I noticed that we had not shown any recurring payment log activity for over a month. I figured one of two things must be going on: either Authorize.Net wasn't sending us the silent post requests anymore, or the page that was processing them wasn't doing so correctly. I started by verifying that our Authorize.Net account was properly set up to use the silent post feature and that it was pointing to the correct URL. Authorize.Net's site indicated the silent post was configured and that recurring payment transaction details were being sent to http://example.com/AuthorizeNetProcessingPage.aspx. Next, I wanted to determine what information was getting sent to that URL. The application was set up to log the parsed results of the Authorize.Net request, such as what customer the recurring payment applied to; however, we were not logging the actual HTTP request coming from Authorize.Net. I contacted Authorize.Net's support to inquire if they logged the HTTP request sent via the silent post feature and was told that they did not. I decided to add a bit of code to log the incoming HTTP request, which you can do by using the Request object's SaveAs method. This allowed me to save every incoming HTTP request to the silent post page to a text file on the server. Upon the next recurring payment, I was able to see the HTTP request being received by the page:

        GET /AuthorizeNetProcessingPage.aspx HTTP/1.1
        Connection: Close
        Accept: */*
        Host: www.example.com

    That was it. 
    Two things alarmed me: first, the request was obviously a GET and not a POST; second, there was no POST body (obviously), which is where Authorize.Net passes along the details of the recurring payment transaction. What stuck out was the Host header, which differed slightly from the silent post URL configured in Authorize.Net. Specifically, the Host header in the above logged request pointed to www.example.com, whereas the Authorize.Net configuration used example.com (no www). About a month ago - the same time these recurring payment transaction details were no longer being processed by our ASP.NET page - we had implemented IIS 7's URL rewriting feature to permanently redirect all traffic for example.com to www.example.com. Could that be the problem? I contacted Authorize.Net's support again and asked them if their silent post algorithm would follow the 301 HTTP response and repost the recurring payment transaction details. They said yes, the silent post would follow redirects. Their reports didn't jibe with my observations, so I went ahead and updated our Authorize.Net configuration to point to http://www.example.com/AuthorizeNetProcessingPage.aspx instead of http://example.com/AuthorizeNetProcessingPage.aspx. And, I'm happy to report, recurring payments are correctly being processed again! If you use Authorize.Net and the silent post feature, and you notice that your processing page is no longer working, make sure you are not using any URL rewriting rules that may conflict with the silent post URL configuration. Hope this saves someone the time it took me to get to the bottom of this. Happy Programming!
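    For context, a canonical-host rule of the kind described typically looks like the following in web.config with the IIS 7 URL Rewrite module. This is a reconstruction rather than the author's actual rule, and the domain is a placeholder; the point is that any POST to the non-www host gets a 301 in reply, and the POST body is lost if the caller never re-submits.

        <rewrite>
          <rules>
            <!-- Permanently redirect example.com/* to www.example.com/* -->
            <rule name="Add www" stopProcessing="true">
              <match url="(.*)" />
              <conditions>
                <add input="{HTTP_HOST}" pattern="^example\.com$" />
              </conditions>
              <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>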

    Read the article

  • What’s New for Oracle Commerce? Executive Q&A with John Andrews, VP Product Management, Oracle Commerce

    - by Katrina Gosek
    Oracle Commerce was for the fifth time positioned as a leader by Gartner in the Magic Quadrant for E-Commerce. This inspired me to sit down with Oracle Commerce VP of Product Management, John Andrews, to get his perspective on what continues to make Oracle a leader in the industry and what’s new for Oracle Commerce in 2013. Q: Why do you believe Oracle Commerce continues to be a leader in the industry? John: Oracle has a great acquisition strategy – it brings best-of-breed technologies into the product fold and then continues to grow and innovate them. This is particularly true with products unified into the Oracle Commerce brand. Oracle acquired ATG in late 2010 – and then Endeca in late 2011. This means that under the hood of Oracle Commerce you have market-leading technologies for cross-channel commerce and customer experience, both designed and developed in direct response to the unique challenges online businesses face. And we continue to innovate on capabilities core to what our customers need to be successful – contextual and personalized experience delivery, merchant-inspired tools, and architecture for performance and scalability. Q: It’s not a slow-moving industry. What are you doing to keep the pace of innovation at Oracle Commerce? John: Oracle owes our customers the most innovative commerce capabilities. By unifying the core components of ATG and Endeca we are delivering on this promise. Oracle Commerce is continuing to innovate and redefine how commerce is done, in a way that drives business results and keeps customers coming back for experiences tailored just for them. Our January and May 2013 releases not only marked the seventh significant release for the solution since the acquisitions of ATG and Endeca; they also demonstrate rapid and significant progress on the unification of the commerce and customer experience capabilities of the two commerce technologies. Q: Can you tell us what was notable about these latest releases under the Oracle Commerce umbrella? John: Specifically, our latest product innovations give businesses selling online the ability to get to market faster with more personalized commerce experiences in the following ways:
    Mobile: the latest Commerce Reference Application in this release offers a wider range of examples for online businesses to leverage for iOS development, and specifically new iPad reference capabilities. This release marks the first release of the iOS Universal application that serves both the iPhone and iPad devices from a single download or binary. Business users can now drive page content management and layout of search results and category pages, as well as create additional storefront elements such as categories, facets/dimensions, and breadcrumbs through Experience Manager tools.
    Cross-Channel Commerce: key commerce platform capabilities have been added to support cross-channel commerce, including an expanded inventory model to maintain inventory for stores, pickup in stores and Web-based returns. Online businesses with in-store operations can now offer advanced shipping options on the web and make returns and exchange logic easily available on the web.
    Multi-Site Capabilities: significant enhancements to the Commerce Platform multi-site architecture allow business users to quickly launch and manage multiple sites on the same cluster and share data, carts, and other components. 
    First introduced in 2010, with this latest release business users can now partition or share customer profiles, control users’ site-based access, and manage personalization assets using site groups.
    Internationalization: continued language support and enhancements for business user tools as well as search and navigation. Guided Search now supports 35 total languages, with 11 new languages (including Danish, Arabic, Norwegian, and Serbian Cyrillic) added in this release. Commerce Platform tools now include localized support for 17 locales, with 4 new languages (Danish, Portuguese (European), Finnish, and Thai). No development or customization is required in order for business users to use the applications in any of these supported languages.
    Business Tool Experience: valuable new Commerce Merchandising features include a new workflow for making emergency changes quickly and increased visibility into promotions rules and qualifications in preview mode. Oracle Commerce business tools continue to become more and more feature-rich, providing intuitive, easy-to-use (yet powerful) capabilities that allow business users to manage content and the shopping experience.
    Commerce & Experience Unification: demonstrable unification of commerce and customer experience capabilities includes productized cartridges that provide supported integration between the Commerce Platform and Experience Management tools, cross-channel returns, Oracle Service Cloud integration, and an integrated iPad application.
    The mission guiding our product development is to deliver differentiated, personalized user experiences across any device in a contextual manner – and to give the business the best tools to tune and optimize those user experiences to meet their business objectives. We also need to do this in a way that makes it operationally efficient for the business, keeping the overall total cost of ownership low – yet also allows the business to expand, whether it be to new business models, geographies or brands. To learn more about the latest Oracle Commerce releases and mission, visit the links below:
    • Hear more from John about the Oracle Commerce mission
    • Hear from Oracle Commerce customers
    • Documentation on the new releases
    • Listen to the Oracle ATG Commerce 10.2 Webcast
    • Listen to the Oracle Endeca Commerce 3.1.2 Webcast

    Read the article

  • Oracle Enterprise Data Quality: Ever Integration-ready

    - by Mala Narasimharajan
    It is closing in on a year now since Oracle’s acquisition of Datanomic, and the addition of Oracle Enterprise Data Quality (EDQ) to the Oracle software family. The big move has caused some big shifts in emphasis and some very encouraging excitement from the field.  To give an illustration, combined with a shameless promotion of how EDQ can help to give quick insights into your data, I did a quick Phrase Profile of the subject field of emails to the Global EDQ mailing list since it was set up last September. The results revealed a very clear theme: Integration, Integration, Integration! As well as the important Siebel and Oracle Data Integrator (ODI) integrations, we have been asked about integration with a huge variety of Oracle applications, including EBS, Peoplesoft, CRM on Demand, Fusion, DRM, Endeca, RightNow, and more - and we have not stood still! While it would not have been possible to develop specific pre-integrations with all of the above within a year, we have developed a package of feature-rich, out-of-the-box web services and batch processes that can be plugged into any application or middleware technology with ease. And with Siebel, they work out of the box. Oracle Enterprise Data Quality version 9.0.4 includes the Customer Data Services (CDS) pack – a ready set of standard processes with standard interfaces, providing integrated: address verification and cleansing; individual matching; and organization matching. The services are suitable for either Batch or Real-Time processing, and are enabled for international data, with simple configuration options driving the set of locale-specific dictionaries that are used. For example, large dictionaries are provided to support international name transcription and variant matching, including highly specialized handling for Arabic, Japanese, Chinese and Korean data. In total, across all locales, CDS includes well over a million dictionary entries.   Figure: Excerpt from EDQ’s CDS Individual Name Standardization Dictionary. CDS has been developed to replace the OEM of Informatica Identity Resolution (IIR) for attached Data Quality on the Oracle price list, but does this in a way that creates a ‘best of both worlds’ situation for customers, who can harness not only the out-of-the-box functionality of pre-packaged matching and standardization services, but also the flexibility of OEDQ if they want to customize the interfaces or the process logic, without having to learn more than one product. From a competitive point of view, we believe this stands us in good stead against our key competitors, including Informatica, who have separate ‘Identity Resolution’ and general DQ products, and IBM, who provide limited out-of-the-box capabilities (with a steep learning curve) in both their QualityStage data quality and Initiate matching products. Here is a brief guide to the main services provided in the pack. Address Verification and Standardization (Figure: EDQ’s CDS Address Cleaning Process): The Address Verification and Standardization service uses EDQ Address Verification (an OEM of Loqate software) to verify and clean addresses in either real-time or batch. 
The Address Verification processor is wrapped in an EDQ process – this adds significant capabilities over calling the underlying Address Verification API directly, specifically:
• Country-specific thresholds to determine when to accept the verification result (and therefore to change the input address) based on the confidence level of the API
• Optimization of address verification by pre-standardizing data where required
• Formatting of output addresses into the input address fields normally used by applications
• Adding descriptions of the address verification and geocoding return codes

The process can then be used to provide real-time and batch address cleansing in any application – such as a simple web page calling address cleaning and geocoding as part of a check on individual data.

Duplicate Prevention

Unlike Informatica Identity Resolution (IIR), EDQ uses stateless services for duplicate prevention to avoid issues caused by complex replication and synchronization of large-volume customer data. When a record is added or updated in an application, the EDQ Cluster Key Generation service is called, and returns a number of key values. These are used to select other records (‘candidates’) that may match in the application data (which has been pre-seeded with keys using the same service). The ‘driving record’ (the new or updated record) is then presented along with all selected candidates to the EDQ Matching Service, which decides which of the candidates are a good match with the driving record, and scores them according to the strength of match. In this model, complex multi-locale EDQ techniques can be used to generate the keys and ensure that the right balance between performance and matching effectiveness is maintained, while ensuring that the application retains control of data integrity and transactional commits. The process is explained below:

[Image: EDQ Duplicate Prevention Architecture]

Note that where the integration is with a hub, there may be an additional call to the Cluster Key Generation service if the master record has changed due to merges with other records (and therefore needs to have new key values generated before commit). A schematic code sketch of this flow follows after the Batch Matching section below.

Batch Matching

In order to allow customers to use different match rules in batch than in real-time, separate matching templates are provided for batch matching. For example, some customers want to minimize intervention in key user flows (such as adding new customers) in front-end applications, but to conduct a more exhaustive match on a regular basis in the back office. The batch matching jobs are also used when migrating data between systems, and in this case normally a more precise (and automated) type of matching is required, in order to minimize the review work performed by Data Stewards.

In batch matching, data is captured into EDQ using its standard interfaces, and records are standardized, clustered and matched in an EDQ job before matches are written out. As with all EDQ jobs, batch matching may be called from Oracle Data Integrator (ODI) if required. When working with Siebel CRM (or master data in Siebel UCM), Siebel’s Data Quality Manager is used to instigate batch jobs, and a shared staging database is used to write records for matching and to consume match results. The CDS batch matching processes automatically adjust to Siebel’s ‘Full Match’ (match all records against each other) and ‘Incremental Match’ (match a subset of records against all of their selected candidates) modes.
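To make the duplicate prevention flow above concrete, here is a minimal, hedged sketch of how an application might orchestrate the two stateless services from JavaScript. The endpoint URLs, payload field names, JSON transport, and the myAppDatabase helper are all hypothetical, introduced only for illustration; a real EDQ deployment exposes these as web services whose exact contract comes from the CDS interface documentation and your own configuration.

// A hypothetical JavaScript wrapper around the two EDQ CDS duplicate
// prevention services. All URLs and field names are assumptions.
const EDQ_BASE = "http://edq.example.com/cds"; // hypothetical endpoint

// Hypothetical stand-in for the application's own data access layer.
const myAppDatabase = {
    async findRecordsByClusterKeys(keys) {
        // A real implementation would query the table of pre-seeded keys.
        return [];
    }
};

// Step 1: ask the Cluster Key Generation service for key values
// for the new or updated ("driving") record.
async function generateClusterKeys(record) {
    const response = await fetch(EDQ_BASE + "/clusterKeys", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(record)
    });
    return (await response.json()).keys;
}

// Steps 2 and 3: the application selects candidate records whose
// stored keys overlap the driving record's keys, then presents the
// driving record plus the candidates to the Matching Service, which
// returns the matching candidates with their match scores.
async function findDuplicates(drivingRecord) {
    const keys = await generateClusterKeys(drivingRecord);
    const candidates = await myAppDatabase.findRecordsByClusterKeys(keys);
    const response = await fetch(EDQ_BASE + "/match", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ driving: drivingRecord, candidates: candidates })
    });
    return response.json(); // e.g. [{ id: 42, score: 0.93 }, ...]
}

Note the division of labour this preserves: EDQ stays stateless, generating keys and scoring matches, while the application keeps ownership of candidate selection and the transactional commit.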
The Future

The Customer Data Services Pack is an important part of the Oracle strategy for EDQ, offering a clear path to making Data Quality Assurance an integral part of enterprise applications, and providing a strong value proposition for adopting EDQ. We are planning various additions and improvements, including:
• An out-of-the-box Data Quality Dashboard
• Even more comprehensive international data handling
• Address search (suggesting multiple results)
• Integrated address matching

The EDQ Customer Data Services Pack is part of the Enterprise Data Quality Media Pack, available for download at http://www.oracle.com/technetwork/middleware/oedq/downloads/index.html.

    Read the article

  • Need help with software licensing? Read on…

    - by juanlarios
Figuring out which software licensing options best suit your needs while being cost-effective can be confusing. Some businesses end up making their purchases through retail stores, which means they miss out on volume licensing opportunities, and others may unknowingly be using unlicensed software, which means their business may be at risk. So let me help you make the best decision for your situation.

You may want to review this blog post that lays out licensing basics for any organization that needs to license software for more than 5 but fewer than 250 devices or users. It details the different ways you can buy a license and what choices are available for volume licensing, which can give you pricing advantages and provide flexible options for your business.

As technology evolves and more organizations move to online services such as Microsoft Office 365, Microsoft Dynamics CRM Online, Windows Azure Platform, Windows Intune and others, it’s important to understand how to purchase, activate and use online service subscriptions to get the most out of your investment. Once purchased through a volume licensing agreement or the Microsoft Online Subscription Program, these services can be managed through web portals:
· Online Services Customer Portal (Microsoft Office 365, Microsoft Intune)
· Dynamics CRM Online Customer Portal (Microsoft Dynamics CRM Online)
· Windows Azure Customer Portal (Windows Azure Platform)
· Volume Licensing Service Center (other services)
Learn more >>

Licensing Resources:
· The SMB How to Buy Portal – clear purchasing and licensing information that is easy to understand, to help facilitate quick decision making.
· Microsoft License Advisor (MLA) – use MLA to research Microsoft Volume Licensing products, programs and pricing.
· Volume Licensing Service Center (VLSC) – already have a volume license? Use the VLSC to get easy access to all your licensing information in one location.
· Online Services – licensing information for off-premise options.
· Windows 7 Comparison – compare versions of Windows and find out which one is right for you.
· Office 2010 Comparison – find out which Office suite is right for you.
· Licensing FAQs – frequently asked questions about product licensing.

Additional Resources You May Find Useful:
· TechNet Evaluation Center – try some of the latest Microsoft products for free, like System Center 2012 pre-release products, and evaluate them before you buy.
· Springboard Series – your destination for technical resources, free tools and expert guidance to ease the deployment and management of your Windows-based client infrastructure.
· AlignIT Manager Tech Talk Series – a monthly streamed video series with a range of topics for both infrastructure and development managers. Ask questions and participate real-time or watch the on-demand recording.

    Read the article

  • Social Business Forum Milano: Day 2

    - by me
@YourService. The business world has flipped and small business can capitalize – by Frank Eliason (Twitter: @FrankEliason)

Technology and social media tools have made it easier than ever for companies to communicate with consumers. They can listen and join in on conversations, solve problems, get instant feedback about their products and services, and more. So why, then, are most companies not doing this? Instead, it seems as if customer service is at an all-time low, and the few companies who are choosing to focus on their customers are experiencing a great competitive advantage. At Your Service explains the importance of refocusing your business on your customers and your employees, and just how to do it. It:
• Explains how to create a culture of empowered employees who understand the value of a great customer experience
• Advises on the need to communicate that experience to their customers and potential customers

Frank Eliason, recognized by BusinessWeek as the ‘most famous customer service manager in the US, possibly in the world,’ has built a reputation for helping large businesses improve the way they connect with customers and enhance their relationships.

Quotes from the audience:
• Bertrand Duperrin @bduperrin: social service is not about shutting up the loudest customers! #sbf12 @frankeliason
• Paolo Pelloni @paolopelloni, Gautam Ghosh @GautamGhosh: RT @cecildijoux: #sbf12 @frankeliason you need to change things and fix the approach; it's not about social media, it's about driving change
• Peter H. Reiser @peterreiser: #sbf12 Company Experience = Product Experience + Customer Interactions + Employee Experience @yourservice

Engage or lose! Socialize, mobilize, conversify: engage your employees to improve business performance – by Christian Finn (Twitter: @cfinn)

First Christian presented the flying monkey. Then he outlined the four principles to fix the intranet:
1. Socialize the Intranet
2. Get Thee to a Single Repository
3. Mobilize the Intranet
4. Conversationalize Your Processes

Quotes from the audience:
• Oscar Berg @oscarberg: Engaged employees think their work bring out the best of their ideas @cfinn #sbf12 http://pic.twitter.com/68eddp48
• John Stepper @johnstepper: I like @cfinn's "conversify your processes" – a nice related concept to "narrating your work", part of working out loud. http://johnstepper.com/2012/05/26/working-out-loud-your-personal-content-strategy/
• Oscar Berg @oscarberg: Organizations are talent markets – socializing your intranet makes this market function better @cfinn #sbf12

For profit, productivity, and personal benefit: creating a collaborative culture at Deutsche Bank – by John Stepper (Twitter: @johnstepper)

Driving adoption of collaboration and social media platforms at Deutsche Bank. John shared some great best practices on how to deploy an enterprise-wide community model in a large company. He started with the most important question: what is the commercial value of adding social? Then he talked about the success of Community of Practice deployments and outlined some key use cases, including the relevant measures to prove the ROI of the investment. Examples:
• Community of practice -> measure: systematic collection of value stories
• Self-service website -> measure: based on representative models
• Optimizing asset inventory -> measure: actual counts

This last use case was particularly interesting. It is a crowd-sourced spending/saving model for infrastructure: users can cancel IT services they don't need (as an example, Software xx).
5% of the saving goes to social responsibility projects. Then John outlined some best practices on how to address the WIIFM (What's In It For Me) question of the individual users:
• Change from hierarchy to graph
• Working out loud = observable work + narrating your work
• Add social skills to career objectives – for example, building a purposeful social network as a course/training that is part of the job development curriculum

And last but not least, John gave some important tips on how to get senior management buy-in: establish management-sponsored, division-level collaboration boards which define clear use cases and measures. These divisional use cases are then implemented using a common social platform. Thanks John – I learned a lot from your presentation!

Quotes from the audience:
• Ana Silva @AnaDataGirl: #sbf12 what's in it for individuals at Deutsche Bank? Shaping their reputations in a big org says @johnstepper #e20
• Ana Silva @AnaDataGirl: Any reason why not? MT @magatorlibero #sbf12 is Deutsche B. experience on applying social inside company applicable to Italian people?
• Oscar Berg @oscarberg: Your career is not a ladder, it is a network that opens up opportunities – @johnstepper #sbf12
• Oscar Berg @oscarberg: @johnstepper: Institutionalizing collaboration is next – collaboration woven into the fabric of daily work #sbf12
• Ana Silva @AnaDataGirl: #sbf12 @johnstepper talking about how Deutsche Bank is using #socbiz to build purposeful CoP & save money

    Read the article

  • Metro: Promises

    - by Stephen.Walther
The goal of this blog entry is to describe the Promise class in the WinJS library. You can use promises whenever you need to perform an asynchronous operation such as retrieving data from a remote website or a file from the file system. Promises are used extensively in the WinJS library.

Asynchronous Programming

Some code executes immediately; some code requires time to complete or might never complete at all. For example, retrieving the value of a local variable is an immediate operation. Retrieving data from a remote website takes longer or might not complete at all. When an operation might take a long time to complete, you should write your code so that it executes asynchronously. Instead of waiting for an operation to complete, you should start the operation and then do something else until you receive a signal that the operation is complete.

An analogy: some telephone customer service lines require you to wait on hold – listening to really bad music – until a customer service representative is available. This is synchronous programming and very wasteful of your time. Some newer customer service lines enable you to enter your telephone number so the customer service representative can call you back when one becomes available. This approach is much less wasteful of your time because you can do useful things while waiting for the callback.

There are several patterns that you can use to write code which executes asynchronously. The most popular pattern in JavaScript is the callback pattern. When you call a function which might take a long time to return a result, you pass a callback function to it. For example, the following code (which uses jQuery) includes a function named getFlickrPhotos which returns photos from the Flickr website that match a set of tags (such as “dog” and “funny”):

function getFlickrPhotos(tags, callback) {
    $.getJSON(
        "http://api.flickr.com/services/feeds/photos_public.gne?jsoncallback=?",
        {
            tags: tags,
            tagmode: "all",
            format: "json"
        },
        function (data) {
            if (callback) {
                callback(data.items);
            }
        }
    );
}

getFlickrPhotos("funny, dogs", function (data) {
    $.each(data, function (index, item) {
        console.log(item);
    });
});

The getFlickrPhotos() function includes a callback parameter. When you call the getFlickrPhotos() function, you pass a function to the callback parameter which gets executed when getFlickrPhotos() finishes retrieving the list of photos from the Flickr web service. In the code above, the callback function simply iterates through the results and writes each result to the console. Using callbacks is a natural way to perform asynchronous programming with JavaScript. Instead of waiting for an operation to complete, sitting there and listening to really bad music, you can get a callback when the operation is complete.

Using Promises

The CommonJS website defines a promise like this (http://wiki.commonjs.org/wiki/Promises):

“Promises provide a well-defined interface for interacting with an object that represents the result of an action that is performed asynchronously, and may or may not be finished at any given point in time. By utilizing a standard interface, different components can return promises for asynchronous actions and consumers can utilize the promises in a predictable manner.”

A promise provides a standard pattern for specifying callbacks. In the WinJS library, when you create a promise, you can specify three callbacks: a complete callback, a failure callback, and a progress callback.
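As a bridge between the two patterns, here is a minimal sketch (my own illustration, not from the original post) of how the callback-based getFlickrPhotos() above could be wrapped to return a WinJS Promise instead. It assumes the WinJS base library is loaded, and it glosses over error handling because the underlying helper exposes no failure callback.

function getFlickrPhotosAsync(tags) {
    // Wrap the callback-based helper in a WinJS Promise.
    return new WinJS.Promise(function (complete, error, progress) {
        getFlickrPhotos(tags, function (items) {
            // Signal completion with the photo list; the error and
            // progress callbacks are never invoked in this sketch.
            complete(items);
        });
    });
}

// Consumers now use the standard then() interface:
getFlickrPhotosAsync("funny, dogs").then(function (items) {
    items.forEach(function (item) {
        console.log(item);
    });
});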
Promises are used extensively in the WinJS library. The methods in the animation library, the control library, and the binding library all use promises. For example, the xhr() method included in the WinJS base library returns a promise. The xhr() method wraps calls to the standard XmlHttpRequest object in a promise. The following code illustrates how you can use the xhr() method to perform an Ajax request which retrieves a file named Photos.txt:

var options = { url: "/data/photos.txt" };

WinJS.xhr(options).then(
    function (xmlHttpRequest) {
        console.log("success");
        var data = JSON.parse(xmlHttpRequest.responseText);
        console.log(data);
    },
    function (xmlHttpRequest) {
        console.log("fail");
    },
    function (xmlHttpRequest) {
        console.log("progress");
    }
);

The WinJS.xhr() method returns a promise. The Promise class includes a then() method which accepts three callback functions: a complete callback, an error callback, and a progress callback:

Promise.then(completeCallback, errorCallback, progressCallback)

In the code above, three anonymous functions are passed to the then() method. The three callbacks simply write a message to the JavaScript Console. The complete callback also dumps all of the data retrieved from the photos.txt file.

Creating Promises

You can create your own promises by creating a new instance of the Promise class. The constructor for the Promise class requires a function which accepts three parameters: a complete, an error, and a progress function parameter. For example, the code below illustrates how you can create a method named wait10Seconds() which returns a promise. The progress function is called every second, and the complete function is not called until 10 seconds have passed:

(function () {
    "use strict";

    var app = WinJS.Application;

    function wait10Seconds() {
        return new WinJS.Promise(function (complete, error, progress) {
            var seconds = 0;
            var intervalId = window.setInterval(function () {
                seconds++;
                progress(seconds);
                if (seconds > 9) {
                    window.clearInterval(intervalId);
                    complete();
                }
            }, 1000);
        });
    }

    app.onactivated = function (eventObject) {
        if (eventObject.detail.kind === Windows.ApplicationModel.Activation.ActivationKind.launch) {
            wait10Seconds().then(
                function () { console.log("complete"); },
                function () { console.log("error"); },
                function (seconds) { console.log("progress:" + seconds); }
            );
        }
    };

    app.start();
})();

All of the work happens in the constructor function for the promise. The window.setInterval() method is used to execute code every second. Every second, the progress() callback method is called. If more than 10 seconds have passed, then the complete() callback method is called and the clearInterval() method is called. When you execute the code above, you can see the output in the Visual Studio JavaScript Console.

Creating a Timeout Promise

In the previous section, we created a custom Promise which uses the window.setInterval() method to complete the promise after 10 seconds. We really did not need to create a custom promise, because the Promise class already includes a static method for returning promises which complete after a certain interval. The code below illustrates how you can use the timeout() method. The timeout() method returns a promise which completes after a certain number of milliseconds:

WinJS.Promise.timeout(3000).then(
    function () { console.log("complete"); },
    function () { console.log("error"); },
    function () { console.log("progress"); }
);

In the code above, the Promise completes after 3 seconds (3000 milliseconds).
The Promise returned by the timeout() method does not support progress events. Therefore, the only message written to the console is the message “complete” after 3 seconds.

Canceling Promises

Some promises, but not all, support cancellation. When you cancel a promise, the promise’s error callback is executed. For example, the following code uses the WinJS.xhr() method to perform an Ajax request. However, immediately after the Ajax request is made, the request is cancelled.

// Specify Ajax request options
var options = { url: "/data/photos.txt" };

// Make the Ajax request
var request = WinJS.xhr(options).then(
    function (xmlHttpRequest) {
        console.log("success");
    },
    function (xmlHttpRequest) {
        console.log("fail");
    },
    function (xmlHttpRequest) {
        console.log("progress");
    }
);

// Cancel the Ajax request
request.cancel();

When you run the code above, the message “fail” is written to the Visual Studio JavaScript Console.

Composing Promises

You can build promises out of other promises. In other words, you can compose promises. There are two static methods of the Promise class which you can use to compose promises: the join() method and the any() method. When you join promises, the resulting promise is complete when all of the joined promises are complete. When you use the any() method, the resulting promise is complete when any of the promises completes.

The following code illustrates how to use the join() method. A new promise is created out of two timeout promises. The new promise does not complete until both of the timeout promises complete:

WinJS.Promise.join([WinJS.Promise.timeout(1000), WinJS.Promise.timeout(5000)])
    .then(function () {
        console.log("complete");
    });

The message “complete” will not be written to the JavaScript Console until both promises passed to the join() method complete. The message won’t be written for 5 seconds (5,000 milliseconds).

The any() method completes when any promise passed to the any() method completes:

WinJS.Promise.any([WinJS.Promise.timeout(1000), WinJS.Promise.timeout(5000)])
    .then(function () {
        console.log("complete");
    });

The code above writes the message “complete” to the JavaScript Console after 1 second (1,000 milliseconds). The message is written to the JavaScript Console immediately after the first promise completes and before the second promise completes.

Summary

The goal of this blog entry was to describe WinJS promises. First, we discussed how promises enable you to easily write code which performs asynchronous actions. You learned how to use a promise when performing an Ajax request. Next, we discussed how you can create your own promises. You learned how to create a new promise by creating a constructor function with complete, error, and progress parameters. Finally, you learned about several advanced methods of promises. You learned how to use the timeout() method to create promises which complete after an interval of time. You also learned how to cancel promises and compose promises from other promises.
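To tie composition back to a realistic case, here is a small sketch (my own example, not from the original post) that joins two WinJS.xhr() requests so that processing only begins once both files have arrived. The file paths are placeholders, and the assumption that join() passes through an array of results matching the array of joined promises is worth verifying against the WinJS documentation for your version:

// Fetch two JSON files in parallel; the joined promise completes
// only when both requests have completed.
WinJS.Promise.join([
    WinJS.xhr({ url: "/data/photos.txt" }), // placeholder path
    WinJS.xhr({ url: "/data/albums.txt" })  // placeholder path
]).then(
    function (results) {
        // Assumed: results[i] is the value of the i-th joined promise
        var photos = JSON.parse(results[0].responseText);
        var albums = JSON.parse(results[1].responseText);
        console.log(photos, albums);
    },
    function () {
        // Runs if either request fails or is cancelled
        console.log("one of the requests failed");
    }
);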

    Read the article

< Previous Page | 60 61 62 63 64 65 66 67 68 69 70 71  | Next Page >