Search Results

Search found 3761 results on 151 pages for 'revision history'.

  • SharePoint 2013 Developer Ramp-Up - Part 1

    Today I had a little spare time during the morning hours and I decided that, after checking MVA, I'm going to query the available course material over at Pluralsight. Wow, thanks to fantastic corporations and acquisitions there are lots of courses available, nicely split by SharePoint version as well as particular interest group. Additionally, I found a couple of online blogs and community sites that I'm going to visit regularly during the next couple of weeks.

    Today's resource(s): Of course, I'm "all in" for the latest developer resources:

      - SharePoint 2013 Developer Ramp-Up - Part 1 - Understanding the Platform and Developer Experience
      - SharePoint 2013 Developer Ramp-Up - Part 2
      - SharePoint 2013 Developer Ramp-Up - Part 3
      - SharePoint 2013 Developer Ramp-Up - Part 4
      - SharePoint 2013 Developer Ramp-Up - Part 5
      - SharePoint 2013 Developer Ramp-Up - Part 6

    I guess I'm going to stick to the Pluralsight library until the end of this week. We'll see... Anyway, apart from the video material I came across a couple of other websites which I'd like to list here, too. That's mainly for personal reference instead of bookmarking in the browser; I'll use my own blog for that purpose.

      - Atkinson's SharePoint Blog
      - Düsseldorfer Jung Doerflers SharePoint Blog
      - SharePoint Community
      - Absolute SharePoint

    The links are in no preferential order and I added them as soon as I found them. Most probably, I'm going to report about specific articles from those resources during this challenge. So, stay tuned and I'll try to provide more details on certain topics.

    Takeaway: First contact with the 'real stuff' in order to get an idea about software development in Microsoft SharePoint and beyond. Unfortunately, and as already expected, the marketing department over at Microsoft seemed to have nothing better to do than to invent new names and baptise literally the same product with every release. Luckily, the release cycles between versions have been roughly three years - 2007, 2010, and 2013. Nonetheless, there will be a lot of version-specific issues to tackle during this learning phase, especially when it comes to historical expressions like 'WSS'*, as I experienced yesterday... It's going to be exciting and demanding to catch up with roughly 6-7 years of development and changes. Okay, let's face it.

    * WSS stands for Windows SharePoint Services 3.0, which forms the 'core engine' of SharePoint 2007.

    Part 1 of Andrew Connell's series on SharePoint 2013 for developers provides a brief history and overview of the various product names and their relation to the actual SharePoint version. I guess I might create a cheat-sheet or something comparable in order to reduce the level of confusion while reading through other material:

      - SharePoint 2007 (aka SharePoint v3 aka SharePoint 12): Windows SharePoint Services (WSS) 3.0, Microsoft Office SharePoint Server (MOSS) 2007; .NET Framework 3.0, 32-bit or 64-bit OS
      - SharePoint 2010 (aka SharePoint v4 aka SharePoint 14): Microsoft SharePoint Foundation (SPF) 2010, Microsoft SharePoint Server (SPS) 2010; .NET Framework 3.5 SP1, 64-bit OS only
      - SharePoint 2013: Microsoft SharePoint Foundation (SPF) 2013, Microsoft SharePoint Server (SPS) 2013; .NET Framework 4.5, 64-bit OS only

    After this quick excursion it is getting more interesting. SharePoint 2013 has a number of Development Practices and Techniques under the hood, and it will be quite a decision process, depending on the task requirements, to choose the correct path to go. At the moment, the following two options seem to be my future fields of operation:

      - Client-Side Object Model (CSOM)
      - REST API and OData syntax

    As part of my job assignment, I see myself developing within Visual Studio 2012/2013. Most probably the client development in C# will be using CSOM, but of course I'll keep an eye on the REST API, too. JavaScript has gained quite some momentum for a while now, and it would be a shame to ignore that kind of opportunity and its possibilities.
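    Just to get a first feel for the REST option, here is a minimal sketch of what querying a SharePoint 2013 site over the _api endpoint could look like. This is purely my own illustration, not taken from the course material: the site URL is made up, and the authentication call is a placeholder that depends entirely on how the farm is set up (NTLM, Kerberos, claims, ...).

      # Minimal sketch: list the lists of a SharePoint 2013 site via the REST API.
      # Site URL and credentials are placeholders - adapt the auth to your farm.
      import requests

      site_url = "https://sharepoint.example.com/sites/dev"  # hypothetical site

      response = requests.get(
          site_url + "/_api/web/lists",
          headers={"Accept": "application/json;odata=verbose"},
          auth=("DOMAIN\\user", "password"),  # replace with the auth your farm requires
      )
      response.raise_for_status()

      # With odata=verbose the payload is wrapped in d/results
      for sp_list in response.json()["d"]["results"]:
          print(sp_list["Title"])

    The CSOM route in C# goes through a ClientContext instead, but a plain REST call like the one above is handy for quick experiments from any language.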

    Read the article

  • Thoughts on ODTUG Kscope12

    - by thatjeffsmith
    The rodeo rocked! This wasn’t my first rodeo, and it wasn’t my first Kscope, but it was probably my favorite one. What is Kscope? It’s the annual conference for the Oracle Development Tools User Group. 1,000+ attendees from 20+ countries with an average Jeff Klout score of 65. I just made that metric and score up, but this conference attracts the best and brightest in the Oracle database space. I’m not just talking about the speakers either. The attendees are all top notch. They actively participate in sessions, make an effort to get to know their fellow conference mates, and often turn into volunteers and speakers. Developers that enjoy unit testing, understand the importance of modeling your data, and are eager to understand the Oracle CBO – these are traits that describe the ‘average’ ODTUG developer.

    2012’s event was held in San Antonio. Yes, it was very hot. But this might have been the nicest Marriott property I’ve ever visited, and I’ve stayed at some nice ones in Hawaii and St. Thomas. They had free WiFi everywhere – the rooms, the Conference Center, the lobby, bars, everywhere. And it worked. The after hours events were very fun. I embarrassed myself several times, but that’s OK. The rodeo was an awesome event and the Thirsty Games experience was something I hope does not make it onto YouTube or Facebook — talking to you Chet Justice. I finally got to meet and spend some time with some folks I’ve always wanted to get to know better, @timothyjgorman, @alexgorbachev, @lj_dobson, @dschleis, @kentGraziano, @chriscmuir, @GaloBalda, @patch72, and many, many more! I even made some new friends thanks to the Mentor program and @carol_finn. 2013’s event will be in New Orleans. If you haven’t joined ODTUG or haven’t made it to Kscope, go ahead and mark your calendars.

    I had 3 presentations this year. Sunday’s was not a good performance, and I want to apologize to anyone who was there and was hoping for more. My Tips and Debugging sessions on Monday and Tuesday were more to my liking, and I enjoyed them as a presenter. I hope you enjoyed them as an attendee. I understand that my slidedecks were corrupted on the ODTUG site, and I’m working with the coordinator now to get those fixed ASAP. Apparently the 2 most well-received Tips were the /*CSV*/ formatting hint and recalling your previous SQL history via the keyboard. I’ll be doing a follow-up webinar with ODTUG in a few weeks for those members that weren’t able to see my Tips and Debugger sessions in San Antonio. I’ll be sure to post details on that here when I have them. My next scheduled conference is Oracle Open World, and I may have a couple of shows after that to round out 2012.

    Read the article

  • JCP Awards 10 Year Retrospective

    - by Heather VanCura
    As we celebrate 10 years of JCP Program Award recognition in 2012, take a look back at the Retrospective article covering the history of the JCP awards. Most recently, the JCP awards were celebrated at JavaOne Latin America in Brazil, where SouJava was presented the JCP Member of the Year Award for 2012 (won jointly with the London Java Community) for their contributions and launch of the Global Adopt-a-JSR Program. This is also a good time to honor the JCP Award Nominees and Winners who have been designated as Star Spec Leads. Spec Leads are key to the Java Community Process (JCP) program. Without them, none of the Java Specification Requests (JSRs) would have begun, much less completed and become implemented in shipping products. Nominations for 2012 Star Spec Leads are now open until 31 December. The Star Spec Lead program recognizes Spec Leads who have repeatedly proven their merit by producing high quality specifications, establishing best practices, and mentoring others. The point of such honor is to endorse the good work that they do, showcase their methods for other Spec Leads to emulate, and motivate other JCP program members and participants to get involved in the JCP program.

      - Ed Burns – A Star Spec Lead for 2009, Ed first got involved with the JCP program when he became co-Spec Lead of JSR 127, JavaServer Faces (JSF), a role he has continued through JSF 1.2 and now JSF 2.0, which is JSR 314.
      - Linda DeMichiel – Linda has been involved in the JCP program from its very early days. She has been the Spec Lead on at least three JSRs and an EC member for another three. She holds a Ph.D. in Computer Science from Stanford University.
      - Gavin King – Nominated as a JCP Outstanding Spec Lead for 2010, for his work with JSR 299. His endorsement said, "He was not only able to work through disputes and objections to the evolving programming model, but he resolved them into solutions that were more technically sound, and which gained support of its pundits."
      - Mike Milikich – Nominated for his work on Java Micro Edition (ME) standards, implementations, tools, and Technology Compatibility Kits (TCKs), Mike was a 2009 Star Spec Lead for JSR 271, Mobile Information Device Profile 3.
      - David Nuescheler – Serving as the CTO for Day Software, acquired by Adobe Systems, David has been a key player in the growth of the company's global content management solution. In 2002, he became Spec Lead for JSR 170, Content Repository for Java Technology API, continuing for the subsequent version, JSR 283.
      - Bill Shannon – A well-respected name in the Java community, Bill came to Oracle from Sun as a Distinguished Engineer and is still performing at full speed as Spec Lead for JSR 342, Java EE 7, as an alternate EC member, and hands-on problem solver for the Java community as a whole.
      - Jim Van Peursem – Jim holds a PhD in Computer Engineering. He was part of the Motorola team that worked with Sun labs on the Spotless VM that became the KVM. From within Motorola, Jim has been responsible for many aspects of Java technology deployment, from independent Connected Limited Device Configuration (CLDC) and Mobile Information Device Profile (MIDP) implementations, to handset development, to working with the industry in defining many related standards.

    Participation in the JCP Program goes well beyond technical proficiency.
The JCP Awards Program is an attempt to say “Thank You” to all of the JCP members, Expert Group Members, Spec Leads, and EC members who give their time to contribute to the evolution of Java technology.

    Read the article

  • Take a chance !

    - by Hartmut Wiese
    Hi everybody,

    Later today I am going to reach out to the JDE Partners in EMEA I am already in contact with and ask for participation and collaboration within the new EMEA JDE Partner Community. I am very excited about this community and I really believe we will have much more success in the future selling and implementing JDEdwards in this large region.

    For those who don't know me yet ... I have been in the JDEdwards business for a really long time. I have been a JDE PreSales Consultant and joined JDEdwards in 1998 in Germany. After JDEdwards/PeopleSoft was acquired by Oracle I changed my role and became responsible, on an EMEA level, for the Oracle Accelerate and the Oracle Business Accelerator programs. A lot of you already know me ... and hopefully believe and trust me as well. Within the last five months I have talked to approx. 60 partners face-to-face during the various events I attended. We have already delivered two PreSales Universities, and I have been to one JDE Exsite event, a JDE Executive Forum, two user group events and one JDE Partner Event. Again, approximately 60+ partner discussions, and everybody likes the idea of the community and how I am going to run it in the future.

    At the JDEdwards UK User Group event (NOV 13) there was an external speaker talking about risk. It was a very good speech. One key element of his speech was that a sequence of (small) failures might lead to a big success. He gave very good examples from history, not software related at all, but as a result some of the well-known individuals everybody knows today started very small and failed several times before they became successful. But these persons did not give up, and in the long run they won and succeeded. I really spent some time reflecting on how this applies to our business today.

    My intention in writing these lines is to convince each partner out there to think about investing in JDEdwards TODAY. There are currently a number of potential investment ideas on the table for you. We have a very strong and powerful ERP system. We have advantages over all our competitors. Each partner has the ability to create his own SaaS model and deliver individual services to the customers. We also have three Business Accelerators available which really speed up the implementation while still giving you full flexibility to change, for example, any processing option if needed. A huge number of customers are on old releases globally and are thinking about upgrading. New technology makes new business processes available (e.g. iPad). Oracle is a pretty much forward-looking company and we build tools and products. In the area of JDEdwards our partners are combining the Oracle tools and products and bringing the value to the customers.

    At one point in time you decided to run your business on your own and to become a JDE/PSFT/ORCL partner. This was a risk, of course, at that point in time. You did not fail, and this is very good of course. Business has changed and Oracle has the products and tools for you to become even more successful in the future, but it is a very good time for you to take a risk again. I am not able to promise you anything, but the situation is very good. You might not win every deal or increase your margin immediately, but I truly believe you will find new ways of doing your business in the future by adopting some of our ideas. The only person who can stop you ... is you. Please try something new/different. Success sometimes needs some time and initial failures, but if you never failed - you have never lived.

    To get support during this phase please share your doubts, thoughts and experiences inside the new JDEdwards community and learn from others who went through similar processes. Please join here.

    Take care and best regards,
    Hartmut Wiese

    Read the article

  • Be the surgeon

    - by Rob Farley
    It’s a phrase I use often, especially when teaching, and I wish I had realised the concept years earlier. (And of course, fits with this month’s T-SQL Tuesday topic, hosted by Argenis Fernandez) When I’m sick enough to go to the doctor, I see a GP. I used to typically see the same guy, but he’s moved on now. However, when he has been able to roughly identify the area of the problem, I get referred to a specialist, sometimes a surgeon. Being a surgeon requires a refined set of skills. It’s why they often don’t like to be called “Doctor”, and prefer the traditional “Mister” (the history is that the doctor used to make the diagnosis, and then hand the patient over to the person who didn’t have a doctorate, but rather was an expert cutter, typically from a background in butchering). But if you ask the surgeon about the pain you have in your leg sometimes, you’ll get told to ask your GP. It’s not that your surgeon isn’t interested – they just don’t know the answer. IT is the same now. That wasn’t something that I really understood when I got out of university. I knew there was a lot to know about IT – I’d just done an honours degree in it. But I also knew that I’d done well in just about all my subjects, and felt like I had a handle on everything. I got into developing, and still felt that having a good level of understanding about every aspect of IT was a good thing. This got me through for the first six or seven years of my career. But then I started to realise that I couldn’t compete. I’d moved into management, and was spending my days running projects, rather than writing code. The kids were getting older. I’d had a bad back injury (ask anyone with chronic pain how it affects  your ability to concentrate, retain information, etc). But most of all, IT was getting larger. I knew kids without lives who knew more than I did. And I felt like I could easily identify people who were better than me in whatever area I could think of. Except writing queries (this was before I discovered technical communities, and people like Paul White and Dave Ballantyne). And so I figured I’d specialise. I wish I’d done it years earlier. Now, I can tell you plenty of people who are better than me at any area you can pick. But there are also more people who might consider listing me in some of their lists too. If I’d stayed the GP, I’d be stuck in management, and finding that there were better managers than me too. If you’re reading this, SQL could well be your thing. But it might not be either. Your thing might not even be in IT. Find out, and then see if you can be a world-beater at it. But it gets even better, because you can find other people to complement the things that you’re not so good at. My company, LobsterPot Solutions, has six people in it at the moment. I’ve hand-picked those six people, along with the one who quit. The great thing about it is that I’ve been able to pick people who don’t necessarily specialise in the same way as me. I don’t write their T-SQL for them – generally they’re good enough at that themselves. But I’m on-hand if needed. Consider Roger Noble, for example. He’s doing stuff in HTML5 and jQuery that I could never dream of doing to create an amazing HTML5 version of PivotViewer. Or Ashley Sewell, a guy who does project management far better than I do. I could go on. My team is brilliant, and I love them to bits. We’re all surgeons, and when we work together, I like to think we’re pretty good! @rob_farley

    Read the article

  • It's like I'm in recovery mode after update, but I'm not

    - by mawburn
    I used the Ubuntu software updater and updated to the most recent packages. After the last update today, it's like I have gone into recovery mode, but I haven't. I am running UbuntuGNOME First, everything looks like this: Switching to dark mode does nothing. Also, default applications do not work. Such as Startup and the default screenshot application. Everything was working fine before the latest software update. System Info Ubuntu 14.04 LTS Gnome-Shell 3.10.4 Kernel 3.13.0-29 I can't figure out how to get an update history, but this is almost a fresh install. It's about a week old install and this is the 3rd time I've used the Ubuntu Software Update. I am running AMD ATI HD6700 with the proprietary Catalyst drivers. I tried to provide all information that I thought would be useful, if you need any more please let me know. Edit - I believe something went wrong within these updates: Update Log: Start-Date: 2014-06-09 19:07:07 Commandline: aptdaemon role='role-commit-packages' sender=':1.68' Install: libgnome-desktop-3-10:amd64 (3.12.0-0~eugenesan~trusty2) Upgrade: gnome-session-common:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), gnome-session-bin:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), gir1.2-gnomedesktop-3.0:amd64 (3.8.4-0ubuntu3, 3.12.0-0~eugenesan~trusty2), gnome-session:amd64 (3.9.90-0ubuntu12, 3.12.0-0~eugenesan~trusty10), python-libxml2:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libspice-server1:amd64 (0.12.4-0nocelt2, 0.12.4-0nocelt2.02~eugenesan~trusty1), gir1.2-mutter-3.0:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), xserver-xorg-video-qxl:amd64 (0.1.1-0ubuntu3, 0.1.1-0ubuntu3.01), libxml2:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libxml2:i386 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), gnome-desktop3-data:amd64 (3.8.4-0ubuntu3, 3.12.0-0~eugenesan~trusty2), mutter:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), mutter-common:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1), libxml2-utils:amd64 (2.9.1+dfsg1-3ubuntu4.1, 2.9.1+dfsg1-3ubuntu4.2), libmutter0c:amd64 (3.10.4-0ubuntu2, 3.10.4-0ubuntu2.1) End-Date: 2014-06-09 19:07:12 I also installed Citrix Receiver today, following the tutorial here: Citrix Receiver 12.1 on Ubuntu 14.04 64-bit Log Start-Date: 2014-06-09 18:59:06 Commandline: apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386 libxp6:i386 libxpm4:i386 libasound2:i386 Install: libmotif-common:amd64 (2.3.4-5, automatic), libatk1.0-0:i386 (2.10.0-2ubuntu2, automatic), libxft2:i386 (2.3.1-2, automatic), libgraphite2-3:i386 (1.2.4-1ubuntu1, automatic), nspluginviewer:i386 (1.4.4-0ubuntu5, automatic), libpango-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libxcursor1:i386 (1.1.14-1, automatic), libmotif4:i386 (2.3.4-5), libxm4:amd64 (2.3.4-5, automatic), libxm4:i386 (2.3.4-5, automatic), libxp6:i386 (1.0.2-1ubuntu1), libpangocairo-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libxcb-render0:i386 (1.10-2ubuntu1, automatic), libthai0:i386 (0.1.20-3, automatic), libharfbuzz0b:i386 (0.9.27-1, automatic), libpixman-1-0:i386 (0.30.2-2ubuntu1, automatic), libpangoft2-1.0-0:i386 (1.36.3-1ubuntu1, automatic), libcairo2:i386 (1.13.0~20140204-0ubuntu1, automatic), lib32z1:amd64 (1.2.8.dfsg-1ubuntu1), libjasper1:i386 (1.900.1-14ubuntu3, automatic), libgtk2.0-0:i386 (2.24.23-0ubuntu1.1, automatic), nspluginwrapper:amd64 (1.4.4-0ubuntu5), libuil4:amd64 (2.3.4-5, automatic), libuil4:i386 (2.3.4-5, automatic), libxcb-shm0:i386 (1.10-2ubuntu1, automatic), libxmu6:i386 (1.1.1-1, automatic), libc6-i386:amd64 (2.19-0ubuntu6), libxinerama1:i386 
(1.1.3-1, automatic), libgdk-pixbuf2.0-0:i386 (2.30.7-0ubuntu1, automatic), libxcomposite1:i386 (0.4.4-1, automatic), libmrm4:amd64 (2.3.4-5, automatic), libmrm4:i386 (2.3.4-5, automatic), libdatrie1:i386 (0.2.8-1, automatic), libxrandr2:i386 (1.4.2-1, automatic), libxpm4:i386 (3.5.10-1) End-Date: 2014-06-09 18:59:11

    Read the article

  • Visual Studio Extensions

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2013/10/18/visual-studio-extensions.aspx

    As a product, Visual Studio has been around for a long time. In fact, it’s been 18 years since the first Visual Studio product was launched. In that time, there have been some major changes, but perhaps the most important (or at least influential) changes for the course of the product have been in the last few years. While we can argue over what was and wasn’t an important change or what has and hasn’t changed, I want to talk about what I think is the single most important change Microsoft has made to Visual Studio. Specifically, I’m referring to the Visual Studio Gallery (first introduced in Visual Studio 2010) and the ability for third parties to easily write extensions which can add new functionality to Visual Studio or even change existing functionality. I know Visual Studio had this ability before the Gallery existed, but it was expensive (both from a financial and a development resource perspective) for a company or individual to write such an extension. The Visual Studio Gallery changed all of that. As of today, there are over 4000 items in the Gallery. Microsoft itself has over 100 items in the Gallery and more are added all of the time.

    Why is this such an important feature? Simply put, it allows third parties (companies such as JetBrains, Telerik, Red Gate, Devart, and DevExpress, just to name a few) to provide enhanced developer productivity experiences directly within the product by providing new functionality or changing existing functionality. However, there is an even more important function that it serves. It also allows Microsoft to do the same. By providing extensions which add new functionality or change existing functionality, Microsoft is not only able to rapidly innovate on new features and changes but to also get those changes into the hands of developers world-wide for feedback. The end result is that these extensions become very robust and often end up becoming part of a later product release.

    An excellent example of this is the new CodeLens feature of Visual Studio 2013. This is, perhaps, the single most important developer productivity enhancement released in the last decade and already has huge potential. As you can see, out of the box CodeLens supports showing you information about references, unit tests and TFS history. Fortunately, CodeLens is also accessible to Visual Studio extensions, and Microsoft DevLabs has already written such an extension to show code “health.” This extension shows different code metrics to help make sure your code is maintainable.

    At this point, you may have already asked yourself, “With over 4000 extensions, how do I find ones that are good?” That’s a really good question. Fortunately, the Visual Studio Gallery has a ratings system in place, which definitely helps, but that’s still a lot of extensions to look through. To that end, here is my personal list of favorite extensions. This is something I started back when Visual Studio 2010 was first released, but so much has changed since then that I thought it would be good to provide an updated list for Visual Studio 2013. These are extensions that I have installed and use on a regular basis as a developer and that I find indispensable. This list is in no particular order.

      - NuGet Package Manager for Visual Studio 2013
      - Microsoft CodeLens Code Health Indicator
      - Visual Studio Spell Checker
      - Indent Guides
      - Web Essentials 2013
      - VSCommands for Visual Studio 2013
      - Productivity Power Tools (right now this is only for Visual Studio 2012, but it should be updated to support Visual Studio 2013)

    Everyone has their own set of favorites, so mine is probably not going to match yours. If there is an extension that you really like, feel free to leave me a comment!

    Read the article

  • Hey Retailers, Are You Ready For The Holiday Season?

    - by Jeri Kelley
    With online holiday spending reaching $35.3 billion in 2011 and American shoppers spending just under $750 on average on their holiday purchases this year, how ready is your business for the 2012 holiday season? Today’s shoppers do not take their purchases lightly. They are more connected, interact with more resources to make decisions, diligently compare products and services, seek out the best deals, and ask for input from friends and family. This holiday season, as consumers browse for apparel, tablets, toys, and much more, they will be bombarded with retailer communication - from emails and commercials to countless search engine results and social recommendations. With a flurry of activity coming at consumers from every channel and competitor, your success this year will rely on communicating a consistent, personalized message no matter where your customers are shopping. Here are a few ideas to help with your commerce strategy this holiday season:

    CONSISTENCY COUNTS FOR MULTICHANNEL SHOPPERS
    According to a November 2011 study commissioned by Oracle, “Channel Commerce 2011: The Consumer View,” 54% of consumers in the U.S. and Canada regularly employ two or more channels before they make a purchase. While each channel has its own unique benefit, user profile, and purpose, it’s critical that your shoppers have a consistent core experience wherever they’re looking for information or making a purchase. Be sure consumers can consistently search and browse the same product information and receive the same promotions online, on their mobile devices, and in-store.

    USE YOUR CUSTOMER’S CONTEXT TO SURFACE RELEVANT CONTENT
    Your Web site is likely the hub of your holiday activity. According to a Monetate infographic, 39% of shoppers will visit your Web site directly to find out about the best holiday deals. Use everything you know about your customers, from past purchase data to browsing history, to provide a relevant experience at every click, and assemble content in a context that entices shoppers to buy online, or influences an offline purchase.

    TAKE ADVANTAGE OF MOBILE BEHAVIOR
    Having a mobile program is no longer a choice. Armed with smartphones and tablets, consumers now have access to more and more product information and can compare products and prices from anywhere. In fact, approximately 52% of smartphone users will use their device to research products, redeem coupons and use apps to assist in their holiday gift purchase. At a minimum, be sure your mobile environment has store information, consistent pricing and promotions, and simple checkout capabilities.

    ARM IN-STORE ASSOCIATES WITH TABLETS
    According to RISNews.com, 31% of retailers plan to begin testing tablets in stores in 2012, 22% have already begun such testing and 6% had fully deployed tablets within stores. Take advantage of this compelling sales tool to get shoppers interacting with videos, user reviews, how-to guides, side-by-side product comparisons, and specs. Automatically trigger upsell and cross sell suggestions for store associates to recommend for each product or category, build in alerts for promotions, and allow associates to place orders and check inventory from their tablet.

    WISDOM OF THE CROWDS IS GOOD, BUT WISDOM FROM FRIENDS IS BETTER
    Shoppers who grapple with options are looking for recommendations; they’d rather get advice from friends, and they’re more likely to spend more while doing so. In fact, according to an infographic by Mr. Youth, 66% of social media users made a purchase on Black Friday or Cyber Monday as a direct result of social media interactions with brands or family. This holiday season, be sure you are leveraging your social channels from Facebook to Pinterest to drive consistent promotions and help your brand to become part of the conversation.

    So, are you ready for the holidays this year?

    Read the article

  • Tuning Default WorkManager - Advantages and Disadvantages

    - by Murali Veligeti
    Before discussing tuning of the Default WorkManager, let's have a brief introduction on what the Default WorkManager is.

    Before the WebLogic Server 9.0 release, we had the concept of Execute Queues. In WebLogic Server (before WLS 9.0), processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. In addition to the default execute queue, weblogic.kernel.default, there were pre-configured queues dedicated to internal administrative traffic, such as weblogic.admin.HTTP and weblogic.admin.RMI. Users could control thread usage by altering the number of threads in the default queue, or configure custom execute queues to ensure that particular applications had access to a fixed number of execute threads, regardless of overall system load.

    From the WLS 9.0 release onwards, WebLogic Server uses a single thread pool (called the Default WorkManager), in which all types of work are executed. WebLogic Server prioritizes work based on rules you define and on run-time metrics, including the actual time it takes to execute a request and the rate at which requests are entering and leaving the pool. The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and, based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the thread count. This new strategy makes it easier for administrators to allocate processing resources and manage performance, avoiding the effort and complexity involved in configuring, monitoring, and tuning custom execute queues.

    The Default WorkManager is used to handle thread management and perform self-tuning. This Work Manager is used by an application when no other Work Managers are specified in the application's deployment descriptors. In many situations, the default Work Manager may be sufficient for most application requirements. WebLogic Server's thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them.

    The default work-manager, as its name suggests, is the work-manager used by default. Thus, all applications deployed on WLS will use it. But sometimes, when your application is already in production, it's obvious you can't take your EAR/WAR, update the deployment descriptor(s) and redeploy it. The default work-manager belongs to a thread pool, and as the initial thread pool comes with only five threads, that's not much. If your application has to face a large number of hits, you may want to start with more than that. Well, that's quite easy. You have two options to do so.

    1) Modify the config.xml

    Just add the following line(s) in your server definition:

      <server>
        <name>AdminServer</name>
        <self-tuning-thread-pool-size-min>100</self-tuning-thread-pool-size-min>
        <self-tuning-thread-pool-size-max>200</self-tuning-thread-pool-size-max>
        [...]
      </server>

    2) Add some JVM parameters

    Add the following system properties in setDomainEnv.sh/setDomainEnv.cmd or startWebLogic.sh/startWebLogic.cmd:

      -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=100

    Reboot WLS and see that the option has been taken into account.

    Disadvantage: So far, it's fine. But there is a disadvantage in tuning the Default WorkManager. Internally, WebLogic Server has many work managers configured for different types of work. If we run out of threads in the self-tuning pool (because of the system property -Dweblogic.threadpool.MaxPoolSize) due to being undersized, then important work that WLS might need to do could be starved. So, limiting the self-tuning pool not only limits the default WorkManager but also all the other internal WorkManagers which WLS uses. The best alternative, therefore, is to override the default WorkManager - that means creating a WorkManager for the application and assigning that WorkManager to the application instead of tuning the Default WorkManager.
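    As a rough illustration of that alternative (the names and values below are made up, and the exact elements and namespace should be checked against the WebLogic documentation for your release), a custom Work Manager scoped to a single web application could be declared in its weblogic.xml roughly like this:

      <?xml version="1.0" encoding="UTF-8"?>
      <weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
        <!-- Application-scoped Work Manager with its own thread constraints -->
        <work-manager>
          <name>MyAppWorkManager</name>
          <min-threads-constraint>
            <name>MyAppMinThreads</name>
            <count>10</count>
          </min-threads-constraint>
          <max-threads-constraint>
            <name>MyAppMaxThreads</name>
            <count>100</count>
          </max-threads-constraint>
        </work-manager>
        <!-- Dispatch this application's requests to the custom Work Manager -->
        <wl-dispatch-policy>MyAppWorkManager</wl-dispatch-policy>
      </weblogic-web-app>

    This way the application gets its own thread constraints, while the internal WorkManagers that WLS itself depends on keep their self-tuning defaults.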

    Read the article

  • User Productivity Kit - Powerful Packages (Part 1)

    - by [email protected]
    User Productivity Kit provides the ability to create a variety of content types including robust topics on system process and web pages with formatted text and graphics. There are times when you want to enhance content with media types not natively created by User Productivity Kit, media types such as video, custom animations, forms, and more. One method of doing this is to maintain these media files on a web server - separate from the User Productivity Kit player content - and link to the files using absolute URLs such as http://myserver/overview.html. While this will get you going, you won't benefit from the content management capabilities of the UPK Developer. Features such as check-in / check-out, history, document properties, folder permissions and more are not available to this external content. Further, if you ever need to move that content to a server with a different name or domain, you'd need to update all your links.

    UPK version 3.1 introduced a new document type - the package. A package is a group of folders and files that you manage in the Developer library as a single document. These package documents work in the same manner as any other document in the library and you can use all of the collaborative content development features you see with other document types. Packages can be used for anything from single Word documents, PDF files, and graphics to more intricate sets of inter-related files commonly seen with HTML files and their graphics, style sheets, and JavaScript files. The structure of the files and folders within a package will always be preserved, so this means that any relative links between files in the package will work. For example, an HTML file containing an image tag with a relative link to a graphic elsewhere in the same package will continue to function properly both when viewed in the Developer and when published to outputs such as the UPK Player. Once you start to use packages, you'll soon discover that there is a lot of existing content that can be re-purposed by placing it into UPK packages. Packages are easily created by selecting File...New...Package. Files can be added in a number of ways including the "Add Files" button, copy & paste from Windows Explorer, and drag & drop. To use one of the files in the package, just create a link to the file in the package you want to target. This is supported throughout the Developer in places such as section & topic concepts, frame links and hyperlinks in web pages.

    A little more challenging is determining how to structure packages in your library. As I mentioned earlier, a package can contain anything from a single file to dozens of files and folders. So what should you do? You could create a package for each file. You could create one package for all your files. But which one is right? Well, there's not a right and wrong answer to this question. There are advantages and disadvantages to each. The right decision will be influenced by the package files themselves, the structure of the content in the library, the size and working style of the development team, how content is shared between different outlines and more. The first consideration can be assessed the quickest. If the content to be placed in the package is composed of multiple files and those files reference each other, they should be in the same package. There are loads of examples of this type of content.
    HTML files with graphics and style sheets, HTML files with embedded Flash movies, and Word documents saved as HTML are all examples where the content is composed of multiple files and the files reference each other in some way. Content like this should always be placed in a single package such that these relative links between the files are preserved and play properly in the UPK Player. In upcoming posts, I'll explain additional considerations.

    Read the article

  • DISA Cross Domain Enterprise Solutions on the NetBeans Platform

    - by Geertjan
    Bray 2.0 is a tool based on the NetBeans Platform that assists in creating valid Data Flow Configuration (DFC) files. The DFC Specification was developed to provide a standardized way for defining, validating, and approving data flows for use on cross-domain guarding solutions. A DFC document specifies key entities such as security domains, guards that facilitate data between security domains, data flows that describe how data travels between security domains, filters that transform and validate the data and more. Related info: http://www.disa.mil/Services/Information-Assurance/Cross-Domain-Solutions The Bray product is in development at Fulcrum IT (http://www.fulcrumco.com). The DFC Specification and Bray were developed in support of the US Department of Defense. Bray 2.0 marks the first release of Bray on the NetBeans Platform and utilizes a number of features that are core to the NetBeans Platform: Modular plugability. Bray consumers can integrate their own tools, file types, and more into the product with relative ease. Robust UI. The NetBeans Platform intuitive UI makes it easy to access and manipulate multiple aspects of a DFC. Explorer. The Explorer is a key component that makes the DFC XML easy to traverse, edit, and find errors. Context-sensitive help. JavaHelp can be readily integrated for the product as well as all the UI within. Editors. Any external file can be added to a DFC. Users can register their own editors or use the provided NetBeans editors to edit files. Printing. The NetBeans Platform Print API makes it easy to determine what should be printed and how.   A screenshot: Bray 2.0 provides a lot of key features in developing valid, robust DFC files:  XML validation. A DFC can be validated against the DFC schema specification. DFC Check List. An interactive, minimal guide for creating a complete DFC. Summary Window. The Summary Window functions like the Navigator in NetBeans IDE. The current "item of interest" is checked against various business rules and provides the ability to quickly find and fix errors. Change Log. Bray audits every change to a DFC and places them in a change log for users to peruse. Comments. Users can optionally add comments for other users to see. Digital signatures. DFC files can be digitally signed. A signature history and signature validation is provided in Bray. Pluggable security schemes. Bray ships with plain text and IC-ISM security schemes. If needed, users can integrate additional ones.  ...and more to come! New features for Bray are constantly in development including use of the NetBeans Visual Library, language support, and more. More screenshots:

    Read the article

  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage:

    Frequent local commits
    This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks. It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points. A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally. There should be several of these in any given day. 2 hours is a good indicator that you might not be leveraging the power of frequent local commits. Once you have verified a set of changes works, save them away; otherwise you run the risk of introducing bugs into it when working on the next task.

    The notion of a task
    By task I mean a related set of changes that can be completed in a few hours or less. By the same token, don't make your tasks so small that critically related changes aren't grouped together. Use your intuition and the rest of these principles and I think you will find what is comfortable for you.

    Partial commits
    Sometimes one task explodes or unknowingly encompasses other tasks; at this point, try to get to a stopping point on part of the work you are doing and commit it so you can get that out of the way to focus on the remainder. This will often entail committing part of the work and continuing on the rest.

    Outstanding changes as a guide
    If you don't commit often it might mean you are not leveraging your version control history to help guide your work. It's a great way to see what has changed and might be causing problems. The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed. This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two-way diff with SmartGit) of the current selected file and a commit message all in one window that I keep maximized on one monitor at all times.

    Throw away / stash commits
    There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand. If you do not commit often you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors. I find myself doing this about once a week, especially when doing exploratory re-factoring. It's much easier if I can just revert all outstanding changes.

    Sync with the central repository daily
    The rest of us depend on your changes. Don't let them sit on your computer longer than they have to. Waiting increases the chances of merge conflict, which just decreases productivity. It also prohibits us from doing deploys when people say they are done but have not merged centrally. This should be done daily! Find a way to partition the work you are doing so that you can sync at least once daily.

    Things I discourage:

    Lots of partial commits right at the end of a series of changes
    If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't frequently committing, nor were you watching for the size of the task expanding beyond a single commit. Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever-growing list of changes.

    Committing single files
    Committing single files means you waited too long and no longer understand all the changes involved. It may mean there were overlapping changes in single files that cannot be isolated. In either case, go back to the suggestions above to avoid this. Committing frequently does not mean committing frequently right at the end of a day's work. It should be spaced out over the course of several tasks, not all at the end in a 5 minute window.
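    Purely as an illustration of the rhythm described above (commands shown for Git; TortoiseHg/Mercurial users have direct equivalents), a typical day might look roughly like this - the task names are made up, the cadence is the point:

      # work on one small task, committing locally at each verified stopping point
      git add -p                       # stage only the hunks that belong to this task
      git commit -m "Task A: extract validation into a helper"

      # exploratory change getting out of hand? park it instead of fighting it
      git stash                        # or: git stash -u to include untracked files

      # sync with the central repository at least once a day
      git pull --rebase                # or merge, per your team's convention
      git push

    Several small local commits, an occasional stash or revert, and a daily sync - that is the whole habit in miniature.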

    Read the article

  • Eclipse no longer useful

    - by dgood1
    When I got my Eclipse from the Ubuntu Software Center, it was good and worked fine. I could work on Java projects fine. This week I was required to add ADT and tried the ADT-bundle, assuming it had everything I needed, seeing that the SDK had more steps. So now, I can create Android apps using the ADT-bundle. I tried to work on my Java projects again and I now discovered:

      - I can't run my Java projects: "The selection cannot be launched. And there are no recent launches." error.
      - I also believe Eclipse doesn't know it's a Java program because it's all in black and white. Not the usual green/blue/red/black things when making comments, variables and Strings.
      - I can't make new projects of ANYTHING unless I use the adt-bundle. New project only offers CVS (whatever that is).
      - My perspectives seem limited. I remembered more choices and now I'm limited to [Java], Resource, CVS Repository, debug, Team Sync. I was told to be able to use perspectives to swap between Android and Java developing. Even after the ADT installation using "Install new Software", nothing.
      - I can't uninstall/purge/remove Eclipse via the terminal. I tried removing it then reinstalling it via the Ubuntu Software Center. No results other than its temporary removal.
      - (Possibly unrelated) A large number of repositories are not found when updating Eclipse. (See Step 8 in Summary of what I did...)

    Although, on checking the versions and installation history, I confirmed Android and Java are installed. It probably just doesn't know it's there.

    Eclipse Indigo: Version: 3.7.2, Build id: I20110613-1736

    Summary of what I did before and during the problem: Downloaded adt-bundle. Attempted instructions from teacher. (Install new Software) (Failed but other than an annoying "can't find repository" during each update, no damage to report) (Fixed) Ran "eclipse" executable from the adt-bundle. Updated Eclipse. (After restart, I noticed the problem) NOTE: other than window arrangement, I had no customizations. Played around with the Window > Preferences and Project > Properties settings. Restored to default settings after no results. Tried "apt-get purge eclipse". Couldn't find Eclipse so, nothing happened. Used Software Center. No results. Tried swapping workspaces. I tried a different folder, a deeper folder, renaming. All return the same problem. Deleted adt-bundle (browsed folders then deleted). Got the adt-sdk only. Installed. Can't find any changes other than some disk space usage. Of course, I can't make Android apps until I unzip the bundle again. Window > Preferences > Install/Update > Available Software Sites: checked as many repositories as possible, then updated. Still nothing. I'm about to have a second try at uninstalling it, because I think my last action will just be taking up space. But I'll wait for tomorrow, in case the answer will help. Any thoughts?

    Read the article

  • JavaOne 2012 Sunday Strategy Keynote

    - by Janice J. Heiss
    At the Sunday Strategy Keynote, held at the Masonic Auditorium, Hasan Rizvi, EVP, Middleware and Java Development, stated that the theme for this year's JavaOne is: “Make the future Java”-- meaning that Java continues in its role as the most popular, complete, productive, secure, and innovative development platform. But it also means, he qualified, the process by which we make the future Java -- an open, transparent, collaborative, and community-driven evolution. "Many of you have bet your businesses and your careers on Java, and we have bet our business on Java," he said.Rizvi detailed the three factors they consider critical to the success of Java--technology innovation, community participation, and Oracle's leadership/stewardship. He offered a scorecard in these three realms over the past year--with OS X and Linux ARM support on Java SE, open sourcing of JavaFX by the end of the year, the release of Java Embedded Suite 7.0 middleware platform, and multiple releases on the Java EE side. The JCP process continues, with new JSR activity, and JUGs show a 25% increase in participation since last year. Oracle, meanwhile, continues its commitment to both technology and community development/outreach--with four regional JavaOne conferences last year in various part of the world, as well as the release of Java Magazine, with over 120,000 current subscribers. Georges Saab, VP Development, Java SE, next reviewed features of Java SE 7--the first major revision to the platform under Oracle's stewardship, which has included near-monthly update releases offering hundreds of fixes, performance enhancements, and new features. Saab indicated that developers, ISVs, and hosting providers have all been rapid adopters of the platform. He also noted that Oracle's entire Fusion middleware stack is supported on SE 7. The supported platforms for SE 7 has also increased--from Windows, Linux, and Solaris, to OS X, Linux ARM, and the emerging ARM micro-server market. "In the last year, we've added as many new platforms for Java, as were added in the previous decade," said Saab.Saab also explored the upcoming JDK 8 release--including Project Lambda, Project Nashorn (a modern implementation of JavaScript running on the JVM), and others. He noted that Nashorn functionality had already been used internally in NetBeans 7.3, and announced that they were planning to contribute the implementation to OpenJDK. Nandini Ramani, VP Development, Java Client, ME and Card, discussed the latest news pertaining to JavaFX 2.0--releases on Windows, OS X, and Linux, release of the FX Scene Builder tool, the JavaFX WebView component in NetBeans 7.3, and an OpenJFX project in OpenJDK. Nandini announced, as of Sunday, the availability for download of JavaFX on Linux ARM (developer preview), as well as Scene Builder on Linux. She noted that for next year's JDK 8 release, JavaFX will offer 3D, as well as third-party component integration. Avinder Brar, Senior Software Engineer, Navis, and Dierk König, Canoo Fellow, next took the stage and demonstrated all that JavaFX offers, with a feature-rich, animation-rich, real-time cargo management application that employs Canoo's just open-sourced Dolphin technology.Saab also explored Java SE 9 and beyond--Jigsaw modularity, Penrose Project for interoperability with OSGi, improved multi-tenancy for Java in the cloud, and Project Sumatra. 
Phil Rogers, HSA Foundation President and AMD Corporate Fellow, explored heterogeneous computing platforms that combine the CPU and the parallel processor of the GPU into a single piece of silicon and shared memory—a hardware technology driven by such advanced functionalities as HD video, face recognition, and cloud workloads. Project Sumatra is an OpenJDK project targeted at bringing Java to such heterogeneous platforms--with hardware and software experts working together to modify the JVM for these advanced applications and platforms. Ramani next discussed the latest with Java in the embedded space--"the Internet of things" and M2M--declaring this to be "the next IT revolution," with Java as the ideal technology for the ecosystem. Last week, Oracle released Java ME Embedded 3.2 (for micro-controllers and low-power devices), and Java Embedded Suite 7.0 (a middleware stack based on Java SE 7). Axel Hansmann, VP Strategy and Marketing, Cinterion, explored his company's use of Java in M2M, and their new release of EHS5, the world's smallest 3G-capable M2M module, running Java ME Embedded. Hansmann explained that Java offers them the ability to create a "simple to use, scalable, coherent, end-to-end layer" for such diverse edge devices. Marc Brule, Chief Financial Officer, Royal Canadian Mint, also explored the fascinating use-case of JavaCard in his country's MintChip e-cash technology--deployable on smartphones, USB device, computer, tablet, or cloud. In parting, Ramani encouraged developers to download the latest releases of Java Embedded, and try them out. Cameron Purdy, VP, Fusion Middleware Development and Java EE, summarized the latest developments and announcements in the Enterprise space--greater developer productivity in Java EE 6 (with more on the way in EE 7), portability between platforms, vendors, and even cloud-to-cloud portability. The earliest version of the Java EE 7 SDK is now available for download--in GlassFish 4--with WebSocket support, better JSON support, and more. The final release is scheduled for April of 2013. Nicole Otto, Senior Director, Consumer Digital Technology, Nike, explored her company's Java technology driven enterprise ecosystem for all things sports, including the NikeFuel accelerometer wrist band. Looking beyond Java EE 7, Purdy mentioned NoSQL database functionality for EE 8, the concurrency utilities (possibly in EE 7), some of the Avatar projects in EE 7, some in EE 8, multi-tenancy for the cloud, supporting SaaS applications, and more. Rizvi ended by introducing Dr. Robert Ballard, oceanographer and National Geographic Explorer in Residence--part of Oracle's philanthropic relationship with the National Geographic Society to fund K-12 education around ocean science and conservation. Ballard is best known for having discovered the wreckage of the Titanic. He offered a fascinating video and overview of the cutting-edge technology used in such deep-sea explorations, noting that in his early days, high-bandwidth exploration meant that you'd go down in a submarine and "stick your face up against the window." Now, it's a remotely operated, technology telepresence--"I think of my Hercules vehicle as my equivalent of a Na'vi. When I go beneath the sea, I actually send my spirit." Using high bandwidth satellite links, such amazing explorations can now occur via smartphone, laptop, or whatever platform. Ballard's team regularly offers live feeds and programming out to schools and the world, spanning 188 countries--with educators embedded as part of the expeditions.
It's technology at its finest, inspiring the next-generation of scientists and explorers!

    Read the article

  • How to undo a changeset using tf.exe rollback

    - by Tarun Arora
    Oh no! Did you just check in a changeset to TFS and realize that you need to roll back the changeset because the changes were supposed to go into a different branch? Or did you just accidentally merge the wrong changeset into your release branch? There are several ways to undo the damage. Manual: Yes, we all just hate this word, but for the record you could manually roll back the changes. Get a specific version on the branch and choose the changeset prior to the one you checked in. After that, check out all the files in the changeset and check them in. During the check-in you will receive a conflict. At this point choose ‘Keep local changes’ in the conflict resolution window and check in the files. Automated: Yes, we just love it! TFS comes with a very powerful command line utility ‘tf.exe’ that gives you the ability to roll back the effects of one or more changesets to one or more version-controlled items. This command does not remove the changesets from an item's version history. Instead, it creates in your workspace a set of pending changes that negate the effects of the changesets that you specify. Syntax:
tf rollback /toversion:VersionSpec ItemSpec [/recursive] [/lock:none|checkin|checkout] [/version:versionspec] [/keepmergehistory] [/login:username,[password]] [/noprompt]
tf rollback /changeset:ChangesetFrom~ChangesetTo [ItemSpec] [/recursive] [/lock:none|checkin|checkout] [/version:VersionSpec] [/keepmergehistory] [/noprompt] [/login:username,[password]]
I'll explain this with an example. Your workspace is at the location C:\myWorkspace and you want to roll back changeset #145621:
C:\Workspace\MyBranch>tf.exe rollback /changeset:145621 /recursive
How do I roll back/undo a series of changesets? You can also roll back a range of changesets by using the following:
C:\Workspace\MyBranch>tf.exe rollback /changeset:145601~145621 /recursive
This will check out the files in version control and you should be able to see them in your pending changes. Go on, check them in to undo the specific changesets that you just rolled back. Do you want to completely exclude the changeset from all future merges between the two branches? /KeepMergeHistory: This option has an effect only if one or more of the changesets that you are rolling back include a branch or merge change. Specify this option if you want future merges between the same source and the same target to exclude the changes that you are rolling back. Errors: If you get the message ‘Unable to determine the workspace. You may be able to correct this by running tf workspaces /collection:TeamProjectCollectionUrl’, you are in the wrong directory. Make sure that you run the ‘tf rollback’ command from the directory of your workspace. Exit codes: 0 = the operation rolled back all items successfully; 1 = the operation rolled back at least one item successfully but could not roll back one or more items; 100 = the operation could not roll back any items. To use the command you must have the Read, Check Out, and Check In permissions set to Allow. So, have you been in a rollback situation before?

    Read the article

  • Book Review (Book 12) - 20 Master Plots

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for May 2012 was:20 Master Plots by Ronald B. Tobias. This is my final book review - at least for this year. I'll explain what I've learned in this book in particular, and in the last twelve months in general. Why I chose this book: Stories and themes are part of software, presenting, and working in teams. This book claims there are only 20 plots, ever. I wanted to find out. What I learned: Probably my most favorite read of the year. Deceptively small, amazingly insightful. The premise is that there are only a few "base" themes, and that once you learn them you can put together an interesting set of stories on most any topic. Yes, the author admits that this number has been different throughout history - some have said 50, others 14, and still others claim only one or two basic plots. This doesn't change the fact that you can build very complex stories from a simple set of circumstances and characters. Be warned - if you read this book it takes away much of the wonder from almost every movie or book you'll read from here on! I loved it. My favorite part is that the author gives you exercises to build stories, right from the start. I've actually used these as the start of a meeting to foster creativity. Amazing stuff. One of my favorite sections of the book deals with plot and story. Plot: The king died, and the queen died. Story: The king died, and the queen died of heartbreak. Add one or two words, and you have the essence of storytelling. A highly recommended read, for all folks of all ages. You'll like it, your spouse will like it, and your kids will like it. I learned to be a better storyteller, and it helped me understand that plots and stories are not just things in books - they are a direct reflection of human nature. That makes me a better manager of myself and others.   And this is the last of the reviews - at least for this year. I probably won't post many more book reviews here, but I will keep up the practice. As a reminder, the goal was to select 12 books that will help you reach your career goals. They don't have to be technical, or even apply directly to your job - but they do need to be books that you mindfully select as getting you closer to what you want to be. Each month, jot down what you learned from the work. And see if it doesn't in fact get you closer to your goals. These readings helped me - I got a promotion this year, and I attribute at least some of that to the things I learned.

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide a more streamlined insight into individual records, or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services. Example - Order History Constancy: Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see. Creating the Queries: Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high-performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.
SELECT CustomerID, CompanyName,
    (SELECT COUNT(OrderID) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
    DATEDIFF(mm, (SELECT MAX(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID), GETDATE()) AS MonthsSinceLastOrder
FROM Customers
Creating Views: To turn this or any query into a view, just put CREATE VIEW <view name> AS before it. If you want to change it later, use ALTER VIEW <view name> AS. Creating Computed Columns: If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification or check out this article here. Advanced Scoring and Ranking: One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow-up. You could devise a simple formula that shows the continuity of an account. If they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks to make this happen. We are extracting the count of unique months and then dividing by the months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus: number of orders, first order date, last order date, and the percentage of months in which an order was placed since the first order. You could sort by this percentage to know where to start calling, or to find patterns describing your best customers.
SELECT CustomerID,
    (SELECT COUNT(OrderID) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
    (SELECT MAX(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder,
    (SELECT MIN(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder,
    DATEDIFF(mm, (SELECT MIN(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID), GETDATE()) AS MonthsSinceFirstOrder,
    100 * (SELECT COUNT(DISTINCT 100 * DATEPART(yy, OrderDate) + DATEPART(mm, OrderDate)) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID)
        / DATEDIFF(mm, (SELECT MIN(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID), GETDATE()) AS OrderPercent
FROM Customers
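To illustrate the "Creating Views" step above, here is a minimal sketch that wraps the first (simpler) query shown earlier in a view and then ranks customers for follow-up. The view name and the ORDER BY choice are illustrative assumptions, not taken from the original article:
-- Hypothetical view name; the column logic comes from the first query above
CREATE VIEW dbo.CustomerOrderStats AS
SELECT CustomerID, CompanyName,
    (SELECT COUNT(OrderID) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
    DATEDIFF(mm, (SELECT MAX(OrderDate) FROM Orders WHERE Orders.CustomerID = Customers.CustomerID), GETDATE()) AS MonthsSinceLastOrder
FROM Customers;
GO
-- Customers who have gone longest without ordering float to the top
SELECT CustomerID, CompanyName, Orders, MonthsSinceLastOrder
FROM dbo.CustomerOrderStats
ORDER BY MonthsSinceLastOrder DESC;
Once the calculation lives in a view, any reporting or self-service tool can sort and filter on the computed columns without the application itself changing.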

    Read the article

  • Stale statistics on a newly created temporary table in a stored procedure can lead to poor performance

    - by sqlworkshops
    When you create a temporary table you expect a new table with no past history (statistics based on past existence); this is not true if you have fewer than 6 updates to the temporary table. This might lead to poor performance of queries which are sensitive to the content of temporary tables. I was optimizing SQL Server performance at one of my customers who provides search functionality on their website. They use a stored procedure with a temporary table for the search. The performance of the search depended on who searched what in the past; option (recompile) by itself had no effect. Sometimes a simple search led to a timeout because of non-optimal plan usage due to this behavior. This is not a plan caching issue, but rather a temporary table statistics caching issue, which is part of the temporary object caching feature that was introduced in SQL Server 2005 and is also present in SQL Server 2008 and SQL Server 2012. In this customer case we implemented a workaround to avoid this issue (see below for example workarounds). When temporary tables are cached, the statistics are not newly created but rather cached from the past and updated based on the automatic update statistics threshold. Caching temporary tables/objects is good for performance, but caching stale statistics from the past is not optimal. We can work around this issue by disabling temporary table caching by explicitly executing a DDL statement on the temporary table. One possibility is to execute an alter table statement, but this can lead to a duplicate constraint name error on concurrent stored procedure execution. The other way to work around this is to create an index. I think there might be many customers in such a situation without knowing that stale statistics are being cached along with the temporary table, leading to poor performance. The ideal solution would be a more aggressive statistics update when the temporary table has a small number of rows and temporary table caching is used. I will open a Connect item to report this issue. Meanwhile you can mitigate the issue by creating an index on the temporary table. You can monitor active temporary tables using the Windows Server Performance Monitor counter: SQL Server: General Statistics -> Active Temp Tables.
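As a small companion to the Performance Monitor counter mentioned above, the same value can also be read from T-SQL via sys.dm_os_performance_counters. This is a sketch only; the object_name filter is written loosely because the counter object prefix differs between default and named instances:
-- Current number of active temp tables, from the General Statistics counter object
SELECT object_name, counter_name, cntr_value AS active_temp_tables
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Active Temp Tables'
  AND object_name LIKE '%General Statistics%';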
The script to understand the issue and the workaround is listed below:
set nocount on
set statistics time off
set statistics io off
drop table tab7
go
create table tab7 (c1 int primary key clustered, c2 int, c3 char(200))
go
create index test on tab7(c2, c1, c3)
go
begin tran
declare @i int
set @i = 1
while @i <= 50000
begin
insert into tab7 values (@i, 1, 'a')
set @i = @i + 1
end
commit tran
go
insert into tab7 values (50001, 1, 'a')
go
checkpoint
go
drop proc test_slow
go
create proc test_slow @i int
as
begin
declare @j int
create table #temp1 (c1 int primary key)
insert into #temp1 (c1) select @i
select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
end
go
dbcc dropcleanbuffers
set statistics time on
set statistics io on
go
--high reads as expected for parameter '1'
exec test_slow 1
go
dbcc dropcleanbuffers
go
--high reads that are not expected for parameter '2'
exec test_slow 2
go
drop proc test_with_recompile
go
create proc test_with_recompile @i int
as
begin
declare @j int
create table #temp1 (c1 int primary key)
insert into #temp1 (c1) select @i
select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
option (recompile)
end
go
dbcc dropcleanbuffers
set statistics time on
set statistics io on
go
--high reads as expected for parameter '1'
exec test_with_recompile 1
go
dbcc dropcleanbuffers
go
--high reads that are not expected for parameter '2'
--low reads on 3rd execution as expected for parameter '2'
exec test_with_recompile 2
go
drop proc test_with_alter_table_recompile
go
create proc test_with_alter_table_recompile @i int
as
begin
declare @j int
create table #temp1 (c1 int primary key)
--to avoid caching of temporary tables one can create a constraint
--but this might lead to duplicate constraint name error on concurrent usage
alter table #temp1 add constraint test123 unique(c1)
insert into #temp1 (c1) select @i
select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
option (recompile)
end
go
dbcc dropcleanbuffers
set statistics time on
set statistics io on
go
--high reads as expected for parameter '1'
exec test_with_alter_table_recompile 1
go
dbcc dropcleanbuffers
go
--low reads as expected for parameter '2'
exec test_with_alter_table_recompile 2
go
drop proc test_with_index_recompile
go
create proc test_with_index_recompile @i int
as
begin
declare @j int
create table #temp1 (c1 int primary key)
--to avoid caching of temporary tables one can create an index
create index test on #temp1(c1)
insert into #temp1 (c1) select @i
select @j = t7.c1 from tab7 t7 inner join #temp1 t on (t7.c2 = t.c1)
option (recompile)
end
go
set statistics time on
set statistics io on
dbcc dropcleanbuffers
go
--high reads as expected for parameter '1'
exec test_with_index_recompile 1
go
dbcc dropcleanbuffers
go
--low reads as expected for parameter '2'
exec test_with_index_recompile 2
go

    Read the article

  • Get to Know a Candidate (3 of 25): Virgil Goode–Constitution Party

    - by Brian Lanham
    DISCLAIMER: This is not a post about “Romney” or “Obama”. This is not a post about whom I am voting for. Information sourced from Wikipedia. Meet Virgil Goode of the Constitution Party. Goode served as a Republican member of the United States House of Representatives from 1997 to 2009. He represented the 5th congressional district of Virginia. Goode was born in Richmond, Virginia, the son of Alice Clara (née Besecker) and Virgil Hamlin Goode. He has spent most of his life in Rocky Mount. Goode graduated with a B.A. from the University of Richmond (Phi Beta Kappa) and with a J.D. from the University of Virginia School of Law. He also is a member of Lambda Chi Alpha Fraternity and served in the Army National Guard from 1969 to 1975. Goode grew up as a Democrat. He entered politics soon after graduating from law school. At the age of 27, he won a special election to the state Senate from a Southside district as an independent after the death of the Democratic incumbent. One of his major campaign focuses at the time was advocacy for the Equal Rights Amendment. Soon after being elected, he joined the Democrats. Goode wore his party ties very loosely. He became famous for his support of the tobacco industry, expressing his fear that "his elderly mother would be denied 'the one last pleasure' of smoking a cigarette on her hospital deathbed." He was an ardent defender of gun rights while being an enthusiastic supporter of L. Douglas Wilder, who later became the first elected black governor in the history of the United States. At the Democratic Party's state political convention in 1985, Goode nominated Wilder for lieutenant governor. However, while governor, Wilder cracked down on the sale of guns in the state. After the 1995 elections resulted in a 20–20 split between Democrats and Republicans in the State Senate, Goode seriously considered voting with the Republicans on organizing the chamber. Had he done so, the State Senate would have been under Republican control for the first time since Reconstruction (the Republicans ultimately won control outright in 1999). Goode's actions at the time "forced his party to share power with Republican lawmakers in the state legislature," which further upset the Democratic Party. Goode is on the ballot in CA, FL, ID, IO, LA, MI, MN, MS, MI, NJ, NM, NY, NV, ND, OH, SC, SD, TN, UT, VA, WA, WI, WY. He is a write-in candidate in CA, CT, DC, GA, IL, IN, ME, MD, MA, MO, NC, TX, VT, WV. Constitution Party: This party was founded as the “U.S. Taxpayers' Party” and considers itself conservative. The party's platform is predicated on the principles of the nation's founding documents. The party puts a large focus on immigration, calling for stricter penalties towards illegal immigrants and a moratorium on legal immigration until all federal subsidies to immigrants are discontinued. The party absorbed the American Independent Party, originally founded for George Wallace's 1968 presidential campaign. The American Independent Party of California has been an affiliate of the Constitution Party since its founding; however, current party leadership is disputed and the issue is in court to resolve this conflict. The Constitution Party has some substantial support from the Christian Right and in 2010 achieved major party status in Colorado. Learn more about Virgil Goode and the Constitution Party on Wikipedia.

    Read the article

  • Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) is Available Now!

    - by Anand Akela
    Oracle today announced the availability of Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2). It is now available for download on OTN on ALL platforms. This is the first major release since the launch of Enterprise Manager 12c in October of 2011, and the first time an Enterprise Manager release is available on all platforms simultaneously. This is primarily a stability release which addresses many of the issues and incorporates feedback reported by early adopters. In addition, this release contains many new features and enhancements in areas across the board. New Capabilities and Features: Enhanced management capabilities for enterprise private clouds: Introduces new capabilities to allow customers to build and manage a Java Platform-as-a-Service (PaaS) cloud based on Oracle WebLogic Server. The new capabilities include guided set up of the PaaS cloud, self-service provisioning, automatic scale out, and metering and chargeback. Enhanced lifecycle management capabilities for Oracle WebLogic Server environments: Combining in-context multiple domain, patching and configuration file synchronizations. Integrated hardware-software management for Oracle Exalogic Elastic Cloud through features such as rack schematics visualization and integrated monitoring of all hardware and software components. The latest management capabilities for business-critical applications include: Business Application Management: A new Business Application (BA) target type and dashboard with flexible definitions provides a logical view of an application's business transactions, end-user experiences and the cloud infrastructure the monitored application is running on. Enhanced User Experience Reporting: Oracle Real User Experience Insight has been enhanced to provide reporting capabilities on client-side issues for applications running in the cloud and has been more tightly coupled with Oracle Business Transaction Management to help ensure that real-time user experience and transaction tracing data is provided to users in context. Several key improvements address ease of administration, reporting and extensibility for massively scalable cloud environments, including dynamic groups, self-updateable monitoring templates, bulk operations against many events, etc. New and Revised Plug-Ins: Several plug-ins have been updated as a part of this release, resulting in either new versions or revisions. Revised plug-ins contain only bug fixes, while new plug-ins incorporate both bug fixes and new functionality.
Plug-In Name / Version:
- Enterprise Manager for Oracle Database: 12.1.0.2 (revision)
- Enterprise Manager for Oracle Fusion Middleware: 12.1.0.3 (new)
- Enterprise Manager for Chargeback and Capacity Planning: 12.1.0.3 (new)
- Enterprise Manager for Oracle Fusion Applications: 12.1.0.3 (new)
- Enterprise Manager for Oracle Virtualization: 12.1.0.3 (new)
- Enterprise Manager for Oracle Exadata: 12.1.0.3 (new)
- Enterprise Manager for Oracle Cloud: 12.1.0.4 (new)
Installation and Upgrade: All major platforms have been released simultaneously (Linux 32/64-bit, Solaris (SPARC), Solaris x86-64, IBM AIX 64-bit, and Windows x86-64 (64-bit)). Enterprise Manager 12.1.0.2 is a complete release that includes both the EM OMS and Agent versions of 12.1.0.2. Installation options available with EM 12.1.0.2: users can do a fresh install or an upgrade from EM 10.2.0.5, 11.1, or 12.1.0.1 (Bundle Patch 1 not mandatory). Upgrading to EM 12.1.0.2 from EM 12.1.0.1 is not a patch application (similar to Bundle Patch 1) but is achieved through a 1-system upgrade. Documentation: The Oracle Enterprise Manager Cloud Control Introduction Document provides a broad overview of capabilities and highlights "What's New" in EM 12.1.0.2. All updated Oracle Enterprise Manager documentation, including the Upgrade Guide, can be found on OTN. Please feel free to ask questions related to the new Oracle Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2) at the Oracle Enterprise Manager Forum. You could also share your feedback on Twitter using the hashtag #em12c or on Facebook. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • USDM and Oracle Offer a New Part 11 Compliant Solution for Life Sciences

    - by Michael Snow
    Guest post today provided by Oracle partner, USDM. Regulated Content in WebCenter: USDM and Oracle offer a new Part 11 compliant solution for Life Sciences (White Paper). Life science customers now have the ability to take advantage of all of the benefits of Oracle's WebCenter Content, a global leader in Enterprise Content Management. For the past year, USDM has been developing best practice compliance solutions to meet regulated content management requirements for 21 CFR Part 11 in WebCenter Content. USDM has been an expert in ECM for life sciences since 1999 and in 2011 certified that WebCenter was a 21 CFR Part 11 compliant content management platform (White Paper). In addition, USDM has built Validation Accelerator Packs for WebCenter to enable life science organizations to quickly and cost effectively validate this world class solution. With the Part 11 certification, Oracle's WebCenter now provides regulated life science organizations the ability to manage REGULATORY content in WebCenter, as well as the ability to take advantage of ALL of the additional functionality of WebCenter, including a complete, open, and integrated portfolio of portal, web experience management, content management and social networking technology. Here are a few screen shot examples of Part 11 functionality included in the product: E-Sign, E-Sign Rendor, Meta Data History, Audit Trail Report, and Access Reporting. Gone are the days that life science companies have to spend millions of dollars a year to implement, maintain, and validate ECM systems that no longer meet the ever-changing business and regulatory requirements. Life science companies now have the ability to use WebCenter Content, an ECM system with a substantially lower cost of ownership and unsurpassed functionality. Oracle has been #1 in life sciences because of their ability to develop cost effective, easy-to-use, scalable solutions which help increase insight and efficiency to drive growth for their customers. Adding a world class ECM solution to this product portfolio allows life science organizations the chance to get rid of costly ECM systems that no longer meet their needs and use WebCenter, part of the Oracle Fusion Technology stack, with their other leading enterprise applications. USDM provides:
•    Expertise in Life Science ECM Business Processes
•    Prebuilt Life Science Configuration in WebCenter
•    Validation Accelerator Packs for WebCenter
USDM is very proud to support Oracle's expanding commitment to Life Sciences. For more information please contact: [email protected]. Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27.
Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.

    Read the article

  • Musings on the launch of SQL Monitor

    - by Phil Factor
    For several years, I was responsible for the smooth running of a large number of enterprise database servers. We ran a network monitoring tool that was primitive by today’s standards but which performed the useful function of polling every system, including all the Servers in my charge. It ran a configurable script for each service that you needed to monitor that was merely required to return one of a number of integer values. These integer values represented the pain level of the service, from 10 (“hurtin’ real bad”) to 1 (“Things is great”). Not only could you program the visual appearance of each server on the network diagram according to the value of the integer, but you could even opt to run a sound file. Very soon, we had a large TFT Screen, high on the wall of the server room, with every server represented by an icon, and a speaker next to it that would give out a series of grunts, groans, snores, shrieks and funeral marches, depending on the problem. One glance at the display, and you could dive in with iSQL/QA/SSMS and check what was going on with your favourite diagnostic tools. If you saw a server icon burst into flames on the screen or droop like a jelly, you dropped your mug of coffee to do it.  It was real fun, but I remember it more for the huge difference it made to have that real-time visibility into how your servers are performing. The management soon stopped making jokes about the real reason we wanted the TFT screen. (It rendered DVDs beautifully they said; particularly flesh-tints). If you are instantly alerted when things start to go wrong, then there was a good chance you could fix it before being alerted to the problem by the users of the system.  There is a world of difference between this sort of tool, one that gives whoever is ‘on watch’ in the server room the first warning of a potential problem on one of any number of servers, and the breed of tool that attempts to provide some sort of prosthetic DBA Brain. I like to get the early warning, to get the right information to help to diagnose a problem: No auto-fix, but just the information. I prefer to leave the task of ascertaining the exact cause of a problem to my own routines, custom code, intuition and forensic instincts. A simulated aircraft cockpit doesn’t do anything for me, especially before I know where I should be flying.  Time has moved on, and that TFT screen is now, with SQL Monitor, an iPad or any other mobile or static device that can support a browser. Rather than trying to reproduce the conceptual topology of the servers, it lists them in their groups so as to give a display that scales with the increasing number of databases you monitor.  It gives the history of the major events and trends for the servers. It gives the icons and colours that you can spot out of the corner of your eye, but goes on to give you just enough information in drill-down to give you a much clearer idea of where to look with your DBA tools and routines. It doesn't swamp you with information.  Whereas a few server and database-level problems are pretty easily fixed, others depend on judgement and experience to sort out.  Although the idea of an application that automates the bulk of a DBA’s skills is attractive to many, I can’t see it happening soon. SQL Server’s complexity increases faster than the panaceas can be created. 
In the meantime, I believe that the best way of helping DBAs is to make the monitoring process as simple and effective as possible, and provide the right sort of detail and ‘evidence’ to allow them to decide on the fix. In the end, it is still down to the skill of the DBA.

    Read the article

  • Customizing the Test Status on the TFS 2010 SSRS Stories Overview Report

    - by Bob Hardister
    This post shows how to customize the SQL query used by the Team Foundation Server 2010 SQL Server Reporting Services (SSRS) Stories Overview Report. The objective is to show test status for the current version while including user story status of the current and prior versions. Why? Because we don't copy completed user stories into the next release. We only want one instance of a user story for the product, because we believe copies can get out of sync when they are supposed to be the same. In the example below, work items for the current version are on the area path root and prior versions are not on the area path root. However, you can use area path or iteration path criteria in the query as suits your needs. In any case, here's how you do it:
1. Download a copy of the report RDL file as a backup.
2. Open the report by clicking the edit down arrow and selecting "Edit in Report Builder".
3. Right-click on the dsOverview Dataset and select Dataset Properties.
4. Update the following SQL per the comments in the code:
Customization 1 of 3
…
-- Get the list of deliverable work items that have Test Cases linked
DECLARE @TestCases Table (DeliverableID int, TestCaseID int);
INSERT @TestCases
    SELECT h.ID, flh.TargetWorkItemID
    FROM @Hierarchy h
        JOIN FactWorkItemLinkHistory flh
            ON flh.SourceWorkItemID = h.ID
                AND flh.WorkItemLinkTypeSK = @TestedByLinkTypeSK
                AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
                AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
        JOIN [CurrentWorkItemView] wi ON flh.TargetWorkItemID = wi.[System_ID]
            AND wi.[System_WorkItemType] = @TestCase
            AND wi.ProjectNodeGUID = @ProjectGuid
            -- Customization 1 of 3: only include test status information when test case area path = root. Added the following 2 statements
            AND wi.AreaPath = '{the root area path of the team project}'
…
Customization 2 of 3
…
-- Get the Bugs linked to the deliverable work items directly
DECLARE @Bugs Table (ID int, ActiveBugs int, ResolvedBugs int, ClosedBugs int, ProposedBugs int)
INSERT @Bugs
    SELECT h.ID,
        SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,
        SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,
        SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,
        SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed
    FROM @Hierarchy h
        JOIN FactWorkItemLinkHistory flh
            ON flh.SourceWorkItemID = h.ID
            AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
        JOIN [CurrentWorkItemView] wi
            ON wi.[System_WorkItemType] = @Bug
            AND wi.[System_Id] = flh.TargetWorkItemID
            AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
            AND wi.[ProjectNodeGUID] = @ProjectGuid
            -- Customization 2 of 3: only include test status information when test case area path = root. Added the following statement
            AND wi.AreaPath = '{the root area path of the team project}'
    GROUP BY h.ID
…
Customization 3 of 3
…
-- Add the Bugs linked to the Test Cases which are linked to the deliverable work items.
-- Walks the links from the user stories to test cases (via the tested by link), and then to
-- bugs that are linked to the test case. We don't need to join to the test case in the work
-- item history view.
--
--    [WIT:User Story/Requirement] --> [Link:Tested By] --> [Link:any type] --> [WIT:Bug]
INSERT @Bugs
SELECT tc.DeliverableID,
    SUM (CASE WHEN wi.[System_State] = @Active THEN 1 ELSE 0 END) Active,
    SUM (CASE WHEN wi.[System_State] = @Resolved THEN 1 ELSE 0 END) Resolved,
    SUM (CASE WHEN wi.[System_State] = @Closed THEN 1 ELSE 0 END) Closed,
    SUM (CASE WHEN wi.[System_State] = @Proposed THEN 1 ELSE 0 END) Proposed
FROM @TestCases tc
    JOIN FactWorkItemLinkHistory flh
        ON flh.SourceWorkItemID = tc.TestCaseID
        AND flh.RemovedDate = CONVERT(DATETIME, '9999', 126)
        AND flh.TeamProjectCollectionSK = @TeamProjectCollectionSK
    JOIN [CurrentWorkItemView] wi
        ON wi.[System_Id] = flh.TargetWorkItemID
        AND wi.[System_WorkItemType] = @Bug
        AND wi.[ProjectNodeGUID] = @ProjectGuid
        -- Customization 3 of 3: only include test status information when test case area path = root. Added the following statement
        AND wi.AreaPath = '{the root area path of the team project}'
    GROUP BY tc.DeliverableID
…
5. Save the report and you're all set. Note: you may need to re-apply custom parameter changes like pre-selected sprints.
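If you are not sure which literal to use for the root area path in the three customizations above, a quick sanity-check query against the same warehouse view used by the report dataset can list the values available. This is a sketch only; @ProjectGuid is the team project GUID parameter already used in the dataset queries:
-- List the distinct area paths for the team project so you can pick the root value
SELECT DISTINCT wi.AreaPath
FROM [CurrentWorkItemView] wi
WHERE wi.[ProjectNodeGUID] = @ProjectGuid
ORDER BY wi.AreaPath;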

    Read the article

< Previous Page | 119 120 121 122 123 124 125 126 127 128 129 130  | Next Page >