Search Results

Search found 1504 results on 61 pages for 'brian webb'.

  • links for 2010-04-28

    - by Bob Rhubart
    Guido Schmutz: Oracle BPM11g available! - Oracle ACE Director Guido Schmutz shares his impressions after attending a hands-on workshop conducted by Masons of SOA member Clemens Utschig-Utschig. (tags: oracle otn oracleace bpm soa soasuite)
    Elena Zannoni: 2010 Collaboration Summit Impressions - Elena Zannoni has collected her thoughts on #C10 and shares them in this great blog post. (tags: oracle otn linux architecture collaborate2010)
    Hajo Normann: BPMN 2.0 in Oracle BPM Suite: The future of BPM starts now - "The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends." -- Oracle ACE Director and Masons of SOA member Hajo Normann (tags: oracle otn oracleace bpm soa sca)
    Brian Dirking: AIIM Best Practice Awards to Two Oracle Customers - Brian Dirking's great write-up of the AIIM Awards Banquet, at which the Bureau of Indian Affairs and the Charles Town Police Department were among the winners of the 2010 Carl E. Nelson Best Practices Awards. (tags: oracle otn aiim bpm ecm enterprise2.0)
    Mark Wilcox: Upcoming Directory Services Live Webcast - Improve Time-to-Market and Reduce Cost with Oracle Directory Services - Live Webcast. Event Date: Thursday, May 27, 2010. Event Time: 10:00 AM Pacific Standard Time / 1:00 PM Eastern Standard Time. (tags: oracle otn webcast security identitymanagement)
    Celine Beck: Introducing AutoVue Document Print Service - Celine Beck offers a detailed overview of Oracle AutoVue. (tags: oracle otn enatarch visualization printing)
    Vikas Jain: What's new in OWSM 11gR1 PS2 (11.1.1.3.0)? - Vikas Jain shares links to resources relevant to the recently released patch set for Oracle Web Services Manager 11gR1. (tags: oracle otn soa webservices oswm)
    @theovanarem: Oracle SOA Suite 11g Release 1 Patch Set 2 - Theo Van Arem shares links to several resources relevant to the release of the latest patch set for Oracle SOA Suite 11g. (tags: oracle otn soa soasuite middleware)
    @vambenepe: Analyzing the VMforce announcement - "The new thing is that force.com now supports an additional runtime, in addition to Apex. That new runtime uses the Java language, with the constraint that it is used via the Spring framework. Which is familiar territory to many developers. That's it." -- William Vambenepe (tags: oracle otn cloud paas)

    Read the article

  • Survey Probes the Project Management Concerns of Financial Services Executives

    - by Melissa Centurio Lopes
    Do you wonder what the top reasons are why large projects in the financial industry fail to meet budgets, schedules, and other key performance criteria? Being able to answer this question can provide important insight into the value of good project management practices for your organization. According to 400 senior executives who participated in a new survey conducted by the Economist Intelligence Unit and sponsored by Oracle, unrealistic project goals are the main roadblock to success. Other common stumbling blocks are poor alignment between project and organizational goals, inadequate human resources, lack of strong leadership, and unwillingness among team members to point out problems. This survey sample also had a lot to say about the impact of regulatory compliance on the overall portfolio management process. Thirty-nine percent acknowledged that regulations enabled efficient functioning of their businesses. But a similar number said that regulations often require more financial resources than were originally allocated to bring projects in on time. Regulations were seen by 35 percent of the executives as roadblocks to their ability to invest in the organization's growth and success. These revelations, among others, are discussed in depth in a new on-demand Webcast titled "Too Good to Fail: Developing Project Management Expertise in Financial Services," now available from Oracle. The Webcast features Brian Gardner, editor of the Economist Intelligence Unit, who presents the findings from this survey along with Guy Barlow, director of industry strategy for Oracle Primavera. Together, they analyze what the numbers mean for project and program managers and the financial services industry. Register today to watch the on-demand Webcast and get a full rundown and analysis of the survey results. Take the Economist Intelligence Unit benchmarking survey and see how your views compare with those of other financial services industry executives in ensuring project success. Read more in the October edition of the quarterly Information InDepth EPPM Newsletter.

    Read the article

  • PASS Summit 2010 Recap

    - by AjarnMark
    Last week I attended my eighth PASS Summit in nine years, and every year it is a fantastic event!  I was fortunate my first year to have a contact (Bill Graziano (blog | Twitter) from SQLTeam) that I was expecting to meet, and who got me started on a good track of making new contacts.  Each year I have made a few more, and renewed friendships from years past.  Many of the attendees agree that the pure networking opportunities are one of the best benefits of attending the Summit.  And there’s a lot of great technical stuff, too, some of the things that stick out for me this year include… Pre-Con Monday: PowerShell with Allen White (blog | Twitter).  This was the first time that I attended a pre-con.  For those not familiar with the concept, the regular sessions for the conference are 75-90 minutes long.  For an extra fee, you can attend a full-day session on a single topic during a pre- or post-conference training day.  I had been meaning for several months to dive in and learn PowerShell, but just never seemed to find (or make) the time for it, so when I saw this was one of the all-day sessions, and I was planning to be there on Monday anyway, I decided to go for it.  And it was well worth it!  I definitely came out of there with a good foundation to build my own PowerShell scripts, plus several sample scripts that he showed which already cover the first four or five things I was planning to do with PowerShell anyway.  This looks like the right tool for me to build an automated version of our software deployment process, which right now contains many repeated steps.  Thanks Allen! Service Broker with Denny Cherry (blog | Twitter).  I remembered reading Denny’s blog post on Using Service Broker instead of Replication, and ever since then I have been thinking about using this to populate a new reporting-focused Data Repository that we will be building in the near future.  When I saw he was doing this session, I thought it would be great to get more information and be able to ask the author questions.  When I brought this idea back to my boss, he really liked it, as we had previously been discussing doing nightly data loads, with an option to manually trigger a mid-day load if up-to-the-minute data was needed for something.  If we go the Service Broker route, we can keep the Repository current in near real-time.  Hooray! DBA Mythbusters with Paul Randal (blog | Twitter).  Even though I read every one of the posts in Paul’s blog series of the same name, I had to go see the legend in person.  It was great, and I still learned something new! How to Conduct Effective Meetings with Joe Webb (blog | Twitter).  I always like to sit in on a session that Joe does.  I met Joe several years ago when both he and Bill Graziano were on the PASS Board of Directors together, and we have kept in touch.  Joe is very well-spoken and has great experience with both SQL Server and business.  And we could certainly use some pointers at my work (probably yours, too) on making our meetings more effective and to run on-time.  Of course, now that I’m the Chapter Leader for the Professional Development virtual chapter, I also had to sit in on this ProfDev session and recruit Joe to do a presentation or two for the chapter next year. Query Optimization with David DeWitt.  Anyone who has seen Dr. 
David DeWitt present the 3rd keynote at a PASS Summit over the last three years knows what a great time it is to sit and listen to him make some really complicated and advanced topic easy to understand (although it still makes your head hurt).  It still amazes me that the simple two-table join query from pubs that he used in his example can possibly have 22 million possible physical query plans.  Ouch! Exhibit Hall:  This year I spent more serious time in the exhibit hall than any year past.  I have talked my boss into making a significant (for us) investment in monitoring tools next year, and this was a great opportunity to talk with all the big-hitters.  Readers of mine may recall that I fell in love with the SQL Sentry Power Suite several months ago and wrote a blog entry about it just from the trial version.  Well as things turned out, short-term budget priorities shifted, and we weren’t able to make the purchase then.  I have it in the budget for next year, but since I was going to the Summit, my boss wanted me to look at the other options to see if this was really the one that we wanted.  I spent a couple of hours talking with representatives from Red-Gate, Idera, Confio, and Quest about their offerings, and giving them each the same 3 scenarios that I wanted to be able to accomplish based on the questions and issues that arise in our company.  It was interesting to discover the different approaches or “world view” that each vendor takes to the subject of performance monitoring and troubleshooting.  I may write a separate article that goes into this in more depth, but the product that best aligned with our point of view, and met the current needs we have is still the SQL Sentry Power Suite.  I’m not saying that the others are bad or wrong or anything like that, just that the way they tackled the issue did not align as well with our particular needs as does SQL Sentry’s product.  And that was something I learned too, when you go shopping for these products, you really need to know what you want to get from them.  It’s best if you have a few example scenarios from work that you can use to test out how well each tool fits your particular needs. Overall, another GREAT event.  I can’t wait to get the DVDs so I can sit in on a bunch of other sessions that I couldn’t get to because I was in one of the ones above.  And I can hardly wait until next year!

    Read the article

  • MDX using EXISTING, AGGREGATE, CROSSJOIN and WHERE

    - by James Rogers
    Using the EXISTING function to decode AGGREGATE members and nested sub-query filters is a well-published approach; Mosha wrote a good blog on it here and a more recent one here. EXISTING is very useful in these scenarios and is sometimes the only option when dealing with multi-select filters. However, there are some limitations I have run across when using the EXISTING function against an AGGREGATE member:
    - The AGGREGATE member must be assigned to the Dimension.Hierarchy being detected by the EXISTING function in the calculated measure.
    - The AGGREGATE member cannot contain a crossjoin from any other dimension or hierarchy, or EXISTING will not be able to detect the members in the AGGREGATE member.
    Take the following query (from Adventure Works DW 2008):

    With
      member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
      member [Date].[Fiscal Weeks].[CM] as 'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]})'
    select
      {[Week Count]} on columns
    from
      [Adventure Works]
    where
      [Date].[Fiscal Weeks].[CM]

    Here we are attempting to count the existing fiscal weeks in the slicer. This is useful to get a per-week average for another member, and many applications (such as Oracle OBIEE) generate queries in this manner. This query returns the correct result of (4) weeks.

    Now let's put a twist on it. What if the querying application submits the query in the following manner:

    With
      member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
      member [Customer].[Customer Geography].[CM] as 'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]})'
    select
      {[Week Count]} on columns
    from
      [Adventure Works]
    where
      [Customer].[Customer Geography].[CM]

    Here we are again attempting to count the existing fiscal weeks in the slicer. However, the AGGREGATE member is built on a different dimension (in name) than the one EXISTING is trying to detect. In this case the query returns (174), which is the total number of [Date].[Fiscal Weeks].[Fiscal Week].members defined in the dimension.

    Now another twist: the AGGREGATE member is named appropriately and contains the hierarchy we are trying to detect with EXISTING, but it is cross-joined with another hierarchy:

    With
      member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
      member [Date].[Fiscal Weeks].[CM] as 'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]}*
        {[Customer].[Customer Geography].[Country].&[Australia],[Customer].[Customer Geography].[Country].&[United States]})'
    select
      {[Week Count]} on columns
    from
      [Adventure Works]
    where
      [Date].[Fiscal Weeks].[CM]

    Once again, we are attempting to count the existing fiscal weeks in the slicer, and again the query returns (174), the total number of [Date].[Fiscal Weeks].[Fiscal Week].members defined in the dimension. However, in 2008 R2 this query returns the correct result of 4, and additionally the following will return the count of existing countries as well (2):

    With
      member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
      member [Country Count] as 'count(existing([Customer].[Customer Geography].[Country].members))'
      member [Date].[Fiscal Weeks].[CM] as 'AGGREGATE({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]}*
        {[Customer].[Customer Geography].[Country].&[Australia],[Customer].[Customer Geography].[Country].&[United States]})'
    select
      {[Week Count]} on columns
    from
      [Adventure Works]
    where
      [Date].[Fiscal Weeks].[CM]

    2008 R2 seems to work as long as the AGGREGATE member is on at least one of the hierarchies being detected (i.e. [Date].[Fiscal Weeks] or [Customer].[Customer Geography]). If not, it seems that the engine cannot find a "point of entry" into the AGGREGATE member and ignores it for calculated members.

    One way around this is to put the sets from the AGGREGATE member explicitly in the WHERE clause (slicer). I realize this is only supported in SSAS 2005 and 2008. However, after talking with Chris Webb (his blog is here and I highly recommend following his efforts and musings), it is a far more efficient way to filter/slice a query:

    With
      member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members))'
    select
      {[Week Count]} on columns
    from
      [Adventure Works]
    where
      ({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]}
      ,{[Customer].[Customer Geography].[Country].&[Australia],[Customer].[Customer Geography].[Country].&[United States]})

    This query returns the correct result of (4) weeks. Additionally, we can count the cross-join members of the two hierarchies in the slicer:

    With
      member [Week Count] as 'count(existing([Date].[Fiscal Weeks].[Fiscal Week].members)*existing([Customer].[Customer Geography].[Country].members))'
    select
      {[Week Count]} on columns
    from
      [Adventure Works]
    where
      ({[Date].[Fiscal Weeks].[Fiscal Week].&[47]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[48]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[49]&[2004],[Date].[Fiscal Weeks].[Fiscal Week].&[50]&[2004]}
      ,{[Customer].[Customer Geography].[Country].&[Australia],[Customer].[Customer Geography].[Country].&[United States]})

    We get the correct number of (8) here.

    Read the article

  • Working with Visual Studio Web Development Server and IE6 in XP Mode on Windows 7

    - by Igor Milovanovic
    (Brian Reiter from thoughtful computing has described this setup in this StackOverflow thread. The credit for the idea is entirely his; I have just extended it with some step-by-step descriptions and added some links and screenshots.)
    If you are forced to still support Internet Explorer 6, you can set up the following combination on your machine to make development for it less painful. A common problem if you are developing on Windows 7 is that you can't install IE6 on your machine (not that you would want to anyway). So you will probably end up working locally with IE8 and FF, and testing your IE6 compatibility on a separate machine. This can get quite annoying, because you have to maintain two different development environments, you don't have all the tools available, etc. You can help yourself by installing IE6 in Windows 7 XP Mode, which is basically just a Windows XP instance running in a virtual machine. [1] Windows XP Mode installation
    After you have installed and configured your XP Mode (remember the security settings like Windows Update and antivirus software), you can add the shortcut to IE6 in the virtual machine to the "all users" start menu. This shortcut will be replicated to your Windows 7 XP Mode start menu, and you will be able to seamlessly start your IE6 as a normal window on your Windows 7 desktop. [2] Configure IE6 for the Windows 7 installation
    If you configure your XP Mode to use Shared Networking (NAT), you can now use IE6 to browse sites on the Internet (add proxy settings to IE6 if necessary). The problem now is that you can't connect to the webdev server which is running on your local machine. This is because the web development server is crippled to allow only local connections for security reasons. In order to trick webdev into believing that the requests are coming from the local machine itself, you can use a lightweight proxy like Privoxy on your host (Windows 7) machine and configure the IE6 running in the virtual machine to use it.
    The first step is to make the host machine (running Windows 7) reachable from the virtual machine (running XP). In order to do that, you can install the loopback adapter and configure it to use an IP which is routable from the virtual machine; in the example screenshot, 192.168.1.66. [3] How to install loopback adapter in Windows 7
    After installation you can assign a static IP which is routable from the virtual machine (in the example, 192.168.1.66). The next step is to configure Privoxy to listen on that IP address (using an unused port, for example the default port 8118). Change the following line in config.txt:

    #      Suppose you are running Privoxy on an IPv6-capable machine and
    #      you want it to listen on the IPv6 address of the loopback device:
    #
    #        listen-address [::1]:8118
    #
    listen-address  192.168.1.66:8118

    The last step is to configure IE6 to use Privoxy, which is running on your Windows 7 host machine, as the proxy for all addresses (including localhost). And now you can use your Windows 7 XP Mode IE6 to connect to your Visual Studio webdev web server. [4] http://stackoverflow.com/questions/683151/connect-remotely-to-webdev-webserver-exe

    Read the article

  • JCP.Next - Early Adopters of JCP 2.8

    - by Heather VanCura
    JCP.Next is a series of three JSRs (JSR 348, JSR 355 and JSR 358), to be defined through the JCP process itself, with the JCP Executive Committee serving as the Expert Group. The proposed JSRs will modify the JCP's processes - the Process Document and the Java Specification Participation Agreement (JSPA) - and will apply to all new JSRs for all Java platforms.
    The first - JCP.next.1, or more formally JSR 348, Towards a new version of the Java Community Process - was completed and put into effect in October 2011 as JCP 2.8. It focused on a small number of simple but important changes to make our process more transparent and to enable broader participation. We're already seeing the benefits of these changes as new and existing JSRs adopt the new requirements. The second - JSR 355, Executive Committee Merge - is also Final. You can read the JCP 2.9 Process Document. As part of the JSR 355 Final Release, the JCP Executive Committee published revisions to the JCP Process Document (version 2.9) and the EC Standing Rules (version 2.2). The changes went into effect following the 2012 EC Elections in November. The third - JSR 358, A major revision of the Java Community Process - was submitted in June 2012. This JSR will modify the Java Specification Participation Agreement (JSPA) as well as the Process Document, and will tackle a large number of complex issues, many of them postponed from JSR 348. For these reasons, the JCP EC (acting as the Expert Group for this JSR) expects to spend a considerable amount of time working on it. The JSPA is defined by the JCP as "a one-year, renewable agreement between the Member and Oracle." The success of the Java community depends upon an open and transparent JCP program. JSR 358, A major revision of the Java Community Process, is now in process and can be followed on java.net.
    The following JSRs and Spec Leads were the early adopters of JCP 2.8, who voluntarily migrated their JSRs from JCP 2.x to JCP 2.8 or above - more candidates for 2012 JCP Star Spec Leads!
    - JSR 236, Concurrency Utilities for Java EE (Anthony Lai/Oracle), migrated April 2012
    - JSR 308, Annotations on Java Types (Michael Ernst, Alex Buckley/Oracle), migrated September 2012
    - JSR 335, Lambda Expressions for the Java Programming Language (Brian Goetz/Oracle), migrated October 2012
    - JSR 337, Java SE 8 Release Contents (Mark Reinhold/Oracle) - EG Formation, migrated September 2012
    - JSR 338, Java Persistence 2.1 (Linda DeMichiel/Oracle), migrated January 2012
    - JSR 339, JAX-RS 2.0: The Java API for RESTful Web Services (Santiago Pericas-Geertsen, Marek Potociar/Oracle), migrated July 2012
    - JSR 340, Java Servlet 3.1 Specification (Shing Wai Chan, Rajiv Mordani/Oracle), migrated August 2012
    - JSR 341, Expression Language 3.0 (Kin-man Chung/Oracle), migrated August 2012
    - JSR 343, Java Message Service 2.0 (Nigel Deakin/Oracle), migrated March 2012
    - JSR 344, JavaServer Faces 2.2 (Ed Burns/Oracle), migrated September 2012
    - JSR 345, Enterprise JavaBeans 3.2 (Marina Vatkina/Oracle), migrated February 2012
    - JSR 346, Contexts and Dependency Injection for Java EE 1.1 (Pete Muir/RedHat), migrated December 2011

    Read the article

  • Today's Links (6/20/2011)

    - by Bob Rhubart
    Why your security sucks | Eric Knorr A conversation with InfoWorld security expert Roger Grimes reveals why the latest burst of attacks is just business as usual. JDev 11g R2 - ADF BC Dependency Diagram Feature | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovkis continues his exploration of JDeveloper 11g R2. Mobile Apps Put the Web in Their Rear-view Mirror | Charles Newark-French "Our analysis shows that, for the first time ever, daily time spent in mobile apps surpasses desktop and mobile web consumption," says Newark-French. "This stat is even more remarkable if you consider that it took less than three years for native mobile apps to achieve this level of usage, driven primarily by the popularity of iOS and Android platforms." Vivek Kundra, a public servant who gets stuff done | Craig Newmark Craigslist founder Craig Newmark bids farewell to the nation's first CIO. Weblogic, QBrowser and topics | Eric Elzinga Elzinga says: "Besides using the Weblogic Console to add subscribers to our topics we can also use QBrowser to browse queues and topics on your Weblogic Server." Java EE talks at JAX Conf | Arun Gupta Arun Gupta shares links to several Java EE presentations taking place at this week's Jax Conference in San Jose, CA. Development gotchas and silver bullets | Andy Mulholland Mulholland explains why "Software development has to change to fit with new business practices!" Oracle is Proud Sponsor of Gartner Security and Risk Management Summit 2011 | Troy Kitch Oracle will have a very strong presence at this year’s Gartner Security and Risk Management Summit 2011 in Washington D.C., June 20-23. Database Web Service using Toplink DB Provider | Vishal Jain "With JDeveloper 11gR2 you can now create database based web services using JAX-WS Provider," says Jain. Sample Chapter: A Fusion Applications Technical Overview An excerpt from "Managing Oracle Fusion Applications" by Richard Bingham, published by Oracle Press, May 2011. White Paper: Oracle Optimized Solution for Enterprise Cloud Infrastructure This paper provides recommendations and best practices for optimizing virtualization infrastructures when deploying the Oracle Enterprise Cloud Infrastructure. White paper: Oracle Optimized Solution for Lifecycle Content Management Authors Donna Harland and Nick Klosk illustrate how Oracle Enterprise Content Management Suite and Oracle’s Sun Storage Archive Manager work Oracle’s Sun hardware. Bay Area Coherence Special Interest Group Date: Thursday, July 21, 2011 Time: 4:30pm - 8:15pm ET - Note that Parking at 475 Sansome Closes at 8:30pm Location: Oracle Office,475 Sansome Street, San Francisco, CA Google Map Speakers: Chris Akker, Solutions Engineer, F5 Paul Cleary, Application Architect, Oracle Alexey Ragozin, Independent Consultant Brian Oliver, Oracle

    Read the article

  • Microsoft MVP Award Nomination

    - by Mark A. Wilson
    I am extremely honored to announce that I have been nominated to receive the Microsoft MVP Award for my contributions in C#! Hold on; I have not won the award yet. But to be nominated is really humbling. Thank you very much! For those of you who may not know, here is a high-level summary of the MVP award: The Microsoft Most Valuable Professional (MVP) Program recognizes and thanks outstanding members of technical communities for their community participation and willingness to help others. The program celebrates the most active community members from around the world who provide invaluable online and offline expertise that enriches the community experience and makes a difference in technical communities featuring Microsoft products. MVPs are credible, technology experts from around the world who inspire others to learn and grow through active technical community participation. While MVPs come from many backgrounds and a wide range of technical communities, they share a passion for technology and a demonstrated willingness to help others. MVPs do this through the books and articles they author, the Web sites they manage, the blogs they maintain, the user groups they participate in, the chats they host or contribute to, the events and training sessions where they present, as well as through the questions they answer in technical newsgroups or message boards. - Microsoft MVP Award Nomination Email I guess I should start my nomination acceptance speech by profusely thanking Microsoft as well as everyone who nominated me. Unfortunately, I’m not completely certain who those people are. While I could guess (in no particular order: Bill J., Brian H., Glen G., and/or Rob Z.), I would much rather update this post accordingly after I know for certain who to properly thank. I certainly don’t want to leave anyone out! Please Help My next task is to provide the MVP Award committee with information and descriptions of my contributions during the past 12 months. For someone who has difficulty remembering what they did just last week, trying to remember something that I did 12 months ago is going to be a real challenge. (Yes, I should do a better job blogging about my activities. I’m just so busy!) Since this is an award about community, I invite and encourage you to participate. Please leave a comment below or send me an email. Help jog my memory by listing anything and everything that you can think of that would apply and/or be important to include in my reply back to the committee. I welcome advice on what to say and how to say it from previous award winners. Again, I greatly appreciate the nomination and welcome any assistance you can provide. Thanks for visiting and till next time, Mark A. Wilson      Mark's Geekswithblogs Blog Enterprise Developers Guild Technorati Tags: Community,Way Off Topic

    Read the article

  • A new SQL, a new Analysis Services, a new Workshop! #ssas #sql2012

    - by Marco Russo (SQLBI)
    One week ago Microsoft SQL Server 2012 finally debuted with a virtual launch event and you can find many intro sessions there (20 minutes each). There is a lot of new content available if you want to learn more about SQL 2012 and in this blog post I’d like to provide a few link to sessions, documents, bits and courses that are available now or very soon. First of all, the release of Analysis Services 2012 has finally released PowerPivot 2012 (many of us called it PowerPivot v2 before this official name) and also the new Data Mining Add-in for Microsoft Office 2010, now available also for Excel 64bit! And, of course, don’t miss the Microsoft SQL Server 2012 Feature Pack, there are a lot of upgrades for both DBAs and developers. I just discovered there is a new LocalDB version of SQL Express that can run in user mode without any setup. Is this the end of SQL CE? But now, back to Analysis Services: if you want some tutorial on Tabular, the Microsoft Virtual Academy has a whole track dedicated to Analysis Services 2012 but you will probably be interested also in the one about Reporting Services 2012. If you think that virtual is good but it’s not enough, there are plenty of conferences in the coming months – these are just those where I and Alberto will deliver some SSAS Tabular presentations: SQLBits X, London, March 29-31, 2012: if you are in London or want a good reason to go, this is the most important SQL Server event in Europe this year, no doubts about it. And not only because of the high number of attendees, but also because there is an impressive number of speakers (excluding me, of course) coming from all over the world. This is an event second only to PASS Summit in Seattle so there are no good reasons to not attend it. Microsoft SQL Server & Business Intelligence Conference 2012, Milan, March 28-29, 2012: this is an Italian conference so the language might be a barrier, but many of us also speak English and the food is good! Just a few seats still available. TechEd North America, Orlando, June 11-14, 2012: you know, this is a big event and it contains everything – if you want to spend a whole day learning the SSAS Tabular model with me and Alberto, don’t miss our pre-conference day “Using BISM Tabular in Microsoft SQL Server Analysis Services 2012” (be careful, it is on June 10, a nice study-Sunday!). TechEd Europe, Amsterdam, June 26-29, 2012: the European version of TechEd provides almost the same content and you don’t have to go overseas. We also run the same pre-conference day “Using BISM Tabular in Microsoft SQL Server Analysis Services 2012” (in this case, it is on June 25, that’s a regular Monday). I and Alberto will also speak at some user group meeting around Europe during… well, we’re going to travel a lot in the next months. In fact, if you want to get a complete training on SSAS Tabular, you should spend two days with us in one of our SSAS Tabular Workshop! We prepared a 2-day seminar, a very intense one, that start from the simple tabular modeling and cover architecture, DAX, query, advanced modeling, security, deployment, optimization, monitoring, relationships with PowerPivot and Multidimensional… Really, there are a lot of stuffs here! 
We announced the first dates in Europe and also an online edition optimized for America’s time zone: Apr 16-17, 2012 – Amsterdam, Netherlands Apr 26-27, 2012 – Copenhagen, Denmark May 7-8, 2012 – Online for America’s time zone May 14-15, 2012 – Brussels, Belgium May 21-22, 2012 – Oslo, Norway May 24-25, 2012 – Stockholm, Sweden May 28-29, 2012 – London, United Kingdom May 31-Jun 1, 2012 – Milan, Italy (Italian language) Also Chris Webb will join us in this workshop and in every date you can find who is the speaker on the web site. The course is based on our upcoming book, almost 600 pages (!) about SSAS Tabular, an incredible effort that will be available very soon in a preview (rough cuts from O’Reilly) and will be on the shelf in May. I will provide a link to order it as soon as we have one! And if you think that this is not enough… you’re right! Do you know what is the only thing you can do to optimize your Tabular model? Optimize your DAX code. Learning DAX is easy, mastering DAX requires some knowledge… and our DAX Advanced Workshop will provide exactly the required content. Public classes will be available later this year, by now we just deliver it on demand.

    Read the article

  • PASS Summit 2011 &ndash; Part III

    - by Tara Kizer
    Well we're about a month past PASS Summit 2011, and yet I haven't finished blogging my notes! Between work and home life, I haven't been able to come up for air in a bit. Now on to my notes… On Thursday of the PASS Summit 2011, I attended Klaus Aschenbrenner's (blog|twitter) "Advanced SQL Server 2008 Troubleshooting", Joe Webb's (blog|twitter) "SQL Server Locking & Blocking Made Simple", Kalen Delaney's (blog|twitter) "What Happened? Exploring the Plan Cache", and Paul Randal's (blog|twitter) "More DBA Mythbusters". I think my head grew two times in size from the Thursday sessions. Just WOW!
    I took a ton of notes in Klaus' session. He took a deep dive into how to troubleshoot performance problems. Here is how he goes about solving a performance problem:
    - Start by checking the wait stats DMV
    - System health
    - Memory issues
    - I/O issues
    I normally start with blocking and then hit the wait stats. Here's the wait stat query (Paul Randal's) that I use when working on a performance problem. He highlighted a few waits to be aware of, such as WRITELOG (indicates an I/O subsystem problem), SOS_SCHEDULER_YIELD (indicates a CPU problem), and PAGEIOLATCH_XX (indicates an I/O subsystem problem or a buffer pool problem). Regarding memory issues, Klaus recommended that as a bare minimum, one should set the "max server memory (MB)" in sp_configure to 2GB or 10% reserved for the OS (whichever comes first). This is just a starting point though! Regarding I/O issues, Klaus talked about disk partition alignment, which can improve SQL I/O performance by up to 100%. You should use a 64KB NTFS cluster size, and it's automatic in Windows 2008 R2.
    Joe's locking and blocking presentation was a good session to really clear up the fog in my mind about locking. One takeaway that I had no idea could be done was that you can set a timeout in T-SQL code via LOCK_TIMEOUT. If you do this via the application, you should trap error 1222.
    Kalen's session went into execution plans. The minimum size of a plan is 24k. This adds up fast, especially if you have a lot of plans that don't get reused much. You can use sys.dm_exec_cached_plans to check how often a plan is being reused by checking the usecounts column. She said that we can use DBCC FLUSHPROCINDB to clear out the stored procedure cache for a specific database. I didn't know we had this available, so this was great to hear. This will be less intrusive when an emergency comes up where I would otherwise have needed to run DBCC FREEPROCCACHE. Kalen said one should enable "optimize for ad hoc workloads" if you have an ad hoc workload. This stores only a 300-byte stub of the first plan, and if it gets run again, it'll store the whole thing. This helps with plan cache bloat. I have a lot of systems that use prepared statements, and Kalen says we simulate those calls by using sp_executesql. Cool!
    Paul did a series of posts last year to debunk various myths and misconceptions around SQL Server. He continues to debunk things via "DBA Mythbusters". You can get a PDF of a bunch of these here. One of the myths he went over is the number of tempdb data files that you should have. Back in 2000, the recommendation was to have as many tempdb data files as there are CPU cores on your server. This no longer holds true due to the numerous cores we have on our servers. Paul says you should start out with 1/4 to 1/2 the number of cores and work your way up from there. BUT!
    Paul likes what Bob Ward (twitter) says on this topic:
    - 8 or less cores –> set the number of files equal to the number of cores
    - Greater than 8 cores –> start with 8 files and increase in blocks of 4
    One common myth out there is to set your MAXDOP to 1 for an OLTP workload with high CXPACKET waits. Instead of that, dig deeper first. Look for missing indexes, out-of-date statistics, increase the "cost threshold for parallelism" setting, and perhaps set MAXDOP at the query level. Paul stressed that you should not plan a backup strategy but instead plan a restore strategy. What are your recoverability requirements? Once you know that, now plan out your backups.
    As Paul always does, he talked about DBCC CHECKDB. He said how fabulous it is. I didn't want to interrupt the presentation, so after his session had ended, I asked Paul about the need to run DBCC CHECKDB on your mirror systems. You could have data corruption occur at the mirror and not at the principal server. If you aren't checking for data corruption on your mirror systems, you could be failing over to a corrupt database in the case of a disaster or even a planned failover. You can't run DBCC CHECKDB against the mirrored database, but you can run it against a snapshot off the mirrored database.
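
    For anyone who wants to try these settings, here is a minimal T-SQL sketch of the items mentioned above. This is my own consolidation rather than code from the sessions, and the database name, logical file name, snapshot path, and the 2048 MB value are placeholders you would adjust for your environment:

    -- Show advanced options so the two sp_configure settings below are visible
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Klaus' bare-minimum starting point: cap SQL Server memory (placeholder value)
    EXEC sp_configure 'max server memory (MB)', 2048;
    RECONFIGURE;

    -- Kalen's tip: store only a small plan stub until a plan is actually reused
    EXEC sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;

    -- Joe's tip: stop waiting for a lock after 5 seconds; error 1222 is raised to the caller
    SET LOCK_TIMEOUT 5000;

    -- Paul's workaround for mirrors: run DBCC CHECKDB against a snapshot of the mirrored database
    CREATE DATABASE MyDB_CheckSnap ON
        (NAME = MyDB_Data, FILENAME = 'D:\Snapshots\MyDB_Data.ss')
    AS SNAPSHOT OF MyDB;
    DBCC CHECKDB ('MyDB_CheckSnap') WITH NO_INFOMSGS, ALL_ERRORMSGS;
    DROP DATABASE MyDB_CheckSnap;

    The snapshot approach is what makes checking a mirror possible at all, since DBCC CHECKDB cannot run directly against a database that is in the restoring/mirror state.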

    Read the article

  • ArchBeat Link-o-Rama Top 10 for November 1, 2012

    - by Bob Rhubart
    Hurricane Sandy Edition Power outages in the Cleveland area made it impossible to publish posts on Tuesday and Wednesday. In my neighborhood most are still without power. The sound of howling winds that dominated on Monday and Tuesday has been replaced by the sound of of portable generators. My internet connection was restored only after AT&T U-Verse crewmen hooked up a portable generator to power the relay station up the street. Bear in mind that Cleveland is 500 miles from the Atlantic coast. Mobile Development Platform Strategy Chart: ADF Mobile, WebCenter Sites, Portal, Content and Social "Unlike desktop web focused efforts, the world of mobile has undergone change at a feverish pace," says social enterprise expert John Brunswick. His extensive post charts various resources that will help you keep up. ADF Essentials - The Bare Necessities | Floyd Teter The experiment is over... And now Oracle ACE Director Floyd Teter shares his impressions after spending some time with Oracle ADF Essentials, the free version of Oracle ADF. Expanding the Oracle Enterprise Repository with functional documentation Capgemini middleware specialist Marc Kuijpers shares information on how Oracle Enterprise Repository can be configured "to contain functional assets, i.e. functional designs, use cases and a logical data model" to aid in SOA governance efforts. A review of Oracle SOA Suite 11g Administrator’s Handbook | RedStack "More so than any other single piece of content that I have seen on the topic, it provides the information that a SOA administrator needs to know in order to successfully configure, manage, monitor, troubleshoot and backup an Oracle SOA environment." So says Oracle Fusion Middleware A-Team solution architect Mark Nelson of Oracle SOA Suite 11g Administrator’s Handbook, by Ahmed Aboulnaga and Arun Pareek. Eating our own dog food – Oracle’s internal deployment of Oracle IDM Oracle Fusion Middleware A-Team member Brian Eidelman recommends the recent podcast on Oracle’s internal deployment of Oracle OAM and OID. "This was a big project that involved migrating a bunch of critical, high volume applications to leverage OAM and OID," says Eidelman. "So I suggest you tune in to see and hear more about how we deploy our own software." Thought for the Day "Anyone who says they're not afraid at the time of a hurricane is either a fool or a liar, or a little bit of both." — Anderson Cooper Source: BrainyQuote

    Read the article

  • Deep in the Heart of Texas

    - by Applications User Experience
    Author: Erika Webb, Manager, Fusion Applications UX User Assistance
    When I was first working in the usability field, the only way I could consider conducting a usability study was to bring a potential user to a lab environment where I could show them whatever I was interested in learning more about and ask them questions. While I hate to reveal just how long I have been working in this field, let's just say that pads of paper and a stopwatch were key tools for any test I conducted. Over the years, I have worked in simple labs with basic videotaping equipment and not much else, and I have worked in corporate environments with sophisticated usability labs and state-of-the-art equipment. Years ago, we conducted all usability studies at the location of the user. If we wanted to see if there were any differences between users in New York, Chicago, and Los Angeles, we went to those places to run the test. A lab environment is very useful for many test situations. However, there has always been a debate in the usability field about whether bringing someone into a lab environment, however friendly we make it, somehow intrinsically changes the behavior of the user as compared to having them work in their own environment, at their own desk, and on their own computer. We developed systems to create a portable usability lab, so that we could go to the users that we needed to test. Do lab environments change user behavior patterns?
    Then 9/11 hit. You may not remember, but no planes flew for weeks afterwards. Companies all over the world couldn't fly in employees for meetings. Suddenly, traveling to the location of the users had an additional difficulty. The company I was working for at the time had usability specialists stuck in New York for days before they could finally rent a car and drive home to Colorado. This changed the world pretty suddenly, and technology jumped on the change. Companies offering Internet meeting tools had been struggling, until no one could travel. The Internet boomed with collaboration tools that enabled people to work together wherever they happened to be. This change in technology has made a huge difference in my world. We use collaborative tools to bring our product concepts and ideas to the user across the Internet. As a global company, we benefit from having users from all over the world inform our designs. We now run usability studies with users all over the world in a single day, a feat we couldn't have accomplished 10 years ago by plane! Other technology companies have started to do more of this type of usability testing, since the tools have improved so dramatically. Plus, in our busy world, it's not always easy to find users who can take the time away from their jobs to come to our labs. Reaching users where it is convenient for them greatly improves the odds that people do participate.
    I manage a team of usability specialists who live in India and California, while I live in Colorado. We have wonderful labs that we bring users into to show them our products. But very often, we run our studies remotely. We used to take the lab to the users; now we use the labs, but we let the users stay where they are. We gain users who might not have been able to leave work to come to our labs, and they get to use the system they are familiar with. And we gain users nearly anywhere that we can set up an Internet connection, as long as the users have a phone, a broadband connection, and a compatible Web browser (with no pop-up blockers).
After we recruit participants in a traditional manner, we send them an invitation to participate through the use of a telephone conference call and Web conferencing tool. At Oracle, we use Oracle Web Conference part of Oracle Collaboration Suite, which enables us to give the user control of the mouse, while we present a prototype or wireframe pictures. We can record the sessions over the Web and phone conference. We send the users instructions, plus tips to ensure that we won't have problems sharing screens. In some cases, when time is tight, we even run a five-minute "test session" with users a day in advance to be sure that we can connect. Prior to the test, we send users a participant script that contains information about the study, including any questionnaires. This is exactly the same script we give to participants who come to the labs. We ask users to print this before the beginning of the session. We generally run these studies by having a usability engineer in our usability labs, so that we can record the session as though the user were in the lab with us. Roughly 80% of our application software usability testing at Oracle is performed using remote methods. The probability of getting a   remote test participant decreases the higher up the person is in the target organization. We have a methodology checklist available to help our usability engineers work through the remote processes.

    Read the article

  • ArchBeat Link-o-Rama for July 3, 2013

    - by Bob Rhubart
    Industrial SOA Chapter 5: Enterprise Service Bus Enterprise Service Bus, the fifth and latest addition to the Industrial SOA article series, answers some of the most important questions surrounding the use of an ESB. Industrial SOA Chapter 4: SOA Maturity The fourth article in the Industrial SOA series, SOA Maturity offers "an exploration of the fundamentals of applying a factory approach to modern service-oriented software development." Using the Exalytics Summary Advisor and Oracle BI Apps 7.9.6.4 | Mark Rittman Oracle ACE Director Mark Rittman's post revisits "the use of the Summary Advisor, with my BI Apps installation bumped-up to version 7.9.6.4, and the Exalytics environment patched up to 11.1.1.6.9, the latest patch release we’ve applied to that environment." Part 1 - 12c Database and WLS - Overview | Steve Felts Steve Felts shares a handy table that "maps the Oracle 12c Database features supported with various combinations of currently available WLS releases, 11g and 12c Drivers, and 11g and 12c Databases." Developers WebCast: Deploy Highly-Available Custom Services on Your Data Grid Products - July 11 Oracle Coherence Sr. Architect Brian Oliver hosts this free July 11 webcast for developers to show you how to "create and deploy customized, highly-available services for your data grid, and how real-time data processing will allow you to provide unmatched end-user experiences." A checklist for OIM go live | Daniel Gralewski FMW A-Team solution architect Daniel Gralewski's list is intended to complement Oracle Identity Manager. His post "provides tips on a few topics that are not part of the documentation." How Many ODI Master Repositories Should We Have? | Christophe Dupupet FMW solution architect Christophe Dupupet provides a simple along with best practices for the architecture of ODI repositories in a corporate environment. Distinguish EA from enterprise wide solution architecture | John Wu My buddy Tony Meyer, who did a great presentation recently at the Cleveland-area Enterprise Architect / Solution Architect Meet-up, recommends this Toolbox article by John Wu. YouTube: Oracle Fusion Applications Developer Tips If you work with Fusion Applications you'll want to check out the tips and tricks for building extensions, customizations, and integrations now available on the new Oracle Fusion Middleware Developer Relations YouTube channel. The CX Factor: Wooing and wowing customers in the digital age "There was a time when 'customer experience' was limited to what happened to you when you walked into a store, restaurant, or other place of business or when you called a business on the telephone. But that was back when you could still smoke on airplanes." Thought for the Day "If you had to identify, in one word, the reason why the human race has not achieved, and never will achieve, its full potential, that word would be 'meetings.' " — Dave Barry (Born July 3, 1947) Source: brainyquote.com

    Read the article

  • Bay Area Coherence Special Interest Group Next Meeting July 21, 2011

    - by csoto
    Date: Thursday, July 21, 2011 Time: 4:30pm - 8:15pm ET (note that Parking at 475 Sansome Closes at 8:30pm) Where: Oracle Office, 475 Sansome Street, San Francisco, CA Google Map We will be providing snacks and beverages. Register! - Registration is required for building security. Presentation Line Up:? 5:10pm - Batch Processing Using Coherence in Oracle Group Policy Administration - Paul Cleary, Oracle Oracle Insurance Policy Administration (OIPA) is a flexible, rules-based policy administration solution that provides full record keeping for all policy lifecycle transactions. One component of OIPA is Cycle processing, which is the batch processing of pending insurance transactions. This presentation introduces OIPA and Cycle processing, describing the unique challenges of processing a high volume of transactions within strict time windows. It then reviews how OIPA uses Oracle Coherence and the Processing Pattern to meet these challenges, describing implementation specifics that highlight the simplicity and robustness of the Processing Pattern. 6:10pm - Secure, Optimize, and Load Balance Coherence with F5 - Chris Akker, F5 F5 Networks, Inc., the global leader in Application Delivery Networking, helps the world’s largest enterprises and service providers realize the full value of virtualization, cloud computing, and on-demand IT. Recently, F5 and Oracle partnered to deliver a novel solution that integrates Oracle Coherence 3.7 with F5 BIG-IP Local Traffic Manager (LTM). This session will introduce F5 and how you can leverage BIG-IP LTM to secure, optimize, and load balance application traffic generated from Coherence*Extend clients across any number of servers in a cluster and to hardware-accelerate CPU-intensive SSL encryption. 7:10pm - Using Oracle Coherence to Enable Database Partitioning and DC Level Fault Tolerance - Alexei Ragozin, Independent Consultant and Brian Oliver, Oracle Partitioning is a very powerful technique for scaling database centric applications. One tricky part of partitioned architecture is routing of requests to the right database. The routing layer (routing table) should know the right database instance for each attribute which may be used for routing (e.g. account id, login, email, etc): it should be fast, it should fault tolerant and it should scale. All the above makes Oracle Coherence a natural choice for implementing such routing tables in partitioned architectures. This presentation will cover synchronization of the grid with multiple databases, conflict resolution, cross cluster replication and other aspects related to implementing robust partitioned architecture. Additional Info:?? - Download Past Presentations: The presentations from the previous meetings of the BACSIG are available for download here. Click on the presentation titles to download the PDF files. - Join the Coherence online community on our Oracle Coherence Users Group on LinkedIn. - Contact BACSIG with any comments, questions, presentation proposals and content suggestions.

    Read the article

  • Don't Miss At Devoxx!!!

    - by Yolande Poirier
    Come by IoT Hack Fest which starts with the session: kickstart your Raspberry Pi and/or Leap Motion project, part II on Tuesday from 9:30am to 12:00pm to learn how to start a project with the Raspberry Pi and Leap Motion. In the afternoon, you can still join a project and create your own project with the help of experts on Raspberry Pi, Leap Motion and other boards.  At the Oracle booth, Java experts will be available  to answer your  questions and demo the new features of the Java Platform, including Java Embedded, JavaFX, Java SE and Java EE. This year, the chess game that was first demoed at JavaOne keynotes last September will be showcased at Devoxx.  Duke is coming to Devoxx this year. You can get your picture taken with Duke on Tuesday, Wednesday and Thursday (Nov. 12-14) from 12:00 to 18:00 Beer bash will be Tuesday from 17:30-19:30 and Wednesday/Thursday from 18:00 to 20:00 at the booth. Oracle is raffling off five Raspberry Pi's and a number of books every day. Make sure to stop by and get your badge scanned to enter the raffle. Raffles are Tuesday at 19:15 and Wednesday/Thursday at 19:45 at the Oracle booth.  The main conference sessions from Oracle Java experts are:  Wednesday 13 November Beyond Beauty: JavaFX, Parallax, Touch, Raspberry Pi, Gyroscopes, and Much More Angela Caicedo, Senior Member, Technical Staff, Oracle Room 7, 12:00–13:00 Lambda: A Peek Under the Hood, Brian Goetz, Software Architect, Oracle Room 8, 12:00–13:00 In Full Flow: Java 8 Lambdas in the Stream, Paul Sandoz, Software Developer, Oracle Room 8, 14:00–15:00 The Modular Java Platform and Project Jigsaw, Mark Reinhold, Chief Architect, Java Platform Group, Oracle, Room 8, 15:10–16:10 The Curious Case of JavaScript on the JVM, Attila Szegedi, Principal Member, Technical Staff, Oracle, Room 5, 16:40–17:40 Is It a Car? Is It a Computer? No, It’s a Raspberry Pi JavaFX Informatics System. Simon Ritter, Principal Technology Evangelist, Oracle Room 7, 16:40–17:40 Thursday 14 November Java EE 7: What’s New in the Java EE Platform Linda DeMichiel, Consulting Member, Technical Staff, Oracle, Room 8, 10:50–11:50 Java Microbenchmark Harness: The Lesser of the Two Evils, Aleksey Shipilev, Principal Member, Technical Staff, Oracle. Room 6, 14:00–15:00 Practical Restful Persistence, Shaun Smith, Senior Principal Product Manager, Oracle Room 8, 17:50–18:50 Friday 15 November Avatar.js, Server-Side JavaScript on the Java Platform, Jean-Francois Denise, Software Developer, Oracle Room 8, 11:50–12:50

    Read the article

  • How do I get long command lines to wrap to the next line?

    - by BrianH
    Edit It was my .bashrc file. I've copied the same profile from machine to machine, and I used special characters in my $PS1 that are somehow throwing it off. I'm now sticking with the standard bash variables for my $PS1. Thanks to @ændrük for the tip on the .bashrc! ...End Edit... Something I have noticed in Ubuntu for a long time that has been frustrating to me is when I am typing a command at the command line that gets longer (wider) than the terminal width, instead of wrapping to a new line, it goes back to column 1 on the same line and starts over-writing the beginning of my command line. (It doesn't actually overwrite the actual command, but visually, it is overwriting the text that was displayed). It's hard to explain without seeing it, but let's say my terminal was 20 characters wide (Mine is more like 120 characters - but for the sake of an example), and I want to echo the English alphabet. What I type is this: echo abcdefghijklmnopqrstuvwxyz But what my terminal looks like before I hit the key is: pqrstuvwxyzghijklmno When I hit enter, it echos abcdefghijklmnopqrstuvwxyz so I know the command was received properly. It just wrapped my typing after the "o" and started over on the same line. What I would expect to happen, if I typed this command in on a terminal that was only 20 characters wide would be this: echo abcdefghijklmno pqrstuvwxyz Background: I am using bash as my shell, and I have this line in my ~/.bashrc: set -o vi to be able to navigate the command line with VI commands. I am currently using Ubuntu 10.10 server, and connecting to the server with Putty. In any other environment I have worked in, if I type a long command line, it will add a new line underneath the line I am working on when my command gets longer than the terminal width and when I keep typing I can see my command on 2 different lines. But for as long as I can remember using Ubuntu, my long commands only occupy 1 line. This also happens when I am going back to previous commands in the history (I hit Esc, then 'K' to go back to previous commands) - when I get to a previous command that was longer than the terminal width, the command line gets mangled and I cannot tell where I am at in the command. The only work-around I have found to see the entire long command is to hit "Esc-V", which opens up the current command in a VI editor. I don't think I have anything odd in my .bashrc file. I commented out the "set -o vi" line, and I still had the problem. I downloaded a fresh copy of Putty and didn't make any changes to the configuration - I just typed in my host name to connect, and I still have the problem, so I don't think it's anything with Putty (unless I need to make some config changes) Has anyone else had this problem, and can anyone think of how to fix it? Thanks in advance! Brian
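Follow-up for anyone hitting the same symptom: the usual culprit is a non-printing sequence (colour codes, title-bar escapes) in $PS1 that is not wrapped in \[ and \]. Bash counts those invisible characters as printable, miscalculates the prompt width, and wraps long commands back onto column 1 instead of onto a new line. A minimal sketch of a safely escaped prompt (the colours here are illustrative, not taken from the original .bashrc):

# In ~/.bashrc - wrap every non-printing escape sequence in \[ ... \] so bash can measure the prompt correctly
PS1='\[\e[0;32m\]\u@\h\[\e[0m\]:\[\e[0;34m\]\w\[\e[0m\]\$ '

After editing, run source ~/.bashrc (or open a new shell) and long command lines should wrap to the next line as expected.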

    Read the article

  • ArchBeat Link-o-Rama for November 16, 2012

    - by Bob Rhubart
X.509 Certificate Revocation Checking Using OCSP protocol with Oracle WebLogic Server 12c | Abhijit Patil Abhijit Patil's article focuses on how to use X.509 Certificate Revocation Checking Functionality with the OCSP protocol to validate in-bound certificates. Although this article focuses on inbound OCSP validation using OCSP, Oracle WebLogic Server 12c also supports outbound OCSP validation. Leveraging Oracle Scorecard and Strategy Management for Everyday BI Needs "Oracle Scorecard and Strategy Management (OSSM) is built-upon the premise that a scorecard system should not be separate from the BI system, like many comparable tools are today," says author Kevin McGinely. "Instead of a separate application with its own data, its own data definitions, and its own front-end, Oracle made the choice to integrate OSSM directly into OBIEE." Applying BI for personal productivity recognition and gamification | Capgemini Oracle Blog "It is quite obvious that if you want people to participate you need an appealing and intuitive user interface," says Capgemini's Henk Vermeulen in this interesting exploration of gamification in the enterprise. Build and release OSB projects with Maven | Edwin Biemond "With Maven we are able to build and deploy OSB projects," says Oracle ACE Edwin Biemond. "The artifacts generated by Maven called snapshots and releases can be automatically uploaded to a software repository. These versioned OSB jars can then be downloaded by the OSB Servers and deployed." Biemond shows you how in this detailed technical post. ADF Generator for Dynamic ADF BC and ADF UI | Andrejus Baranovskis Oracle ACE Director Andrejus Baranovskis' post is an extension of his OOW12 presentation, "Oracle ADF Implementations Around the Globe: Best Practices," and includes the sample application he promised to share. Service-oriented organizations have a head start in the cloud race | ZDNet ZDNet SOA blogger Joe McKendrick offers a snapshot of a recent report by Forrester analyst James Staten. Oracle Fusion Middleware Security: X509 Fallback to Form | Debasish Bhattacharya Oracle Fusion Middleware A-Team architect Debasish Bhattacharya shares a solution that resulted from brainstorming with colleagues Chris Johnson and Brian Eidelman. "The solution is not very difficult," says Bhattacharya, "though it needs some additional configurations and coding." It's all presented in this detailed post. Agile Architecture | David Sprott "There is ample evidence that Agile Architecture is a primary contributor to business agility, yet we do not have a well understood architecture management system that integrates with Agile methods," observes David Sprott in this extensive post. Thought for the Day "Operating systems are like underwear — nobody really wants to look at them." — Bill Joy Source: SoftwareQuotes.com

    Read the article

  • Who Are the BI Users in Your Neighborhood?

    - by [email protected]
    By Brian Dayton on March 19, 2010 10:52 PM Forrester's Boris Evelson recently wrote a blog titled "Who are the BI Personas?" that I enjoyed for a number of reasons. It's a quick read, easy to grasp and (refreshingly) focuses on the users of technology VS the technology. As Evelson admits, he meant to keep the reference chart at a high-level because there are too many different permutations and additional sub-categories to make such a chart useful. For me, I wouldn't head into the technical permutations but more the contextual use of BI and the issues that users experience. My thoughts brought up more questions than answers such as: Context: - HOW: With the exception of the "Power User" persona--likely some sort of business or operations analyst? - WHEN: Are they using the information to make real-time decisions on the front lines (a customer service manager or shipping/logistics VP) or are they using this information for cumulative analysis and business planning? Or both? - WHERE: What areas of the business are more or less likely to rely on BI across an organization? Human Resources, Operations, Facilities, Finance--- and why are some more prone to use data-driven analysis than others? Issues: - DELAYS & DRAG ON IT?: One of the persona characteristics Evelson calls out is a reliance on IT. Every persona except for the "Power User" has a heavy reliance on IT for support. What business issues or delays does that cause to users? What is the drag on IT resources who could potentially be creating instead of reporting? - HOW MANY CLICKS: If BI is being used within the context of a transaction (sales manager looking for upsell opportunities as an example) is that person getting the information within the context of that action or transaction? Or are they minimizing screens, logging into another application or reporting tool, running queries, etc.? Who are the BI Users in your neighborhood or line of business? Do Evelson's personas resonate--and do the tools that he calls out (he refers to it as "BI Style") resonate with what your personas have or need? Finally, I'm very interested if BI use is viewed as a bolt-on...or an integrated part of your daily enterprise processes?

    Read the article

  • My Doors - Why Standards Matter to Business

    - by [email protected]
    By Brian Dayton on April 8, 2010 9:27 PM "Standards save money." "Standards accelerate projects." "Standards make better solutions." What do these statements mean to you? You buy technology solutions like Oracle Applications but you're a business person--trying to close the quarter, get performance reviews processed, negotiate a new sourcing contract, etc. When "standards" come up in presentations and discussions do you: - Nod your head politely - Tune out and check your smart phone - Turn to your IT counterpart and say "Bob's all over this standards thing, right Bob?" Here's why standards matter. My wife wants new external doors downstairs, ones that would get more light into the rooms. Am I OK with that? "Uhh, sure...it's a little dark in the kitchen." - 24 hours ago - wife calls to tell me that she's going to the hardware store and may look at doors - 20 hours ago - wife pulls into driveway, informs me that two doors are in the back of her station wagon, ready for me to carry - 19 hours ago - I re-discovered the fact that it's not fun to carry a solid wood door by myself - 5 hours ago - Local handyman, who was at our house anyway, tells me that the doors we bought will likely cost 2-3x the material cost in installation time and labor...the doors are standard but our doorways aren't We could have done more research. I could be more handy. Sure. But the fact is, my 1951 house wasn't built with me in mind. They built what worked and called it a day. The same holds true with a lot of business applications. They were designed and architected for one-time use with one use-case in mind. Today's business climate is different. If you're going to use your processes and technology to differentiate your business you should have at least a working knowledge of: - How standards can benefit your business - Your IT organization's philosophy around standards - Your vendor's track-record around standards...and watch for those who pay lip-service to standards but don't follow through The rallying cry in most IT organizations today is "learn more about the business, drop the acronyms." I'm not advocating that you go out and learn how to code in Java. But I do believe it will help your business and your decision-making process if you meet IT ½...even ¼ of the way there. Epilogue: The door project has been put on hold and yours truly has to return the doors to the hardware store tomorrow.

    Read the article

  • So Much Happening at Devoxx

    - by Tori Wieldt
    Devoxx, the premier Java conference in Europe, has been sold out for a while. The organizers (thanks Stephan and crew!) cap the attendance to make sure all attendees have a great experience, and that speaks volumes about their priorities. The speakers, hackathons, labs, and networking are all first class. The Oracle Technology Network will be there, and if you were smart/lucky enough to get a ticket, come find us and join the fun: IoT Hack Fest Build fun and creative Internet of Things (IoT) applications with Java Embedded, Raspberry Pi and Leap Motion on the University Days (Monday and Tuesday). Learn from top experts Yara & Vinicius Senger and Geert Bevin at two Raspberry Pi & Leap Motion hands-on labs and hacking sessions. Bring your computer. Training and equipment will be provided. Devoxx will also host an Internet of Things shop in the exhibition floor where attendees can purchase Arduino, Raspberry PI and Robot starter kits. Bring your IoT wish list! Video Interviews Yolande Poirier and I will be interviewing members of the Java Community in the back of the Expo hall on Wednesday and Thursday. Videos are posted on Parleys and YouTube/Java. We have a few slots left, so contact me (you can DM @Java) if you want to share your insights or cool new tip or trick with the rest of the developer community. (No commercials, no fluff. Keep it techie and keep it real.)  Oracle Keynote Wednesday morning Mark Reinhold, Chief Java Platform Architect, and Brian Goetz, Java Language Architect will provide an update on Java 8 and beyond. Oracle Booth Drop by the Oracle booth to see old and new friends.  We'll have Java in Action demos and the experts to explain them and answer your questions. We are raffling off Raspberry Pi's each day, so be sure to get your badged scanned. We'll have beer in the booth each evening. Look for @Java in her lab coat.  See you at Devoxx! 

    Read the article

  • Part 2&ndash;Load Testing In The Cloud

    - by Tarun Arora
    Welcome to Part 2, In Part 1 we discussed the advantages of creating a Test Rig in the cloud, the Azure edge and the Test Rig Topology we want to get to. In Part 2, Let’s start by understanding the components of Azure we’ll be making use of followed by manually putting them together to create the test rig, so… let’s get down dirty start setting up the Test Rig.  What Components of Azure will I be using for building the Test Rig in the Cloud? To run the Test Agents we’ll make use of Windows Azure Compute and to enable communication between Test Controller and Test Agents we’ll make use of Windows Azure Connect.  Azure Connect The Test Controller is on premise and the Test Agents are in the cloud (How will they talk?). To enable communication between the two, we’ll make use of Windows Azure Connect. With Windows Azure Connect, you can use a simple user interface to configure IPsec protected connections between computers or virtual machines (VMs) in your organization’s network, and roles running in Windows Azure. With this you can now join Windows Azure role instances to your domain, so that you can use your existing methods for domain authentication, name resolution, or other domain-wide maintenance actions. For more details refer to an overview of Windows Azure connect. A very useful video explaining everything you wanted to know about Windows Azure connect.  Azure Compute Windows Azure compute provides developers a platform to host and manage applications in Microsoft’s data centres across the globe. A Windows Azure application is built from one or more components called ‘roles.’ Roles come in three different types: Web role, Worker role, and Virtual Machine (VM) role, we’ll be using the Worker role to set up the Test Agents. A very nice blog post discussing the difference between the 3 role types. Developers are free to use the .NET framework or other software that runs on Windows with the Worker role or Web role. Developers can also create applications using languages such as PHP and Java. More on Windows Azure Compute. Each Windows Azure compute instance represents a virtual server... Virtual Machine Size CPU Cores Memory Cost Per Hour Extra Small Shared 768 MB $0.04 Small 1 1.75 GB $0.12 Medium 2 3.50 GB $0.24 Large 4 7.00 GB $0.48 Extra Large 8 14.00 GB $0.96   You might want to review the Windows Azure Pricing FAQ. Let’s Get Started building the Test Rig… Configuration Machine Role Comments VM – 1 Domain Controller for Playpit.com On Premise VM – 2 TFS, Test Controller On Premise VM – 3 Test Agent Cloud   In this blog post I would assume that you have the domain, Team Foundation Server and Test Controller Installed and set up already. If not, please refer to the TFS 2010 Installation Guide and this walkthrough on MSDN to set up your Test Controller. You can also download a preconfigured TFS 2010 VM from Brian Keller's blog, Brian also has some great hands on Labs on TFS 2010 that you may want to explore. I. Lets start building VM – 3: The Test Agent Download the Windows Azure SDK and Tools Open Visual Studio and create a new Windows Azure Project using the Cloud Template                   Choose the Worker Role for reasons explained in the earlier post         The WorkerRole.cs implements the Run() and OnStart() methods, no code changes required. You should be able to compile the project and run it in the compute emulator (The compute emulator should have been installed as part of the Windows Azure Toolkit) on your local machine.                   
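For reference, the generated WorkerRole.cs really is just a stub at this stage; something along these lines (paraphrased from the SDK template, so treat it as a sketch rather than the exact generated code) is all the worker role needs, since the interesting work happens in the service definition and configuration files:

using System.Diagnostics;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Template default: raise the outbound connection limit, then let the role start normally
        ServicePointManager.DefaultConnectionLimit = 12;
        return base.OnStart();
    }

    public override void Run()
    {
        // The role only needs to stay alive; configuration of Connect, RDP, etc. lives in .csdef/.cscfg
        while (true)
        {
            Thread.Sleep(10000);
            Trace.WriteLine("Working", "Information");
        }
    }
}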
We will only be making changes to WindowsAzureProject, open ServiceDefinition.csdef. Ensure that the vmsize is small (remember the cost chart above). Import the “Connect” module. I am importing the Connect module because I need to join the Worker role VM to the Playpit domain. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect"/> </Imports> </WorkerRole> </ServiceDefinition> Go to the ServiceConfiguration.Cloud.cscfg and note that settings with key ‘Microsoft.WindowsAzure.Plugins.Connect.%%%%’ have been added to the configuration file. This is because you decided to import the connect module. See the config below. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration>             Let’s go step by step and understand all the highlighted parameters and where you can find the values for them.       osFamily – By default this is set to 1 (Windows Server 2008 SP2). Change this to 2 if you want the Windows Server 2008 R2 operating system. The Advantage of using osFamily = “2” is that you get Powershell 2.0 rather than Powershell 1.0. In Powershell 2.0 you could simply use “powershell -ExecutionPolicy Unrestricted ./myscript.ps1” and it will work while in Powershell 1.0 you will have to change the registry key by including the following in your command file “reg add HKLM\Software\Microsoft\PowerShell\1\ShellIds\Microsoft.PowerShell /v ExecutionPolicy /d Unrestricted /f” before you can execute any power shell. The other reason you might want to move to os2 is if you wanted IIS 7.5.       Activation Token – To enable communication between the on premise machine and the Windows Azure Worker role VM both need to have the same token. Log on to Windows Azure Management Portal, click on Connect, click on Get Activation Token, this should give you the activation token, copy the activation token to the clipboard and paste it in the configuration file. 
Note – Later in the blog I’ll be showing you how to install connect on the on premise machine.                       EnableDomainJoin – Set the value to true, ofcourse we want to join the on windows azure worker role VM to the domain.       DomainFQDN, DomainControllerFQDN, DomainAccountName, DomainPassword, DomainOU, Administrators – This information is specific to your domain. I have extracted this information from the ‘service manager’ and ‘Active Directory Users and Computers’. Also, i created a new Domain-OU namely ‘CloudInstances’ so all my cloud instances joined to my domain show up here, this is optional. You can encrypt the DomainPassword – refer to the instructions here. Or hold fire, I’ll be covering that when i come to certificates and encryption in the coming section.       Now once you have filled all this information up, the configuration file should look something like below, <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> </ConfigurationSettings> </Role> </ServiceConfiguration> Next we will be enabling the Remote Desktop module in to the ServiceDefinition.csdef, we could make changes manually or allow a beautiful wizard to help us make changes. I prefer the second option. So right click on the Windows Azure project and choose Publish       Now once you get the publish wizard, if you haven’t already you would be asked to import your Windows Azure subscription, this is simply the Msdn subscription activation key xml. Once you have done click Next to go to the Settings page and check ‘Enable Remote Desktop for all roles’.       As soon as you do that you get another pop up asking you the details for the user that you would be logging in with (make sure you enter a reasonable expiry date, you do not want the user account to expire today). Notice the more information tag at the bottom, click that to get access to the certificate section. See screen shot below.       
From the drop down select the option to create a new certificate        In the pop up window enter the friendly name for your certificate. In my case I entered ‘WAC – Test Rig’ and click ok. This will create a new certificate for you. Click on the view button to see the certificate details. Do you see the Thumbprint, this is the value that will go in the config file (very important). Now click on the Copy to File button to copy the certificate, we will need to import the certificate to the windows Azure Management portal later. So, make sure you save it a safe location.                                Click Finish and enter details of the user you would like to create with permissions for remote desktop access, once you have entered the details on the ‘Remote desktop configuration’ screen click on Ok. From the Publish Windows Azure Wizard screen press Cancel. Cancel because we don’t want to publish the role just yet and Yes because we want to save all the changes in the config file.       Now if you go to the ServiceDefinition.csdef file you will see that the RemoteAccess and RemoteForwarder roles have been imported for you. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"> <WorkerRole name="WorkerRole1" vmsize="Small"> <Imports> <Import moduleName="Diagnostics" /> <Import moduleName="Connect" /> <Import moduleName="RemoteAccess" /> <Import moduleName="RemoteForwarder" /> </Imports> </WorkerRole> </ServiceDefinition> Now go to the ServiceConfiguration.Cloud.cscfg file and you see a whole bunch for setting “Microsoft.WindowsAzure.Plugins.RemoteAccess.%%%” values added for you. <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="WindowsAzureProject2" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="2" osVersion="*"> <Role name="WorkerRole1"> <Instances count="1" /> <ConfigurationSettings> <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.ActivationToken" value="45f55fea-f194-4fbc-b36e-25604faac784" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Refresh" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.WaitForConnectivity" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Upgrade" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.EnableDomainJoin" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainFQDN" value="play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainControllerFQDN" value="WIN-KUDQMQFGQOL.play.pit.com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainAccountName" value="playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainPassword" value="************************" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainOU" value="OU=CloudInstances, DC=Play, DC=Pit, DC=com" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.Administrators" value="Playpit\Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.Connect.DomainSiteName" value="" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.Enabled" value="true" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountUsername" value="Administrator" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountEncryptedPassword" 
value="MIIBnQYJKoZIhvcNAQcDoIIBjjCCAYoCAQAxggFOMIIBSgIBADAyMB4xHDAaBgNVBAMME1dpbmRvd 3MgQXp1cmUgVG9vbHMCEGa+B46voeO5T305N7TSG9QwDQYJKoZIhvcNAQEBBQAEggEABg4ol5Xol66Ip6QKLbAPWdmD4ae ADZ7aKj6fg4D+ATr0DXBllZHG5Umwf+84Sj2nsPeCyrg3ZDQuxrfhSbdnJwuChKV6ukXdGjX0hlowJu/4dfH4jTJC7sBWS AKaEFU7CxvqYEAL1Hf9VPL5fW6HZVmq1z+qmm4ecGKSTOJ20Fptb463wcXgR8CWGa+1w9xqJ7UmmfGeGeCHQ4QGW0IDSBU6ccg vzF2ug8/FY60K1vrWaCYOhKkxD3YBs8U9X/kOB0yQm2Git0d5tFlIPCBT2AC57bgsAYncXfHvPesI0qs7VZyghk8LVa9g5IqaM Cp6cQ7rmY/dLsKBMkDcdBHuCTAzBgkqhkiG9w0BBwEwFAYIKoZIhvcNAwcECDRVifSXbA43gBApNrp40L1VTVZ1iGag+3O1" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteAccess.AccountExpiration" value="2012-11-27T23:59:59.0000000+00:00" /> <Setting name="Microsoft.WindowsAzure.Plugins.RemoteForwarder.Enabled" value="true" /> </ConfigurationSettings> <Certificates> <Certificate name="Microsoft.WindowsAzure.Plugins.RemoteAccess.PasswordEncryption" thumbprint="AA23016CF0BDFC344400B5B82706B608B92E4217" thumbprintAlgorithm="sha1" /> </Certificates> </Role> </ServiceConfiguration>          Okay let’s look at them one at a time,       Enabled - Yes, we would like to enable Remote Access.       AccountUserName – This is the user name you entered while you were on the publish windows azure role screen, as detailed above.       AccountEncrytedPassword – Try and decode that, the certificate is used to encrypt the password you specified for the user account. Remember earlier i said, either use the instructions or wait and i’ll be showing you encryption, now the user account i am using for rdp has the same password as my domain password, so i can simply copy the value of the AccountEncryptedPassword to the DomainPassword as well.       AccountExpiration – This is the expiration as you specified in the wizard earlier, make sure your account does not expire today.       Remote Forwarder – Check out the documentation, below is how I understand it, -- One role in an application that implements a remote desktop connection must import the RemoteForwarder module. The two modules work together to enable the remote desktop connections to role instances. -- If you have multiple roles defined in the service model, it does not matter which role you add the RemoteForwarder module to, but you must add it to only one of the role definitions.       Certificate – Remember the certificate thumbprint from the wizard, the on premise machine and windows azure role machine that need to speak to each other must have the same thumbprint. More on that when we install Windows Azure connect Endpoints on the on premise machine. As i said earlier, in this blog post, I’ll be showing you the manual process so i won’t be scripting any star up tasks to install the test agent or register the test agent with the TFS Server. I’ll be showing you all this cool stuff in the next blog post, that’s because it’s important to understand the manual side of it, it becomes easier for you to troubleshoot in case something fails. Having said that, the changes we have made are sufficient to spin up the Windows Azure Worker Role aka Test Agent VM, have it connected with the play.pit.com domain and have remote access enabled on it. Before we deploy the Test Agent VM we need to set up Windows Azure Connect on the TFS Server. II. Windows Azure Connect: Setting up Connect on VM – 2 i.e. TFS & Test Controller Glad you made it so far, now to enable communication between the on premise TFS/Test Controller and Azure-ed Test Agent we need to enable communication. 
We have set up the Azure connect module in the Test Agent configuration, now the connect end points need to be enabled on the on premise machines, let’s have a look at how we can do this. Log on to VM – 2 running the TFS Server and Test Controller Log on to the Windows Azure Management Portal and click on Virtual Network Click on Virtual Network, if you already have a subscription you should see the below screen shot, if not, you would be asked to complete the subscription first        Click on Install Local Endpoints from the top left on the panel and you get a url appended with a token id in it, remember the token i showed you earlier, in theory the token you get here should match the token you added to the Test Agent config file.        Copy the url to the clip board and paste it in IE explorer (important, the installation at present only works out of IE and you need to have cookies enabled in order to complete the installation). As stated in the pop up, you can NOT download and run the software later, you need to run it as is, since it contains a token. Once the installation completes you should see the Windows Azure connect icon in the system tray.                         Right click the Azure Connect icon, choose Diagnostics and refer to this link for diagnostic detail terminology. NOTE – Unfortunately I could not see the Windows Azure connect icon in the system tray, a bit of binging with Google revealed that the azure connect icon is only shown when the ‘Windows Azure Connect Endpoint’ Service is started. So go to services.msc and make sure that the service is started, if not start it, unfortunately again, the service did not start for me on a manual start and i realised that one of the dependant services was disabled, you can look at the service dependencies and start them and then start windows azure connect. Bottom line, you need to start Windows Azure connect service before you can proceed. Please refer here on MSDN for more on Troubleshooting Windows Azure connect. (Follow the next step as well)   Now go back to the Windows Azure Management Portal and from Groups and Roles create a new group, lets call it ‘Test Rig’. Make sure you add the VM – 2 (the TFS Server VM where you just installed the endpoint).       Now if you go back to the Azure Connect icon in the system tray and click ‘Refresh Policy’ you will notice that the disconnected status of the icon should change to ready for connection. III. Importing Certificate in to Windows Azure Management Portal But before that you need to import the certificate you created in Step I in to the Windows Azure Management Portal. Log on to the Windows Azure Management Portal and click on ‘Hosted Services, Storage Accounts & CDN’ and then ‘Management Certificates’ followed by Add Certificates as shown in the screen shot below        Browse to the location where you saved the certificate earlier, remember… Refer to Step I in case you forgot.        Now you should be able to see the imported certificate here, make sure the thumbprint of the certificate matches the one you inserted in the config files        IV. Publish Windows Azure Worker Role aka Test Agent Having completed I, II and III, you are ready to publish the Test Agent VM – 3 to the cloud. Go to Visual Studio and right click the Windows Azure project and select Publish. Verify the infomration in the wizard, from the advanced settings tab, you can also enabled capture of intellitrace or profiling information.         Click Next and Click Publish! 
From the view menu bar select the Windows Azure Activity Log window.       Now you should be able to see the deployment progress in real time.             In the Windows Azure Management Portal, you should also be able to see the progress of creation of a new Worker Role.       Once the deployment is complete you should be able to RDP (go to run prompt type mstsc and in the pop up the machine name) in to the Test Agent Worker Role VM from the Playpit network using the domain admin user account. In case you are unable to log in to the Test Agent using the domain admin user account it means the process of joining the Test Agent to the domain has failed! But the good news is, because you imported the connect module, you can connect to the Test Agent machine using Windows Azure Management Portal and troubleshoot the reason for failure, you will be able to log in with the user name and password you specified in the config file for the keys ‘RemoteAccess.AccountUsername, RemoteAccess.EncryptedPassword (just that enter the password unencrypted)’, fix it or manually join the machine to the domain. Once you have managed to Join the Test Agent VM to the Domain move to the next step.      So, log in to the Test Agent Worker Role VM with the Playpit Domain Administrator and verify that you can log in, the machine is connected to the domain and the connect service is successfully running. If yes, give your self a pat on the back, you are 80% mission accomplished!         Go to the Windows Azure Management Portal and click on Virtual Network, click on Groups and Roles and click on Test Rig, click Edit Group, the edit the Test Rig group you created earlier. In the Connect to section, click on Add to select the worker role you have just deployed. Also, check the ‘Allow connections between endpoints in the group’ with this you will enable to communication between test controller and test agents and test agents/test agents. Click Save.      Now, you are ready to deploy the Test Agent software on the Worker Role Test Agent VM and configure it to work with the Test Controller. V. Configuring VM – 3: Installing Test Agent and Associating Test Agent to Controller Log in to the Worker Role Test Agent VM that you have just successfully deployed, make sure you log in with the domain administrator account. Download the All Agents software from MSDN, ‘en_visual_studio_agents_2010_x86_x64_dvd_509679.iso’, extract the iso and navigate to where you have extracted the iso. In my case, i have extracted the iso to “C:\Resources\Temp\VsAgentSetup”. Open the Test Agent folder and double click on setup.exe. Once you have installed the Test Agent you should reach the configuration window. If you face any issues installing TFS Test Agent on the VM, refer to the walkthrough on MSDN.       Once you have successfully installed the Test Agent software you will need to configure the test agent. Right click the test agent configuration tool and run as a different user. i.e. an Administrator. This is really to run the configuration wizard with elevated privileges (you might have UAC block something's otherwise).        In the run options, you can select ‘service’ you do not need to run the agent as interactive un less you are running coded UI tests. I have specified the domain administrator to connect to the TFS Test Controller. In real life, i would never do that, i would create a separate test user service account for this purpose. 
But for the blog post, we are using the most powerful user so that any policies or restrictions don’t block you.        Click the Apply Settings button and you should be all green! If not, the summary usually gives helpful error messages that you can resolve and proceed. As per my experience, you may run in to either a permission or a firewall blocking communication issue.        And now the moment of truth! Go to VM –2 open up Visual Studio and from the Test Menu select Manage Test Controller       Mission Accomplished! You should be able to see the Test Agent that you have just configured here,         VI. Creating and Running Load Tests on your brand new Azure-ed Test Rig I have various blog posts on Performance Testing with Visual Studio Ultimate, you can follow the links and videos below, Blog Posts: - Part 1 – Performance Testing using Visual Studio 2010 Ultimate - Part 2 – Performance Testing using Visual Studio 2010 Ultimate - Part 3 – Performance Testing using Visual Studio 2010 Ultimate Videos: - Test Tools Configuration & Settings in Visual Studio - Why & How to Record Web Performance Tests in Visual Studio Ultimate - Goal Driven Load Testing using Visual Studio Ultimate Now that you have created your load tests, there is one last change you need to make before you can run the tests on your Azure Test Rig, create a new Test settings file, and change the Test Execution method to ‘Remote Execution’ and select the test controller you have configured the Worker Role Test Agent against in our case VM – 2 So, go on, fire off a test run and see the results of the test being executed on the Azur-ed Test Rig. Review and What’s next? A quick recap of the benefits of running the Test Rig in the cloud and what i will be covering in the next blog post AND I would love to hear your feedback! Advantages Utilizing the power of Azure compute to run a heavy virtual user load. Benefiting from the Azure flexibility, destroy Test Agents when not in use, takes < 25 minutes to spin up a new Test Agent. Most important test Network Latency, (network latency and speed of connection are two different things – usually network latency is very hard to test), by placing the Test Agents in Microsoft Data centres around the globe, one can actually test the lag in transferring the bytes not because of a slow connection but because the page has been requested from the other side of the globe. Next Steps The process of spinning up the Test Agents in windows Azure is not 100% automated. I am working on the Worker process and power shell scripts to make the role deployment, unattended install of test agent software and registration of the test agent to the test controller automated. In the next blog post I will show you how to make the complete process unattended and automated. Remember to subscribe to http://feeds.feedburner.com/TarunArora. Hope you enjoyed this post, I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment. See you in Part III.   Share this post : CodeProject
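One detail worth spelling out from the walkthrough above is how to produce the encrypted password values by hand rather than relying on the publish wizard. The commonly used approach is to encrypt the plain-text password with the same certificate referenced in the configuration; the sketch below is PowerShell I would expect to work for this, not the exact script from this series. The thumbprint is the one shown in the .cscfg above, the password is a placeholder, and the certificate store path may need adjusting to wherever the wizard actually created the certificate.

Add-Type -AssemblyName System.Security
$thumbprint = "AA23016CF0BDFC344400B5B82706B608B92E4217"   # certificate created by the publish wizard
$cert = Get-Item "Cert:\CurrentUser\My\$thumbprint"         # adjust CurrentUser vs LocalMachine to match your store
$bytes = [System.Text.Encoding]::UTF8.GetBytes("YourPlainTextPasswordHere")
$content = New-Object System.Security.Cryptography.Pkcs.ContentInfo -ArgumentList (,$bytes)
$envelope = New-Object System.Security.Cryptography.Pkcs.EnvelopedCms $content
$envelope.Encrypt((New-Object System.Security.Cryptography.Pkcs.CmsRecipient $cert))
[Convert]::ToBase64String($envelope.Encode())               # paste this value into AccountEncryptedPassword / DomainPassword

Because the RDP account and the domain account share the same certificate, the same Base64 output can be reused for both settings, which is exactly what the walkthrough does with the wizard-generated value.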

    Read the article

  • Community Conversation

    - by ultan o'broin
    Applications User Experience members (Erika Webb, Laurie Pattison, and I) attended the User Assistance Europe Conference in Stockholm, Sweden. We were impressed with the thought leadership and practical application of ideas in Anne Gentle's keynote address "Social Web Strategies for Documentation". After the conference, we spoke with Anne to explore the ideas further. Applications User Experience Senior Director Laurie Pattison (left) with Anne Gentle at the User Assistance Europe Conference In Anne's book called Conversation and Community: The Social Web for Documentation, she explains how user assistance is undergoing a seismic shift. The direction is away from the old print manuals and online help concept towards a web-based, user community-driven solution using social media tools. User experience professionals now have a vast range of such tools to start and nurture this "conversation": blogs, wikis, forums, social networking sites, microblogging systems, image and video sharing sites, virtual worlds, podcasts, instant messaging, mashups, and so on. That user communities are a rich source of user assistance is not a surprise, but the extent of available assistance is. For example, we know from the Consortium for Service Innovation that there has been an 'explosion' of user-generated content on the web. User-initiated community conversations provide as much as 30 times the number of official help desk solutions for consortium members! The growing reliance on user community solutions is clearly a user experience issue. Anne says that user assistance as conversation "means getting closer to users and helping them perform well. User-centered design has been touted as one of the most important ideas developed in the last 20 years of workplace writing. Now writers can take the idea of user-centered design a step further by starting conversations with users and enabling user assistance in interactions." Some of Anne's favorite examples of this paradigm shift from the world of traditional documentation to community conversation include: * Writer Bob Bringhurst's blog about Adobe InDesign and InCopy products and Adobe's community help * The Microsoft Development Network Community Center * ·The former Sun (now Oracle) OpenDS wiki, NetBeans Ruby and other community approaches to engage diverse audiences using screencasts, wikis, and blogs. * Cisco's customer support wiki, EMC's community, as well as Symantec and Intuit's approaches * The efforts of Ubuntu, Mozilla, and the FLOSS community generally Adobe Writer Bob Bringhurst's Blog Oracle is not without a user community conversation too. Besides the community discussions and blogs around documentation offerings, we have the My Oracle Support Community forums, Oracle Technology Network (OTN) communities, wiki, blogs, and so on. We have the great work done by our user groups and customer councils. Employees like David Haimes are reaching out, and enthusiastic non-employee gurus like Chet Justice (OracleNerd), Floyd Teter and Eddie Awad provide great "how-to" information too. But what does this paradigm shift mean for existing technical writers as users turn away from the traditional printable PDF manual deliverables? We asked Anne after the conference. The writer role becomes one of conversation initiator or enabler. The role evolves, along with the process, as the users define their concept of user assistance and terms of engagement with the product instead of having it pre-determined. 
It is largely a case now of "inventing the job while you're doing it, instead of being hired for it" Anne said. There is less emphasis on formal titles. Anne mentions that her own title "Content Stacker" at OpenStack; others use titles such as "Content Curator" or "Community Lead". However, the role remains one essentially about communications, "but of a new type--interacting with users, moderating, curating content, instead of sitting down to write a manual from start to finish." Clearly then, this role is open to more than professional technical writers. Product managers who write blogs, developers who moderate forums, support professionals who update wikis, rock star programmers with a penchant for YouTube are ideal. Anyone with the product knowledge, empathy for the user, and flair for relationships on the social web can join in. Some even perform these roles already but do not realize it. Anne feels the technical communicator space will move from hiring new community conversation professionals (who are already active in the space through blogging, tweets, wikis, and so on) to retraining some existing writers over time. Our own research reveals that the established proponents of community user assistance even set employee performance objectives for internal content curators about the amount of community content delivered by people outside the organization! To take advantage of the conversations on the web as user assistance, enterprises must first establish where on the spectrum their community lies. "What is the line between community willingness to contribute and the enterprise objectives?" Anne asked. "The relationship with users must be managed and also measured." Anne believes that the process can start with a "just do it" approach. Begin by reaching out to existing user groups, individual bloggers and tweeters, forum posters, early adopter program participants, conference attendees, customer advisory board members, and so on. Use analytical tools to measure the level of conversation about your products and services to show a return on investment (ROI), winning management support. Anne emphasized that success with the community model is dependent on lowering the technical and motivational barriers so that users can readily contribute to the conversation. Simple tools must be provided, and guidelines, if any, must be straightforward but not mandatory. The conversational approach is one where traditional style and branding guides do not necessarily apply. Tools and infrastructure help users to create content easily, to search and find the information online, read it, rate it, translate it, and participate further in the content's evolution. Recognizing contributors by using ratings on forums, giving out Twitter kudos, conference invitations, visits to headquarters, free products, preview releases, and so on, also encourages the adoption of the conversation model. The move to conversation as user assistance is not free, but there is a business ROI. The conversational model means that customer service is enhanced, as user experience moves from a functional to a valued, emotional level. Studies show a positive correlation between loyalty and financial performance (Consortium for Service Innovation, 2010), and as customer experience and loyalty become key differentiators, user experience professionals cannot explore the model's possibilities. 
The digital universe (measured at 1.2 million petabytes in 2010) is doubling every 12 to 18 months, and 70 percent of that universe consists of user-generated content (IDC, 2010). Conversation as user assistance cannot be ignored but must be embraced. It is a time to manage for abundance, not scarcity. Besides, the conversation approach certainly sounds more interesting, rewarding, and fun than the traditional model! I would like to thank Anne for her time and thoughts, and recommend that all user assistance professionals read her book. You can follow Anne on Twitter at: http://www.twitter.com/annegentle. Oracle's Acrolinx IQ deployment was used to author this article.

    Read the article

  • General monitoring for SQL Server Analysis Services using Performance Monitor

    - by Testas
A recent customer engagement required the setup of a monitoring solution for SSAS. Due to the time restrictions placed upon this, the native Windows Performance Monitor (Perfmon) and SQL Server Profiler monitoring tools were used, as a third-party tool would have meant the customer providing an additional monitoring server that was not available. I wanted to outline the performance monitoring counters that were used to monitor the system on which SSAS was running. Due to the slow query performance that was occurring during certain scenarios, Perfmon was used to establish if any pressure was being placed on the Disk, CPU or Memory subsystem when concurrent connections access the same query, and Profiler to pinpoint how the query was being managed within SSAS; Profiler I will leave for another blog. This guide is not designed to provide a definitive list of what should be used when monitoring SSAS; different situations may require the addition or removal of counters as presented by the situation. However I hope that it serves as a good basis for starting your monitoring of SSAS. I would also like to acknowledge Chris Webb’s awesome chapters from “Expert Cube Development” that also helped shape my monitoring strategy: http://cwebbbi.spaces.live.com/blog/cns!7B84B0F2C239489A!6657.entry Simulating Connections To simulate the additional connections to the SSAS server whilst monitoring, I used ascmd to simulate multiple connections to the typical and worst performing queries that were identified by the customer. A similar script can be downloaded from codeplex at http://www.codeplex.com/SQLSrvAnalysisSrvcs. File name: ASCMD_StressTestingScripts.zip. Performance Monitor Within Performance Monitor, a counter log was created that contained the list of counters below. The important point to note when running the counter log is that the RUN AS property within the counter log properties should be changed to an account that has rights to the SSAS instance when monitoring MSAS counters. Failure to do so means that the counter log runs under the system account; no errors or warnings are given while running the counter log, and it is not until you need to view the MSAS counters that you discover they will not be displayed because the log ran under the default account that has no rights to SSAS. If your connection simulation takes hours, this could prove quite frustrating if not done beforehand. The counters used…  Object Counter Instance Justification System Processor Queue length N/A Indicates how many threads are waiting for execution against the processor. If this counter is consistently higher than around 5 when processor utilization approaches 100%, then this is a good indication that there is more work (active threads) available (ready for execution) than the machine's processors are able to handle. System Context Switches/sec N/A Measures how frequently the processor has to switch from user- to kernel-mode to handle a request from a thread running in user mode. The heavier the workload running on your machine, the higher this counter will generally be, but over long term the value of this counter should remain fairly constant.
If this counter suddenly starts increasing however, it may be an indication of a malfunctioning device, especially if the Processor\Interrupts/sec\(_Total) counter on your machine shows a similar unexplained increase Process % Processor Time sqlservr Definitely should be used if Processor\% Processor Time\(_Total) is maxing at 100% to assess the effect of the SQL Server process on the processor Process % Processor Time msmdsrv Definitely should be used if Processor\% Processor Time\(_Total) is maxing at 100% to assess the effect of the Analysis Services process on the processor Process Working Set sqlservr If the Memory\Available bytes counter is decreasing this counter can be run to indicate if the process is consuming larger and larger amounts of RAM. Process(instance)\Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault. Process Working Set msmdsrv If the Memory\Available bytes counter is decreasing this counter can be run to indicate if the process is consuming larger and larger amounts of RAM. Process(instance)\Working Set measures the size of the working set for each process, which indicates the number of allocated pages the process can address without generating a page fault. Processor % Processor Time _Total and individual cores Measures the total utilization of your processor by all running processes. If multi-proc then be mindful only an average is provided Processor % Privileged Time _Total To see how the OS is handling basic IO requests. If kernel mode utilization is high, your machine is likely underpowered as it's too busy handling basic OS housekeeping functions to be able to effectively run other applications. Processor % User Time _Total To see how the applications are interacting from a processor perspective; a high percentage utilisation indicates that the server is dealing with too many apps and may require increasing the hardware or scaling out Processor Interrupts/sec _Total The average rate, in incidents per second, at which the processor received and serviced hardware interrupts. Should be consistent over time but a sudden unexplained increase could indicate a device malfunction which can be confirmed using the System\Context Switches/sec counter Memory Pages/sec N/A Indicates the rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays; this is the primary counter to watch for indication of possible insufficient RAM to meet your server's needs. A good idea here is to configure a perfmon alert that triggers when the number of pages per second exceeds 50 per paging disk on your system. May also want to see the configuration of the page file on the Server Memory Available Mbytes N/A Is the amount of physical memory, in megabytes, available to processes running on the computer. If this counter is greater than 10% of the actual RAM in your machine then you probably have more than enough RAM. Monitor it regularly to see if any downward trend develops, and set an alert to trigger if it drops below 2% of the installed RAM. Physical Disk Disk Transfers/sec for each physical disk If it goes above 10 disk I/Os per second then you've got poor response time for your disk. Physical Disk Idle Time _total If Disk Transfers/sec is above 25 disk I/Os per second use this counter, which measures the percent time that your hard disk is idle during the measurement interval; if you see this counter fall below 20% then you've likely got read/write requests queuing up for your disk which is unable to service these requests in a timely fashion. Physical Disk Disk queue length For the OLAP and SQL physical disk A value that is consistently less than 2 means that the disk system is handling the IO requests against the physical disk Network Interface Bytes Total/sec For the NIC Should be monitored over a period of time to see if there is an increase/decrease in network utilisation Network Interface Current Bandwidth For the NIC Is an estimate of the current bandwidth of the network interface in bits per second (BPS). MSAS 2005: Memory Memory Limit High KB N/A Shows (as a percentage) the high memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini MSAS 2005: Memory Memory Limit Low KB N/A Shows (as a percentage) the low memory limit configured for SSAS in C:\Program Files\Microsoft SQL Server\MSAS10.MSSQLSERVER\OLAP\Config\msmdsrv.ini MSAS 2005: Memory Memory Usage KB N/A Displays the memory usage of the server process. MSAS 2005: Memory File Store KB N/A Displays the amount of memory that is reserved for the Cache. Note if total memory limit in the msmdsrv.ini is set to 0, no memory is reserved for the cache MSAS 2005: Storage Engine Query Queries from Cache Direct / sec N/A Displays the rate of queries answered from the cache directly MSAS 2005: Storage Engine Query Queries from Cache Filtered / Sec N/A Displays the rate of queries answered by filtering an existing cache entry. MSAS 2005: Storage Engine Query Queries from File / Sec N/A Displays the rate of queries answered from files. MSAS 2005: Storage Engine Query Average time /query N/A Displays the average time of a query MSAS 2005: Connection Current connections N/A Displays the number of connections against the SSAS instance MSAS 2005: Connection Requests / sec N/A Displays the rate of query requests per second MSAS 2005: Locks Current Lock Waits N/A Displays the number of connections waiting on a lock MSAS 2005: Threads Query Pool job queue Length N/A The number of queries in the job queue MSAS 2005:Proc Aggregations Temp file bytes written/sec N/A Shows the number of bytes of data processed in a temporary file MSAS 2005:Proc Aggregations Temp file rows written/sec N/A Shows the number of rows of data processed in a temporary file
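If you prefer to script the counter log rather than assemble it by hand in the Perfmon UI, logman can create and start an equivalent collection from the command line. The commands below use a representative subset of the counters in the table above; the output path, sample interval and the exact MSAS object names are illustrative and should be checked against your instance (named instances expose the MSAS counters under a different object name).

logman create counter SSAS_Baseline -f csv -si 15 -o "C:\PerfLogs\SSAS_Baseline" -c "\System\Processor Queue Length" "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" "\Memory\Pages/sec" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\MSAS 2005:Memory\Memory Usage KB" "\MSAS 2005:Connection\Current connections" "\MSAS 2005:Threads\Query pool job queue length"
logman update counter SSAS_Baseline -u YOURDOMAIN\ssas_monitor *
logman start SSAS_Baseline
logman stop SSAS_Baseline

The update command mirrors the RUN AS advice above: it runs the collection under an account that has rights to the SSAS instance (the * prompts for the password); the account name here is a placeholder, not one from the original engagement.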

    Read the article

< Previous Page | 53 54 55 56 57 58 59 60 61  | Next Page >