Search Results

Search found 29534 results on 1182 pages for 'bill best'.


  • Measuring Social Media Efforts

    - by David Dorf
    So you're on the bandwagon and you've created a Facebook page, you're tweeting every day, and maybe you've even got a YouTube channel. Now what? After you put any program in place, you need to measure, set new goals, then execute, and this is no different. But how does one measure social media efforts? First, I guess we need some goals. Typical ones might be to acquire customers, engage them, then convert them. So that translates to: increase Facebook fans and Twitter followers; increase comments/postings and retweets; increase redemption of offers via Facebook and Twitter. Counting fans and followers is easy, and tracking the redemption of coupons isn't that hard either, but measuring engagement is a tough one. How do you know whether your fans are reading your posts, and whether your posts have any meaning to them? For Facebook, the fan page administrator has access to analytics called Facebook Insights. There you can check weekly metrics such as total fans, new fans, lost fans, demographics of fans, number of postings, number of clicks, etc. Not nearly as comprehensive as Google Analytics, but well on its way. For Twitter, getting information is a little tougher. Again, it's easy to track followers, and you can use tools like TweetMeme to encourage and track retweets. An interesting website called WeFollow tries to measure influence for certain topics. For example, the top three influencers for the topic "retail" are retailweek, retailwire, and retailerdaily. Other notables are #10 BestBuy, #11 GapOfficial, #12 JeffPR, and #17 OracleRetail. I assume influence is calculated based on number of followers, number of retweets, frequency of tweets, and perhaps depth of dialogs. If you want to get serious about monitoring and measuring social marketing efforts, you'd be wise to invest in a strong tool. Several are listed on this wiki, including big ones like Radian6, Nielsen, Omniture, and Buzzient. Buzzient might be particularly interesting because it's integrated with Oracle CRM OnDemand -- see the demo. As always, I'm interested in hearing how others approach goal setting and monitoring of social media efforts, so feel free to post comments.

    Read the article

  • SQL Server Management Data Warehouse - quick tour on setting health monitoring policies

    - by ssqa.net
    Profiler, Perfmon, DMVs & scripts are legendary tools for a DBA to monitor the SQL arena. In line with these tools, SQL Server 2008 adds a powerful capability with the policy-based management (PBM) framework and the management data warehouse (MDW), a relational database that contains the data collected from servers that are data collection targets. This data is used to generate the reports for the System Data collection sets, and can also be used to create custom reports. .....(read more)

    Read the article

  • Practical Performance Monitoring and Tuning Event

    - by Andrew Kelly
    For any of you who may be interested, or who know of someone in the market for a performance monitoring and tuning class, I have just the ticket for you. It's a 3-day event that will be held in Atlanta, GA on January 25th to the 27th, 2011. For those of you who know me or have been to my sessions, you realize I like to provide more than just classroom theory and like to teach real-world and, above all, practical methodology when it comes to performance in SQL Server. This class covers all the essentials...(read more)

    Read the article

  • Easy Listening = CRM On Demand Podcasts

    - by Anne
    OK, here's my NEW favorite resource for CRM On Demand info -- podcasts! Specifically, the CRM On Demand Podcast site -- signed, sealed, and delivered with humor and know-how. Yes, I admit, I know the cast of characters. But let's face it, sometimes dealing with software is just soooo dry! Not so when discussed by the two main commentators, Louis Peters and Robert Davidson, whom someone once referred to as CRM On Demand's "Click and Clack." (Thought that was too good not to pass along!) Anyhow, another huge plus about the site is the option to listen OR to read. Out walking my dog or doing the dishes? Just turn up the podcast. Listening to music or watching TV? I'll read Louis's entertaining write-ups to glean great info about CRM On Demand in a very short period of time. So that you get a better understanding of why I like this site so much, here's a sampling of what's discussed: Five Things about Books of Business As Louis Peters put it in his entry, when you see "Five Things" in the title, "you'll know you're going to get some concrete advice that you can put to work right away." Well, Louis and Robert do just that, pointing you in the right direction when using Books of Business to segment data. Moving to Indexed Fields - A Rough Guide (only an article, not a podcast) I've read all about performance and even helped develop material around it. But nowhere have I heard indexed custom fields referred to as "super heroes." Louis and Robert use imaginative language to describe the process for moving your data to indexed fields for optimal performance. Data Access QA from the Forums I think that everyone would admit that data access and visibility is the most difficult topic to understand in CRM On Demand. Following up on their previous podcast on the same topic, Louis and Robert answer a few key questions from the many postings on the Oracle CRM On Demand forums. And I bet that the scenarios match many companies' business requirements...maybe even yours! We Need to Talk About Adoption Another expert, Tim Koehler, joins Louis to talk about how to drive user adoption: aligning product usage with business results, communicating why and how to use the product, getting feedback on usability, and so on. Hope I've made my point -- turn to these podcasts to hear knowledgeable folks discuss CRM On Demand tips and tricks in entertaining ways. One podcast is even called "SaaS Talk"!

    Read the article

  • Please recommend citations for source code documentation standards

    - by Aerik
    I'm trying to convince another group in my company that they need to provide more documentation in their source code (they want to hand off the code to my group) but they're treating it as a "nice to have". In my view, it's a necessity. I've run a source code analysis tool and it's showing about 10% comment lines - but looking at the source code, most of that is coming from entire functions that the author has commented out. Can anyone provide some authoritative citations / references for documentation / comment standards for source code? (In case it matters, we're a C# house, with a little Matlab thrown in).

    Read the article

  • BPM best practice by David Read and Niall Commiskey

    - by JuergenKress
    At our SOA Community Workspace (SOA Community membership required) you can find best practice documents for BPM implementations. Please make sure that your BPM experts and architects read these documents if you start or work on a BPM project. The material was created based on the experience with large BPM implementations: 11g-Runtime-Overview-v1.pptx Advanced-BPM-Session1-v2.pptx Error-Handling-v4.pptx BPM-MessageRecovery-Final.doc Also, we can support you with your BPM project on-site. Please contact us if you need BPM support! SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Technorati Tags: BPM,Niall Commiskey,David Read,BPM best practice,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • UPK Customer Success Story: The City and County of San Francisco

    - by karen.rihs(at)oracle.com
    The value of UPK during an upgrade is a hot topic and was a primary focus during our latest customer roundtable featuring The City and County of San Francisco: Leveraging UPK to Accelerate Your PeopleSoft Upgrade. As the Change Management Analyst for their PeopleSoft 9.0 HCM project (Project eMerge), Jan Crosbie-Taylor provided a unique perspective on how they're utilizing UPK and UPK pre-built content early on to successfully manage change for thousands of city and county employees and retirees as they move to this new release. With the first phase of the project going live next September, it's important to the City and County of San Francisco to 1) ensure that the various constituents are brought along with the project team, and 2) focus on the end user aspects of the implementation, including training. Here are some highlights on how UPK and UPK pre-built content are helping them accomplish this: As a former documentation manager, Jan really appreciates the power of UPK as a single source content creation tool. It saves them time by streamlining the documentation creation process, enabling them to record content once, then repurpose it multiple times. With regard to change management, UPK has enabled them to educate the project team and gain critical buy-in and support by familiarizing users with the application early on through User Experience Workshops and by promoting UPK at meetings whenever possible. UPK has helped create awareness for the project, making the project real to users. They are taking advantage of UPK pre-built content to: Educate the project team and subject matter experts on how PeopleSoft 9.0 works as delivered Create a guide/storyboard for their own recording Save time/effort and create consistency by enhancing their recorded content with text and conceptual information from the pre-built content Create PeopleSoft Help for their development databases by publishing and integrating the UPK pre-built content into the application help menu Look ahead to the next release of PeopleTools, comparing the differences to help the team evaluate which version to use with their implementation When it comes time for training, they will be utilizing UPK in the classroom, eliminating the time and cost of maintaining training databases. Instructors will be able to carry all training content on a thumb drive, allowing them to easily provide consistent training at their many locations, regardless of the environment. Post go-live, they will deploy the same UPK content to provide just-in-time, in-application support for the entire system via the PeopleSoft Help menu and their PeopleSoft Enterprise Portal. Users will already be comfortable with UPK as a source of help, having been exposed to it during classroom training. They are also using UPK for a non-Oracle application called JobAps, an online job application solution used by many government organizations. Jan found UPK's object recognition to be excellent, yet it's been incredibly easy for her to change text or a field name if needed. Please take time to listen to this recording. The City and County of San Francisco's UPK story is very exciting, and Jan shared so many great examples of how they're taking advantage of UPK and UPK pre-built content early on in their project. We hope others will be able to incorporate these into their projects. Many thanks to Jan for taking the time to share her experiences and creative uses of UPK with us! - Karen Rihs, Oracle UPK Outbound Product Management

    Read the article

  • Combining multiple sprites vs separate sprites

    - by david oliver
    I have a character which can hold ten types of weapons. Should I: 1) create ten sets of animations for the character, one with each weapon, or 2) create animations for each weapon and programmatically draw them on the character? Option 1 is simpler in general, but requires more work from the artist and results in a larger game size. Option 2, to me, is a programming nightmare... What's the better practice in general? Thanks.

    Read the article

  • Oracle collaborates with leading IT vendors on Cloud Management Standards

    - by Anand Akela
    During the last couple of days, two key specifications for cloud management standards have been announced. Oracle collaborated with leading technology vendors from the IT industry on both of these cloud management specifications. One of the specifications focuses on the "Infrastructure as a Service" (IaaS) cloud service model, while the other specification announced today focuses on the "Platform as a Service" (PaaS) cloud service model. Please see The NIST Definition of Cloud Computing to learn more about IaaS and PaaS. Earlier today Oracle, CloudBees, Cloudsoft, Huawei, Rackspace, Red Hat, and Software AG announced the Cloud Application Management for Platforms (CAMP) specification that will be submitted to the Organization for the Advancement of Structured Information Standards (OASIS) for development of an industry standard, in an effort to help ensure interoperability for deploying and managing applications across cloud environments. Typical PaaS architecture - Source: CAMP specification. The CAMP specification defines the artifacts and APIs that need to be offered by a PaaS cloud to manage the building, running, administration, monitoring and patching of applications in the cloud. Its purpose is to enable interoperability among self-service interfaces to PaaS clouds by defining artifacts and formats that can be used with any conforming cloud and enable independent vendors to create tools and services that interact with any conforming cloud using the defined interfaces. Cloud vendors can use these interfaces to develop new PaaS offerings that will interact with independently developed tools and components. In a separate cloud standards announcement yesterday, the Distributed Management Task Force (DMTF), the organization bringing the IT industry together to collaborate on systems management standards development, validation, promotion and adoption, released the new Cloud Infrastructure Management Interface (CIMI) specification. Oracle collaborated with various technology vendors and industry organizations on this specification. CIMI standardizes interactions between cloud environments to achieve interoperable cloud infrastructure management between service providers and their consumers and developers, enabling users to manage their cloud infrastructure use easily and without complexity. DMTF developed CIMI as a self-service interface for infrastructure clouds (IaaS focus), allowing users to dynamically provision, configure and administer their cloud usage with a high-level interface that greatly simplifies cloud systems management. Mark Carlson, Principal Cloud Strategist at Oracle, provides more details about CAMP and CIMI in his blog. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Microsoft SQL Server High-Availability Videos and Q&A Log

    - by KKline
    You Want Videos? We Got Videos! I always enjoy getting the chance to catch up with author, consultant, and Microsoft Clustering MVP Allan Hirt . Allan and I recently presented two sessions covering an overview of high availability in Microsoft SQL Server and, the following week, a demo of how to implement several different kinds of high availability techniques including database mirroring, transactional replication, and Windows clustering services. You can see videos of these presentations at the...(read more)

    Read the article

  • How common are "bandage" fixes?

    - by gablin
    Imagine the following scenario: You've detected that your (or someone else's) program has a bug - a function produces the wrong result when given a particular input. You examine the code and can't find anything wrong: it just seems to bog out when given this input. You can now do one of two things: you either examine the code further until you've found the actual cause; or you slap on a bandage by adding an if statement checking if the input is this particular input - if it is, return the expected value. To me, applying the bandage would be completely unacceptable. If the code is behaving unexpectedly on this input, what other input that you've missed will it react strangely to? It just doesn't seem like a fix at all - you're just sweeping the problem under the rug. As I wouldn't even consider doing this, I'm surprised at how often the professors and books keep reminding us that applying "bandage" fixes is not a good idea. So this makes me wonder: just how common are these kinds of "fixes"?

    Read the article

  • Tom Kyte Webcast on Oracle Maximum Availability Architecture Best Practices - Thursday, April 12 @ 10:00 AM PDT

    - by jgelhaus
    Date: Thursday, April 12, 2012 Time: 10:00 AM PDT Update Your Knowledge with Oracle Expert Tom Kyte Data is one of the most critical assets of any organization with many operations depending on having complete and accurate data available 24/7. By implementing Oracle’s Maximum Availability Architecture (MAA), organizations can minimize the cost and risk associated with downtime. Oracle’s MAA best practices extend beyond Oracle Database to span a broad range of products, including Oracle Exadata and Oracle Database Appliance. Join Oracle expert Tom Kyte for this Live Webcast to learn how to: Protect your systems from planned and unplanned downtime Achieve the highest quality of service at the lowest cost Eliminate idle redundancy in the data center Register today and ask Tom your questions around availability best practices.

    Read the article

  • UPK Content State

    - by peter.maravelias
    State is an editable property for communicating the status of a document in the UPK library. This is particularly helpful when working with other authors in a development team. Authors can assign a state to any document using the values that are defined in the master list. The default master list of State values includes Not Started, Draft, In Review, and Final (in the language installed on the server). Administrators can customize the list by adding, deleting, or renaming the values as well as sequencing the values as they will appear on the assignment list from the Properties pane. Let us know if or how you are using UPK Content States in your development efforts!

    Read the article

  • Japan Welcomes Oracle Enterprise Manager 12c

    - by Anand Akela
    Following Oracle's grand unveiling of Oracle Enterprise Manager 12c at Oracle OpenWorld 2011 in San Francisco, Oracle Japan just completed their launch for the product. Leng Tan, Oracle VP of Products, delivered the keynote with collaboration from a number of key partners in the region. From left to right: Leng Tan, VP of Products, Oracle; Shinyashiki-san, Assistant General Manager, NEC; Fuketa-san, General Manager, HITACHI; Fujii-san, General Manager, Fujitsu; Misawa-san, VP of Alliances, Oracle Japan. NEC, Hitachi and Fujitsu have been among Oracle's most active partners in the Japan region. They have received key awards from Oracle Japan for their efforts. NEC received the partner of the year award for 2010 and 2011. Hitachi received the partner of the year award for Oracle Enterprise Manager in 2011. Fujitsu received awards in the areas of Database and Oracle Exadata in 2011. All three partners were active participants in the Oracle Enterprise Manager 12c beta program. According to Hirai-san, the technical lead at the event, there were over 200 attendees. "The event was so well-attended; there was no room to stand," said Hirai-san. Hirai-san demonstrating Oracle Enterprise Manager 12c at the Oracle Japan launch. Here are the highlights of the presentations made by the Oracle partners during this launch: NEC has developed an Oracle Enterprise Manager Plug-in for iStorage (NEC SAN Storage product); additionally, NEC's WebSAM Invariant Analyzer management tool is now capable of integrating with Oracle Enterprise Manager. HITACHI demonstrated monitoring capabilities for Oracle Exadata through Oracle Enterprise Manager in their JP1 system management tool. Fujitsu's Oracle Enterprise Manager 10g adapter for their SystemWalker tool has now been enhanced to work with Oracle Enterprise Manager 12c. Following a very successful launch in Japan, Oracle's Total Cloud Control road show and additional Oracle Enterprise Manager 12c launches continue in the EMEA and Asia Pacific regions. This week Sushil Kumar, VP of Product Strategy and Business Development, is scheduled to deliver the keynotes at several cities in India. Also this week, Richard Sarwal, SVP of Products, is scheduled to deliver a keynote at the DOAG conference in Nuremberg, Germany. Richard is also delivering the Oracle Enterprise Manager 12c launch event keynote in Paris on November 18th. Check out our event schedule for Oracle Enterprise Manager 12c events across the globe! For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | LinkedIn

    Read the article

  • Install and upgrade strategies for Oracle Enterprise Manager 12c - Upcoming Webcasts with live demos

    - by Anand Akela
    At Oracle OpenWorld 2011, we launched Oracle Enterprise Manager 12c, the only complete cloud management solution for your enterprise cloud. With the new release of Oracle Enterprise Manager Cloud Control 12c, the installation and upgrade process has been enhanced to provide a fast and smooth install experience. In the upcoming webcasts, Oracle Enterprise Manager experts will discuss the installation and upgrade strategies for Oracle Enterprise Manager Cloud Control 12c. These webcasts will include live demonstrations of the install and upgrade processes. In the webcast on November 17th, we will cover the installation steps and provide recommendations to set up a new Oracle Enterprise Manager Cloud Control 12c environment. We'll also provide a live demonstration of the complete installation process. Upgrading your Oracle Enterprise Manager environment can be a challenging and complex task, especially with large environments consisting of hundreds or thousands of targets. In the webcast on November 18th, we'll describe key facts that administrators must know before upgrading their Enterprise Manager system as well as introduce the different approaches for an upgrade. We'll also walk you through the key steps for upgrading an existing Enterprise Manager 11g (or 10g) Grid Control to Oracle Enterprise Manager Cloud Control 12c. In addition to the live webcasts on Oracle Enterprise Manager Cloud Control 12c install and upgrade processes, please consider attending the replay of the Oracle Enterprise Manager Ops Center webcast with live Q&A. Schedule and registration links for the upcoming webcasts: Oracle Enterprise Manager Ops Center: Global Systems Management Made Easy (Replay) - November 17, 10 a.m. PT and December 1, 10 a.m. PT; Oracle Enterprise Manager Cloud Control 12c Installation Overview - November 17, 8 a.m. PT; Upgrade Smoothly to Oracle Enterprise Manager Cloud Control 12c - November 18, 8 a.m. PT. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | LinkedIn

    Read the article

  • Deleting Team Project in Team Foundation Server 2010

    - by Hosam Kamel
    I'm seeing a lot of people still using old approaches ported from TFS 2008 to delete a team project, like the TFSDeleteProject utility. In TFS 2010 the administration tasks have been made much easier; for deletion specifically, you can navigate to the Administration Console, then select Team Project Collection, select the project collection that contains the project you want to delete, then navigate to Team Projects. Select the project and click Delete; you will have the option to delete any external artifacts and workspaces too. Hope it helps. Originally posted at "Hosam Kamel | Developer & Platform Evangelist"

    Read the article

  • Partition Wise Joins

    - by jean-pierre.dijcks
    Some say they are the holy grail of parallel computing and PWJ is the basis for a shared nothing system and the only join method that is available on a shared nothing system (yes this is oversimplified!). The magic in Oracle is of course that it is one of many ways to join data. And yes, this is the old flexibility vs. simplicity discussion all over, so I won't go there... the point is that what you must do in a shared nothing system, you can do in Oracle with the same speed and methods. The Theory A partition wise join is a join between (for simplicity) two tables that are partitioned on the same column with the same partitioning scheme. In shared nothing this is effectively hard partitioning locating data on a specific node / storage combo. In Oracle it is logical partitioning. If you now join the two tables on that partitioned column you can break up the join into smaller joins exactly along the partitions in the data. Since they are partitioned (grouped) into the same buckets, all values required to do the join live in the equivalent bucket on either side. No need to talk to anyone else, no need to redistribute data to anyone else... in short, the optimal join method for parallel processing of two large data sets. PWJs in Oracle Since we do not hard partition the data across nodes in Oracle we use the Partitioning option to the database to create the buckets, then set the Degree of Parallelism (or run Auto DOP - see here) and get our PWJs. The main questions always asked are: How many partitions should I create? What should my DOP be? In a shared nothing system the answer is of course, as many partitions as there are nodes which will be your DOP. In Oracle we do want you to look at the workload and concurrency, and once you know that to understand the following rules of thumb. Within Oracle we have more ways of joining data, so it is important to understand some of the PWJ ideas and what it means if you have an uneven distribution across processes. Assume we have a simple scenario where we partition the data on a hash key resulting in 4 hash partitions (H1 - H4). We have 2 parallel processes that have been tasked with reading these partitions (P1 - P2). The work is evenly divided assuming the partitions are the same size and we can scan this in time t1 as shown below. Now assume that we have changed the system and have a 5th partition but still have our 2 workers P1 and P2. The time it takes is actually 50% more assuming the 5th partition has the same size as the original H1 - H4 partitions. In other words to scan these 5 partitions, the time t2 it takes is not 1/5th more expensive, it is a lot more expensive and some other join plans may now start to look exciting to the optimizer. Just to post the disclaimer, it is not as simple as I state it here, but you get the idea on how much more expensive this plan may now look... Based on this little example there are a few rules of thumb to follow to get the partition wise joins. First, choose a DOP that is a power of two (2). So always choose something like 2, 4, 8, 16, 32 and so on... Second, choose a number of partitions that is larger than or equal to 2 * DOP. Third, make sure the number of partitions is divisible by 2 without orphans. This is also known as an even number... Fourth, choose a stable partition count strategy, which is typically hash, which can be a sub partitioning strategy rather than the main strategy (range - hash is a popular one). 
Fifth, make sure you do this on the join key between the two large tables you want to join (and this should be the obvious one...). Translating this into an example: DOP = 8 (determined based on concurrency or by using Auto DOP with a cap due to concurrency) says that the number of partitions >= 16. Number of hash (sub) partitions = 32, which gives each process four partitions to work on. This number is somewhat arbitrary and depends on your data and system. In this case my main reasoning is that if you get more room on the box you can easily move the DOP for the query to 16 without repartitioning... and of course it makes for no leftovers on the table... And yes, we recommend up-to-date statistics. And before you start complaining, do read this post on a cool way to do stats in 11.
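    To make these rules of thumb concrete, here is a small, hypothetical Oracle SQL sketch (the table and column names are invented for illustration): two tables hash partitioned into 32 partitions on the shared join key, queried at a DOP of 8 so each parallel process works on its own set of matching partitions.

    -- Two large tables partitioned the same way on the join key.
    -- 32 hash partitions is a power of two and >= 2 * DOP (8), per the rules above.
    CREATE TABLE sales (
      sale_id     NUMBER,
      customer_id NUMBER,
      amount      NUMBER
    )
    PARTITION BY HASH (customer_id) PARTITIONS 32;

    CREATE TABLE customers (
      customer_id NUMBER,
      region      VARCHAR2(30)
    )
    PARTITION BY HASH (customer_id) PARTITIONS 32;

    -- Joining on the common partitioning key allows a full partition wise join:
    -- each parallel process joins its own matching hash partitions, with no
    -- redistribution of rows between processes.
    SELECT /*+ PARALLEL(8) */ c.region, SUM(s.amount)
    FROM   sales s
    JOIN   customers c ON s.customer_id = c.customer_id
    GROUP  BY c.region;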

    Read the article

  • Oracle User Communities and Enterprise Manager

    - by Anand Akela
    Contributed by Joe Dimmer, Senior Business Development Manager, Oracle Enterprise Manager. Heightened interest and adoption of Oracle Enterprise Manager has led to keen interest in "manageability" within the user group community. In response, user groups are equipping their membership with the right tools to implement and use manageability through education opportunities and Special Interest Groups. Manageability is increasingly viewed not only as a means to enable the Oracle environment to become a competitive business advantage for organizations, but also as a means to advance the individual careers of those who embrace enterprise management. Two Oracle user groups – the Independent Oracle User Group (IOUG) and the United Kingdom Oracle User Group (UKOUG) – each have Special Interest Groups where manageability is prominently featured. There are also efforts underway to establish similarly charted SIGs that will be reported in future blogs. The good news is, there's a lot of news! First off, the IOUG will be hosting a Summer Series of live webcasts: "Configuring and Managing a Private Cloud with Enterprise Manager 12c" by Kai Yu of Dell, Inc. - Wednesday, June 20th from Noon – 1 PM CDT, click here for details & registration; "What is User Experience Monitoring and What is Not? A case study of Oracle Global IT's implementation of Enterprise Manager 12c and RUEI" by Eric Tran Le of Oracle - Wednesday, July 18th from Noon – 1 PM CDT, click here for details & registration; "Shed some light on the 'bumps in the night' with Enterprise Manager 12c" by David Start of Johnson Controls - Wednesday, August 22nd from Noon – 1 PM CDT, click here for details & registration. In addition, the UKOUG Availability and Infrastructure Management (AIM) SIG is hosting its next meeting on Tuesday, July 3rd at the Met in Leeds where EM 12c Cloud Management will be presented. Click here for details & registration. In future posts from Joe, look for news related to the following: IOUG Community Page and Newsletter devoted to manageability; a full day of manageability featured during Oracle OpenWorld 2012 "SIG Sunday"; happenings from other regional User Groups that feature manageability. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • New TFS Template Available - "Agile Dev in a Waterfall Environment"–GovDev

    - by Hosam Kamel
    Microsoft Team Foundation Server (TFS) 2010 is the collaboration platform at the core of Microsoft's application lifecycle management solution. In addition to core features like source control, build automation and work-item tracking, TFS enables teams to align projects with industry processes such as Agile, Scrum and CMMI via the use of customizable XML Process Templates. Since 2005, TFS has been a welcomed addition to the Microsoft developer tool line-up by Government Agencies of all sizes and missions. However, many government development teams consistently struggle with leveraging an iterative development process all while providing the structure, visibility and status reporting that is required by many Government, waterfall-centric, project methodologies. GovDev is an open source TFS Process Template that combines the formality of CMMI/Waterfall with the flexibility of Agile/Iterative. The GovDev for TFS Accelerator also implements two new custom reports to support the customized process and provide real-time visibility across the lifecycle with full traceability and drill down to tasks, tests and code. The TFS Accelerator contains: A custom TFS process template that implements a requirements-centric, yet iterative process with extreme traceability throughout the lifecycle. A custom "Requirements Traceability Report" that provides a single view of traceability for the project. Within the Traceability Report, you can also view live status indicators and "click-through" to the individual assets (even changesets). A custom report that focuses on "Contributions by Team Member", tracking things like "number of check-ins" and "net lines added". Fully integrated documentation on the entire process and features. For a 45-minute demo of GovDev, visit: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032508359&culture=en-us Download it from Codeplex here. Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Oracle Enterprise Manager 12c Integration With Oracle Enterprise Manager Ops Center 11g

    - by Scott Elvington
    In a blog entry earlier this year, we announced the availability of the Ops Center 11g plug-in for Enterprise Manager 12c. In this article I will walk you through the process of deploying the plug-in on your existing Enterprise Manager agents and show you some of the capabilities the plug-in provides. We'll also look at the integration from the Ops Center perspective. I will show you how to set up the connection to Enterprise Manager and give an overview of the information that is available. Installing and Configuring the Ops Center Plug-in The plug-in is available for download from the Self Update page (Setup -> Extensibility -> Self Update). The plug-in name is "Ops Center Infrastructure stack". Once you have downloaded the plug-in you can navigate to the Plug-In management page (Setup -> Extensibility -> Plug-ins) to begin deployment. The plug-in must first be deployed on the Management Server. You will need to provide the repository password of the SYS user in order to deploy the plug-in to the Management Server. There are a few pre-requisites that need to be completed on the Ops Center side before the plug-in can be deployed and configured on the desired Enterprise Manager agents. Any servers, whether physical or virtual, for which you wish to see metrics and alerts need to be managed by Ops Center. This means that the Operating System needs to have an Ops Center management agent installed as a minimum. The plug-in can provide even more value when Ops Center is also managing the other "layers of the stack", for example the service processor, the blade chassis or the XSCF of an M-Series server. The more information that Ops Center has about the stack, the more information that will be visible within Enterprise Manager via the plug-in. In order to access the information within Ops Center, the plug-in requires a user to connect as. This user does not require any particular Ops Center permissions or roles, it simply needs to exist. You can create a specific "EMPlugin" user within Ops Center or use an existing user. Oracle recommends creating a specific, non-privileged user account within Ops Center for this purpose. From the Ops Center Administration section, select Enterprise Controller, click the Users tab and finally click the Add User icon to create the desired user account. For the purpose of this article I have discovered and managed the OS and service processor of the server where my Enterprise Manager 12c installation is hosted. With the plug-in deployed to the Management Server and the setup done within Ops Center, we're now ready to deploy the plug-in to the agents and configure the targets to communicate with the Ops Center Enterprise Controller. From the Setup menu select Add Targets, then Add Targets Manually. Select the bottom radio button "Add Targets Manually by Specifying Target Monitoring Properties", select Infrastructure Stack from the Target Type dropdown and finally, select the Monitoring Agent where you wish to deploy the plug-in. Click the Add Manually... button and fill in the details for the new target using the appropriate hostname for your Enterprise Controller and the user and password details for the plug-in access user. After the target has been added to the agent you will need to allow a few minutes for the initial data collection to complete. Once completed you can see the new target in the All Targets list. All metric collections are enabled by default except one. 
    To enable Infrastructure Stack Alarms collection, navigate to the newly added target and then to Target -> Monitoring -> Metric and Collection Settings. There you can click the "Disabled" link under Collection Schedule to enable collection and set your desired collection frequency. By default, a Warning level alert in Ops Center will equate to a Warning level event in Enterprise Manager and a Critical alert will equate to a Critical event. This mapping can be altered in the Metric and Collection Settings also. The default incident rules in Enterprise Manager only create incidents from Critical events, so keep this in mind in case you want to see incidents generated for Warning or Info level alerts from Ops Center. Also, because Enterprise Manager already monitors the OS through its Host target type, the plug-in does not pull OS alerts from Ops Center so as to prevent duplication. In addition to alert propagation, the plug-in also provides data for several reports detailing the topology and configuration of the stack as well as any hardware sensor data that is available. These are available from the Information Publisher Reports. Navigate there from the Enterprise -> Reports menu or directly from the Infrastructure Stack target of interest. As an example, here is a sample of the Hardware Sensors report showing some of the available sensor data. The report can also be exported to a CSV file format if desired. Connecting Ops Center to Enterprise Manager Repository For an Enterprise Manager user, the plug-in provides deeper visibility into the state of the infrastructure underlying the databases and middleware. On the Ops Center side, there is also greater visibility into the targets running on the infrastructure. To set up the Ops Center data collection, just navigate to the Administration section and select the Grid Control link. Select the Configure/Connect action from the right-hand menu and complete the wizard forms to enable the connection to the Enterprise Manager repository and UI. Be sure to use the sysman account when configuring the database connection. Once the job completes and the initial data synchronization is done you will see new Target tabs on your OS assets. The new tab lists all the Enterprise Manager targets and any alerts, availability and performance data specific to the selected target. It is also possible to use the GoTo icon to launch the Enterprise Manager BUI in context of the specific target or alert to drill into more detail. Hopefully this brief overview of the integration between Enterprise Manager and Ops Center has provided a jumpstart to getting a more complete view of the full stack of your enterprise systems.

    Read the article

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don't hear about so many working on their log files. How can a log file get fragmented? I'm glad you asked. When you create a database there are at least two files created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage but that's off topic for now). It is wholly possible to have more than one log file but in most cases there is little point in creating more than one as the log file is written to in a 'wrap-around' method (more on that later). When a log file is created at the time that a database is created the file is actually sub divided into a number of virtual log files (VLFs). The number and size of these VLFs depends on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file. Let's see how many VLFs we have in a brand new database. USE master GO CREATE DATABASE VLF_Test ON ( NAME = VLF_Test, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf', SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 ) LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB ); go USE VLF_Test go DBCC LOGINFO; The results of this are firstly that a new database is created with the specified file sizes and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them but let's first note there are 4 rows of information; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25 MB. Let's alter the CREATE DATABASE script to create a log file that's a bit bigger and see what happens. Alter the code above so that the log file details are replaced by LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB ); With a bigger log file specified we get more VLFs. What if we make it bigger again? LOG ON ( NAME = VLF_Test_Log, FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf', SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB ); This time we see more VLFs are created within our log file. We now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence!
    If the file growth is lower than 64MB then 4 VLFs are created; if the growth is between 64MB and 1GB then 8 VLFs are created; if the growth is greater than 1GB then 16 VLFs are created. Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6 MB; let's add some data and see what happens. USE vlf_test go -- we need somewhere to put the data so, a table is in order IF OBJECT_ID('A_Table') IS NOT NULL DROP TABLE A_Table go CREATE TABLE A_Table ( Col_A int IDENTITY, Col_B CHAR(8000) ) GO -- Let's check the state of the log file -- 4 VLFs found EXECUTE ('DBCC LOGINFO'); go -- We can go ahead and insert some data and then check the state of the log file again INSERT A_Table (col_b) SELECT TOP 500 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO -- insert 500 rows and we get 22 VLFs EXECUTE ('DBCC LOGINFO'); go -- Let's insert more rows INSERT A_Table (col_b) SELECT TOP 2000 REPLICATE('a',2000) FROM sys.columns AS sc, sys.columns AS sc2 GO 10 -- insert 2000 rows, in 10 batches and we suddenly have 107 VLFs EXECUTE ('DBCC LOGINFO'); Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it's well worth keeping a check on this property. How do we prevent excessive VLF creation? Creating the database with larger files and also with larger growth steps, and actively choosing to grow your databases rather than leaving it to the Auto Grow event, can make sure that the growths are made with a size that is optimal. How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don't affect system users. The steps are: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice Shrink the log file to its smallest possible size DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) * Re-size the log file to the size you want it to be, taking into account your expected needs for the coming months or year. ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB) * – If you don't know the file name of your log file then run sp_helpfile while you are connected to the database that you want to work on and you will get the details you need. The resize step can take quite a while. These steps are pulled together in the sketch below. This is already detailed far better than I can explain it by Kimberly Tripp in her blog 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above. Knowing when VLFs are being created By complete coincidence while I have been writing this blog (it's been quite some time from its inception to going live) Jonathan Kehayias from SQLSkills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it as this is going to catch any sneaky auto grows that take place and let you know about them right away. 
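    As a convenience, here is a minimal T-SQL sketch that pulls the resolution steps above together, using the VLF_Test database from the earlier examples; the backup destination and the target log size are placeholders to adjust for your own environment.

    -- 1. Back up the transaction log (placeholder backup destination).
    BACKUP LOG VLF_Test TO DISK = 'C:\Backups\VLF_Test_log.bak';
    GO

    USE VLF_Test;
    GO

    -- 2. Shrink the log file to its smallest possible size.
    --    (Run sp_helpfile first if you are unsure of the logical file name.)
    DBCC SHRINKFILE (VLF_Test_Log, TRUNCATEONLY);
    GO

    -- 3. Re-size the log in one step to the size you expect to need,
    --    so it is rebuilt with a sensible number of VLFs.
    ALTER DATABASE VLF_Test
    MODIFY FILE (NAME = VLF_Test_Log, SIZE = 4096MB);
    GO

    -- 4. Confirm the new VLF count.
    DBCC LOGINFO;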
    Hassle-free monitoring of VLFs If you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site and there are some others there that are very useful, so take a moment or two to look around while you are there. Resources MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx Kimberly Tripp from SQLSkills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/ Disclosure I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to make your order.

    Read the article

  • Best Practices - updated: which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains). This is an updated and enlarged version of the post on this topic originally posted October 2012. One frequent question is "what type of domain should I use to run applications?" There used to be a simple answer: "run applications in guest domains in almost all cases", but now there are more things to consider. Enhancements to Oracle VM Server for SPARC and introduction of systems like the current SPARC servers including the T4 and T5 systems, the Oracle SuperCluster T5-8 and Oracle SuperCluster M6-32 provide scale and performance much higher than the original servers that ran domains. Single-CPU performance, I/O capacity, and memory sizes are much larger now, and far more demanding applications are now being hosted in logical domains. The general advice continues to be "use guest domains in almost all cases", meaning, "use virtual I/O rather than physical I/O", unless there is a specific reason to use the other domain types. The sections below will discuss the criteria for choosing between domain types. Review: division of labor and types of domain Oracle VM Server for SPARC offloads management and I/O functionality from the hypervisor to domains (also called virtual machines), providing a modern alternative to older VM architectures that use a "thick", monolithic hypervisor. This permits a simpler hypervisor design, which enhances reliability and security. It also reduces single points of failure by assigning responsibilities to multiple system components, further improving reliability and security. Oracle VM Server for SPARC defines the following types of domain, each with their own roles: Control domain - management control point for the server, runs the logical domain daemon and constraints engine, and is used to configure domains and manage resources. The control domain is the first domain to boot on a power-up, is always an I/O domain, and is usually a service domain as well. It doesn't have to be, but there's no reason to not leverage it for virtual I/O services. There is one control domain per T-series system, and one per Physical Domain (PDom) on an M5-32 or M6-32 system. M5 and M6 systems can be physically domained, with logical domains within the physical ones. I/O domain - a domain that has been assigned physical I/O devices. The devices may be one or more PCIe root complexes (in which case the domain is also called a root complex domain). The domain has native access to all the devices on the assigned PCIe buses. The devices can be any device type supported by Solaris on the hardware platform. an SR-IOV (Single-Root I/O Virtualization) function. SR-IOV lets a physical device (also called a physical function, or PF) be subdivided into multiple virtual functions (VFs) which can be individually assigned directly to domains. SR-IOV devices currently can be Ethernet or InfiniBand devices. direct I/O ownership of one or more PCI devices residing in a PCIe bus slot. The domain has direct access to the individual devices. An I/O domain has native performance and functionality for the devices it owns, unmediated by any virtualization layer. It may also have virtual devices. Service domain - a domain that provides virtual network and disk devices to guest domains. The services are defined by commands that are run in the control domain. It usually is an I/O domain as well, in order for it to have devices to virtualize and serve out. 
Guest domain - a domain whose devices are all virtual rather than physical: virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run. Device considerations Consider the following when choosing between virtual devices and physical devices: Virtual devices provide the best flexibility - they can be dynamically added to and removed from a running domain, and you can have a large number of them up to a per-domain device limit. Virtual devices are compatible with live migration - domains that exclusively have virtual devices can be live migrated between servers supporting domains. On the other hand: Physical devices provide the best performance - in fact, native "bare metal" performance. Virtual devices approach physical device throughput and latency, especially with virtual network devices that can now saturate 10GbE links, but physical devices are still faster. Physical I/O devices do not add load to service domains - all the I/O goes directly from the I/O domain to the device, while virtual I/O goes through service domains, which must be provided sufficient CPU and memory capacity. Physical I/O devices can be other than network and disk - we virtualize network, disk, and serial console, but physical devices can be the wide range of attachable certified devices, including things like tape and CDROM/DVD devices. In some cases the lines are now blurred: virtual devices have better performance than previously: starting with Oracle VM Server for SPARC 3.1 there is near-native virtual network performance. There is more flexibility with physical devices than before: SR-IOV devices can now be dynamically reconfigured on domains. Tradeoffs one used to have to make are now relaxed: you can often have the flexibility of virtual I/O with performance that previously required physical I/O. You can have the performance and isolation of SR-IOV with the ability to dynamically reconfigure it, just like with virtual devices. Typical deployment A service domain is generally also an I/O domain: otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is also typically a service domain in order to leverage the available PCI buses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain too so it doesn't "waste" the I/O resources it uses. A simple configuration consists of a control domain that is also the one I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure, as described in Availability Best Practices - Avoiding Single Points of Failure . Guest domains have virtual disk and virtual devices provisioned from more than one service domain, so failure of a service domain or I/O path or device does not result in an application outage. This also permits "rolling upgrades" in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multi-path I/O) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains. 
    Changing application deployment patterns The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000 class machines with 2 I/O buses, so there is more I/O capacity that can be used for applications. Increased server capacity made it attractive to run more vertically-scaled applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the Oracle SuperCluster engineered systems mentioned previously. In those engineered systems, I/O domains are used for high performance applications with native I/O performance for disk and network and optimized access to the InfiniBand fabric. Another technical enhancement is Single Root I/O Virtualization (SR-IOV), which makes it possible to give domains direct connections and native I/O performance for selected I/O devices. Not all I/O domains own PCI complexes, and there are increasingly more I/O domains that are not service domains. They use their I/O connectivity for performance for their own applications. However, there are some limitations and considerations: at this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command must be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. For reference, an excellent guide to secure deployment of domains by Stefan Hinker is at Secure Deployment of Oracle VM Server for SPARC. To recap: Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration. They should be considered the default domain type to use unless there is a specific requirement that mandates an I/O domain. I/O domains can be used for applications with the highest performance requirements. Single Root I/O Virtualization (SR-IOV) makes this more attractive by giving direct I/O access to more domains, and by permitting dynamic reconfiguration of SR-IOV devices. Today's larger systems provide multiple PCIe buses - for example, 16 buses on the T5-8 - making it possible to configure multiple I/O domains each owning their own bus. Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so interruption of service in one service domain does not cause an application outage. The control domain should in general not be used to run applications, for the same reason. Oracle SuperCluster uses the control domain for applications, but it is an exception. It's not a general purpose environment; it's an engineered system with specifically configured applications and optimizations for performance. These are recommended "best practices" based on conversations with a number of Oracle architects. 
Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements. Summary Higher capacity servers that run Oracle VM Server for SPARC are attractive for applications with the most demanding resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide peak performance for critical applications. That said, the improved virtual device performance in Oracle VM Server means that the default choice should still be guest domains with virtual I/O.

    Read the article

  • T-4 Templates for ASP.NET Web Form Databound Control Friendly Logical Layers

    - by joycsharp
    I just released an open source project at CodePlex that includes a set of T-4 templates enabling you to build logical layers (i.e. DAL/BLL) with just a few clicks! The logical layers implemented here are based on Entity Framework 4.0, are friendly to ASP.NET Web Form data bound controls, and are fully unit testable. The project provides Entity Framework 4.0 based T-4 templates for the following logical layers:
    Data Access Layer: Entity Framework 4.0 provides an excellent ORM data access layer. It also supports T-4 templates as the built-in code generation strategy in Visual Studio 2010, so the default structure of the Entity Framework data access layer can be customized. In these templates, the default data access layer has been enhanced to support mock testing against the Entity Framework 4.0 object model.
    Business Logic Layer: an ASP.NET Web Form data bound control friendly business logic layer that lets you build data bound web applications on top of ASP.NET Web Forms and Entity Framework 4.0 in just a few clicks, with solid support for mock testing.
    Download it to make your web development more productive. Enjoy!

    Read the article

  • Team Leaders & Authors - Manage and Report Workflow using "Print an Outline" in UPK

    - by [email protected]
    Did you know you can "print an outline"? You can print any outline or portion of an outline. Why might you want to "print an outline" in UPK? Have you ever wondered how many topics you have recorded, how many of your topics are ready for review, or, even better, how many topics are complete? Do you need to report your project status to management? Maybe you just like to have a copy of your outline to refer to during development. Included in this output is the outline structure as well as the layout defined in the Details View of the Outline Editor. To print an outline, you must open either a module or section in the Outline Editor. A set of default data columns is automatically included in the output; however, you can configure which columns you want to appear in the report by switching to the Details view and customizing the columns. (To learn more about customizing your columns, refer to the Add and Remove Columns section of the Content Development.pdf guide.)

    To print an outline from the Outline Editor:
    1. Open a module or section document in the Outline Editor.
    2. Expand the documents to display the details that you want included in the report.
    3. On the File menu, choose Print and use the toolbar icons to print, view, or save the report to a file.

    Personally, I opt to save my outline in Microsoft Excel. Using the delivered features of Microsoft Excel, you can add columns of information, such as development notes, to your outline, or you can graph and chart your project status. As mentioned above, you can configure which columns appear in the outline. When you use the Print an Outline feature in conjunction with the Managing Workflow features of the UPK Multi-user instance, you as a team leader or author can better report project status. Read more about Managing Workflow below.

    Managing Workflow: The Properties toolpane contains special properties that allow authors to track document status (State) as well as assign document ownership.

    Assign Content State
    The State property is an editable property for communicating the status of a document. This is particularly helpful when collaborating with other authors in a development team. Authors can assign a state to documents from the master list defined by the administrator. The default list of States includes (blank), Not Started, Draft, In Review, and Final. Administrators can customize the list by adding, deleting, or renaming the values.

    To assign a State value to a document:
    1. Make sure you are working online.
    2. Display the Properties toolpane.
    3. Select the document(s) to which you want to assign a state. Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
    4. In the Workflow category, click in the State cell.
    5. Select a value from the list.

    Assign Document Ownership
    In many enterprises, multiple authors often work together developing content in a team environment. Team leaders typically handle large projects by assigning specific development responsibilities to authors. The Owner property allows team leaders and authors to assign documents to themselves and other authors, to track who is responsible for a specific document. You view and change document assignments for a document using the Owner property in the Properties toolpane.

    To assign a document owner:
    1. Make sure you are working online.
    2. On the View menu, choose Properties.
    3. Select the document(s) to which you want to assign document responsibility. Note: You can select multiple documents using the standard Windows selection keys (CTRL+click and SHIFT+click).
    4. In the Workflow category, click in the Owner cell.
    5. Select a name from the list.

    Is anyone out there already using this feature? Share your ideas with the group. Those of you new to this feature, give it a test drive and let us know what you think. - Kathryn Lustenberger, Oracle UPK & Tutor Outbound Product Management

    Read the article
