Search Results

Search found 3617 results on 145 pages for 'physical wear'.


  • Managing Database Clusters - A Whole Lot Simpler

    - by mat.keep(at)oracle.com
    Clustered computing brings with it many benefits: high performance, high availability, scalable infrastructure, and so on. But it also brings more complexity. Why? By its very nature there are more "moving parts" to monitor and manage, from physical, virtual and logical hosts, to fault detection and failover software, to redundant networking components; the list goes on. And a cluster that isn't effectively provisioned and managed will cause more downtime than the standalone systems it is designed to improve upon. Not so great.
    When it comes to the database industry, analysts already estimate that 50% of a typical database's Total Cost of Ownership is attributable to staffing and downtime costs. These costs will only increase if a database cluster is too hard to administer properly.
    Over the past 9 months, monitoring and management has been a major focus in the development of the MySQL Cluster database, and on Tuesday 12th January the product team will be presenting the output of that development in a new webinar. Even if you can't make the date, it is still worth registering so you will receive automatic notification when the on-demand replay is available.
    In the webinar, the team will cover:
    * NDBINFO: released with MySQL Cluster 7.1, NDBINFO presents real-time status and usage statistics, providing developers and DBAs with a simple means of proactively monitoring and optimizing database performance and availability.
    * MySQL Cluster Manager (MCM): available as part of the commercial MySQL Cluster Carrier Grade Edition, MCM simplifies the creation and management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one!
    * MySQL Cluster Advisors & Graphs: part of the MySQL Enterprise Monitor and available in the commercial MySQL Cluster Carrier Grade Edition, the Enterprise Advisor includes automated best practice rules that alert on key performance and availability metrics from MySQL Cluster data nodes.
    You'll also learn how you can get started evaluating and using all of these tools to simplify MySQL Cluster management. This session will last around an hour and will include interactive Q&A throughout. You can learn more about MySQL Cluster Manager from this whitepaper and on-line demonstration. You can also download the packages from eDelivery (just select "MySQL Database" as the product pack, select your platform, click "Go" and then scroll down to get the software).
    While managing clusters will never be easy, the webinar will show you how it just got a whole lot simpler!
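    As a rough illustration of the kind of real-time status data NDBINFO exposes (this query is not taken from the webinar; it assumes the ndbinfo.memoryusage table that ships with MySQL Cluster 7.1, and column names may differ in other versions):

        -- Hedged sketch: per-node data and index memory usage from the ndbinfo schema.
        -- Assumes the MySQL Cluster 7.1 layout of ndbinfo.memoryusage
        -- (node_id, memory_type, used, total); verify against your version.
        SELECT node_id,
               memory_type,
               used,
               total,
               ROUND(100 * used / total, 1) AS pct_used
        FROM ndbinfo.memoryusage
        ORDER BY node_id, memory_type;

    A DBA could poll a query like this, or chart the same figures through MySQL Enterprise Monitor, to spot data nodes approaching their configured DataMemory limit before it causes an outage.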

    Read the article

  • Best Practices for Building a Virtualized SPARC Computing Environment

    - by Scott Elvington
    Oracle just published Best Practices for Building a Virtualized SPARC Computing Environment, a white paper that provides guidance on the complete hardware and software stack for deploying and managing your physical and virtual SPARC infrastructure. The solution is based on Oracle SPARC T4 servers, Oracle Solaris 11 with Oracle VM for SPARC 2.2, Sun ZFS storage appliances, Sun 10GbE 72-port switches and Oracle Enterprise Manager Ops Center 12c. The paper emphasizes the value and importance of planning the resources (compute, network and storage) that will comprise the virtualized environment to achieve the desired capacity, performance and availability characteristics. The document also details numerous operational best practices that will help you deliver on those characteristics with unique capabilities provided by Enterprise Manager Ops Center, including policy-based guest placement, pool resource balancing and automated guest recovery in the event of server failure. Plenty of references to supplementary documentation are included to help point you to additional resources.
    Whether you're building the first stages of your private cloud or a general-purpose virtualized SPARC computing environment, these documented best practices will help ensure success. Please join Phil Bullinger and Steve Wilson from Oracle to learn more about breakthrough efficiency in private cloud infrastructure and how SPARC-based virtualization can help you get started on your cloud journey.
    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • The Importance of a Security Assessment - by Michael Terra, Oracle

    - by Darin Pendergraft
    Today's blog was written by Michael Terra, who was the Subject Matter Expert for the recently announced Oracle Online Security Assessment. You can take the Online Assessment here: Take the Online Assessment
    Over the past decade, IT Security has become a recognized and respected business discipline. Several factors have contributed to IT Security becoming a core business and organizational enabler, including, but not limited to, increased external threats and increased regulatory pressure. Security is also viewed as a key enabler for strategic corporate activities such as mergers and acquisitions. Now, the challenge for senior security professionals is to develop an ongoing dialogue within their organizations about the importance of information security and how it can impact their organization's strategic objectives and mission.
    Conducting regular "Security Assessments" across the IT and physical infrastructure has become increasingly important. Security standards and frameworks, such as the international standard ISO 27001, are increasingly being adopted by organizations and their business partners as proof of their security posture, and security assessments are a great way to ensure continued alignment to these frameworks.
    Oracle offers a number of different security assessments covering a broad range of technologies. Some of these are short engagements conducted for free with our strategic customers and partners. Others are longer-term paid engagements delivered by Oracle Consulting Services or one of our partners. The goal of a security assessment (also known as a security audit or security review) is to ensure that necessary security controls are integrated into the design and implementation of a project, application or technology. A properly completed security assessment should provide documentation outlining any security gaps that exist in an infrastructure and the associated risks for those gaps. With that knowledge, an organization can choose to either mitigate, transfer, avoid or accept the risk.
    One example of an Oracle offering is a Security Readiness Assessment: a practical security architecture review focused on aligning an organization's enterprise security architecture to its business principles and strategic objectives. The service will establish a multi-phase security architecture roadmap focused on supporting new and existing business initiatives.
    Offering Overview
    The Security Readiness Assessment will:
    * Define an organization's current security posture and provide a roadmap to a desired future state architecture by mapping security solutions to business goals
    * Incorporate commonly accepted security architecture concepts to streamline an organization's security vision from strategy to implementation
    * Define the people, process and technology implications of the desired future state architecture
    The objective is to deliver cohesive, best practice security architectures spanning multiple domains that are unique and specific to the context of your organization.
    Offering Details
    The Oracle Security Readiness Assessment is a multi-stage process with a dedicated Oracle Security team supporting your organization. During the course of this free engagement, the team will focus on the following:
    * Review your current business operating model and supporting IT security structures and processes
    * Partner with your organization to establish a future state security architecture leveraging Oracle's reference architectures, capability maps and best practices
    * Provide guidance and recommendations on governance practices for the rollout and adoption of your future state security architecture
    * Create an initial business case for the adoption of the future state security architecture
    If you are interested in finding out more, ask your Sales Consultant or Account Manager for details.

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index
    This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter whether you've been following the 10g or 11g guide, as this article is common to both.
    Contents
    * Why this is the most important part...
    * Understanding the classification and standard rights model
    * Identifying business use cases
    * Creating an effective IRM classification model
    * One single classification across the entire business
    * A context for each and every possible granular use case
    * What makes a good context?
    * Deciding on the use of roles in the context
    * Reviewing the features and security for context roles
    * Summary
    Why this is the most important part...
    Now the real work begins; installing and getting an IRM system running is as simple as following instructions. However, to have IRM protecting your most sensitive information without interfering with your users' existing daily workflows, and to be able to scale IRM across the entire business, requires thought about how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle has over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports, and no matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.
    * Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results in a more secure deployment.
    * K.I.S.S. (Keep It Simple, Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
    * Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them of the value of the technology to both your business and them.
    Before getting into the detail, I must pay homage to Martin White, Vice President of client services at SealedMedia, the company that created Oracle IRM and was acquired by Oracle. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple-to-use IRM deployments.
    No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in solutions that are difficult to use and manage, and ultimately insecure. The advice and information that follows was born with Martin, and he is still delivering IRM consulting with customers; he can be found at www.thinkers.co.uk. It is thanks to Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions.
    Understanding the classification and standard rights model
    The goal of any successful IRM deployment is to balance the increase in security the technology brings without over-complicating the way people use secured content, and to avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different from an insecure one. That is, until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application, or try to print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between:
    * A group of related documents
    * The people that use the documents
    * The roles that these people perform
    * The rights that these people need to perform their role
    The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts, but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification, and a user needs only one right assigned to be able to access all of those documents. If you have followed all the previous articles in this guide, you will be ready to start defining the contexts to which your sensitive information will be sealed. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails.
    Identifying business use cases
    Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property or financial documents. For this and every subsequent use case you must understand how users create and work with documents, to whom they are distributed and how the recipients should interact with them.
    A successful IRM deployment should start with one well-identified use case (we go through some examples towards the end of this article). Then, after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and whether the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further.
    Creating an effective IRM classification model
    Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built-in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates.
    The first task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research, for example), a list of roles (contributors and readers) which define how users can access documents, and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity. The question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum.
    One single classification across the entire business
    Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in considering this.
    * Document security classification decisions are simple. You only have one context to choose from!
    * User provisioning is simple: just make sure everyone has a role in the only context in the business.
    * Administration is very low: if you assign rights to groups from the business user repository you probably never have to touch IRM administration again.
    There are however some obvious downsides to this model.
    * All users have access to all IRM-secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is.
    * You cannot delegate control of different documents to different parts of the business; this may not satisfy your regulatory requirements for the separation and delegation of duties.
    * Changing a user's role affects every single document ever secured.
    Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value.
    Once secured, IRM-protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, and imagine if you could ensure that only the people you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file.
    A context for each and every possible granular use case
    Now let's think about the total opposite of a single-context design. What if you created a context for each and every single defined business need, and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity: restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally, add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge number of contexts. For example, let's just look at the resulting contexts for one engineering group.
    * Q1FY2010 Restricted Internal - Engineering Group 1 - Research
    * Q1FY2010 Restricted Internal - Engineering Group 1 - Design
    * Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing
    * Q1FY2010 Restricted External - Engineering Group 1 - Research
    * Q1FY2010 Restricted External - Engineering Group 1 - Design
    * Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing
    * Q1FY2010 Confidential Internal - Engineering Group 1 - Research
    * Q1FY2010 Confidential Internal - Engineering Group 1 - Design
    * Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing
    * Q1FY2010 Confidential External - Engineering Group 1 - Research
    * Q1FY2010 Confidential External - Engineering Group 1 - Design
    * Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing
    * Q1FY2010 Top Secret Internal - Engineering Group 1 - Research
    * Q1FY2010 Top Secret Internal - Engineering Group 1 - Design
    * Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing
    * Q1FY2010 Top Secret External - Engineering Group 1 - Research
    * Q1FY2010 Top Secret External - Engineering Group 1 - Design
    * Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing
    That is 18 contexts for a single engineering group; now multiply by the 6 groups. And because the rights model is reviewed every quarter, each group is creating or reviewing another 18 contexts every 3 months, so after a year one group alone has accumulated 72 contexts. What would be the advantages of such a complex classification model?
    * You can satisfy very granular rights requirements; for example, only an authorized Engineering Group 1 researcher can create a top secret report for internal access, and his role will be reviewed on a very frequent basis.
    * Your business may have very complex rights requirements, and mapping these directly to IRM may be an obvious exercise.
    The disadvantages of such a classification model are significant.
    * Huge administrative overhead. Someone in the business must manage, review and administer each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to preside over each year.
    * From an end user's perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts.
    They may be able to print content from one but not another, and be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology.
    * Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights.
    * Hard to understand who can do what with what. Imagine being the VP of engineering and, as part of an internal security audit, being asked the question, "What rights do researchers have to our top secret information?". In this complex model the answer is not simple; it would depend on many roles in many contexts.
    Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts do not give fine enough granularity.
    What makes a good context?
    Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information; they don't actually know where the documents are, how they are created, or understand exactly who should have access. Therefore you need to know how to ask the business the right questions, leading to information which helps you define a context. First ask these questions about a set of documents:
    * What is the topic?
    * Who are the legitimate contributors on this topic?
    * Who is the authorized readership?
    If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such cannot leak to your competitors; therefore it is better to seal to a broad context than not to seal at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine it as you learn how the technology is used. It is easy to go from a simple model to a more complex one; it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try to tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling the technology out wider across the business.
    Deciding on the use of roles in the context
    Once you have decided on that first use case and a context to create, let's look at the details you need to decide upon.
    For each context, identify:
    Administrative roles
    * Business owner: the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are usually the person with the most at risk when sensitive information is lost.
    * Point of contact: the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator.
    * Context administrator: the person who will enact the decisions of the business owner. Sometimes the point of contact, sometimes a trusted IT person.
    Document-related roles
    * Contributors: the people who create and edit documents in this context.
    * Reviewers: the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See the later discussion on Published work and Work in Progress.)
    * Readers: the people who read documents from this context.
    Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles map directly to roles available in Oracle IRM.
    Reviewing the features and security for context roles
    At this point we have decided on a classification of information, and we understand what roles people in the business will play when administering this classification and how they will interact with content. The final piece of the puzzle in gathering the information for our first context is to look at the permissions people will have to sealed documents. First, think about why you are protecting the documents in the first place: to prevent the loss or leaking of information to the wrong people, and to control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent people from doing legitimate work. This is an important point: with IRM you can erect many barriers to prevent access to content, yet with too many restrictions authorized users will often find ways to circumvent the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security later. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content; remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance, and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works, you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information and sales can only access sales data, act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community.
    These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology, which simplifies the process of assigning users the correct business roles.
    The big print issue...
    Printing is often a point of contention: users love to print, but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, and it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However, it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding whether to give print rights.
    * Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy, making it easier to trace who printed it.
    * Printed documents are slower to distribute in comparison to their digital counterparts, so time-sensitive information in printed format may present a lower risk.
    * Print activity is audited, therefore you can monitor and react to users abusing print rights.
    Summary
    In summary, it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires that all information in the group is secured, and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is introducing the complexity needed to deliver them at the right level. In another article I'm working on, I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to ask to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • SQL SERVER – Automated Type Conversion using Expressor Studio

    - by pinaldave
    Recently I had an interesting situation during a consulting project. Let me share with you how I solved the problem using Expressor Studio. Consider a situation in which you need to read a field, such as customer_identifier, from a text file and pass that field into a database table. In the source file's metadata structure, customer_identifier is described as a string; however, in the target database table, customer_identifier is described as an integer. Legitimately, all the source values for customer_identifier are valid numbers, such as "109380". To implement this in an ETL application, you probably would have hard-coded a type conversion function call, such as:
    output.customer_identifier=stringToInteger(input.customer_identifier)
    That wasn't so bad, was it? For this instance, programming this hard-coded type conversion function call was relatively easy. However, hard-coding, whether type conversion code or other business rule code, almost always means that the application containing hard-coded fields, function calls and values is: a) specific to an instance of use; b) difficult to adapt to new situations; and c) short on reusable sub-parts. Therefore, in the long run, applications with hard-coded type conversion function calls don't scale well. In addition, they increase the overall level of effort and degree of difficulty to write and maintain the ETL applications.
    To get around the trappings of hard-coding type conversion function calls, developers need access to smarter typing systems. The Expressor Studio product offers exactly this feature by providing developers with a type conversion automation engine based on type abstraction. The theory behind the engine is quite simple. A user specifies abstract data fields in the engine, and then writes applications against the abstractions (whereas in most ETL software, developers develop applications against the physical model). When a Studio-built application is run, Studio's engine automatically converts the source type to the abstracted data field's type and converts the abstracted data field's type to the target type. The engine can do this because it has a set of built-in rules for type conversions.
    So, using the example above, a developer could specify customer_identifier as an abstract data field with a type of integer when using Expressor Studio. Upon reading the string value from the text file, Studio's type conversion engine automatically converts the source field from the type specified in the source's metadata structure to the abstract field's type. At the time of writing the data value to the target database, the engine doesn't have any work to do because the abstract data type and the target data type are the same. Had they been different, the engine would have automatically provided the conversion.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Database, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SSIS
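    For comparison, here is a minimal T-SQL sketch of the same conversion hard-coded directly into a load statement; the staging and target table names are hypothetical and are not part of the original post:

        -- Hypothetical staging-to-target load with the type conversion hard-coded.
        -- The explicit CAST ties this statement to one specific source/target pairing,
        -- which is the brittleness an abstraction-based engine is meant to avoid.
        INSERT INTO dbo.Customer (customer_identifier)
        SELECT CAST(s.customer_identifier AS INT)
        FROM dbo.CustomerFileStaging AS s;

    Every time the source or target type changes, a statement like this has to be edited by hand, which is exactly the maintenance cost described above.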

    Read the article

  • Earthquake Locator - Live Demo and Source Code

    - by Bobby Diaz
    Quick Links: Live Demo | Source Code
    I finally got a live demo up and running! I signed up for a shared hosting account over at discountasp.net so I could post a working version of the Earthquake Locator application, but ran into a few minor issues related to RIA Services. Thankfully, Tim Heuer had already encountered and explained all of the problems I had, along with solutions to these and other common pitfalls. You can find his blog post here. The ones that got me were the default authentication tag being set to Windows instead of Forms, the need to add the <baseAddressPrefixFilters> tag since I was running on a shared server using host headers, and finally the Multiple Authentication Schemes settings in the IIS7 Manager.
    To get the demo application ready, I pulled down local copies of the earthquake data feeds that the application can use instead of pulling from the USGS web site. I basically added the feed URL as an app setting in the web.config:
    <appSettings>
        <!-- USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/ -->
        <!--<add key="FeedUrl"
            value="http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml" />-->
        <!--<add key="FeedUrl"
            value="http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml" />-->
        <!--<add key="FeedUrl"
            value="~/Demo/1day-M2.5.xml" />-->
        <add key="FeedUrl"
             value="~/Demo/7day-M2.5.xml" />
    </appSettings>
    You will need to do the same if you want to run from local copies of the feed data. I also made the following minor changes to the EarthquakeService class so that it gets the FeedUrl from the web.config:
    private static readonly string FeedUrl = ConfigurationManager.AppSettings["FeedUrl"];

    /// <summary>
    /// Gets the feed at the specified URL.
    /// </summary>
    /// <param name="url">The URL.</param>
    /// <returns>A <see cref="SyndicationFeed"/> object.</returns>
    public static SyndicationFeed GetFeed(String url)
    {
        SyndicationFeed feed = null;

        if ( !String.IsNullOrEmpty(url) && url.StartsWith("~") )
        {
            // resolve virtual path to physical file system
            url = System.Web.HttpContext.Current.Server.MapPath(url);
        }

        try
        {
            log.Debug("Loading RSS feed: " + url);

            using ( var reader = XmlReader.Create(url) )
            {
                feed = SyndicationFeed.Load(reader);
            }
        }
        catch ( Exception ex )
        {
            log.Error("Error occurred while loading RSS feed: " + url, ex);
        }

        return feed;
    }
    You can now view the live demo or download the source code here, but be sure you have WCF RIA Services installed before running the application locally, and make sure the FeedUrl is pointing to a valid location. Please let me know if you have any comments or if you run into any issues with the code. Enjoy!

    Read the article

  • SQL SERVER – Weekend Project – Visiting Friend’s Company – Koenig Solutions

    - by pinaldave
    I have decided to do some interesting experiments every weekend and share them the following week as a weekend project on the blog. Many times our business lives and personal lives are very separate; however, this post will talk about one instance where my two lives connect. This weekend I visited my friend's company. My friend owns Koenig, so of course I am very interested to see that they are doing well. I have been very impressed this year, as they have expanded into multiple cities and are offering more and more classes about Business Intelligence, Project Management, networking, and much more.
    Koenig Solutions was originally a company that focused on training IT professionals, on topics ranging from databases to English language courses. As the company grew more popular, Koenig began their blog to keep fans updated, and gradually have added more and more courses. I am very happy for my friend's success, but as a technology enthusiast I am also pleased with Koenig Solutions' success. Whenever anyone in our field improves, the field as a whole does better. Koenig offers high quality classes on a variety of topics at a variety of levels, so anyone can benefit from browsing this blog.
    I am a big fan of technology (obviously), and I feel blessed to have gotten in on the "ground floor," even though there are some people out there who think technology has advanced as far as possible; I believe they will be proven wrong. And that is why I think companies like Koenig Solutions are so important: they are providing training and support in a quickly growing field, and providing job skills in this tough economy. I believe this particular post really highlights how I, and Koenig, feel about the IT industry. It is quickly expanding, and job opportunities are sure to abound, but how can the average person get started in this exciting field? This post emphasizes that knowledge is power: know what interests you in the IT field, get an education, and continue your training even after you've gotten your foot in the door.
    Koenig Solutions provides what I feel is one of the most important services in the modern world: in-person training. They obviously offer many online courses, but you can also set up physical, face-to-face training through their website. As I mentioned before, they offer a wide variety of classes that cater to nearly every IT skill you can think of. If you have more specific needs, they also offer one of the best English language training courses. English is turning into the language of technology, so these courses can ensure that you are keeping pace.
    Koenig Solutions and I agree about how important training can be, and even better, they provide some of the best training around. We share ideals and I am very happy to see my friend's success.
    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, SQLAuthority Author Visit, T SQL, Technology Tagged: Developer Training

    Read the article

  • Advanced TSQL Tuning: Why Internals Knowledge Matters

    - by Paul White
    There is much more to query tuning than reducing logical reads and adding covering nonclustered indexes.  Query tuning is not complete as soon as the query returns results quickly in the development or test environments.  In production, your query will compete for memory, CPU, locks, I/O and other resources on the server.  Today’s entry looks at some tuning considerations that are often overlooked, and shows how deep internals knowledge can help you write better TSQL. As always, we’ll need some example data.  In fact, we are going to use three tables today, each of which is structured like this: Each table has 50,000 rows made up of an INTEGER id column and a padding column containing 3,999 characters in every row.  The only difference between the three tables is in the type of the padding column: the first table uses CHAR(3999), the second uses VARCHAR(MAX), and the third uses the deprecated TEXT type.  A script to create a database with the three tables and load the sample data follows: USE master; GO IF DB_ID('SortTest') IS NOT NULL DROP DATABASE SortTest; GO CREATE DATABASE SortTest COLLATE LATIN1_GENERAL_BIN; GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest', SIZE = 3GB, MAXSIZE = 3GB ); GO ALTER DATABASE SortTest MODIFY FILE ( NAME = 'SortTest_log', SIZE = 256MB, MAXSIZE = 1GB, FILEGROWTH = 128MB ); GO ALTER DATABASE SortTest SET ALLOW_SNAPSHOT_ISOLATION OFF ; ALTER DATABASE SortTest SET AUTO_CLOSE OFF ; ALTER DATABASE SortTest SET AUTO_CREATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_SHRINK OFF ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS ON ; ALTER DATABASE SortTest SET AUTO_UPDATE_STATISTICS_ASYNC ON ; ALTER DATABASE SortTest SET PARAMETERIZATION SIMPLE ; ALTER DATABASE SortTest SET READ_COMMITTED_SNAPSHOT OFF ; ALTER DATABASE SortTest SET MULTI_USER ; ALTER DATABASE SortTest SET RECOVERY SIMPLE ; USE SortTest; GO CREATE TABLE dbo.TestCHAR ( id INTEGER IDENTITY (1,1) NOT NULL, padding CHAR(3999) NOT NULL,   CONSTRAINT [PK dbo.TestCHAR (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestMAX ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAX (id)] PRIMARY KEY CLUSTERED (id), ) ; CREATE TABLE dbo.TestTEXT ( id INTEGER IDENTITY (1,1) NOT NULL, padding TEXT NOT NULL,   CONSTRAINT [PK dbo.TestTEXT (id)] PRIMARY KEY CLUSTERED (id), ) ; -- ============= -- Load TestCHAR (about 3s) -- ============= INSERT INTO dbo.TestCHAR WITH (TABLOCKX) ( padding ) SELECT padding = REPLICATE(CHAR(65 + (Data.n % 26)), 3999) FROM ( SELECT TOP (50000) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) - 1 FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) AS Data ORDER BY Data.n ASC ; -- ============ -- Load TestMAX (about 3s) -- ============ INSERT INTO dbo.TestMAX WITH (TABLOCKX) ( padding ) SELECT CONVERT(VARCHAR(MAX), padding) FROM dbo.TestCHAR ORDER BY id ; -- ============= -- Load TestTEXT (about 5s) -- ============= INSERT INTO dbo.TestTEXT WITH (TABLOCKX) ( padding ) SELECT CONVERT(TEXT, padding) FROM dbo.TestCHAR ORDER BY id ; -- ========== -- Space used -- ========== -- EXECUTE sys.sp_spaceused @objname = 'dbo.TestCHAR'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAX'; EXECUTE sys.sp_spaceused @objname = 'dbo.TestTEXT'; ; CHECKPOINT ; That takes around 15 seconds to run, and shows the space allocated to each table in its output: To illustrate the points I want to make today, the example task we are going to set ourselves is to return a random set of 150 rows from each table.  
The basic shape of the test query is the same for each of the three test tables: SELECT TOP (150) T.id, T.padding FROM dbo.Test AS T ORDER BY NEWID() OPTION (MAXDOP 1) ; Test 1 – CHAR(3999) Running the template query shown above using the TestCHAR table as the target, we find that the query takes around 5 seconds to return its results.  This seems slow, considering that the table only has 50,000 rows.  Working on the assumption that generating a GUID for each row is a CPU-intensive operation, we might try enabling parallelism to see if that speeds up the response time.  Running the query again (but without the MAXDOP 1 hint) on a machine with eight logical processors, the query now takes 10 seconds to execute – twice as long as when run serially. Rather than attempting further guesses at the cause of the slowness, let’s go back to serial execution and add some monitoring.  The script below monitors STATISTICS IO output and the amount of tempdb used by the test query.  We will also run a Profiler trace to capture any warnings generated during query execution. DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TC.id, TC.padding FROM dbo.TestCHAR AS TC ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; Let’s take a closer look at the statistics and query plan generated from this: Following the flow of the data from right to left, we see the expected 50,000 rows emerging from the Clustered Index Scan, with a total estimated size of around 191MB.  The Compute Scalar adds a column containing a random GUID (generated from the NEWID() function call) for each row.  With this extra column in place, the size of the data arriving at the Sort operator is estimated to be 192MB. Sort is a blocking operator – it has to examine all of the rows on its input before it can produce its first row of output (the last row received might sort first).  This characteristic means that Sort requires a memory grant – memory allocated for the query’s use by SQL Server just before execution starts.  In this case, the Sort is the only memory-consuming operator in the plan, so it has access to the full 243MB (248,696KB) of memory reserved by SQL Server for this query execution. Notice that the memory grant is significantly larger than the expected size of the data to be sorted.  SQL Server uses a number of techniques to speed up sorting, some of which sacrifice size for comparison speed.  Sorts typically require a very large number of comparisons, so this is usually a very effective optimization.  One of the drawbacks is that it is not possible to exactly predict the sort space needed, as it depends on the data itself.  SQL Server takes an educated guess based on data types, sizes, and the number of rows expected, but the algorithm is not perfect. 
In spite of the large memory grant, the Profiler trace shows a Sort Warning event (indicating that the sort ran out of memory), and the tempdb usage monitor shows that 195MB of tempdb space was used – all of that for system use.  The 195MB represents physical write activity on tempdb, because SQL Server strictly enforces memory grants – a query cannot ‘cheat’ and effectively gain extra memory by spilling to tempdb pages that reside in memory.  Anyway, the key point here is that it takes a while to write 195MB to disk, and this is the main reason that the query takes 5 seconds overall. If you are wondering why using parallelism made the problem worse, consider that eight threads of execution result in eight concurrent partial sorts, each receiving one eighth of the memory grant.  The eight sorts all spilled to tempdb, resulting in inefficiencies as the spilled sorts competed for disk resources.  More importantly, there are specific problems at the point where the eight partial results are combined, but I’ll cover that in a future post. CHAR(3999) Performance Summary: 5 seconds elapsed time 243MB memory grant 195MB tempdb usage 192MB estimated sort set 25,043 logical reads Sort Warning Test 2 – VARCHAR(MAX) We’ll now run exactly the same test (with the additional monitoring) on the table using a VARCHAR(MAX) padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TM.id, TM.padding FROM dbo.TestMAX AS TM ORDER BY NEWID() OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query takes around 8 seconds to complete (3 seconds longer than Test 1).  Notice that the estimated row and data sizes are very slightly larger, and the overall memory grant has also increased very slightly to 245MB.  The most marked difference is in the amount of tempdb space used – this query wrote almost 391MB of sort run data to the physical tempdb file.  Don’t draw any general conclusions about VARCHAR(MAX) versus CHAR from this – I chose the length of the data specifically to expose this edge case.  In most cases, VARCHAR(MAX) performs very similarly to CHAR – I just wanted to make test 2 a bit more exciting. MAX Performance Summary: 8 seconds elapsed time 245MB memory grant 391MB tempdb usage 193MB estimated sort set 25,043 logical reads Sort warning Test 3 – TEXT The same test again, but using the deprecated TEXT data type for the padding column: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) TT.id, TT.padding FROM dbo.TestTEXT AS TT ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. 
/ 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; This time the query runs in 500ms.  If you look at the metrics we have been checking so far, it’s not hard to understand why: TEXT Performance Summary: 0.5 seconds elapsed time 9MB memory grant 5MB tempdb usage 5MB estimated sort set 207 logical reads 596 LOB logical reads Sort warning SQL Server’s memory grant algorithm still underestimates the memory needed to perform the sorting operation, but the size of the data to sort is so much smaller (5MB versus 193MB previously) that the spilled sort doesn’t matter very much.  Why is the data size so much smaller?  The query still produces the correct results – including the large amount of data held in the padding column – so what magic is being performed here? TEXT versus MAX Storage The answer lies in how columns of the TEXT data type are stored.  By default, TEXT data is stored off-row in separate LOB pages – which explains why this is the first query we have seen that records LOB logical reads in its STATISTICS IO output.  You may recall from my last post that LOB data leaves an in-row pointer to the separate storage structure holding the LOB data. SQL Server can see that the full LOB value is not required by the query plan until results are returned, so instead of passing the full LOB value down the plan from the Clustered Index Scan, it passes the small in-row structure instead.  SQL Server estimates that each row coming from the scan will be 79 bytes long – 11 bytes for row overhead, 4 bytes for the integer id column, and 64 bytes for the LOB pointer (in fact the pointer is rather smaller – usually 16 bytes – but the details of that don’t really matter right now). OK, so this query is much more efficient because it is sorting a very much smaller data set – SQL Server delays retrieving the LOB data itself until after the Sort starts producing its 150 rows.  The question that normally arises at this point is: Why doesn’t SQL Server use the same trick when the padding column is defined as VARCHAR(MAX)? The answer is connected with the fact that if the actual size of the VARCHAR(MAX) data is 8000 bytes or less, it is usually stored in-row in exactly the same way as for a VARCHAR(8000) column – MAX data only moves off-row into LOB storage when it exceeds 8000 bytes.  The default behaviour of the TEXT type is to be stored off-row by default, unless the ‘text in row’ table option is set suitably and there is room on the page.  There is an analogous (but opposite) setting to control the storage of MAX data – the ‘large value types out of row’ table option.  By enabling this option for a table, MAX data will be stored off-row (in a LOB structure) instead of in-row.  SQL Server Books Online has good coverage of both options in the topic In Row Data. The MAXOOR Table The essential difference, then, is that MAX defaults to in-row storage, and TEXT defaults to off-row (LOB) storage.  You might be thinking that we could get the same benefits seen for the TEXT data type by storing the VARCHAR(MAX) values off row – so let’s look at that option now.  
This script creates a fourth table, with the VARCHAR(MAX) data stored off-row in LOB pages: CREATE TABLE dbo.TestMAXOOR ( id INTEGER IDENTITY (1,1) NOT NULL, padding VARCHAR(MAX) NOT NULL,   CONSTRAINT [PK dbo.TestMAXOOR (id)] PRIMARY KEY CLUSTERED (id), ) ; EXECUTE sys.sp_tableoption @TableNamePattern = N'dbo.TestMAXOOR', @OptionName = 'large value types out of row', @OptionValue = 'true' ; SELECT large_value_types_out_of_row FROM sys.tables WHERE [schema_id] = SCHEMA_ID(N'dbo') AND name = N'TestMAXOOR' ; INSERT INTO dbo.TestMAXOOR WITH (TABLOCKX) ( padding ) SELECT SPACE(0) FROM dbo.TestCHAR ORDER BY id ; UPDATE TM WITH (TABLOCK) SET padding.WRITE (TC.padding, NULL, NULL) FROM dbo.TestMAXOOR AS TM JOIN dbo.TestCHAR AS TC ON TC.id = TM.id ; EXECUTE sys.sp_spaceused @objname = 'dbo.TestMAXOOR' ; CHECKPOINT ; Test 4 – MAXOOR We can now re-run our test on the MAXOOR (MAX out of row) table: DECLARE @read BIGINT, @write BIGINT ; SELECT @read = SUM(num_of_bytes_read), @write = SUM(num_of_bytes_written) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; SET STATISTICS IO ON ; SELECT TOP (150) MO.id, MO.padding FROM dbo.TestMAXOOR AS MO ORDER BY NEWID() OPTION (MAXDOP 1, RECOMPILE) ; SET STATISTICS IO OFF ; SELECT tempdb_read_MB = (SUM(num_of_bytes_read) - @read) / 1024. / 1024., tempdb_write_MB = (SUM(num_of_bytes_written) - @write) / 1024. / 1024., internal_use_MB = ( SELECT internal_objects_alloc_page_count / 128.0 FROM sys.dm_db_task_space_usage WHERE session_id = @@SPID ) FROM tempdb.sys.database_files AS DBF JOIN sys.dm_io_virtual_file_stats(2, NULL) AS FS ON FS.file_id = DBF.file_id WHERE DBF.type_desc = 'ROWS' ; MAXOOR Performance Summary: 0.3 seconds elapsed time 245MB memory grant 0MB tempdb usage 193MB estimated sort set 207 logical reads 446 LOB logical reads No sort warning The query runs very quickly – slightly faster than Test 3, and without spilling the sort to tempdb (there is no sort warning in the trace, and the monitoring query shows zero tempdb usage by this query).  SQL Server is passing the in-row pointer structure down the plan and only looking up the LOB value on the output side of the sort. The Hidden Problem There is still a huge problem with this query though – it requires a 245MB memory grant.  No wonder the sort doesn’t spill to tempdb now – 245MB is about 20 times more memory than this query actually requires to sort 50,000 records containing LOB data pointers.  Notice that the estimated row and data sizes in the plan are the same as in test 2 (where the MAX data was stored in-row). The optimizer assumes that MAX data is stored in-row, regardless of the sp_tableoption setting ‘large value types out of row’.  Why?  Because this option is dynamic – changing it does not immediately force all MAX data in the table in-row or off-row, only when data is added or actually changed.  SQL Server does not keep statistics to show how much MAX or TEXT data is currently in-row, and how much is stored in LOB pages.  This is an annoying limitation, and one which I hope will be addressed in a future version of the product. So why should we worry about this?  Excessive memory grants reduce concurrency and may result in queries waiting on the RESOURCE_SEMAPHORE wait type while they wait for memory they do not need.  245MB is an awful lot of memory, especially on 32-bit versions where memory grants cannot use AWE-mapped memory.
Even on a 64-bit server with plenty of memory, do you really want a single query to consume 0.25GB of memory unnecessarily?  That’s 32,000 8KB pages that might be put to much better use. The Solution The answer is not to use the TEXT data type for the padding column.  That solution happens to have better performance characteristics for this specific query, but it still results in a spilled sort, and it is hard to recommend the use of a data type which is scheduled for removal.  I hope it is clear to you that the fundamental problem here is that SQL Server sorts the whole set arriving at a Sort operator.  Clearly, it is not efficient to sort the whole table in memory just to return 150 rows in a random order. The TEXT example was more efficient because it dramatically reduced the size of the set that needed to be sorted.  We can do the same thing by selecting 150 unique keys from the table at random (sorting by NEWID() for example) and only then retrieving the large padding column values for just the 150 rows we need.  The following script implements that idea for all four tables: SET STATISTICS IO ON ; WITH TestTable AS ( SELECT * FROM dbo.TestCHAR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id = ANY (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAX ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestTEXT ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; WITH TestTable AS ( SELECT * FROM dbo.TestMAXOOR ), TopKeys AS ( SELECT TOP (150) id FROM TestTable ORDER BY NEWID() ) SELECT T1.id, T1.padding FROM TestTable AS T1 WHERE T1.id IN (SELECT id FROM TopKeys) OPTION (MAXDOP 1) ; SET STATISTICS IO OFF ; All four queries now return results in much less than a second, with memory grants between 6 and 12MB, and without spilling to tempdb.  The small remaining inefficiency is in reading the id column values from the clustered primary key index.  As a clustered index, it contains all the in-row data at its leaf.  The CHAR and VARCHAR(MAX) tables store the padding column in-row, so id values are separated by a 3999-character column, plus row overhead.  The TEXT and MAXOOR tables store the padding values off-row, so id values in the clustered index leaf are separated by the much-smaller off-row pointer structure.  This difference is reflected in the number of logical page reads performed by the four queries: Table 'TestCHAR' logical reads 25511 lob logical reads 000 Table 'TestMAX'. logical reads 25511 lob logical reads 000 Table 'TestTEXT' logical reads 00412 lob logical reads 597 Table 'TestMAXOOR' logical reads 00413 lob logical reads 446 We can increase the density of the id values by creating a separate nonclustered index on the id column only.  This is the same key as the clustered index, of course, but the nonclustered index will not include the rest of the in-row column data. 
CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestCHAR (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAX (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestTEXT (id); CREATE UNIQUE NONCLUSTERED INDEX uq1 ON dbo.TestMAXOOR (id); The four queries can now use the very dense nonclustered index to quickly scan the id values, sort them by NEWID(), select the 150 ids we want, and then look up the padding data.  The logical reads with the new indexes in place are: Table 'TestCHAR' logical reads 835 lob logical reads 0 Table 'TestMAX' logical reads 835 lob logical reads 0 Table 'TestTEXT' logical reads 686 lob logical reads 597 Table 'TestMAXOOR' logical reads 686 lob logical reads 448 With the new index, all four queries use the same query plan (click to enlarge): Performance Summary: 0.3 seconds elapsed time 6MB memory grant 0MB tempdb usage 1MB sort set 835 logical reads (CHAR, MAX) 686 logical reads (TEXT, MAXOOR) 597 LOB logical reads (TEXT) 448 LOB logical reads (MAXOOR) No sort warning I’ll leave it as an exercise for the reader to work out why trying to eliminate the Key Lookup by adding the padding column to the new nonclustered indexes would be a daft idea Conclusion This post is not about tuning queries that access columns containing big strings.  It isn’t about the internal differences between TEXT and MAX data types either.  It isn’t even about the cool use of UPDATE .WRITE used in the MAXOOR table load.  No, this post is about something else: Many developers might not have tuned our starting example query at all – 5 seconds isn’t that bad, and the original query plan looks reasonable at first glance.  Perhaps the NEWID() function would have been blamed for ‘just being slow’ – who knows.  5 seconds isn’t awful – unless your users expect sub-second responses – but using 250MB of memory and writing 200MB to tempdb certainly is!  If ten sessions ran that query at the same time in production that’s 2.5GB of memory usage and 2GB hitting tempdb.  Of course, not all queries can be rewritten to avoid large memory grants and sort spills using the key-lookup technique in this post, but that’s not the point either. The point of this post is that a basic understanding of execution plans is not enough.  Tuning for logical reads and adding covering indexes is not enough.  If you want to produce high-quality, scalable TSQL that won’t get you paged as soon as it hits production, you need a deep understanding of execution plans, and as much accurate, deep knowledge about SQL Server as you can lay your hands on.  The advanced database developer has a wide range of tools to use in writing queries that perform well in a range of circumstances. By the way, the examples in this post were written for SQL Server 2008.  They will run on 2005 and demonstrate the same principles, but you won’t get the same figures I did because 2005 had a rather nasty bug in the Top N Sort operator.  Fair warning: if you do decide to run the scripts on a 2005 instance (particularly the parallel query) do it before you head out for lunch… This post is dedicated to the people of Christchurch, New Zealand. © 2011 Paul White email: @[email protected] twitter: @SQL_Kiwi

    Read the article

  • AutoVue at the Oracle Asset Lifecycle Management Summit

    - by celine.beck
    I recently had the opportunity to attend and present the integration between AutoVue and Primavera P6 during the Oracle ALM Summit, which was held in March at Redwood Shores, on Oracle Headquarters grounds. The ALM Summit brought together over 300 Oracle maintenance practitioners who endured the foggy and rainy San Francisco weather to attend the 4th edition of this Oracle-driven conference. Attendees have roles in maintenance management and IT. Following a general session, Ralph Rio from ARC Advisory Group provided a very interesting keynote session discussing Asset Management directions, both in the short and long run. An interesting point that Ralph raised is that most organizations have done a good job at improving performance at the design / build, operate and maintain and portfolio management phases by leveraging solutions like Asset Lifecycle Management and Project & Portfolio management solutions; however, there seem to be room for improvement in between those phases, when information flows from one group to the other, during the data handover phase or when time comes to update / modify drawings to reflect the reality of physical assets. This is where AutoVue comes into play. By integrating with enterprise applications like content management systems, asset lifecycle management applications and project management solutions, AutoVue can be a real-process enabler, streamlining information flows from concept/design to decommissioning and ensuring that all project stakeholders have access to asset information and engineering data throughout the asset lifecycle. AutoVue's built-in digital annotation capabilities allows maintenance workers and technicians to report changes in configuration and visually capture the delta between as-built and as-maintained versions of asset documents. This information can then be easily handed over to engineers who can identify changes and incorporate these modifications into the drawings during the next round of document revisions. PPL Power Generation, an electric utilities headquarted in Allentown, Pennsylvania discussed this usage of AutoVue during an interesting Webcast around AutoVue's role in the Utilities space. After the keynote sessions, participants broke off into product-centric tracks around Oracle's Asset Lifecycle Management solutions (E-Business Suite, PeopleSoft, and JD Edwards). The second day of the conference was the occasion for us to present the integration between AutoVue and Primavera P6 to the Maintenance Summit audience. The presentation was a great success and generated much discussion with partners and customers during breaks. People seemed highly interested in learning more about our plans for integrating AutoVue and Primavera P6 with Oracle's ALM solutions...stay tune for further information on the subject!

    Read the article

  • Oracle VM Deep Dives

    - by rickramsey
    "With IT staff now tasked to deliver on-demand services, datacenter virtualization requirements have gone beyond simple consolidation and cost reduction. Simply provisioning and delivering an operating environment falls short. IT organizations must rapidly deliver services, such as infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). Virtualization solutions need to be application-driven and enable:" "Easier deployment and management of business critical applications" "Rapid and automated provisioning of the entire application stack inside the virtual machine" "Integrated management of the complete stack including the VM and the applications running inside the VM." Application Driven Virtualization, an Oracle white paper That was published in August of 2011. The new release of Oracle VM Server delivers significant virtual networking performance improvements, among other things. If you're not sure how virtual networks work or how to use them, these two articles by Greg King and friends might help. Looking Under the Hood at Virtual Networking by Greg King Oracle VM Server for x86 lets you create logical networks out of physical Ethernet ports, bonded ports, VLAN segments, virtual MAC addresses (VNICs), and network channels. You can then assign channels (or "roles") to each logical network so that it handles the type of traffic you want it to. Greg King explains how you go about doing this, and how Oracle VM Server for x86 implements the network infrastructure you configured. He also describes how the VM interacts with paravirtualized guest operating systems, hardware virtualized operating systems, and VLANs. Finally, he provides an example that shows you how it all looks from the VM Manager view, the logical view, and the command line view of Oracle VM Server for x86. Fundamental Concepts of VLAN Networks by Greg King and Don Smerker Oracle VM Server for x86 supports a wide range of options in network design, varying in complexity from a single network to configurations that include network bonds, VLANS, bridges, and multiple networks connecting the Oracle VM servers and guests. You can create separate networks to isolate traffic, or you can configure a single network for multiple roles. Network design depends on many factors, including the number and type of network interfaces, reliability and performance goals, the number of Oracle VM servers and guests, and the anticipated workload. The Oracle VM Manager GUI presents four different ways to create an Oracle VM network: Bonds and ports VLANs Both bond/ports and VLANS A local network This article focuses the second option, designing a complex Oracle VM network infrastructure using only VLANs, and it steps through the concepts needed to create a robust network infrastructure for your Oracle VM servers and guests. More Resources Virtual Networking for Dummies Download Oracle VM Server for x86 Find technical resources for Oracle VM Server for x86 -Rick Follow me on: Blog | Facebook | Twitter | Personal Twitter | YouTube | The Great Peruvian Novel

    Read the article

  • OBIEE 11.1.1 - How to configure HTTP compression / caching on Oracle BI Mobile app

    - by Ahmed Awan
     Applies to: OBIEE 11.1.1.5. Supported physical devices and OS: the Oracle BI Mobile application with the HTTP compression / caching configuration has been tested on the following devices: iPhone 4S, 4 and 3GS; iPad 2 and 1. Note that these devices must be running the latest iOS version, i.e. iOS 4.2.1; iOS 5 is also supported. Configuration pre-requisites: prior to configuration, the Oracle Web Tier software must be installed on the server, as described in the product documentation, i.e. the Enterprise Deployment Guide for Oracle Business Intelligence, Section 3.2, “Installing Oracle HTTP Server.” The steps for configuring compression and caching on Oracle HTTP Server are described in this PA blog at http://blogs.oracle.com/pa/entry/obiee_11g_user_interface_ui and in support Doc ID 1312299.1. Configuration steps in the Oracle BI Mobile application: 1. Download the BI Mobile app from the Apple iTunes App Store. The link is http://itunes.apple.com/us/app/oracle-business-intelligence/id434559909?mt=8 . 2. Add the server, for example http://pew801.us.oracle.com:7777/analytics/ ; the “Server Setting” screen in the OBI Mobile app should then match the screenshot in the original post. Performance Gain Test (using Oracle® HTTP Server with OBIEE): the test with and without HTTP compression / caching was conducted on an iPhone 4S and iPad 2 to measure the throughput (i.e. total bytes received) for Oracle® Business Intelligence Enterprise Edition. The comparison covers the SampleApp “QuickStart” dashboard, accessing the Overview, Details, Published Reporting and Scorecard reports, and shows that total bytes received were reduced from 2.3 MB to 723 KB. a. Test results without HTTP compression / caching: total throughput (in bytes) as captured in the original post's screenshot. b. Test results with HTTP compression / caching: total throughput (in bytes) as captured in the original post's screenshot.
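     As a rough illustration only (not taken from Doc ID 1312299.1 or the referenced blog post), compression and caching on Oracle HTTP Server are configured with Apache-style directives along these lines; the MIME types and expiry periods below are placeholder assumptions, so consult the referenced sources for the values Oracle actually recommends for OBIEE:

     <IfModule mod_deflate.c>
         # Compress text-based responses before they are sent to the BI Mobile client
         AddOutputFilterByType DEFLATE text/html text/plain text/css text/javascript application/javascript
     </IfModule>
     <IfModule mod_expires.c>
         # Let the client cache static resources instead of re-fetching them on every request
         ExpiresActive On
         ExpiresByType image/png "access plus 7 days"
         ExpiresByType image/gif "access plus 7 days"
         ExpiresByType text/css "access plus 7 days"
         ExpiresByType application/javascript "access plus 7 days"
     </IfModule>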

    Read the article

  • SQL Developer Data Modeler: On Notes, Comments, and Comments in RDBMS

    - by thatjeffsmith
    Ah the beautiful data model. They say a picture is worth a 1,000 words. And then we have our diagrams, how many words are they worth? Our friends from the Human Relations sample schema So our models describe how the data ‘works’ – whether that be at a logical-business level, or a technical-physical level. Developers like to say that their code is self-documenting. These would be very lazy or very bad (or both) developers. Models are the same way, you should document your models with comments and notes! I have 3 basic options: Comments Comments in RDBMS Notes So what’s the difference? Comments You’re describing the entity/table or attribute/column. This information will NOT be published in the database. It will only be available to the model, and hence, folks with access to the model. Table Comments (in the design only!) Comments in RDBMS You’re doing the same thing as above, but your words will be stored IN the data dictionary of the database. Oracle allows you to store comments on the table and column definitions. So your awesome documentation is going to be viewable to anyone with access to the database. RDBMS is an acronym for Relational Database Management System – of which Oracle is one of the first commercial examples If the DDL is produced and ran against a database, these comments WILL be stored in the data dictionary. Notes A place for you to add notes, maybe from a design meeting. Or maybe you’re using this as a to-do or requirements list. Basically it’s for anything that doesn’t literally describe the object at hand – that’s what the comments are for. I totally made these up. Now these are free text fields and you can put whatever you want here. Just make sure you put stuff here that’s worth reading. And it will live on…forever.
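     To make the “Comments in RDBMS” option above concrete, the generated DDL boils down to standard Oracle COMMENT statements; the HR objects and comment wording here are illustrative placeholders only:

     COMMENT ON TABLE hr.employees IS 'One row per current or former employee.';
     COMMENT ON COLUMN hr.employees.hire_date IS 'Date the employee started in their first role.';

     -- Once that DDL has been run, anyone with access to the database can read the comments
     -- straight out of the data dictionary:
     SELECT table_name, comments FROM all_tab_comments WHERE owner = 'HR' AND table_name = 'EMPLOYEES';
     SELECT column_name, comments FROM all_col_comments WHERE owner = 'HR' AND table_name = 'EMPLOYEES';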

    Read the article

  • Automatic Standby Recreation for Data Guard

    - by pablo.boixeda(at)oracle.com
    Hi! Unfortunately, sometimes a standby instance needs to be recreated. This can happen for many reasons such as lost archive logs, standby data files, failover, among others. This is why we wanted to have one script to recreate standby instances in an easy way. This script recreates the standby considering some prereqs: the database version should be at least 11gR1; a dummy instance started on the standby node (seeking to improve this so it won't be needed); the broker configuration hasn't been removed; in our case we have two TNSNAMES files, one for the standby creation (using SID) and the other one for production using service names (including the broker service name); some environment variables set up by the environment db script (like ORACLE_HOME, PATH...); and the directory tree should not have been modified on the standby host. We are currently using it on our 11gR2 Data Guard tests. Any improvements will be welcome! #!/bin/ksh ###    NOMBRE / VERSION ###       recrea_dg.sh   v.1.00 ### ###    DESCRIPCION ###       reacreacion de la Standby ### ###    DEVUELVE ###       0 Creacion de STANDBY correcta ###       1 Fallo ### ###    NOTAS ###       Este shell script NO DEBE MODIFICARSE. ###       Todas las variables y constantes necesarias se toman del entorno. ### ###    MODIFICADO POR:    FECHA:        COMENTARIOS: ###    ---------------    ----------    ------------------------------------- ###      Oracle           15/02/2011    Creacion. ### ### ### Cargar entorno ### V_ADMIN_DIR=`dirname $0` . ${V_ADMIN_DIR}/entorno_bd.sh 1>>/dev/null if [ $? -ne 0 ] then   echo "Error Loading the environment."   exit 1 fi V_RET=0 V_DATE=`/bin/date` V_DATE_F=`/bin/date +%Y%m%d_%H%M%S` V_LOGFILE=${V_TRAZAS}/recrea_dg_${V_DATE_F}.log exec 4>&1 tee ${V_FICH_LOG} >&4 |& exec 1>&p 2>&1 ### ### Variables para Recrear el Data Guard ### V_DB_BR=`echo ${V_DB_NAME}|tr '[:lower:]' '[:upper:]'` if [ "${ORACLE_SID}" = "${V_DB_NAME}01" ] then         V_LOCAL_BR=${V_DB_BR}'01'         V_REMOTE_BR=${V_DB_BR}'02' else         V_LOCAL_BR=${V_DB_BR}'02'         V_REMOTE_BR=${V_DB_BR}'01' fi echo " Getting local instance ROLE ${ORACLE_SID} ..." sqlplus -s /nolog 1>>/dev/null 2>&1 <<-! whenever sqlerror exit 1 connect / as sysdba variable salida number declare   v_database_role v\$database.database_role%type; begin   select database_role into v_database_role from v\$database;   :salida := case v_database_role        when 'PRIMARY' then 2        when 'PHYSICAL STANDBY' then 3        else 4      end; end; / exit :salida ! case $? in 1) echo " ERROR: Cannot get instance ROLE ." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; 2) echo " Local Instance with PRIMARY role." 
| tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=PRIMARY ;; 3) echo " Local Instance with PHYSICAL STANDBY role." | tee -a ${V_LOGFILE}   2>&1    V_DB_ROLE_LCL=STANDBY ;; *) echo " ERROR: UNKNOWN ROLE." | tee -a ${V_LOGFILE}   2>&1    V_RET=1 ;; esac if [ "${V_DB_ROLE_LCL}" = "PRIMARY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_REMOTE_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_LOCAL_BR}         V_STANDBY=${V_REMOTE_BR} fi if [ "${V_DB_ROLE_LCL}" = "STANDBY" ] then         echo "####################################################################" | tee -a ${V_LOGFILE}   2>&1         echo "${V_DATE} - Reacreating  STANDBY Instance." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "DATAFILES, CONTROL FILES, REDO LOGS and ARCHIVE LOGS in standby instance ${V_LOCAL_BR} will be removed" | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         V_PRIMARY=${V_REMOTE_BR}         V_STANDBY=${V_LOCAL_BR} fi # Cargamos las variables de los hosts # Cargamos las variables de los hosts PRY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_PRIMARY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` SBY_HOST=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',host_name from v\\$instance; EOF` echo "el HOST primary es: ${PRY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "el HOST standby es: ${SBY_HOST}" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## V_DATE=`/bin/date` echo "${V_DATE} - Shutting down Standby instance" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Paramos la instancia STANDBY ## SBY_STATUS=`sqlplus  /nolog << EOF | grep KEEP | sed 's/KEEP//;s/[   ]//g' connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba select 'KEEP',status from v\\$instance; EOF` if [ ${SBY_STATUS} = 'STARTED' ] || [ ${SBY_STATUS} = 'MOUNTED' ] || [ ${SBY_STATUS} = 'OPEN' ] then         echo "${V_DATE} - Standby instance shutdown in progress..." | tee -a ${V_LOGFILE}   2>&1         echo "" | tee -a ${V_LOGFILE}   2>&1         echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1         sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         shutdown abort         ! 
fi V_DATE=`/bin/date` echo "" echo "${V_DATE} - Standby instance stopped" | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ## ## Eliminamos los ficheros de la base de datos ## V_SBY_SID=`echo ${V_STANDBY}|tr '[:upper:]' '[:lower:]'` V_PRY_SID=`echo ${V_PRIMARY}|tr '[:upper:]' '[:lower:]'` ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/data/*.dbf ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch/*.arc ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.ctl ssh ${SBY_HOST} rm /opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/*.rdo ## ## Startup nomount stby instance ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting  DUMMY Standby Instance " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${SBY_HOST} touch /home/oracle/init_dg.ora ssh ${SBY_HOST} 'echo "DB_NAME='${V_DB_NAME}'">>/home/oracle/init_dg.ora' ssh ${SBY_HOST} touch /home/oracle/start_dummy.sh ssh ${SBY_HOST} 'echo "ORACLE_HOME=/opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_HOME">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "PATH=\$ORACLE_HOME/bin:\$PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export PATH">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "ORACLE_SID='${V_SBY_SID}'">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "export ORACLE_SID">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "sqlplus -s /nolog <<-!" >>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      whenever sqlerror exit 1 ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      connect / as sysdba ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "      startup nomount pfile='\''/home/oracle/init_dg.ora'\''">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'echo "! ">>/home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'chmod 744 /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'sh /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/start_dummy.sh' ssh ${SBY_HOST} 'rm /home/oracle/init_dg.ora' ## ## TNSNAMES change, specific for RMAN duplicate ## V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Setting up TNSNAMES in PRIMARY host " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.inst  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' V_DATE=`/bin/date` echo "" | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Starting STANDBY creation with RMAN.. " | tee -a ${V_LOGFILE}   2>&1 echo "" | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************" | tee -a ${V_LOGFILE}   2>&1 rman<<-! 
>>${V_LOGFILE} connect target sys/${V_DB_PWD}@${V_PRIMARY} connect auxiliary sys/${V_DB_PWD}@${V_STANDBY} run { allocate channel prmy1 type disk; allocate channel prmy2 type disk; allocate channel prmy3 type disk; allocate channel prmy4 type disk; allocate auxiliary channel stby type disk; duplicate target database for standby from active database dorecover spfile parameter_value_convert '${V_PRY_SID}','${V_SBY_SID}' set control_files='/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/ctl/control01.ctl','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/redo/control02.ctl' set db_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set log_file_name_convert='/opt/oracle/db/db${V_DB_NAME}/${V_PRY_SID}/','/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/' set 'db_unique_name'='${V_SBY_SID}' set log_archive_config='DG_CONFIG=(${V_PRIMARY},${V_STANDBY})' set fal_client='${V_STANDBY}' set fal_server='${V_PRIMARY}' set log_archive_dest_1='LOCATION=/opt/oracle/db/db${V_DB_NAME}/${V_SBY_SID}/arch DB_UNIQUE_NAME=${V_SBY_SID} MANDATORY VALID_FOR=(ALL_LOGFILES,ALL_ROLES)' set log_archive_dest_2='SERVICE="${V_PRIMARY}"','SYNC AFFIRM DB_UNIQUE_NAME=${V_PRY_SID} DELAY=0 MAX_FAILURE=0 REOPEN=300 REGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' nofilenamecheck ; } ! V_DATE=`/bin/date` if [ $? -ne 0 ] then         echo ""         echo "${V_DATE} - Error creating STANDBY instance"         echo ""         echo "********************************************************************************" else         echo ""         echo "${V_DATE} - STANDBY instance created SUCCESSFULLY "         echo ""         echo "********************************************************************************" fi sqlplus -s /nolog 1>>/dev/null 2>&1 <<-!         whenever sqlerror exit 1         connect sys/${V_DB_PWD}@${V_STANDBY} as sysdba         alter system set local_listener='(ADDRESS=(PROTOCOL=TCP)(HOST=${SBY_HOST})(PORT=1544))' scope=both;         alter system set service_names='${V_DB_NAME}.eu.roca.net,${V_SBY_SID}.eu.roca.net,${V_SBY_SID}_DGMGRL.eu.roca.net' scope=both;         alter database recover managed standby database using current logfile disconnect from session;         alter system set dg_broker_start=true scope=both; ! ## ## TNSNAMES change, back to Production Mode ## V_DATE=`/bin/date` echo " " | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} - Restoring TNSNAMES in PRIMARY "  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 ssh ${PRY_HOST} 'cp /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora.prod  /opt/oracle/db/db'${V_DB_NAME}'/soft/db11.2.0.2/network/admin/tnsnames.ora' echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "${V_DATE} -  Waiting for media recovery before check the DATA GUARD Broker"  | tee -a ${V_LOGFILE}   2>&1 echo ""  | tee -a ${V_LOGFILE}   2>&1 echo "********************************************************************************"  | tee -a ${V_LOGFILE}   2>&1 sleep 200 dgmgrl <<-! | grep SUCCESS 1>/dev/null 2>&1     connect ${V_DB_USR}/${V_DB_PWD}@${V_STANDBY}     show configuration verbose; ! if [ $? 
-ne 0 ] ; then         echo "       ERROR: El status del Broker no es SUCCESS" | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=1 else          echo "      DATA GUARD OK " | tee -a ${V_LOGFILE}   2>&1 ;         V_RET=0 fi Hope it helps.

    Read the article

  • Plan Operator Tuesday round-up

    - by Rob Farley
    Eighteen posts for T-SQL Tuesday #43 this month, discussing Plan Operators. I put them together and made the following clickable plan. It’s 1000px wide, so I hope you have a monitor wide enough. Let me explain this plan for you (people’s names are the links to the articles on their blogs – the same links as in the plan above). It was clearly a SELECT statement. Wayne Sheffield (@dbawayne) wrote about that, so we start with a SELECT physical operator, leveraging the logical operator Wayne Sheffield. The SELECT operator calls the Paul White operator, discussed by Jason Brimhall (@sqlrnnr) in his post. The Paul White operator is quite remarkable, and can consume three streams of data. Let’s look at those streams. The first pulls data from a Table Scan – Boris Hristov (@borishristov)’s post – using parallel threads (Bradley Ball – @sqlballs) that pull the data eagerly through a Table Spool (Oliver Asmus – @oliverasmus). A scalar operation is also performed on it, thanks to Jeffrey Verheul (@devjef)’s Compute Scalar operator. The second stream of data applies Evil (I figured that must mean a procedural TVF, but could’ve been anything), courtesy of Jason Strate (@stratesql). It performs this Evil on the merging of parallel streams (Steve Jones – @way0utwest), which suck data out of a Switch (Paul White – @sql_kiwi). This Switch operator is consuming data from up to four lookups, thanks to Kalen Delaney (@sqlqueen), Rick Krueger (@dataogre), Mickey Stuewe (@sqlmickey) and Kathi Kellenberger (@auntkathi). Unfortunately Kathi’s name is a bit long and has been truncated, just like in real plans. The last stream performs a join of two others via a Nested Loop (Matan Yungman – @matanyungman). One pulls data from a Spool (my post – @rob_farley) populated from a Table Scan (Jon Morisi). The other applies a catchall operator (the catchall is because Tamera Clark (@tameraclark) didn’t specify any particular operator, and a catchall is what gets shown when SSMS doesn’t know what to show. Surprisingly, it’s showing the yellow one, which is about cursors. Hopefully that’s not what Tamera planned, but anyway...) to the output from an Index Seek operator (Sebastian Meine – @sqlity). Lastly, I think everyone put in 110% effort, so that’s what all the operators cost. That didn’t leave anything for me, unfortunately, but that’s okay. Also, because he decided to use the Paul White operator, Jason Brimhall gets 0%, and his 110% was given to Paul’s Switch operator post. I hope you’ve enjoyed this T-SQL Tuesday, and have learned something extra about Plan Operators. Keep your eye out for next month’s one by watching the Twitter Hashtag #tsql2sday, and why not contribute a post to the party? Big thanks to Adam Machanic as usual for starting all this. @rob_farley

    Read the article

  • NEC Corporation uPD720200 USB 3.0 controller doesn't run at full speed

    - by Radek Zyskowski
    I have fresh install of Ubuntu 10.10. I have external HD on USB 3.0. Trying to connect this via PCI Express NEC controller. dmesg: [ 8966.820078] usb 6-3: new high speed USB device using xhci_hcd and address 0 [ 8966.839831] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep [ 8966.840580] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep [ 8966.841329] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep [ 8966.842079] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep [ 8966.843343] scsi8 : usb-storage 6-3:1.0 [ 8967.847144] scsi 8:0:0:0: Direct-Access SAMSUNG HD204UI 1AQ1 PQ: 0 ANSI: 5 [ 8967.847589] sd 8:0:0:0: Attached scsi generic sg2 type 0 [ 8967.847923] sd 8:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB) [ 8967.848341] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint [ 8967.850959] sd 8:0:0:0: [sdb] Write Protect is off [ 8967.850963] sd 8:0:0:0: [sdb] Mode Sense: 23 00 00 00 [ 8967.850966] sd 8:0:0:0: [sdb] Assuming drive cache: write through [ 8967.851818] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint [ 8967.852365] sd 8:0:0:0: [sdb] Assuming drive cache: write through [ 8967.852370] sdb: sdb1 [ 8967.871315] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint [ 8967.871853] sd 8:0:0:0: [sdb] Assuming drive cache: write through [ 8967.871856] sd 8:0:0:0: [sdb] Attached SCSI disk [ 8967.950728] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint [ 8967.951355] sd 8:0:0:0: [sdb] Sense Key : Recovered Error [current] [descriptor] [ 8967.951361] Descriptor sense data with sense descriptors (in hex): [ 8967.951363] 72 01 04 1d 00 00 00 0e 09 0c 00 00 00 00 00 00 [ 8967.951375] 00 00 00 00 00 50 [ 8967.951380] sd 8:0:0:0: [sdb] ASC=0x4 ASCQ=0x1d [ 8968.790076] xhci_hcd 0000:02:00.0: HC died; cleaning up [ 8968.790076] usb 6-3: USB disconnect, address 2 [ 8999.008554] scsi 8:0:0:0: [sdb] Unhandled error code [ 8999.008558] scsi 8:0:0:0: [sdb] Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK [ 8999.008562] scsi 8:0:0:0: [sdb] CDB: Read(10): 28 00 74 70 97 39 00 00 3e 00 [ 8999.008573] end_request: I/O error, dev sdb, sector 1953535801 [ 8999.008578] Buffer I/O error on device sdb1, logical block 1953535738 [ 8999.008582] Buffer I/O error on device sdb1, logical block 1953535739 [ 8999.008585] Buffer I/O error on device sdb1, logical block 1953535740 [ 8999.008589] Buffer I/O error on device sdb1, logical block 1953535741 [ 8999.008592] Buffer I/O error on device sdb1, logical block 1953535742 [ 8999.008595] Buffer I/O error on device sdb1, logical block 1953535743 [ 8999.008600] Buffer I/O error on device sdb1, logical block 1953535744 [ 8999.008603] Buffer I/O error on device sdb1, logical block 1953535745 [ 8999.008606] Buffer I/O error on device sdb1, logical block 1953535746 [ 8999.008609] Buffer I/O error on device sdb1, logical block 1953535747 [ 8999.008642] scsi 8:0:0:0: rejecting I/O to offline device [ 8999.008747] scsi 8:0:0:0: [sdb] Unhandled error code [ 8999.008749] scsi 8:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK [ 8999.008752] scsi 8:0:0:0: [sdb] CDB: Read(10): 28 00 74 70 97 77 00 00 3e 00 [ 8999.008760] end_request: I/O error, dev sdb, sector 1953535863 sudo lspci -v 2:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30) Physical Slot: 32 Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at fe9fe000 (64-bit, non-prefetchable) [size=8K] Capabilities: [50] Power Management version 3 Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [90] 
MSI-X: Enable- Count=8 Masked- Capabilities: [a0] Express Endpoint, MSI 00 Capabilities: [100] Advanced Error Reporting Capabilities: [140] Device Serial Number ff-ff-ff-ff-ff-ff-ff-ff Capabilities: [150] #18 Kernel driver in use: xhci_hcd Kernel modules: xhci-hcd If I connect any USB 2.0 device to this controller it works fine, but USB 3.0 devices do not. Any ideas?

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 29 (sys.dm_os_buffer_descriptors)

    - by Tamarick Hill
    The sys.dm_os_buffer_descriptors Dynamic Management View gives you a look into the data pages that are currently in your SQL Server buffer pool. Just in case you are not familiar with some of the internals to SQL Server and how the engine works, SQL Server only works with objects that are in memory (buffer pool). When an object such as a table needs to be read and it does not exist in the buffer pool, SQL Server will read (copy) the necessary data page(s) from disk into the buffer pool and cache it. Caching takes place so that it can be reused again and prevents the need of expensive physical reads. To better illustrate this DMV, lets query it against our AdventureWorks2012 database and view the result set. SELECT * FROM sys.dm_os_buffer_descriptors WHERE database_id = db_id('AdventureWorks2012') The first column returned from this result set is the database_id column which identifies the specific database for a given row. The file_id column represents the file that a particular buffer descriptor belongs to. The page_id column represents the ID for the data page within the buffer. The page_level column represents the index level of the data page. Next we have the allocation_unit_id column which identifies a unique allocation unit. An allocation unit is basically a set of data pages. The page_type column tells us exactly what type of page is in the buffer pool. From my screen shot above you see I have 3 distinct type of Pages in my buffer pool, Index, Data, and IAM pages. Index pages are pages that are used to build the Root and Intermediate levels of a B-Tree. A Data page would represent the actual leaf pages of a clustered index which contain the actual data for the table. Without getting into too much detail, an IAM page is Index Allocation Map page which track GAM (Global Allocation Map) pages which in turn track extents on your system. The row_count column details how many data rows are present on a given page. The free_space_in_bytes tells you how much of a given data page is still available, remember pages are 8K in size. The is_modified signifies whether or not a page has been changed since it has been read into memory, .ie a dirty page. The numa_node column represents the Nonuniform memory access node for the buffer. Lastly is the read_microsec column which tells you how many microseconds it took for a data page to be read (copied) into the buffer pool. This is a great DMV for use when you are tracking down a memory issue or if you just want to have a look at what type of pages are currently in your buffer pool. For more information about this DMV, please see the below Books Online link: http://msdn.microsoft.com/en-us/library/ms173442.aspx Follow me on Twitter @PrimeTimeDBA
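     A common way to put this DMV to work (a generic example, not from the original post) is to aggregate it and see how much of the buffer pool each database is consuming; since each page is 8 KB, the page count is converted to megabytes below:

     SELECT CASE database_id
                WHEN 32767 THEN 'ResourceDb'
                ELSE DB_NAME(database_id)
            END AS database_name,
            COUNT(*) AS buffered_page_count,
            COUNT(*) * 8 / 1024 AS buffer_pool_mb   -- 8 KB pages converted to MB
     FROM sys.dm_os_buffer_descriptors
     GROUP BY database_id
     ORDER BY buffered_page_count DESC;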

    Read the article

  • How can I get my progress reviewed as a solo junior developer

    - by Oliver Hyde
    I am currently working for a 2 person company, as the solo primary developer. My boss gets the clients, mocks up some png design templates and hands them over to me. This system has been working fine and i'm really enjoying it. The types of projects I work on are for small - medium sized businesses and they usually want a CMS system. Developed from scratch i'll build a customised backend for the client to add/edit/remove categories, tags, products etc and then output them to the front end according to the design template handed to me. As time has gone on, the projects have increased in complexity, with shopping cart / ordering features and other common e-commerce type features. Again, this system has been working fine and i'm really enjoying it. My issue is my personal development as a programmer. I spend a lot of my spare time reading programming blogs, checking through stackexchange, reading suggested programming books (currently on 'The Pragmatic Programmer', really good so far), doing brain exercises (lumosity.com and khanacademy math problems), doing lots of physical exercise and other personal development type activities. I can't help but feel though, that I'm missing out on feedback, critique. My boss is great and never holds back on praise in regards to my work, but he unfortunately is either to busy to check my code, or to be honest, I don't think it's one of his specialties and so can't provide feedback. I want to know what i'm doing wrong and what i'm doing right. Should I be putting that much logic in the controller, am I modulating my code enough etc. So what I have done is developed a little 'Family Budgeting' app and tried to do it as cleanly and effectively as I currently know how. What i'm wanting to know is, is there somewhere I can submit this app, and have some seasoned developers provide feedback. It's not just a subsection of my code like 'codereview.stackexchange' appears to require, it's my entire workflow that I want critiqued. I know this is a lot to ask, and I expect the main advice given will be to look for a job within a team, which is certainly something I will look into later down the track, but for now I want to persist with my current employment situation, but just don't want to develop too many bad habits. Let me know if I can provide any further information to help clarify, or if this isn't the right place for this type of question I apologise in advance. Didn't want to use reddit as I felt this community fosters more well thought out responses.

    Read the article

  • Customer Support Spotlight: Clemson University

    - by cwarticki
    I've begun a Customer Support Spotlight series that highlights our wonderful customers and Oracle loyalists.  A week ago I visited Clemson University.  As I travel to visit and educate our customers, I provide many useful tips/tricks and support best practices (as found on my blog and twitter). Most of all, I always discover an Oracle gem who deserves recognition for their hard work and advocacy. Meet George Manley.  George is a Storage Engineer who has worked in Clemson's Data Center all through college, partially in the Hardware Architecture group and partially in the Storage group. George and the rest of the Storage Team work with most all of the storage technologies that they have here at Clemson. This includes a wide array of different vendors' disk arrays, with the most of them being Oracle/Sun 2540's.  He also works with SAM/QFS, ACSLS, and our SL8500 Tape Libraries (all three Oracle/Sun products). (pictured L to R, Matt Schoger (Oracle), Mark Flores (Oracle) and George Manley) George was kind enough to take us for a data center tour.  It was amazing.  I rarely get to see the inside of data centers, and this one was massive. Clemson Computing and Information Technology’s physical resources include the main data center located in the Information Technology Center at the Innovation Campus and Technology Park. The core of Clemson’s computing infrastructure, the data center has 21,000 sq ft of raised floor and is powered by a 14MW substation. The ITC power capacity is 4.5MW.  The data center is the home of both enterprise and HPC systems, and is staffed by CCIT staff on a 24 hour basis from a state of the art network operations center within the ITC. A smaller business continuance data center is located on the main campus.  The data center serves a wide variety of purposes including HPC (supercomputing) resources which are shared with other Universities throughout the state, the state's medicaid processing system, and nearly all other needs for Clemson University. Yes, that's no typo (14,256 cores and 37TB of memory!!! Thanks for the tour George and thank you very much for your time.  The tour was fantastic. I enjoyed getting to know your team and I look forward to many successes from Clemson using Oracle products. -Chris WartickiGlobal Customer Management

    Read the article

  • WebLogic 12c hands-on bootcamps for partners – free of charge

    - by JuergenKress
    For all WebLogic Partner Community members we offer our WebLogic 12c hands-on Bootcamps – free of charge! If you want to become an Application Grid Specialized: Register Here! Country Date Location Registration Germany 3-5 April 2012 Oracle Düsseldorf Click here France 24-26 April 2012 Oracle Colombes Click here Spain 08-10 May 2012 Oracle Madrid Click here Netherlands 22-24 May 2012 Oracle Amsterdam Click here United Kingdom 06-08 June 2012 Oracle Reading Click here Italy 19-21 June 2012 Oracle Cinisello Balsamo Click here Portugal 10-12 July 2012 Oracle Lisbon Click here Skill requirements Attendees need to have the following skills as this is required by the product-set and to make sure they get the most out of the training: Basic knowledge in Java and JavaEE Understanding the Application Server concept Basic knowledge in older releases of WebLogic Server would be beneficial Member of WebLogic Partner Community for registration please vist http://www.oracle.com/partners/goto/wls-emea Hardware requirements Every participant works on his own notebook. The minimal hardware requirements are: 4Gb physical RAM (we will boot the image with 2Gb RAM) dual core CPU 15 GB HD Software requirements Please install Oracle VM VirtualBox 4.1.8 Follow-up and certification With the workshop registration you agree to the following next steps Follow-up training attend and pass the Oracle Application Grid Certified Implementation Specialist Registration For details and registration please visit Register Here Free WebLogic Certification (Free assessment voucher to become certified) For all WebLogic experts, we offer free vouchers worth $195 for the Oracle Application Grid Certified Implementation Specialist assessment. To demonstrate your WebLogic knowledge you first have to pass the free online assessment Oracle Application Grid PreSales Specialist. For free vouchers, please send an e-mail with the screenshot of your Oracle Application Grid PreSales certirficate to [email protected] including your Name, Company, E-mail and Country. Note: This offer is limited to partners from Europe Middle East and Africa. Partners from other countries please contact your Oracle partner manager. WebLogic Specialization To become specialized in Application Grid, please make sure that you access the: Application Grid Specialization Guide Application Grid Specialization Checklist If you have any questions please contact the Oracle Partner Business Center. WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic training,Java training,WebLogic 12c,WebLogic Community,OPN,WebLogic,Oracle,WebLogic Certification,Apps Grid Certification,Implementation Specialist,Apps Grid Specialist,Oracle education

    Read the article

  • JRockit Virtual Edition Debug Key

    - by changjae.lee
    There are a few keys that can help the debugging of the JRVE env in console. you can type in each keys in JRVE console to see what's happening under the hood. key '0' : System information key '5' : Enable shutdown key '7' : Start JRockit Management Server (port 7091) key '8' : Statistics Counters key '9' : Full Thread Dump key '0' : Status of Debug-key Below is the sample out from each keys. Debug-key '1' pressed ============ JRockitVE System Information ============ JRockitVE version : 11.1.1.3.0-67-131044 Kernel version : 6.1.0.0-97-131024 JVM version : R27.6.6-28_o-125824-1.6.0_17-20091214-2104-linux-ia32 Hypervisor version : Xen 3.4.0 Boot state : 0x007effff Uptime : 0 days 02:04:31 CPU : uniprocessor @2327 Mhz CPU usage : 0% ctx/s: 285 preempt/s: 0 migrations/s: 0 Physical pages : 82379/261121 (321/1020 MB) Network info : 10.179.97.64 (10.179.97.64/255.255.254.0) GateWay : 10.179.96.1 MAC address : 00:16:3e:7e:dc:78 Boot options : vfsCwd : /application/user_projects/domains/wlsve_domain mainArgs : java -javaagent:/jrockitve/services/sshd/sshd.jar -cp /jrockitve/jrockit/lib/tools.jar:/jrockitve/lib/common.jar:/application/patch_wls1032/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/application/wlserver_10.3/server/lib/weblogic.jar -Dweblogic.Name=WlsveAdmin -Dweblogic.Domain=wlsve_domain -Dweblogic.management.username=weblogic -Dweblogic.management.password=welcome1 -Dweblogic.management.GenerateDefaultConfig=true weblogic.Server consLog : /jrockitve/log/jrockitve.log mounts : ext2 / dev0; posixLocale : en_US posixTimezone : Asia/Seoul posixEncoding : ISO-8859-1 Local disk : Size: 1024M, Used: 728M, Free: 295M ======================================================== Debug-key '5' pressed Shutdown enabled. Debug-key '7' pressed [JRockit] Management server already started. Ignoring request. 
Debug-key '8' pressed Starting stat recording Debug-key '8' pressed ========= Statistics Counters for the last second ========= dev.eth0_rx.cnt : 22 packets dev.eth0_rx_bytes.cnt : 2704 bytes dev.net_interrupts.cnt : 22 interrupts evt.timer_ticks.cnt : 123 ticks hyper.priv_entries.cnt : 144 entries schedule.context_switches.cnt : 271 switches schedule.idle_cpu_time.cnt : 997318849 nanoseconds schedule.idle_cpu_time_0.cnt : 997318849 nanoseconds schedule.total_cpu_time.cnt : 1000031757 nanoseconds time.system_time.cnt : 1000 ns time.timer_updates.cnt : 123 updates time.wallclock_time.cnt : 1000 ns ======================================= Debug-key '9' pressed ===== FULL THREAD DUMP =============== Fri Jun 4 08:22:12 2010 BEA JRockit(R) R27.6.6-28_o-125824-1.6.0_17-20091214-2104-linux-ia32 "Main Thread" id=1 idx=0x4 tid=1 prio=5 alive, in native, waiting -- Waiting for notification on: weblogic/t3/srvr/T3Srvr@0x646ede8[fat lock] at jrockit/vm/Threads.waitForNotifySignal(JLjava/lang/Object;)Z(Native Method) at java/lang/Object.wait(J)V(Native Method) at java/lang/Object.wait(Object.java:485) at weblogic/t3/srvr/T3Srvr.waitForDeath(T3Srvr.java:919) ^-- Lock released while waiting: weblogic/t3/srvr/T3Srvr@0x646ede8[fat lock] at weblogic/t3/srvr/T3Srvr.run(T3Srvr.java:479) at weblogic/Server.main(Server.java:67) at jrockit/vm/RNI.c2java(IIIII)V(Native Method) -- end of trace "(Signal Handler)" id=2 idx=0x8 tid=2 prio=5 alive, in native, daemon Open lock chains ================ Chain 1: "ExecuteThread: '0' for queue: 'weblogic.socket.Muxer'" id=23 idx=0x50 tid=20 waiting for java/lang/String@0x630c588 held by: "ExecuteThread: '1' for queue: 'weblogic.socket.Muxer'" id=24 idx=0x54 tid=21 (active) ===== END OF THREAD DUMP =============== Debug-key '0' pressed Debug-keys enabled Happy Cloud Walking :)

    Read the article

  • Sony VAIO with Insyde H2O EFI bios will not boot into GRUB EFI

    - by Rohan Dhruva
    I bought a new Sony Vaio S series laptop. It uses Insyde H2O BIOS EFI, and trying to install Linux on it is driving me crazy. root@kubuntu:~# parted /dev/sda print Model: ATA Hitachi HTS72756 (scsi) Disk /dev/sda: 640GB Sector size (logical/physical): 512B/4096B Partition Table: gpt Number Start End Size File system Name Flags 1 1049kB 274MB 273MB fat32 EFI system partition hidden 2 274MB 20.8GB 20.6GB ntfs Basic data partition hidden, diag 3 20.8GB 21.1GB 273MB fat32 EFI system partition boot 4 21.1GB 21.3GB 134MB Microsoft reserved partition msftres 5 21.3GB 342GB 320GB ntfs Basic data partition 6 342GB 358GB 16.1GB ext4 Basic data partition 7 358GB 374GB 16.1GB ntfs Basic data partition 8 374GB 640GB 266GB ntfs Basic data partition What is surprising is that there are 2 EFI system partitions on the disk. The sda2 partition is a 20gb recovery partition which loads windows with a basic recovery interface. This is accessible by pressing the "ASSIST" button as opposed to the normal power button. I presume that the sda1 EFI System Partition (ESP) loads into this recovery. The sda3 ESP has more fleshed out entries for Microsoft Windows, which actually goes into Windows 7 (as confirmed by bcdedit.exe on Windows). Ubuntu is installed on sda6, and while installation I chose sda3 as my boot partition. The installer correctly created a sda3/EFI/ubuntu/grubx64.efi application. The real problem: for the life of me, I can't set it to be the default! I tried creating a sda3/startup.nsh which called grubx64.efi, but it didn't help -- on rebooting, the system still boots into windows. I tried using efibootmgr, and that shows as it it worked: root@kubuntu:~# efibootmgr BootCurrent: 0000 BootOrder: 0000,0001 Boot0000* EFI USB Device Boot0001* Windows Boot Manager root@kubuntu:~# efibootmgr --create --gpt --disk /dev/sda --part 3 --write-signature --label "GRUB2" --loader "\\EFI\\ubuntu\\grubx64.efi" BootCurrent: 0000 BootOrder: 0002,0000,0001 Boot0000* EFI USB Device Boot0001* Windows Boot Manager Boot0002* GRUB2 root@kubuntu:~# efibootmgr BootCurrent: 0000 BootOrder: 0002,0000,0001 Boot0000* EFI USB Device Boot0001* Windows Boot Manager Boot0002* GRUB2 However, on rebooting, as you guessed, the machine rebooted directly back into Windows. The only things I can think of are: The sda1 partition is somehow being used Overwrite /EFI/Boot/bootx64.efi and /EFI/Microsoft/Boot/bootmgfw.efi with grubx64.efi [but this seems really radical]. Can anyone please help me out? Thanks -- any help is greatly appreciated, as this issue is driving me crazy!
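     For reference, the second option listed above (making GRUB the fallback boot loader) usually comes down to copying the GRUB image over the default boot path on the ESP. This is only an illustrative sketch, not a recommendation: the mount point is an assumption based on the partition layout shown above, and the original loader should be backed up first.

     # Assumes the ESP (sda3) is mounted at /boot/efi; adjust to your layout.
     sudo mount /dev/sda3 /boot/efi
     # Keep a copy of the existing fallback loader before overwriting it.
     sudo cp /boot/efi/EFI/Boot/bootx64.efi /boot/efi/EFI/Boot/bootx64.efi.bak
     sudo cp /boot/efi/EFI/ubuntu/grubx64.efi /boot/efi/EFI/Boot/bootx64.efi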

    Read the article

  • Character Stats and Power

    - by Stephen Furlani
    I'm making an RPG game system and I'm having a hard time deciding on doing detailed or abstract character statistics. These statistics define the character's natural - not learned - abilities. For example: Mass Effect: 0 (None that I can see) X20 (Xtreme Dungeon Mastery): 1 "STAT" Diablo: 4 "Strength, Magic, Dexterity, Vitality" Pendragon: 5 "SIZ, STR, DEX, CON, APP" Dungeons & Dragons (3.x, 4e): 6 "Str, Dex, Con, Wis, Int, Cha" Fallout 3: 7 "S.P.E.C.I.A.L." RIFTS: 8 "IQ, ME, MA, PS, PP, PE, PB, Spd" Warhammer Fantasy Roleplay (1st ed?): 12-ish "WS, BS, S, T, Ag, Int, WP, Fel, A, Mag, IP, FP" HERO (5th ed): 14 "Str, Dex, Con, Body, Int, Ego, Pre, Com, PD, ED, Spd, Rec, END, STUN" The more stats, the more complex and detailed your character becomes. This comes with a trade-off however, because you usually only have limited resources to describe your character. D&D made this infamous with the whole min/max-ing thing where strong characters were typically not also smart. But also, a character with a high Str typically also has high Con, Defenses, Hit Points/Health. Without high numbers in all those other stats, they might as well not be strong since they wouldn't hold up well in hand-to-hand combat. So things like that force trade-offs within the category of strength. So my original (now rejected) idea was to force players into deciding between offensive and defensive stats: Might / Body Dexterity / Speed Wit / Wisdom Heart Soul But this left some stat's without "opposites" (or opposites that were easily defined). I'm leaning more towards the following: Body (Physical Prowess) Mind (Mental Prowess) Heart (Social Prowess) Soul (Spiritual Prowess) This will define a character with just 4 numbers. Everything else gets based off of these numbers, which means they're pretty important. There won't, however, be ways of describing characters who are fast, but not strong or smart, but absent minded. Instead of defining the character with these numbers, they'll be detailing their character by buying skills and powers like these: Quickness Add a +2 Bonus to Body Rolls when Dodging. for a character that wants to be faster, or the following for a big, tough character Body Building Add a +2 Bonus to Body Rolls when Lifting, Pushing, or Throwing objects. [EDIT - removed subjectiveness] So my actual questions is what are some pitfalls with a small stat list and a large amount of descriptive powers? Is this more difficult to port cross-platform (pen&paper, PC) for example? Are there examples of this being done well/poorly? Thanks,

    Read the article

  • Silverlight Cream for June 19, 2011 -- #1109

    - by Dave Campbell
    In this Issue: Kunal Chowdhury(-2-), Oren Gal, Rudi Grobler, Stephen Price, Erno de Weerd, Joost van Schaik, WindowsPhoneGeek, Andrea Boschin, and Vikram Pendse.

    Above the Fold:
    Silverlight: "Multiple Page Printing in Silverlight4 - Part 3 - Printing Driving Directions" Oren Gal
    WP7: "Prototyping Windows Phone 7 Applications using SketchFlow" Vikram Pendse

    Shoutouts:
    Not Silverlight, but darned cool... Michael Crump has just what you need to get going with Kinect: The busy developers guide to the Kinect SDK Beta
    Rudi Grobler replies to a few questions about how he gets great WP7 screenshots: Screenshot Tools for WP7

    From SilverlightCream.com:

    Windows Phone 7 (Mango) Tutorial - 14 - Detecting Network Information of the Device -- Squeaking in just under the posting wire with 2 more WP7.1 posts is Kunal Chowdhury... first up is this one on grabbing the mobile operator and other network info in WP7.1.

    Windows Phone 7 (Mango) Tutorial - 15 - Detecting Device Information -- Kunal Chowdhury's latest is on using the DeviceStatus class in WP7.1 to detect device information such as whether a physical keyboard is installed, Memory Usage, Total Memory, etc.

    Multiple Page Printing in Silverlight4 - Part 3 - Printing Driving Directions -- Oren Gal has the final episode in his Multiple Page Printing tutorial trilogy up... and this is *way* cool... printing the driving directions.

    AgFx hidden gem - PhoneApplicationFrameEx -- Rudi Grobler continues his previous post about AgFx with this one talking about the PhoneApplicationFrameEx class inside AgFx.Controls.Phone.dll... a RootFrame replacement.

    Binding to ActualHeight or ActualWidth -- Stephen Price's latest XAML snippet is about binding to ActualHeight or ActualWidth... you've probably tried to without luck... check out the workaround.

    Windows Phone 7: Drawing graphics for your application with Inkscape – Part I: Tiles -- Erno de Weerd decided to try the 'free' route to drawing graphics for his WP7 app, and has part 1 of a tutorial series on doing that with Inkscape.

    Mogade powered Live Tile high score service for Windows Phone 7 -- Joost van Schaik expounds on his "Catch 'em Birds" WP7 game in the Marketplace... specifically the online leaderboard using the services of Mogade.

    Building a Reusable ICommand implementation for Windows Phone Mango MVVM apps -- WindowsPhoneGeek's latest post discusses the ICommand interface available in WP7.1 and demonstrates how to implement a reusable ICommand implementation and how to use it.

    A TCP Server with Reactive Extensions -- Andrea Boschin is back posting about Rx, and promises this post *will be* Silverlight related eventually :) First up, though, is a socket server using Rx.

    Prototyping Windows Phone 7 Applications using SketchFlow -- Vikram Pendse has a tutorial up for prototyping your WP7 apps in SketchFlow, including a 5-minute video.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • How can I convince cowboy programmers to use source control?

    - by P.Brian.Mackey
    UPDATE: I work on a small team of devs, 4 guys. They have all used source control, but most of them can't stand it and choose not to use it. I strongly believe source control is a necessary part of professional development. Several issues make it very difficult to convince them to use source control:

    - The team is not used to using TFS. I've had 2 training sessions, but was only allotted 1 hour, which is insufficient.
    - Team members directly modify code on the server. This keeps code out of sync, requires comparison just to be sure you are working with the latest code, and leads to complex merge problems.
    - Time estimates offered by developers exclude the time required to fix any of these problems. So, if I say it will take 10x longer... I have to constantly explain these issues and risk myself, because management may now perceive me as "slow".
    - The physical files on the server differ in unknown ways over ~100 files. Merging requires knowledge of the project at hand and, therefore, developer cooperation, which I am not able to obtain.
    - Other projects are falling out of sync. Developers continue to distrust source control and compound the issue by not using it.
    - Developers argue that using source control is wasteful because merging is error prone and difficult. This is a difficult point to argue, because when source control is being so badly misused and continually bypassed, it is indeed error prone. Therefore, the evidence "speaks for itself" in their view.
    - Developers argue that directly modifying server code, bypassing TFS, saves time. This is also difficult to argue, because the merge required to synchronize the code in the first place is time consuming. Multiply this by the 10+ projects we manage.
    - Permanent files are often stored in the same directory as the web project, so a full publish erases these files that are not in source control. This also drives distrust of source control, because "publishing breaks the project". Fixing this (moving stored files out of the solution subfolders) takes a great deal of time and debugging, as these locations are not set in web.config and often exist across multiple code points.

    So the culture perpetuates itself: bad practice begets more bad practice, and bad solutions drive new hacks to "fix" much deeper, much more time-consuming problems. Servers and hard drive space are extremely difficult to come by, yet user expectations are rising. What can be done in this situation?

    Read the article

  • grub problem after grub-repair

    - by alireza
    I had two OSes, Windows 7 and Ubuntu. I was working in Windows, went out of the room, and when I came back I saw Grub Rescue!!! I booted a live session and ran grub-repair, and it still says "multiple active partitions. operating system not found". I feel that it cannot find the Ubuntu partition at all!

        ubuntu@ubuntu:/dev$ sudo fdisk -l

        Disk /dev/sda: 320.1 GB, 320072933376 bytes
        255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xde771741

           Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *          61         61          0   0  Empty
        /dev/sda2            2048   30146559   15072256  27  Hidden NTFS WinRE
        /dev/sda3   *    30146560   30351359     102400   7  HPFS/NTFS/exFAT
        /dev/sda4        30351360  334483299  152065970   7  HPFS/NTFS/exFAT

        ubuntu@ubuntu:/dev$ ls
        autofs dvdrw loop4 ppp ram5 shm tty15 tty29 tty42 tty56 ttyS10 ttyS24 uinput vcsa1
        block ecryptfs loop5 psaux ram6 snapshot tty16 tty3 tty43 tty57 ttyS11 ttyS25 urandom vcsa2
        bsg fb0 loop6 ptmx ram7 snd tty17 tty30 tty44 tty58 ttyS12 ttyS26 usbmon0 vcsa3
        btrfs-control fd loop7 pts ram8 sr0 tty18 tty31 tty45 tty59 ttyS13 ttyS27 usbmon1 vcsa4
        bus full loop-control ram0 ram9 stderr tty19 tty32 tty46 tty6 ttyS14 ttyS28 usbmon2 vcsa5
        cdrom fuse mapper ram1 random stdin tty2 tty33 tty47 tty60 ttyS15 ttyS29 v4l vcsa6
        cdrw fw0 mcelog ram10 rfkill stdout tty20 tty34 tty48 tty61 ttyS16 ttyS3 vcs vcsa7
        char hpet mei ram11 rtc tty tty21 tty35 tty49 tty62 ttyS17 ttyS30 vcs1 vga_arbiter
        console input mem ram12 rtc0 tty0 tty22 tty36 tty5 tty63 ttyS18 ttyS31 vcs2 video0
        core kmsg net ram13 sda tty1 tty23 tty37 tty50 tty7 ttyS19 ttyS4 vcs3 zero
        cpu log network_latency ram14 sda2 tty10 tty24 tty38 tty51 tty8 ttyS2 ttyS5 vcs4
        cpu_dma_latency loop0 network_throughput ram15 sda3 tty11 tty25 tty39 tty52 tty9 ttyS20 ttyS6 vcs5
        disk loop1 null ram2 sda4 tty12 tty26 tty4 tty53 ttyprintk ttyS21 ttyS7 vcs6
        dri loop2 oldmem ram3 sg0 tty13 tty27 tty40 tty54 ttyS0 ttyS22 ttyS8 vcs7
        dvd loop3 port ram4 sg1 tty14 tty28 tty41 tty55 ttyS1 ttyS23 ttyS9 vcsa

    Someone said:

        Remove the boot flag from sda1, by typing in a terminal: sudo fdisk /dev/sda1
        Then press the 'a' key, and 'Enter'.
        Then press the '1' key, and 'Enter'.
        Then press the 'w' key, and 'Enter'.

    But after grub-repair I got this log: http://paste.ubuntu.com/1371297/ -- then I tried sudo fdisk /dev/sda1, but it says "unable to open /dev/sda1", which makes sense, as sda1 is not in the /dev folder (look at the picture!). Can anybody help me? I need to work on my Windows install for the rest of the semester and don't care about my Ubuntu. Thank you.
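    For reference, a minimal sketch of the quoted fix, corrected so that fdisk is pointed at the whole disk (fdisk opens disks such as /dev/sda, not partitions such as /dev/sda1, which is why the "unable to open" error appears). The partition number is an assumption based on the fdisk -l output above, so double-check it before writing:

        # Open the disk, not the partition.
        sudo fdisk /dev/sda
        # At fdisk's prompt:
        #   a      toggle the bootable flag
        #   1      on partition 1 (the stray, empty sda1 entry)
        #   w      write the table and quit
        # Afterwards only sda3 should carry the boot flag, which should clear the
        # "multiple active partitions" complaint.

        # A roughly equivalent, non-interactive alternative (assuming parted can
        # address the malformed sda1 entry):
        sudo parted /dev/sda set 1 boot off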

    Read the article
