Search Results

Search found 1706 results on 69 pages for 'distributed'.


  • Java: Do something on event in SQL Database?

    - by wretrOvian
    Hello. I'm building an application with distributed parts; while one part (the writer) may be inserting or updating information in a database, the other part (the reader) reads that information and acts on it. I wish to trigger an action event in the reader and reload information from the DB whenever I insert something from the writer. Is there a simple way to do this? Would this be a good idea? : // READER while(true) { connect(); // reload info from DB executeQuery("select * from foo"); disconnect(); }
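    A polling loop like the one above works, but it is cheaper to keep one connection open and only fetch rows the reader has not seen yet. Below is a minimal sketch of that variant, written in Python with the standard-library sqlite3 module for brevity (the question is about Java/JDBC, where the same idea applies); the id/payload columns and the handle_new_row callback are illustrative assumptions, not part of the original schema.

```python
import sqlite3
import time

POLL_INTERVAL_SECONDS = 5

def handle_new_row(row_id, payload):
    # The reader's "action event" - replace with real processing.
    print("new row", row_id, payload)

def poll_for_new_rows(db_path="app.db"):
    last_seen_id = 0
    conn = sqlite3.connect(db_path)        # keep one connection open
    try:
        while True:
            cur = conn.execute(
                "SELECT id, payload FROM foo WHERE id > ? ORDER BY id",
                (last_seen_id,),
            )
            for row_id, payload in cur.fetchall():
                handle_new_row(row_id, payload)
                last_seen_id = row_id       # remember what we have processed
            time.sleep(POLL_INTERVAL_SECONDS)   # avoid hammering the database
    finally:
        conn.close()
```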

    Read the article

  • Game network physics collision

    - by Jonas Byström
    How do I simulate two client-controlled vehicles colliding (sensibly) in a typical client/server setup for a network game? I did read this eminent blog post on how to do distributed network physics in general (without traditional client prediction), but this question is specifically about how to handle collisions of owned objects. Example: Say client A is 20 ms ahead of the server, and client B 300 ms ahead of the server (counting both latency and maximum jitter). This means that when the two vehicles collide, each client will see the other as 320 ms behind - in the opposite direction of the other vehicle's velocity. Head-to-head on a Swedish highway that means a difference of 16 meters/17.5 yards! What not to try: It is virtually impossible to extrapolate the positions, since I also have very complex vehicles with joints and bodies all over, which in turn have linear and angular positions, velocities and accelerations, not to mention states from user input.

    Read the article

  • Squid logs on mongodb

    - by user306241
    Hi, I'm planning to log my squid instances to MongoDB, but the real problem is that we have a huge amount of traffic to be logged, with every access authenticated with user/pass. Eventually we have to produce reports based on the logs. I was thinking of inserting the logs distributed by month and by user, so my collection would look like this: {month: 'april', users: [{user: 'loop0', logs: [{timestamp: 12345678.9, url: 'http://stackoverflow.com/question/ask', ... }]}]} So if I want to generate my reports for the month of April I just have to fetch the right month instead of scanning zillions of lines for timestamps that fall between April 1 and April 30. Of course this type of insert will be slower than just inserting the log line directly. So my question is: is there a better way to do this? Nowadays we have around 12 million log lines per day.
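    For what it's worth, here is a rough sketch of what inserting one log line into the month/user-bucketed collection described above could look like with pymongo; the database/collection names and the three-step upsert are assumptions for illustration, not an established recipe.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
logs = client.squid.logs     # database "squid", collection "logs" (assumed names)

def log_access(month, user, entry):
    # 1. Make sure the document for this month exists.
    logs.update_one({"month": month},
                    {"$setOnInsert": {"users": []}}, upsert=True)
    # 2. Make sure this user has a sub-document inside the month's users array.
    logs.update_one(
        {"month": month, "users.user": {"$ne": user}},
        {"$push": {"users": {"user": user, "logs": []}}},
    )
    # 3. Append the log entry to that user's logs array (positional operator).
    logs.update_one(
        {"month": month, "users.user": user},
        {"$push": {"users.$.logs": entry}},
    )

log_access("april", "loop0",
           {"timestamp": 12345678.9,
            "url": "http://stackoverflow.com/question/ask"})
```

    One practical caveat with this layout is MongoDB's fixed per-document size limit: at roughly 12 million lines per day, a single month document would outgrow it very quickly, so the buckets would likely need to be split further (for example one document per user per month, or per user per day).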

    Read the article

  • How to schedule hundreds of thousands of tasks?

    - by wehriam
    We have hundreds of thousands of tasks that need to be run at a variety of arbitrary intervals, some every hour, some every day, and so on. The tasks are resource intensive and need to be distributed across many machines. Right now tasks are stored in a database with an "execute at this time" timestamp. To find tasks that need to be executed, we query the database for jobs that are due to be executed, then update the timestamps when the task is complete. Naturally this leads to a substantial write load on the database. As far as I can tell, we are looking for something to release tasks into a queue at a set interval. (Workers could then request tasks from that queue.) What is the best way to schedule recurring tasks at scale? For what it's worth we're largely using Python, although we have no problems using components (RabbitMQ?) written in other languages.
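    Since the question already mentions Python and RabbitMQ, here is a rough sketch of the "release due tasks into a queue at a set interval" idea: one scheduler process queries for due rows, publishes them, and reschedules them, while workers consume from the queue. The table and column names, the sqlite3 stand-in database, and the pika connection details are assumptions for illustration only.

```python
import json
import sqlite3   # stand-in for the real tasks database
import time

import pika

def release_due_tasks(db, channel):
    now = time.time()
    due = db.execute(
        "SELECT id, payload, interval_seconds FROM tasks WHERE execute_at <= ?",
        (now,),
    ).fetchall()
    for task_id, payload, interval in due:
        channel.basic_publish(
            exchange="",
            routing_key="tasks",
            body=json.dumps({"id": task_id, "payload": payload}),
            properties=pika.BasicProperties(delivery_mode=2),  # persistent message
        )
        # One write per task, but only at the moment the task is released.
        db.execute("UPDATE tasks SET execute_at = ? WHERE id = ?",
                   (now + interval, task_id))
    db.commit()

def main():
    db = sqlite3.connect("tasks.db")
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="tasks", durable=True)
    while True:
        release_due_tasks(db, channel)
        time.sleep(10)   # scheduler tick; workers consume from the "tasks" queue

if __name__ == "__main__":
    main()
```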

    Read the article

  • How to manage reports/files distribution to different destinations in Unix?

    - by mossie
    The reporting tools will generate a huge number of reports/files in the file system (a Unix directory). There is a list of destinations (email addresses and shared folders), and a different, possibly overlapping, set of reports/files is required to be distributed to each destination. I would like to know whether there is a way to manage this report delivery efficiently using shell scripts, so that maintaining the list of reports and destinations does not become a mess in the future. It's quite an open-ended question; the constraint, however, is that it should work within the boundaries of managing the reports in a Unix FS.
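    One way to keep the maintenance manageable is to hold the report-to-destination mapping in a single table-driven configuration and let a small script walk it, rather than growing one ad-hoc script per destination. The sketch below uses Python instead of shell purely for brevity (the same table-driven idea translates to a shell script reading a config file); all paths, addresses and patterns are made-up examples.

```python
import glob
import os
import shutil
import smtplib
from email.message import EmailMessage

REPORT_DIR = "/data/reports"
DESTINATIONS = {
    # destination           -> glob patterns of the reports it should receive
    "/shared/finance":         ["daily_sales_*.csv", "monthly_summary_*.pdf"],
    "ops-team@example.com":    ["daily_sales_*.csv", "error_report_*.txt"],
}

def send_mail(address, files):
    msg = EmailMessage()
    msg["Subject"] = "Reports"
    msg["From"] = "reports@example.com"
    msg["To"] = address
    for path in files:
        with open(path, "rb") as f:
            msg.add_attachment(f.read(), maintype="application",
                               subtype="octet-stream",
                               filename=os.path.basename(path))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

def distribute():
    for destination, patterns in DESTINATIONS.items():
        files = [p for pattern in patterns
                 for p in glob.glob(os.path.join(REPORT_DIR, pattern))]
        if destination.startswith("/"):          # shared folder
            for path in files:
                shutil.copy(path, destination)
        else:                                    # email address
            send_mail(destination, files)

if __name__ == "__main__":
    distribute()
```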

    Read the article

  • How to access constant defined in child class from parent class functions?

    - by kavoir.com
    I saw this example from php.net: <?php class MyClass { const MY_CONST = "yonder"; public function __construct() { $c = get_class( $this ); echo $c::MY_CONST; } } class ChildClass extends MyClass { const MY_CONST = "bar"; } $x = new ChildClass(); // prints 'bar' $y = new MyClass(); // prints 'yonder' ?> But $c::MY_CONST is only recognized in PHP 5.3.0 or later, and the class I'm writing may be distributed widely. Basically, I have defined a constant in ChildClass, and one of the functions in MyClass (the parent class) needs to use that constant. Any ideas?

    Read the article

  • Python: What's a correct and good way to implement __hash__()?

    - by random-name
    What's a correct and good way to implement __hash__()? I am talking about the function that returns a hashcode that is then used to insert objects into hashtables aka dictionaries. As __hash__() returns an integer and is used for "binning" objects into hashtables I assume that the values of the returned integer should be uniformly distributed for common data (to minimize collisions). What's a good practice to get such values? Are collisions a problem? In my case I have a small class which acts as a container class holding some ints, some floats and a string.
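    A common pattern for a small container class like the one described is to define __eq__ and __hash__ together over the same tuple of fields, so that equal objects always hash equally and the tuple hash keeps values reasonably well distributed. The field names below are placeholders for the ints, floats and string mentioned in the question.

```python
class Record:
    def __init__(self, count, ratio, label):
        self.count = count      # int
        self.ratio = ratio      # float
        self.label = label      # str

    def _key(self):
        # The tuple of everything that defines this object's identity.
        return (self.count, self.ratio, self.label)

    def __eq__(self, other):
        if not isinstance(other, Record):
            return NotImplemented
        return self._key() == other._key()

    def __hash__(self):
        # Delegate to tuple hashing: consistent with __eq__ and well spread.
        return hash(self._key())

d = {Record(1, 2.5, "a"): "first"}
print(d[Record(1, 2.5, "a")])   # "first" - an equal record finds the same bucket
```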

    Read the article

  • Django Project Done and Working. Now What?

    - by Rodrogo
    Hi, I just finished what I would call a small Django project and pretty soon it's going live. It's only 6 models, but it has a fairly complex view layer and a lot of record saving and retrieving. Of course, setting aside the huge number of bugs that will probably fill my inbox to the top, what would be the next step towards a website with the best performance? What could be tweaked? I've been using jmeter a lot recently and feel confident that I have a good baseline for future performance comparisons, but the thing is: I'm not sure what the best starting point is, since I'm a greedy bastard that wants to work the least possible and gather the best results. For instance, should I take an approach towards infrastructure, like a distributed database, or should I go with the code itself, and in that case, is there something that specifically results in better performance? In your experience, what pays off more? Personal anecdotes are welcome, but some fact-based opinions are even more. :) Thanks very much.

    Read the article

  • What you would learn. [closed]

    - by NDeveloper
    Hi, I have a little free time and would like to learn a new development language/technology. I know it can be very subjective, but please share with us what you would learn and why. I have about 4 years of .NET development experience, mostly in distributed applications, and a little more than 2 years of C/C++. There are a lot of options to choose from, like Google Go/F#/Python/Scala/Java/ASP.NET/mobile app development (for Android, BB, iPhone...)/databases (MS SQL, Oracle, or even MongoDB or CouchDB)/any new concepts, etc. I would like to use the time as an investment, so the knowledge gained will be useful.

    Read the article

  • How to break up a table holding 100mil+ number of records?

    - by Chiao
    We're currently storing answers to 52 predefined questions for our clients on our matchmaking site. We have over 30 million unique users, which adds up to a worst case of 52 x 30 million rows. Of these 52 questions, 11 are required and always answered. Our previous solution was to open a separate answer table for each question. This solution distributed our answer rows for faster insert/delete/update, but it also forced unconventional programming, such as dynamically opening a table each time a question is added or updated, or removing an answer table if it was to be destroyed permanently. We want to come up with a better solution for our third version but couldn't get very far yet. Any ideas on how to accomplish this in another, perhaps more conventional, way?

    Read the article

  • Is it possible (and if so how) to delete a zookeeper node with apache camel?

    - by b_habegger
    I am looking into using ZooKeeper to synchronize distributed Camel instances. In doing so, it seems that there is no possibility to delete a znode from ZooKeeper from Camel (I imagined some sort of producer operation). A look at the component's source seems to confirm this (I didn't find any code explicitly requesting a node removal). Am I missing something? When would a node created via Camel (in my case EPHEMERAL) be deleted? Maybe I am misunderstanding something about ZooKeeper?
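    Whether or not the Camel component exposes it, a znode delete is a single call on a plain ZooKeeper client, and an EPHEMERAL node is in any case removed automatically when the session that created it closes. The sketch below uses the Python kazoo client purely for illustration (the Java ZooKeeper API has an equivalent delete(path, version) call); the paths and data are made up.

```python
from kazoo.client import KazooClient

zk = KazooClient(hosts="127.0.0.1:2181")
zk.start()

zk.ensure_path("/locks")                                   # make sure the parent exists
zk.create("/locks/my-lock", b"owner-1", ephemeral=True)    # created by this session
print(zk.exists("/locks/my-lock") is not None)             # True

zk.delete("/locks/my-lock")                                # explicit removal
print(zk.exists("/locks/my-lock") is not None)             # False

zk.stop()   # had we not deleted it, closing the session would remove the ephemeral node
```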

    Read the article

  • Learn Cloud Computing – It’s Time

    - by Ben Griswold
    Last week, I gave an in-house presentation on cloud computing.  I walked through an overview of cloud computing – characteristics (on demand, elastic, fully managed by provider), why are we interested (virtualization, distributed computing, increased access to high-speed internet, weak economy), various types (public, private, virtual private cloud) and services models (IaaS, PaaS, SaaS.)  Though numerous providers have emerged in the cloud computing space, the presentation focused on Amazon, Google and Microsoft offerings and provided an overview of their platforms, costs, data tier technologies, management and security.  One of the biggest talking points was why developers should consider the cloud as part of their deployment strategy: You only have to pay for what you consume You will be well-positioned for one time event provisioning You will reap the benefits of automated growth and scalable technologies For the record: having deployed dozens of applications on various platforms over the years, pricing tends to be the biggest customer concern.  Yes, scalability is a customer consideration, too, but it comes in distant second.  Boy do I hope you’re still reading… You may be thinking, “Cloud computing is well and good and it sounds catchy, but should I bother?  After all, it’s just another technology bundle which I’m supposed to ramp up on because it’s the latest thing, right?”  Well, my clients used to be 100% reliant upon me to find adequate hosting for them.  Now I find they are often aware of cloud services and some come to me with the “possibility” that deploying to the cloud is the best solution for them.  It’s like the patient who walks into the doctor’s office with their diagnosis and treatment already in mind thanks to the handful of Internet searches they performed earlier that day.  You know what?  The customer may be correct about the cloud. It may be a perfect fit for their app.  But maybe not…  I don’t think there’s a need to learn about every technical thing under the sun, but if you are responsible for identifying hosting solutions for your customers, it is time to get up to speed on cloud computing and the various offerings (if you haven’t already.)  Here are a few references to get you going: DZone Refcardz #82 Getting Started with Cloud Computing by Daniel Rubio Wikipedia Cloud Computing – What is it? Amazon Machine Images (AMI) Google App Engine SDK Azure SDK EC2 Spot Pricing Google App Engine Team Blog Amazon EC2 Team Blog Microsoft Azure Team Blog Amazon EC2 – Cost Calculator Google App Engine – Cost and Billing Resources Microsoft Azure – Cost Calculator Larry Ellison has stated that cloud computing has been defined as "everything that we currently do" and that it will have no effect except to "change the wording on some of our ads" Oracle launches worldwide cloud-computing tour NoSQL Movement  

    Read the article

  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g indexThis is the final article in the quick guide to Oracle IRM. If you've followed everything prior you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide as this next article is common to both. ContentsWhy this is the most important part... Understanding the classification and standard rights model Identifying business use cases Creating an effective IRM classification modelOne single classification across the entire businessA context for each and every possible granular use caseWhat makes a good context? Deciding on the use of roles in the context Reviewing the features and security for context roles Summary Why this is the most important part...Now the real work begins, installing and getting an IRM system running is as simple as following instructions. However to actually have an IRM technology easily protecting your most sensitive information without interfering with your users existing daily work flows and be able to scale IRM across the entire business, requires thought into how confidential documents are created, used and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle have over 10 years of experience in helping customers and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation or financial reports. No matter what type of user is going to access the information, be they employees, contractors or customers, there are common goals you are always trying to achieve.Securing the content at the earliest point possible and do it automatically. Removing the dependency on the user to decide to secure the content reduces the risk of mistakes significantly and therefore results a more secure deployment. K.I.S.S. (Keep It Simple Stupid) Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience. Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents and inform them the value of the technology to both your business and them. Before getting into the detail, I must pay homage to Martin White, Vice President of client services in SealedMedia, the company Oracle acquired and who created Oracle IRM. In the SealedMedia years Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective and simple to use IRM deployments. 
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article) and then after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and if the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further. Creating an effective IRM classification model Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. First task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity, the question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum... One single classification across the entire business Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in thinking about considering this. Document security classification decisions are simple. You only have one context to chose from! User provisioning is simple, just make sure everyone has a role in the only context in the business. Administration is very low, if you assign rights to groups from the business user repository you probably never have to touch IRM administration again. There are however some obvious downsides to this model.All users in have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is. You cannot delegate control of different documents to different parts of the business, this may not satisfy your regulatory requirements for the separation and delegation of duties. Changing a users role affects every single document ever secured. Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value. 
Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, imagine if you could ensure that only everyone you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file. A context for each and every possible granular use case Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity... restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge amount of contexts. For example, lets just look at the resulting contexts for one engineering group. Q1FY2010 Restricted Internal - Engineering Group 1 - Research Q1FY2010 Restricted Internal - Engineering Group 1 - Design Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing Q1FY2010 Restricted External- Engineering Group 1 - Research Q1FY2010 Restricted External - Engineering Group 1 - Design Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing Q1FY2010 Confidential Internal - Engineering Group 1 - Research Q1FY2010 Confidential Internal - Engineering Group 1 - Design Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing Q1FY2010 Confidential External - Engineering Group 1 - Research Q1FY2010 Confidential External - Engineering Group 1 - Design Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret Internal - Engineering Group 1 - Research Q1FY2010 Top Secret Internal - Engineering Group 1 - Design Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret External - Engineering Group 1 - Research Q1FY2010 Top Secret External - Engineering Group 1 - Design Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing Now multiply the above by 6 for each engineering group, 18 contexts. You are then creating/reviewing another 18 every 3 months. After a year you've got 72 contexts. What would be the advantages of such a complex classification model? You can satisfy very granular rights requirements, for example only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis. Your business may have very complex rights requirements and mapping this directly to IRM may be an obvious exercise. The disadvantages of such a classification model are significant...Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to reside over each year. From an end users perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts. 
They may be able to print content from one but not another, be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology. Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights. Hard to understand who can do what with what. Imagine being the VP of engineering and as part of an internal security audit you are asked the question, "What rights to researchers have to our top secret information?". In this complex model the answer is not simple, it would depend on many roles in many contexts. Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts does not give fine enough granularity. What makes a good context? Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information, they don't actually know where the documents are, how they are created or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which help you define a context. First ask these questions about a set of documentsWhat is the topic? Who are legitimate contributors on this topic? Who are the authorized readership? If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors, therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one, it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business. Deciding on the use of roles in the context Once you have decided on that first initial use case and a context to create let's look at the details you need to decide upon. 
For each context, identify; Administrative rolesBusiness owner, the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are the usually the person with the most at risk when sensitive information is lost. Point of contact, the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator. Context administrator, the person who will enact the decisions of the Business Owner. Sometimes the point of contact, sometimes a trusted IT person. Document related rolesContributors, the people who create and edit documents in this context. Reviewers, the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See later discussion on Published-work and Work-in-Progress) Readers, the people who read documents from this context. Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM. Reviewing the features and security for context roles At this point we have decided on a classification of information, understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First think why are you protecting the documents in the first place? It is to prevent the loss of leaking of information to the wrong people. To control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point, with IRM you can erect many barriers to prevent access to content yet too many restrictions and authorized users will often find ways to circumvent using the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content, remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information, and sales can only access sales data. Act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community. 
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology which simplifies the process of assigning users the correct business roles. The big print issue... Printing is often an issue of contention, users love to print but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding about whether to give print rights. Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy making it easier to trace who printed it. Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk. Print activity is audited, therefore you can monitor and react to users abusing print rights. Summary In summary it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group is secured and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introducing the complexity to deliver them at the right level. In another article i'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • Oracle collaborates with leading IT vendors on Cloud Management Standards

    - by Anand Akela
    During the last couple of days, two key specifications for cloud management standards have been announced. Oracle collaborated with leading technology vendors from the IT industry on both of these cloud management specifications. One of the specifications focuses on the "Infrastructure as a Service" (IaaS) cloud service model, while the other specification announced today focuses on the "Platform as a Service" (PaaS) cloud service model. Please see The NIST Definition of Cloud Computing to learn more about IaaS and PaaS. Earlier today Oracle, CloudBees, Cloudsoft, Huawei, Rackspace, Red Hat, and Software AG announced the Cloud Application Management for Platforms (CAMP) specification, which will be submitted to the Organization for the Advancement of Structured Information Standards (OASIS) for development of an industry standard, in an effort to help ensure interoperability for deploying and managing applications across cloud environments. Typical PaaS architecture - Source: CAMP specification. The CAMP specification defines the artifacts and APIs that need to be offered by a PaaS cloud to manage the building, running, administration, monitoring and patching of applications in the cloud. Its purpose is to enable interoperability among self-service interfaces to PaaS clouds by defining artifacts and formats that can be used with any conforming cloud, and to enable independent vendors to create tools and services that interact with any conforming cloud using the defined interfaces. Cloud vendors can use these interfaces to develop new PaaS offerings that will interact with independently developed tools and components. In a separate cloud standards announcement yesterday, the Distributed Management Task Force (DMTF), the organization bringing the IT industry together to collaborate on systems management standards development, validation, promotion and adoption, released the new Cloud Infrastructure Management Interface (CIMI) specification. Oracle collaborated with various technology vendors and industry organizations on this specification. CIMI standardizes interactions between cloud environments to achieve interoperable cloud infrastructure management between service providers and their consumers and developers, enabling users to manage their cloud infrastructure use easily and without complexity. DMTF developed CIMI as a self-service interface for infrastructure clouds (IaaS focus), allowing users to dynamically provision, configure and administer their cloud usage with a high-level interface that greatly simplifies cloud systems management. Mark Carlson, Principal Cloud Strategist at Oracle, provides more details about CAMP and CIMI in his blog. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Enterprise 2.0 Conference: Building Social Business

    - by kellsey.ruppel
    The way we work is changing rapidly, offering an enormous competitive advantage to those who embrace the new tools that enable contextual, agile and simplified information exchange and collaboration to distributed workforces and networks of partners and customers. As many of you are aware, Enterprise 2.0 is the term for the technologies and business practices that liberate the workforce from the constraints of legacy communication and productivity tools like email. It provides business managers with access to the right information at the right time through a web of inter-connected applications, services and devices. Enterprise 2.0 makes accessible the collective intelligence of many, translating to a huge competitive advantage in the form of increased innovation, productivity and agility. The Enterprise 2.0 Conference takes a strategic perspective, emphasizing the bigger picture implications of the technology and the exploration of what is at stake for organizations trying to change not only tools, but also culture and process. Beyond discussion of the "why", there will also be in-depth opportunities for learning the "how" that will help you bring Enterprise 2.0 to your business.You won't want to miss this opportunity to learn and hear from leading experts in the fields of technology for business, collaboration, culture change and collective intelligence. Oracle is a proud Gold sponsor of the Enterprise 2.0 Conference, taking place this week in Boston. Come and learn about Oracle at the following panel sessions and Market Leaders Theater Sessions. Tuesday, June 19, 2012 at 1:30 p.m. Market Theater Presentation Into the Activity Stream, and Beyond! Introducing Oracle Social Network Oracle Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter Tuesday, June 19, 2012 at 2:30 p.m.  Panel Session Innovation versus Integration Oracle Panel Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter Wednesday, June 20, 2012 at 1:30 p.m. Business Leadership Roundtable Oracle Panel Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter Wednesday, June 20, 2012 at 3:00 p.m. Market Theater Presentation Into the Activity Stream, and Beyond! Introducing Oracle Social Network Oracle Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter Thursday, June 21, 2012 at 8:30 a.m. Panel Session Collecting and Processing Big Data: Architecting Systems that Scale Oracle Panel Speaker: Ashok Joshi, Senior Director, Berkeley DB Development Thursday, June 21, 2012 at 11:00 a.m. Panel Session The Future of Big Data: What's Next Oracle Panel Speaker: Ashok Joshi, Senior Director, Berkeley DB Development Be sure to stop by and visit Oracle booth #501, to see live demonstrations of Oracle Social Network and Oracle WebCenter!

    Read the article

  • Oracle Coherence 3.5 : Create Internet-scale applications using Oracle's high-performance data grid

    - by frederic.michiara
    Oracle Coherence Coherence provides replicated and distributed (partitioned) data management and caching services on top of a reliable, highly scalable peer-to-peer clustering protocol. Coherence has no single points of failure; it automatically and transparently fails over and redistributes its clustered data management services when a server becomes inoperative or is disconnected from the network. When a new server is added, or when a failed server is restarted, it automatically joins the cluster and Coherence fails back services to it, transparently redistributing the cluster load. Coherence includes network-level fault tolerance features and transparent soft re-start capability to enable servers to self-heal. For the ones looking at an easy reading and first good approach to Oracle Coherence, I would recommend reading the following book : Overview of Oracle Coherence 3.5 Build scalable web sites and Enterprise applications using a market-leading data grid product Design and implement your domain objects to work most effectively with Coherence and apply Domain Driven Designs (DDD) to Coherence applications Leverage Coherence events and continuous queries to provide real-time updates to client applications Successfully integrate various persistence technologies, such as JDBC, Hibernate, or TopLink, with Coherence Filled with numerous examples that provide best practice guidance, and a number of classes you can readily reuse within your own applications This book is targeted to Architects and developers, and as in our team we're more about Solutions Architects than developers I found interest in this book as it help to understand better Oracle Coherence and its value. The only point I may not agree with the authors is that Oracle Coherence is not an alternative to Oracle RAC in providing High Availability, but combining both Oracle RAC and Oracle Coherence will help Architects and Customers to reach higher level of service and high-availability. This book is available on https://www.packtpub.com/oracle-coherence-3-5/book Need to find out about Table of contents : https://www.packtpub.com/toc/oracle-coherence-35-table-contents Discover a sample chapter : https://www.packtpub.com/sites/default/files/6125_Oracle%20Coherence_SampleChapter.pdf Read also articles from the Authors on http://www.packtpub.com/ : Working with Aggregators in Oracle Coherence 3.5 Working with Value Extractors and Simplifying Queries in Oracle Coherence 3.5 Querying the Data Grid in Coherence 3.5: Obtaining Query Results and Using Indexes Installing Coherence 3.5 and Accessing the Data Grid: Part 1 Installing Coherence 3.5 and Accessing the Data Grid: Part 2 For more information on Oracle Coherence : What Oracle Coherence Can Do for You... : http://www.oracle.com/technology/products/coherence/coherencedatagrid/coherence_solutions.html Oracle Coherence on OTN : http://www.oracle.com/technology/products/coherence/index.html Oracle Coherence Knowledge Base : http://coherence.oracle.com/display/COH/Oracle+Coherence+Knowledge+Base+Home

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests, and parses them out to “worker nodes'”. This is useful in cases such as scientific simulations, drug research, MatLab work and where other large compute loads are required. It’s not the immediate-result type computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical to the way that many mainframe computing use-cases work. You can use commodity-based computers to create an HPC Cluster, such as the Linux application called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster that you can read more about here. The issue with HPC (from any vendor) that some organization have is the amount of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower, and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic. Implementation: Recently, Microsoft announced an internal partnership between the HPC group (Now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding in Compute Nodes in Windows Azure, using a “Broker Node”.  You can then purchase time for adding machines, and then stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC, but need to “burst” that load count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations. References: Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx  Links for further research on HPC, includes Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx 

    Read the article

  • SBUG Session: The Enterprise Cache

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman] I did a session on "The Enterprise Cache" at the UK SOA/BPM User Group yesterday which generated some useful discussion. The proposal was for a dedicated caching layer which all app servers and service providers can hook into, sharing resources and common data. The architecture might end up like this: I'll update this post with a link to the slide deck once it's available. The next session will have Udi Dahan walking through nServiceBus, register on EventBrite if you want to come along. Synopsis Looked at the benefits and drawbacks of app-centric isolated caches, compared to an enterprise-wide shared cache running on dedicated nodes; Suggested issues and risks around caching including staleness of data, resource usage, performance and testing; Walked through a generic service cache implemented as a WCF behaviour – suitable for IIS- or BizTalk-hosted services - which I'll be releasing on CodePlex shortly; Listed common options for cache providers and their offerings. Discussion Cache usage. Different value propositions for utilising the cache: improved performance, isolation from underlying systems (e.g. service output caching can have a TTL large enough to cover downtime), reduced resource impact – CPU, memory, SQL and cost (e.g. caching results of paid-for services). Dedicated cache nodes. Preferred over in-host caching provided latency is acceptable. Depending on cache provider, can offer easy scalability and global replication so cache clients always use local nodes. Restriction of AppFabric Caching to Windows Server 2008 not viewed as a concern. Security. Limited security model in most cache providers. Options for securing cache content suggested as custom implementations. Obfuscating keys and serialized values may mean additional security is not needed. Depending on security requirements and architecture, can ensure cache servers only accessible to cache clients via IPsec. Staleness. Generally thought to be an overrated problem. Thinking in line with eventual consistency, that serving up stale data may not be a significant issue. Good technical arguments support this, although I suspect business users will be harder to persuade. Providers. Positive feedback for AppFabric Caching – speed, configurability and richness of the distributed model making it a good enterprise choice. .NET port of memcached well thought of for performance but lack of replication makes it less suitable for these shared scenarios. Replicated fork – repcached – untried and less active than memcached. NCache also well thought of, but Express version too limited for enterprise scenarios, and commercial versions look costly compared to AppFabric.
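    The service-output-caching point above (a TTL large enough to cover downtime, reduced load on paid-for or expensive backends) boils down to the cache-aside pattern. The snippet below is only a minimal sketch of that pattern in Python, with an in-process dictionary standing in for the shared cache provider (AppFabric, memcached, NCache and so on); the talk's actual implementation was a WCF behaviour, and the quote service here is hypothetical.

```python
import time

class TTLCache:
    def __init__(self):
        self._store = {}                       # key -> (expiry_time, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expiry, value = entry
        if time.time() > expiry:
            del self._store[key]               # stale: drop it and report a miss
            return None
        return value

    def set(self, key, value, ttl_seconds):
        self._store[key] = (time.time() + ttl_seconds, value)

cache = TTLCache()

def call_backend_service(symbol):
    # Hypothetical expensive or paid-for backend call.
    return {"symbol": symbol, "price": 42.0}

def get_quote(symbol):
    cached = cache.get(("quote", symbol))
    if cached is not None:
        return cached                          # cache hit: backend not touched
    result = call_backend_service(symbol)
    cache.set(("quote", symbol), result, ttl_seconds=60)   # serve this for 60s
    return result
```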

    Read the article

  • Hadoop growing pains

    - by Piotr Rodak
    This post is not going to be about SQL Server. I have been reading more and more recently about “Big Data” – a very catchy term that describes the untamed increase in the data that mankind is producing each day and the struggle to capture the meaning of these data. Ten years ago, and perhaps even three years ago, this need was not so widely recognized. The increasing number of smartphones, and the discernible trend of mainstream Internet traffic moving to smartphone-generated traffic, mean that there is a bigger and bigger stream of information that has to be stored, transformed, analysed and perhaps monetized. The nature of this traffic makes it very difficult to wrap it into the boundaries of relational database engines. The amount of data makes it nearly impossible to process it in relational databases within reasonable time. This is where ‘cloud’ technologies come into play. I just read a good article about the growing pains of Hadoop, which has become one of the leading players in the distributed processing arena within the last year or two. Toby Baer concludes in it that the lack of enterprise-ready toolsets hinders Hadoop’s adoption in the enterprise world. While this is true, something else drew my attention. According to the article, there are already about half a dozen commercially supported distributions of Hadoop. For me, who has not been involved in the intricacies of the open-source world, this is quite an interesting observation. On one hand, it is good that there is competition, as it is beneficial in the end to the customer. On the other hand, the customer is faced with the difficulty of choosing the right distribution. In the future, when Hadoop distributions fork even more, this choice will be even harder. The distributions will have overlapping sets of features, yet will be quite incompatible with each other. I suppose it will take a few years until leaders emerge and the market begins to resemble what we see in the Linux world. There are myriads of distributions, but only a few are acknowledged by the industry as enterprise standard. Others are honed by bearded individuals with too much time to spend. In any case, the third thing I can’t help but notice about the proliferation of Hadoop distributions is that IT professionals will have jobs.   BuzzNet Tags: Hadoop,Big Data,Enterprise IT

    Read the article

  • Taking the fear out of a Cloud initiative through the use of security tools

    - by user736511
    Typical employees, constituents, and business owners  interact with online services at a level where their knowledge of back-end systems is low, and most of the times, there is no interest in knowing the systems' architecture.  Most application administrators, while partially responsible for these systems' upkeep, have very low interactions with them, at least at an operational, platform level.  Of greatest interest to these groups is the consistent, reliable, and manageable operation of the interfaces with which they communicate.  Introducing the "Cloud" topic in any evolving architecture automatically raises the concerns for data and identity security simply because of the perception that when owning the silicon, enterprises are not able to manage its content.  But is this really true?   In the majority of traditional architectures, data and applications that access it are physically distant from the organization that owns it.  It may reside in a shared data center, or a geographically convenient location that spans large organizations' connectivity capabilities.  In the end, very often, the model of a "traditional" architecture is fairly close to the "new" Cloud architecture.  Most notable difference is that by nature, a Cloud setup uses security as a core function, and not as a necessary add-on. Therefore, following best practices, one can say that data can be safer in the Cloud than in traditional, stove-piped environments where data access is segmented and difficult to audit. The caveat is, of course, what "best practices" consist of, and here is where Oracle's security tools are perfectly suited for the task.  Since Oracle's model is to support very large organizations, it is fundamentally concerned about distributed applications, databases etc and their security, and the related Identity Management Products, or DB Security options reflect that concept.  In the end, consumers of applications and their data are to be served more safely in a controlled Cloud environment, while realizing the many cost savings associated with it. Having very fast resources to serve them (such as the Exa* platform) makes the concept even more attractive.  Finally, if a Cloud strategy does not seem feasible, consider the pros and cons of a traditional vs. a Cloud architecture.  Using the exact same criteria and business goals/traditions, and with Oracle's technology, you might be hard pressed to justify maintaining the technical status quo on security alone. For additional information please visit Oracle's Cloud Security page at: http://www.oracle.com/us/technologies/cloud/cloud-security-428855.html

    Read the article

  • At the Java DEMOgrounds - Oracle’s Java Embedded Suite 7.0

    - by Janice J. Heiss
    The Java Embedded Suite 7.0, a new, packaged offering that facilitates the creation of  applications across a wide range of  embedded systems including network appliances, healthcare devices, home gateways, and routers was demonstrated by Oleg Kostukovsky of  Oracle’s Java Embedded Global Business Unit. He presented a device-to-cloud application that relied upon a scan station connected to Java Demos throughout JavaOne. This application allows an NFC tag distributed on a handout given to attendees to be scanned to gather various kinds of data. “A raffle allows attendees to check in at six unique demos and qualify for a prize,” explained Kostukovsky. “At the same time, we are collecting data both from NFC tags and sensors. We have a sensor attached to the back of the skin page that collects temperature, humidity, light intensity, and motion data at each pod. So, all of this data is collected using an application running on a small device behind the scan station."“Analytics are performed on the network using Java Embedded Suite and technology from Oracle partners, SeeControl, Hitachi, and Globalscale,” Kostukovsky said. Next, he showed me a data visualization web site showing sensory, environmental, and scan data that is collected on the device and pushed into the cloud. The Oracle product that enabled all of this, Java Embedded Suite 7.0, was announced in late September. “You can see all kinds of data coming from the stations in real-time -- temperature, power consumption, light intensity and humidity,” explained Kostukovsky. “We can identify trends and look at sensory data and see all the trends of all the components. It uses a Java application written by a partner, SeeControl. So we are using a Java app server and web server and a database.” The Market for Java Embedded Suite 7.0 “It's mainly geared to mission-to-mission applications because the overall architecture applies across multiple industries – telematics, transportation, industrial automation, smart metering, etc. This architecture is one in which the network connects to sensory devices and then pre-analyzes the data from these devices, after which it pushes the data to the cloud for processing and visualization. So we are targeting all those industries with those combined solutions. There is a strong interest from Telcos, from carriers, who are now moving more and more to the space of providing full services for their interim applications. They are looking to deploy solutions that will provide a full service to those who are building M-to-M applications.”

    Read the article

  • Hadoop growing pains

    - by Piotr Rodak
    This post is not going to be about SQL Server. I have recently been reading more and more about “Big Data” – a very catchy term that describes the untamed growth of the data that mankind produces each day and the struggle to capture the meaning of these data. Ten years ago, and perhaps even three years ago, this need was not so widely recognized. The increasing number of smartphones, and the discernible trend of mainstream Internet traffic shifting toward smartphone-generated traffic, mean that there is an ever-bigger stream of information that has to be stored, transformed, analysed and perhaps monetized. The nature of this traffic makes it very difficult to fit within the boundaries of relational database engines, and the sheer amount of data makes it nearly impossible to process in relational databases within a reasonable time. This is where ‘cloud’ technologies come into play. I just read a good article about the growing pains of Hadoop, which has become one of the leading players in the distributed processing arena within the last year or two. Toby Baer concludes in it that the lack of enterprise-ready toolsets hinders Hadoop’s adoption in the enterprise world. While this is true, something else drew my attention. According to the article, there are already about half a dozen commercially supported distributions of Hadoop. For me, someone who has not been involved in the intricacies of the open-source world, this is quite an interesting observation. On one hand, it is good that there is competition, as it ultimately benefits the customer. On the other hand, the customer is faced with the difficulty of choosing the right distribution. In the future, when Hadoop distributions fork even more, this choice will be even harder. The distributions will have overlapping sets of features, yet will be quite incompatible with each other. I suppose it will take a few years until leaders emerge and the market begins to resemble what we see in the Linux world: there are myriads of distributions, but only a few are acknowledged by the industry as enterprise standard, while others are honed by bearded individuals with too much time to spend. In any case, the third thing I can’t help but notice about the proliferation of Hadoop distributions is that IT professionals will have jobs. BuzzNet Tags: Hadoop,Big Data,Enterprise IT

    Read the article

  • Back-sliding into Unmanaged Code

    - by Laila
    It is difficult to write about Microsoft's ambivalence to .NET without mentioning clichés about dog food. In case you've been away a long time, you'll remember that Microsoft surprised everyone with the speed and energy with which it introduced and evangelised the .NET Framework for managed code. There was good reason for this. Once it became obvious to all that it had sleepwalked into third place as a provider of development languages, behind Borland and Sun, it reacted quickly to attract the best talent in the industry to produce a Windows version of the Java runtime, with bounds-checking, automatic garbage collection, structured exception handling, and common data types. To develop applications for this managed runtime, it produced several excellent languages, and more are being provided. The only thing Microsoft ever got wrong was to give it a stupid name. The logical step for Microsoft would be to base the entire operating system on the .NET framework, and to re-engineer its own applications. In 2002, Bill Gates, then Microsoft Chairman and Chief Software Architect, said about their plans for .NET, "This is a long-term approach. These things don't happen overnight." Now, eight years later, we're still waiting for signs of the 'long-term approach'. Microsoft's vision of an entirely managed operating system has subsided since the Vista fiasco, but stays alive yet dormant as Midori, still being developed by Microsoft Research. This is an Internet-centric fork of the Singularity operating system, a research project started in 2003 to build a highly dependable operating system in which the kernel, device drivers, and applications are all written in managed code. Midori is predicated on the prevalence of connected systems, with provisions for distributed concurrency where application components exist 'in the cloud', and supports a programming model that can tolerate cancellation, intermittent connectivity and latency. It features an entirely new security model that sandboxes applications for increased security. So has Microsoft converted its existing applications to the .NET framework? It seems not. What Windows applications can run on Mono? Very few, it seems. We all thought that .NET spelt the end of DLL Hell and the need for COM interop, but it looks as if Bill Gates' idea of 'not overnight' might stretch to a decade or more. The operating system has shown only minimal signs of migrating to .NET. Even where the use of .NET has come to dominate, as with server applications on IIS, IIS itself is still entirely developed in unmanaged code. This is an irritation to Microsoft's greatest supporters who committed themselves fully to the .NET Framework, only to find parts of the Ambivalent Microsoft Empire quietly backsliding into unmanaged code and the awful C++. It is a strategic mistake that the invigorated Apple didn't make with the Mac OS X architecture. Cheers, Laila

    Read the article

  • Safely deploying changes to production servers

    - by oazabir
    When you deploy incremental changes on a production server, which is running and live all the time, you sometimes see error messages like “Compiler Error Message: The Type ‘XXX’ exists in both…”. Sometimes you find the Application_Start event not firing although you shipped a new class, dll, or web.config. Sometimes you find static variables not getting initialized, and so on. So many weird things happen on web servers when you incrementally deploy changes to a server that has been up and running for several weeks. So I came up with a set of foolproof housekeeping steps that we always follow whenever we deploy an incremental change to our websites. These steps ensure that the websites are properly recycled, caches are cleared, and all the data stored at Application level is reinitialized. First of all, you should have multiple web servers behind a load balancer. This way you can take one server out of the production traffic, do your deployment and housekeeping tasks like restarting IIS, and then put it back. Then you can do the same for the second server, and so on. This ensures there’s no outage for customers. If you can do it reasonably fast, hopefully customers won’t notice the discrepancy between servers, some having new code and some having old code. You should only do this when your changes aren’t drastic. For example, you aren’t delivering a completely revamped UI. In that case, some users hitting server1 with the latest UI would suddenly get a completely different experience, and then on the next page refresh they might hit server2 with old code and get a totally different experience again. This works for incremental, non-dramatic changes only. During deployment you should follow these steps:
    1. Take server X out of the load balancer so that it does not get any traffic.
    2. Stop all Windows services on the server.
    3. Stop IIS.
    4. Delete the Temporary ASP.NET Files folders of all .NET versions, in case you have multiple .NET versions running. You can follow this link.
    5. Deploy the changes.
    6. Flush any distributed cache you have, for example Velocity or Memcached.
    7. Start IIS.
    8. Start the Windows services on the server.
    9. Warm up all websites by hitting their major URLs. You should have some automated script to do this (a sketch follows below). You can use tinyget to hit some major URLs, especially pages that take a lot of time to compile. Read my post on keeping websites warm with zero coding.
    10. Put server X back into the load balancer so that it starts receiving traffic.
    That’s it. It should give you a clean deployment and prevent unexpected errors. You should print these steps and hang them on the desk of your deployment guys so that they never forget them under deployment pressure.
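    The warm-up step lends itself to a small script. The sketch below is a minimal Python example of the idea, assuming a hypothetical list of slow-to-compile URLs on the server being warmed; the post itself suggests tinyget, and any HTTP client works just as well.

        import sys
        import time
        import urllib.request

        # Hypothetical list of URLs that are slow on first hit after a recycle.
        WARMUP_URLS = [
            "https://server-x.internal/",
            "https://server-x.internal/products",
            "https://server-x.internal/reports/summary",
        ]

        def warm_up(urls, timeout=120):
            # Hit each URL once so pages are compiled and caches are filled
            # before the server goes back behind the load balancer.
            failures = []
            for url in urls:
                started = time.time()
                try:
                    with urllib.request.urlopen(url, timeout=timeout) as response:
                        print(f"{url} -> {response.status} in {time.time() - started:.1f}s")
                except Exception as error:
                    # Record the failure but keep warming the remaining URLs.
                    failures.append((url, error))
                    print(f"{url} -> FAILED ({error})")
            return failures

        if __name__ == "__main__":
            if warm_up(WARMUP_URLS):
                sys.exit(1)  # non-zero exit lets a deployment script stop here

    Returning a non-zero exit code on any failure means the surrounding deployment script can refuse to put the server back into rotation until the warm-up succeeds.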

    Read the article

  • Brendan Gregg's "Systems Performance: Enterprise and the Cloud"

    - by user12608550
    Long ago, the prerequisite UNIX performance book was Adrian Cockcroft's 1994 classic, Sun Performance and Tuning: SPARC & Solaris, later updated in 1998 as Java and the Internet. As Solaris evolved to include the invaluable DTrace observability features, new essential performance references have been published, such as Solaris Performance and Tools: DTrace and MDB Techniques for Solaris 10 and OpenSolaris (2006) by McDougall, Mauro, and Gregg, and DTrace: Dynamic Tracing in Oracle Solaris, Mac OS X and FreeBSD (2011), also by Mauro and Gregg. Much has occurred in Solaris Land since those books appeared, notably Oracle's acquisition of Sun Microsystems in 2010 and the demise of the OpenSolaris community. But operating system technologies have continued to improve markedly in recent years, driven by stunning advances in multicore processor architecture, virtualization, and the massive scalability requirements of cloud computing. A new performance reference was needed, and I eagerly awaited something that thoroughly covered modern, distributed computing performance issues from the ground up. Well, there's a new classic now, authored yet again by Brendan Gregg, former Solaris kernel engineer at Sun and now Lead Performance Engineer at Joyent. Systems Performance: Enterprise and the Cloud is a modern, very comprehensive guide to general system performance principles and practices, as well as a highly detailed reference for the specific UNIX and Linux observability tools used to examine and diagnose operating system behaviour. It provides thorough definitions of terms, explains performance diagnostic Best Practices and "Worst Practices" (called "anti-methods"), and covers key observability tools including DTrace, SystemTap, and all the traditional UNIX utilities like vmstat, ps, iostat, and many others. The book focuses on operating system performance principles and expands on these with respect to Linux (Ubuntu, Fedora, and CentOS are cited) and to Solaris and its derivatives [1]; it is not directed at any one OS, so it is extremely useful as a broad performance reference. The author goes beyond the intricacies of performance analysis and shows how to interpret and visualize statistical information gathered from the observability tools. It's often difficult to extract understanding from voluminous rows of text output, and techniques are provided to assist with summarizing, visualizing, and interpreting the performance data. Gregg includes myriad useful references from the system performance literature, including a "Who's Who" of contributors to this great body of diagnostic tools and methods. This outstanding book should be required reading for UNIX and Linux system administrators as well as anyone charged with diagnosing OS performance issues. Moreover, the book can easily serve as a textbook for a graduate-level course in operating systems [2]. [1] Solaris 11, of course, and Joyent's SmartOS (developed from OpenSolaris) [2] Gregg has taught system performance seminars for many years; I have also taught such courses...this book would be perfect for the OS component of an advanced CS curriculum.

    Read the article
