Search Results

Search found 11720 results on 469 pages for 'enterprise systems'.

Page 108/469 | < Previous Page | 104 105 106 107 108 109 110 111 112 113 114 115  | Next Page >

  • What is "Open" anyway?

    - by EmbeddedInsider
    This term is often used with many meanings. For example, some people consider Flash 'open' and 'multi-platform'. But Flash is a product of Adobe Systems: locked down, copy protected and distribution restricted. And versions for anything other than standard home PC use may carry licence fees. Check it out: 3.1 Adobe Runtime Restrictions. You will not use any Adobe Runtime on any non-PC device or with any embedded or device version of any operating system. For the avoidance of doubt, and by example only, you may not use an Adobe Runtime on any (a) mobile device, set top box (STB), handheld, phone, web pad, tablet and Tablet PC (other than with Windows XP Tablet PC Edition and its successors), game console, TV, DVD player, media center (other than with Windows XP Media Center Edition and its successors), electronic billboard or other digital signage, Internet appliance or other Internet-connected device, PDA, medical device, ATM, telematic device, gaming machine, home automation system, kiosk, remote control device, or any other consumer electronics device, (b) operator-based mobile, cable, satellite, or television system or (c) other closed system device. For information on licensing Adobe Runtimes for use on such systems please visit http://www.adobe.com/go/licensing. You will notice that, for its embedded operating systems, Microsoft buys and includes a fully paid license for Adobe. Do you get this with Linux? Unix? QNX? So, what is 'open'? Lawrence Ricci www.EmbeddedInsider.com

    Read the article

  • Defense Manpower Data Center Wins Award for Excellence in the Workforce

    - by Peggy Chen
    The Defense Manpower Data Center milConnect website recently won the 2012 Excellence.gov Award for Excellence in the Workforce. Defense Manpower Data Center milConnect is a centralized, online resource that provides military service members (active and retired) and their families (over 42 million in total) quick access to their profile, health care enrollments, benefits, and other military-related topics. An easy-to-use, safe and secure website, milConnect also provides service members with convenient access to their personnel and service-related information. The self-service website allows users to quickly and easily find and, where applicable, update their information in the Defense Enrollment Eligibility Reporting System (DEERS), and milConnect transmits information to and from one reliable source safely and securely. The Defense Manpower Data Center (DMDC) maintains the largest, most comprehensive central repository of personnel, manpower, casualty, pay, entitlement, personnel security, person identity and attributes, survey, testing, training, and financial data in the Department of Defense (DoD). This is one of the largest systems of record in the world. milConnect had the challenge of modernizing the user experience for over 42 million users. With records in over 22 applications and 25 interfaces in hundreds of existing systems, milConnect needed to reduce the complexity of multiple authentication sources as well as consolidate access to existing systems with sensitive information. It accomplished this using a service-oriented architecture, enterprise security, and access and identity management for self-service access on a massive scale. By providing 24x7x365 secure access and handling over 5 million transactions daily, milConnect, built on Oracle WebCenter, has not only streamlined and improved the customer experience for military personnel and families, it has also done so while cutting costs, allowing self-service access, and promoting electronic government. Congrats to Defense Manpower Data Center and milConnect!

    Read the article

  • ArchBeat Link-o-Rama for 11/14/2011

    - by Bob Rhubart
    - InfoQ: Developer-Driven Threat Modeling: Threat modeling is critical for assessing and mitigating the security risks in software systems. In this IEEE article, author Danny Dhillon discusses a developer-driven threat modeling approach to identify threats using dataflow diagrams.
    - Managing the Virtual World | Philip J. Gill: "The killer app for virtualization has been server consolidation," says Al Gillen, program vice president for systems software at market research firm International Data Corporation (IDC).
    - Solaris X86 AESNI OpenSSL Engine | Dan Anderson: "Having X86 AESNI hardware crypto instructions is all well and good, but how do we access it? The software is available with Solaris 11 and is used automatically if you are running Solaris x86 on an AESNI-capable processor," says Anderson.
    - WebLogic Access Management | René van Wijk: "This post is a continuation of the post WebLogic Identity Management. In this post we will present the steps involved to integrate WebLogic and Oracle Access Manager," says Oracle ACE René van Wijk.
    - OTN Developer Days in the Nordics - Helsinki, Oslo, Stockholm, and Copenhagen: OTN Developer Days head for the land of the midnight sun.
    - Podcast: Information Integration Part 2/3: In part two of a three-part program, Oracle Information Integration, Migration, and Consolidation authors Jason Williamson, Tom Laszewski, and Marc Hebert offer examples of some of the most daunting information integration challenges.
    - Measuring the Human Task activity in Oracle BPM | Leon Smiers: Leon Smiers discusses using Oracle BPM to get answers to important questions about what's happening with business processes.
    - Architecture all day. Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14: Spend the day with your peers learning from experts in Cloud computing, engineered systems, and Oracle Fusion Middleware.
    - The Heroes of Java: Michael Hüttermann | Markus Eisele: Oracle ACE Director Markus Eisele interviews Java Champion Michael Hüttermann on his role, his process, and on why he uses Java.

    Read the article

  • Current Technologies

    - by Charles Cline
    I currently work at the University of Kansas (KU) and before that Stanford University, in particular the Stanford Linear Accelerator Center (SLAC). Collaborating with various Higher Ed institutions over the past several years has shown a marked increase in the Microsoft side of the house. To give you an idea of our current environment, here are some of the things we (Enterprise Systems) have been working on in the two years I’ve been at KU:
    - Migrated from Novell to Active Directory (AD), although we’re still leveraging Novell for IDM. We currently have 550,000+ objects in AD, and we still have several departments to bring in.
    - Upgraded from Exchange 2003 to Exchange 2010 and Forefront Online Protection for Exchange (FOPE)
    - Implemented SCCM 2007 for Windows systems management
    - Implemented central file storage using EMC products for the backend, using CIFS as the frontend
    - Restructuring AD domains and forests to decrease the administrative overhead and provide a primary authentication mechanism for the entire University
    - Determining Key Performance Indicators for AD and Exchange
    - Implemented SCOM 2007 to monitor AD and Exchange
    - Implemented Confluence for collaboration within IT and other technology providers at the University
    - Implemented Data Protection Manager (DPM) for backup of AD and Exchange
    - Built a test and QA environment to better facilitate upcoming changes to the environment
    - Almost ready to raise the AD domain level to 2008 R2
    I’m sure I’m missing things, and my next post will be some of the things we’re getting ready for – like Centrify to provide AD for OS X and Linux systems. If anyone would like more info on a particular area, please drop me a line. I’d be happy to discuss.

    Read the article

  • Inevitable Corporate Bureaucracy

    - by Ahsan Alam
    Top executives of most smaller organizations want their companies to be different from the larger corporations. They want their organizations smaller in size but bigger in productivity, eliminating red tape and corporate bureaucracy. When the company is smaller, people often work like firefighters – taking on new business and technology challenges without thinking about any procedures and guidelines. People also tend to wear many hats to accomplish tasks quickly in order to integrate new businesses. For example, software developers in smaller organizations may take on responsibilities for client interactions, requirements gathering, design and development, code deployment, production support, network infrastructure support, and database design and maintenance, along with countless other duties. In addition, systems in smaller organizations tend to be loosely guarded, so people often don't follow many procedures to set up environments and implement technical projects. It's not uncommon to change code and deploy without anyone realizing. Similarly, business requirements may get defined in an informal manner without any type of documentation. As the company grows, everything starts to change, significantly impacting people and the overall business process. Suddenly, following procedures becomes extremely important. Consequently, new roles, guidelines and procedures start to emerge. Everything from business process to technology implementation starts to become more and more process oriented. Organizations start to define and document steps, invent procedures to track process- and systems-level changes, and restrict access to various systems for security reasons. At the same time, as a growing company starts doing business with larger clienteles, it is automatically forced to abide by all sorts of industry compliance laws. Moreover, growing companies tend to recruit experienced individuals to fill new roles, who usually bring their expertise from larger and more bureaucratic organizations. Despite the best efforts of the top executives, it seems that the increased number of procedures and guidelines, as well as the new recruits, automatically contributes to the evolution of corporate bureaucracy. Maybe corporate bureaucracy is an inevitable side effect of a growing organization.

    Read the article

  • Oracle Exadata X3 announcement at Oracle Openworld

    - by Javier Puerta
    Oracle Announces Oracle Exadata X3 Database In-Memory Machine
    Oracle Press Release
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point
    During his opening keynote address at Oracle OpenWorld, Oracle CEO Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of Terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications. Detailed info at Oracle Exadata Database Machine.

    Read the article

  • eSTEP TechCast - November 2013

    - by uwes
    Dear partner, we are pleased to announce our next eSTEP TechCast on Thursday 7th of November and would be happy if you could join. Please see below the details for the next TechCast.
    Date and time: Thursday, 07. November 2013, 11:00 - 12:00 GMT (12:00 - 13:00 CET; 15:00 - 16:00 GST)
    Title: The Operational Management benefits of Engineered Systems
    Abstract: Oracle Engineered Systems require significantly less administration effort than traditional platforms. This presentation will explain why this is the case, how much can be saved, and discuss the best practices recommended to maximise Engineered Systems operational efficiency.
    Target audience: Tech Presales
    Speaker: Julian Lane
    Call Info: Call-in toll-free number: 08006948154 (United Kingdom) or +44-2081181001 (United Kingdom); Conference Code: 803 594 3; Security Passcode: 9876
    Webex Info (Oracle Web Conference): Meeting Number: 599 156 244; Meeting Password: tech2011
    Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from previously delivered eSTEP TechCasts. Use your email address and the PIN eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • Is unit testing development or testing?

    - by Rubio
    I had a discussion with a testing manager about the role of unit and integration testing. She requested that developers report what they have unit and integration tested and how. My perspective is that unit and integration testing are part of the development process, not the testing process. Beyond semantics, what I mean is that unit and integration tests should not be included in the testing reports, and systems testers should not be concerned about them. My reasoning is based on two things. Unit and integration tests are planned and performed against an interface and a contract, always. Regardless of whether you use formalized contracts, you still test what, e.g., a method is supposed to do, i.e. a contract. In integration testing you test the interface between two distinct modules. The interface and the contract determine when the test passes. But you always test a limited part of the whole system. Systems testing, on the other hand, is planned and performed against the system specifications. The spec determines when the test passes. I don't see any value in communicating the breadth and depth of unit and integration tests to the (systems) tester. Suppose I write a report that lists what kinds of unit tests are performed on a particular business layer class. What is he/she supposed to take away from that? Judging what should and shouldn't be tested from that is a false conclusion, because the system may still not function the way the specs require even though all unit and integration tests pass. This might seem like useless academic discussion, but if you work in a strictly formal environment as I do, it's actually important in determining how we do things. Anyway, am I totally wrong? (Sorry for the long post.)
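    A minimal sketch of what "testing against a contract" means in practice (the method and names are hypothetical; Python is used purely for illustration): each unit test checks one clause of the method's own promise, not a clause of the overall system specification.

```python
import unittest

# Hypothetical unit under test: its "contract" is simply what it promises --
# reject out-of-range percentages, never return a negative total.
def apply_discount(total, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100.0), 2)

class ApplyDiscountContractTest(unittest.TestCase):
    # Each test exercises one clause of the method's contract,
    # independent of any system-level specification.
    def test_full_discount_yields_zero(self):
        self.assertEqual(apply_discount(100.0, 100), 0.0)

    def test_rejects_out_of_range_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```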

    Read the article

  • Managed Service Architectures Part I

    - by barryoreilly
    Instead of thinking about service oriented architecture, a concept that is continually defined, redefined, abused and mistreated, perhaps it is time to drop the acronym and consider what we actually need to get the job done. 'Pure' SOA involves the modeling of an organisation's processes, the so-called 'Top Down' approach, followed by the implementation of these processes as services. Another approach, more commonly seen in the wild, is the bottom-up approach. This usually involves services that simply start popping up in the organization, and SOA in this case is often just an attempt to rein in these services. Such projects, although described as SOA projects for a variety of reasons, clearly have little relation to process-driven architecture. Much has been written about these two approaches, with many deciding that a hybrid of both methods is needed to succeed with SOA. These hybrid methods are a sensible compromise, but one gets the feeling that there is too much focus on 'Succeeding with SOA'. Organisations who focus too much on bottom-up development, or who waste too much time and money on top-down approaches that don't produce results, are often recommended to attempt an 'agile' (Erl) or 'middle-out' (Microsoft) approach in order to succeed with SOA. The problem with recommending this approach is that, in most cases, succeeding with SOA isn't the aim of the project. If a project is started with the simple aim of 'Succeeding with SOA' then the reasons for the project's existence probably need to be questioned. There are a number of things we can be sure of:
    - An organisation will have a number of disparate IT systems
    - Some of these systems will have redundant data and functionality
    - Integration will give considerable ROI
    - Integration will already be under way
    - Services will already exist in the organisation
    - These services will be inconsistent in their implementation and in their governance
    So there are three goals here:
    1. Alignment between the business and IT
    2. Integration of disparate systems
    3. Management of services
    Goals 2 and 3 are going to happen; in fact they must happen if any degree of return is expected from the IT department. Ignoring goal 1 is considered a typical mistake in SOA implementations, as it ignores the business implications. However, the business implication of this approach is the money saved in more efficient IT processes. Goals 2 and 3 are ongoing, and they will continue happening, even if a large project to produce a SOA metamodel is started. The result will then be an unstructured cackle of services, and a metamodel that is already going out of date. So we get stuck in and rebuild our services so that they match the metamodel, with all the far-reaching consequences this will have on our current LOB systems. Let's imagine that this actually works (how often do we rip and replace working software because it doesn't fit a certain pattern? Never – that's the point of integration); we will now be working with a metamodel that is out of date, and most likely incomplete if the organisation is large. Accepting that an object can have more than one model over time, with perhaps more than one model in use at any given time, will help us realise the limitations of the top-down model. It is entirely normal, and perhaps necessary, for an organisation to be able to view an entity from different perspectives.
    So, instead of trying to constantly force these goals into a straight line, why not let them happen in parallel, and manage the changes in each layer? If company A has chosen to model their business processes and create a business architecture, there will be a reason behind this. Often the aim is to make the business more flexible and able to cope with change, through alignment between the business and the IT department. If company B's IT department recognizes the problem of wild services springing up everywhere, and decides to do something about it by designing a platform and processes for the introduction of services, is this not a valid approach? With the hybrid approach, it is recommended that company A begin deploying services as quickly as possible, based on models that are clearly incomplete and which will therefore change rapidly and often in the near future. Natural business evolution will also mean that the models can be guaranteed to change in the not so near future. To 'Succeed with SOA', company B needs to go back to the drawing board and start modeling processes and objects. So, in effect, we are telling business analysts to start developing code based on a model they are unsure of, and telling programmers to ignore the obvious and growing problems in their IT department and start drawing lines and boxes. Could the problem be that there are two different problem domains? And that the whole concept of SOA, as it is being described by clever salespeople today, creates an example of the oft-dreaded 'tight coupling' between these two domains? Could it be that we have taken two large problem areas, and bundled the solution together in order to create a magic bullet? And then convinced ourselves that the bullet actually exists? Company A wants to have a closer relationship between the business and its IT department, in order to become a more flexible organization. Company B wants to decrease the maintenance costs of its IT infrastructure. If both companies focus on succeeding with SOA, then they aren't focusing on their actual goals. If Company A starts building services from incomplete models, without a gameplan, they will end up in the same situation as company B, with wild services. If company B focuses on modeling, they could easily end up with the same problems as company A. Now we have two companies that, a short while ago, had one problem each, and now have two problems each. This has happened because of a focus on 'Succeeding with SOA', rather than solving the problem at hand. This is not to suggest that the two problem domains are unrelated; a strategy that encompasses both will obviously be good for the organization. But only if the organization realizes this and can develop such a strategy. This strategy cannot be bought in a box. Anyone who has worked with SOA for a while will be used to analyzing the solutions to a problem and judging the solution's level of coupling. If we have two applications that each perform separate functions, but need to communicate with each other, we create an integration layer between them, perhaps with a service, but we do all we can to reduce the dependency between the two systems. Using the same approach, we can separate the modeling (business architecture) and the service hosting (technical architecture). The business architecture describes the processes and business objects in the business domain. The technical architecture describes the hosting, management and implementation of services.
    The glue that binds these together, the integration layer in our analogy, is the service contract, where the operations map the processes to their technical implementation, and the messages map business concepts to software objects in the implementation. If we reduce the coupling between these layers, we should be able to allow developers to develop services, and business analysts to develop models, without the changes rippling through from one side to the other. This would allow company A to carry on modeling, and company B to develop a service platform, each achieving their intended goal, without necessarily creating the problems seen in pure top-down or bottom-up approaches. Company B could then at a later date map their service infrastructure to a unified model, and company A could carry on modeling, insulating deployed services from changes in the ongoing modeling. How do we do this? The concept of service virtualization has been around for a while, and is instantly realizable in Microsoft's Managed Services Engine. Here we can create a layer of virtual services, which represent the business analyst's view, presenting uniform contracts to the outside world. These services can then transform and route messages to the actual service implementations. I like to think of the virtual services with their beautifully modeled interfaces as 'SOA services', and the implementations as simple integration 'adapter' services providing an interface to a technical implementation. The Managed Services Engine also provides policy-based control over services, regardless of where they are deployed, simplifying handling of security, logging, exception handling, etc. This solves a big problem. The pressure to deliver services quickly is always there in projects. It is very important to quickly show value when implementing service architectures. There is also pressure to deliver quality, and you can't easily do both at the same time. This approach allows quick delivery with quality increasing over time, allowing modeling and service development to occur in parallel and independent of each other. The link between business modeling and service implementation is not one that is obvious to many organizations, and requires a certain maturity to realize and drive forward. It is also completely possible that a company can benefit from one without the other; even if this approach is frowned upon today, there are many companies doing so and seeing ROI. Of course there are disadvantages to this. The biggest one is the transformations necessary between the virtual interfaces and the service implementations. Bad choices in developing the services in the service implementation could mean that it is impossible to map the modeled processes to the implementation without redevelopment of the service. In many cases the architect will not have a choice here anyway, as proprietary systems are often delivered with predeveloped services. The alternative is to wait until the model is finished and then build the service according to the model. However, if that approach worked we wouldn't be having this discussion! And even when it does work, natural business evolution will mean that the two concepts (model and implementation) will immediately start to drift away from each other, so coupling them tightly together, so that they are forever bound to the model that only applies at the time of the modeling work, will not really achieve a great deal. Architecture is all about trade-offs, and here a choice has to be made.
    The choice is between something that will initially be of low quality but will work, and something that may well be impossible to achieve in most situations. In conclusion, top-down is a natural approach for business analysts, and bottom-up is a natural approach for developers. Instead of trying to force something on both that neither want, and which has not shown itself to be successful, why not let them get on with their jobs, and let an enterprise architect coordinate the processes?
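    A rough sketch of the service-virtualization idea described above (hypothetical names; Python is used only to illustrate the shape, this is not the Managed Services Engine API): the virtual service exposes the contract as the business analyst modeled it, and transforms calls onto whichever adapter currently wraps the technical implementation.

```python
# Sketch only: the "virtual" service presents the business-modeled contract,
# while an adapter wraps the actual technical implementation. Swapping the
# adapter (or its message format) does not change the modeled contract.
class LegacyCustomerAdapter:
    def fetch(self, cust_id):
        # stand-in for a call to a proprietary backend system
        return {"CUST_ID": cust_id, "CUST_NAME": "ACME LTD"}

class CustomerVirtualService:
    """Contract as modeled by the business analyst: get_customer(customer_id)."""
    def __init__(self, adapter):
        self.adapter = adapter

    def get_customer(self, customer_id):
        raw = self.adapter.fetch(customer_id)
        # transformation layer: map implementation fields onto the business model
        return {"id": raw["CUST_ID"], "name": raw["CUST_NAME"].title()}

service = CustomerVirtualService(LegacyCustomerAdapter())
print(service.get_customer(42))   # {'id': 42, 'name': 'Acme Ltd'}
```

    The point of the seam is that the modeling side can evolve get_customer's business meaning while the adapter side replaces or rewrites the backend call, and only the transformation in the middle has to absorb the change.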

    Read the article

  • What's a "Cloud Operating System"?

    - by user12608550
    What's a "Cloud Operating System"? Oracle's recently introduced Solaris 11 has been touted as "The First Cloud OS". Interesting claim, but what exactly does it mean? To answer that, we need to recall what characteristics define a cloud and then see how Solaris 11's capabilities map to those characteristics. By now, most cloud computing professionals have at least heard of, if not adopted, the National Institute of Standards and Technology (NIST) Definition of Cloud Computing, including its vocabulary and conceptual architecture. NIST says that cloud computing includes these five characteristics: On-demand self-service Broad network access Resource pooling Rapid elasticity Measured service How does Solaris 11 support these capabilities? Well, one of the key enabling technologies for cloud computing is virtualization, and Solaris 11 along with Oracle's SPARC and x86 hardware offerings provides the full range of virtualization technologies including dynamic hardware domains, hypervisors for both x86 and SPARC systems, and efficient non-hypervisor workload virtualization with containers. This provides the elasticity needed for cloud systems by supporting on-demand creation and resizing of application environments; it supports the safe partitioning of cloud systems into multi-tenant infrastructures, adding resources as needed and deprovisioning computing resources when no longer needed, allowing for pay-only-for-usage chargeback models. For cloud computing developers, add to that the next generation of Java, and you've got the NIST requirements covered. The results, or one of them anyway, are services like the new Oracle Public Cloud. And Solaris is the ideal platform for running your Java applications. So, if you want to develop for cloud computing, for IaaS, PaaS, or SaaS, start with an operating system designed to support cloud's key requirements…start with Solaris 11.

    Read the article

  • What is a Data Warehouse?

    Typically, Data Warehouses are considered to be non-volatile in comparison to traditional databases, due to the fact that data within the warehouse does not change that often. In addition, Data Warehouses typically represent data through the use of Multidimensional Conceptual Views that allow data to be extracted based on the view and the current position within the view.
    Common Data Warehouse Traits:
    - Relatively Non-volatile Data
    - Supports Data Extraction and Analysis
    - Optimized for Data Retrieval and Analysis
    - Multidimensional Views of Data
    - Flexible Reporting
    - Multi User Support
    - Generic Dimensionality
    - Transparent
    - Accessible
    - Unlimited Dimensions of Data
    - Unlimited Aggregation Levels of Data
    Normally, Data Warehouses are much larger than their traditional database counterparts due to the fact that they store the base data along with derived data via Multidimensional Conceptual Views. As companies store larger and larger amounts of data, they will need a way to effectively and accurately extract analysis information that can be used to aid in formulating current and future business decisions. This process can be done currently through data mining within a Data Warehouse. Data Warehouses provide access to data derived through complex analysis, knowledge discovery and decision making. Secondly, they support the demands for high performance in regards to analyzing an organization’s existing and current data. Data Warehouses provide support for an organization’s data and acquired business knowledge. Within a Data Warehouse, multiple types of operations/sub-systems are supported.
    Common Data Warehouse Sub-Systems:
    - Online Analytical Processing (OLAP)
    - Decision-Support Systems (DSS)
    - Online Transaction Processing (OLTP)
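    A toy illustration of the multidimensional idea (data and names are made up; Python is used only to make the point concrete): facts are aggregated along whichever dimensions the current view selects, so the same fact table can be rolled up by region, by product and year, or by any other combination.

```python
from collections import defaultdict

# Toy fact table: (region, product, year) dimensions plus a sales measure.
facts = [
    ("EMEA", "Widget", 2010, 120.0),
    ("EMEA", "Gadget", 2011,  75.0),
    ("APAC", "Widget", 2010, 200.0),
    ("APAC", "Widget", 2011,  90.0),
]

def rollup(facts, dims):
    """Aggregate the sales measure along the chosen dimensions (one 'view')."""
    index = {"region": 0, "product": 1, "year": 2}
    totals = defaultdict(float)
    for row in facts:
        key = tuple(row[index[d]] for d in dims)
        totals[key] += row[3]
    return dict(totals)

print(rollup(facts, ["region"]))           # totals per region
print(rollup(facts, ["product", "year"]))  # totals per product/year cell
```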

    Read the article

  • links for 2011-03-09

    - by Bob Rhubart
    - Is there a Telecommunications Reference Architecture? (Telecommunications Architecture Corner): The answer is "yes," and Raul Goycoolea shares the details. (tags: oracle otn enterprisearchitecture)
    - Oracle@info360: Advance Beyond Point Solutions To An Enterprise Content Strategy (Oracle Enterprise 2.0 Blog): Kellsey Ruppel shares information on some of the speakers at the upcoming info360/AIIM conference. (tags: oracle otn enterprise2.0 aiim info360)
    - ERP in the Cloud for Local Government | Oracle Blog | Capgemini | Consulting, Technology, Outsourcing: In these times of austerity, Local Authorities are facing significant reductions in budgets (on average over 30%). Now that the easier savings have been realised, Councils are faced with two options: cutting services or revolutionary changes to the way they do things today. (tags: oracle capgemini cloud)
    - Mobile HR Apps: "Good, so we have plenty of commercial applications making use of the smart phone," says Raheel Khan. "But what about core backend business applications?" (tags: oracle mobilecomputing)
    - Policy Administration is the Top 2011 IT Priority for Insurers (Oracle Insurance): "Insurers can no longer rely on inflexible policy administration systems that impede their ability to rapidly configure and bring to market innovative new products, add riders, support changing business processes and take advantage of market opportunities." - Helen Pitts (tags: oracle otn enterprisearchitecture)
    - Free: Oracle Technology Network Architect Day - Denver - March 23: The live one-day event in Denver brings together architects from a broad range of disciplines and domains to share insights and expertise in the use of Oracle technologies to meet the challenges today’s architects regularly face. The event is free, but seating is limited. (tags: oracle otn enterprisearchitecture cloud optimization)
    - InfoQ: Randy Shoup on Evolvable Systems: Randy Shoup discusses evolvable systems: how to run different versions of a system in parallel during migrations, decoupling a system with events, schemas at eBay and much more. (tags: ping.fm)

    Read the article

  • Cross-Platform Migration using Heterogeneous Data Guard

    - by Roy F. Swonger
    Most people think of Data Guard as a disaster recovery solution, and it certainly excels in that role. However, did you know that you can also use Data Guard for platform migration under some conditions? While you would normally have your primary and standby Data Guard systems running on the same OS and hardware platform, there are some heterogeneous combinations of primary and standby systems that are supported by Data Guard Physical Standby. One example of heterogeneous Data Guard support is the ability to go between Linux and Windows on many processor architectures. Another is the support for environments that are running HP-UX on both PA-RISC and Itanium hardware. Brand new in 11.2.0.2 is the ability to have both SPARC Solaris and IBM AIX on Power Systems in the same Data Guard environment. See My Oracle Support note 413484.1 for all the details about supported platform combinations. So, why mention this in an upgrade blog? Simple: much of the time required for a platform migration is usually spent copying files from one system to another. If you are moving between systems that are supported by heterogeneous Data Guard, then you can reduce that migration downtime to a matter of minutes. This can be a big win when downtime is at a premium (and isn't downtime always at a premium?). In addition, you get the benefit of being able to keep the old and new environments synchronized until you are sure the migration is successful! A great case study of using Data Guard for a technology refresh is located on this OTN page. The case study showing CERN's methodology isn't highlighted as a link on the overview page, but it is clickable. As always, make sure you are fully versed on the details and restrictions by reading the available documentation and MOS notes. Happy migrating!

    Read the article

  • Can you say "Architect?"

    - by Bob Rhubart
    Photo by Jennifer Ortiz. In his article, It's Time To Occupy IT, AIIM CEO and president John Mancini examines the evolution of "Systems of Engagement," the social technologies that are transforming how customers and employees relate to and interact with companies. Surviving the disruption that transformation entails is a matter of when, rather than if, a given organization embraces the change. But as Mancini points out, that transformation will require a "new breed" of IT professional: "While addressing this kind of challenge requires technical skills, it also requires process and customer acumen more often found in the business than in our IT departments. It requires a new type of information professional, whose expertise includes technical and domain knowledge, but who also has an idea of how the pieces of a process that spans the worlds of Systems of Record and Systems of Engagement should fit together. Gartner estimates that the demand for this new breed of information professional will grow by 50 percent by 2015." Though Mancini makes no reference to the title, the skills he describes are those of the IT architect. While the specific definition of the role remains fodder for seemingly endless discussion and debate on various social networks and forums, the fact remains that the skills required for success in the evolving world of IT will increasingly involve a deep understanding of how all the pieces fit together.

    Read the article

  • Entity Component System for HUD and GUI

    - by Jason L.
    This is a very rough sketch of how I currently have things designed. It should, at least, give an idea of how my ECS is currently designed. If you notice in that diagram, I have basically split the HUD out of the ECS. They have their own set of things (HudLayer, HudComponent, etc) and are handled differently. This is where I'm struggling, though. There are many different instances in which the HUD will need to know about entities. Not just data changing (I have an event dispatcher for that), but the actual entity and all it encompasses. There are also situations where entities will need to be able to query the HUD for data. Let's take a couple examples: First, my equipment screen. On here I can change the equipment on a character (Entity). In order for this to happen, I need to know about the entity. At least I think I do? How can I handle this? The second scenario involves my Systems needing to query a HudComponent for data. A specific example would be my battle system. Each "team" is given a 3x3 grid they can move around in. See here: Skills target these cells, and not the player, so I would need a way for my systems to determine which cells are occupied and which are not. Basically I need a way for two way communication between Systems and my HUD. I know it's recommended (by some people, anyways) to take your HUD out of the ECS. Is that appropriate in my case?
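    One possible shape of that seam, sketched very loosely (hypothetical names, and in Python rather than the engine in question): the HUD never holds entities directly but asks a narrow query service for what it needs, and systems ask the HUD only coarse questions such as cell occupancy, so the two-way traffic stays behind small interfaces.

```python
# Sketch: the HUD stores no entities; it asks an EntityQuery service,
# and systems ask the HUD only coarse questions such as cell occupancy.
class EntityQuery:
    def __init__(self, entities):
        self._entities = entities          # entity_id -> component dict

    def equipment_of(self, entity_id):
        return self._entities[entity_id].get("equipment", [])

class BattleGridHud:
    def __init__(self):
        self._occupied = set()             # set of (row, col) cells

    def set_occupied(self, cell, occupied):
        (self._occupied.add if occupied else self._occupied.discard)(cell)

    def is_cell_occupied(self, cell):      # what the battle system would query
        return cell in self._occupied

entities = {1: {"equipment": ["sword"]}}
query, hud = EntityQuery(entities), BattleGridHud()
hud.set_occupied((0, 1), True)
print(query.equipment_of(1), hud.is_cell_occupied((0, 1)))
```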

    Read the article

  • Game editor integration with the engine?

    - by Daniel
    What I am trying to figure out is the most effective way to integrate the editor (level, effects, model, etc...) with the engine. Now, the first thing I thought of would be to make the game engine(*) extremely modular. For example, take game states. You could have multiple game states that all have their own update() and draw() methods, among others. Each game state class would inherit from a base GameState class. This allows for a more modular approach, and a useful one at that. Now, would the most efficient approach be to implement the editor along with the modular engine, or to create two different designs for the game and the editor? I thought to take the game state example and extend it to window states, and it could well be used for a lot more systems. Is there a better implementation of this design (game state) for use in other systems used in the engine? *: Now I know the term game engine is sorta irrelevant, and misused in many situations. What I am referring to as the "game engine" is, in short, the combination of the systems that the game must interact with. Also, this is more of a theory / design question than an implementation one. Even though both mix, I'd rather have a more general idea of how the editor is built in an efficient way while still using the same engine code as the game uses. Thanks, Daniel. P.S. If you need more clarification or extra bits just leave a comment.
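    A bare-bones sketch of the game-state idea described in the question (hypothetical names; Python stands in for whatever language the engine uses): one way to integrate the editor is to treat it as just another state running on the same engine services as the gameplay states, so the game and the editor share the state machine rather than having two separate designs.

```python
# Minimal sketch: a shared state machine; the editor is simply another state
# driven by the same engine loop as the gameplay states.
class GameState:
    def update(self, dt): ...
    def draw(self): ...

class PlayState(GameState):
    def update(self, dt): print(f"simulating world for {dt}s")
    def draw(self): print("drawing world")

class EditorState(GameState):
    def update(self, dt): print("handling editor input")
    def draw(self): print("drawing world + editor gizmos")

class StateMachine:
    def __init__(self, state): self.state = state
    def switch(self, state):   self.state = state
    def tick(self, dt):
        self.state.update(dt)
        self.state.draw()

engine = StateMachine(PlayState())
engine.tick(0.016)
engine.switch(EditorState())   # same engine code, editor as a state
engine.tick(0.016)
```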

    Read the article

  • TCO Comparison: Oracle Exadata vs IBM P-Series

    - by Javier Puerta
    Cost Comparison for Business Decision-makers
    Oracle Exadata Database Machine vs. IBM Power Systems
    How to Weigh a Purchase Decision
    October 2012
    Download full report here
    In this research-based white paper conducted at the request of Oracle, The FactPoint Group compares the cost of ownership of the Oracle Exadata engineered system to a traditional build-your-own (BYO) solution, in this case an IBM Power 770 (P770) with SAN storage. The IBM P770 was chosen given that it is IBM’s current most popular model, based on FactPoint primary and secondary research and IBM claims, and because at least one of the interviewed customers had specifically migrated from a P770 to Exadata, affording us a more specific data point for comparison. This research found that Oracle Exadata:
    - Can be deployed more quickly and easily, requiring 59% fewer man-hours than a traditional IBM Power Systems solution.
    - Delivers dramatically higher performance, typically up to 12X improvement, as described by customers, over their prior solution.
    - Requires 40% fewer systems administrator hours to maintain and operate annually, including quicker support calls because of less finger-pointing and faster service with a single vendor.
    - Will become even easier to operate over time as users become more proficient and organize around the benefits of integrated infrastructure.
    - Supplies a highly available, highly scalable and robust solution that results in reserve capacity that makes Exadata easier for IT to operate, because IT administrators can manage proactively, not reactively. Overall, Exadata operations and maintenance keep IT administrators from “living on the edge.” And it’s pre-engineered for long-term growth.
    Finally, compared to IBM Power Systems hardware, Exadata is a bargain from a total cost of ownership perspective: over three years, the IBM hardware running Oracle Database cost 31% more in TCO than Exadata.

    Read the article

  • A case for not installing your own software

    - by James Gentsch
    This week I watched some of the Oracle OpenWorld presentations (from the comfort of my Oracle office) and happened on some of Larry Ellison’s comments about cloud computing and engineered systems. Larry said he sees the move to these as analogous to the moves made by the original adopters of electricity. The argument goes that the first consumers of electricity had to set up their own power plant. Then, as the market and infrastructure for electricity matured, power consumers moved from using their own personal power plant to purchasing power from another entity that was focused on power production as their primary product. In the end this was a cheaper and more reliable solution. Now, there are lots of compelling reasons to be looking very seriously at cloud computing and engineered systems for enterprise application deployment. However, speaking as a software developer of enterprise applications, the part of this that I really love (besides Larry’s early electricity adopter analogy) is that as a mode of application deployment it provides me and my customers a consistent environment in which the applications I am providing will be run. This cuts way down on the environmental surprises that consistently lead to the hated “well, it works here” situation with the support desk. And just to be clear, I think I hate this situation more than my clients, who I think are happy that at least it is working somewhere. I hate this because when a problem happens, and let’s face it, customers are not wasting their time calling in easy problems, we are seriously disabled when we cannot reproduce the issue, which is triggered by something unforeseen in the environment where the application is running. This situation is incredibly frustrating and an all too often occurrence. I look selfishly forward to cloud computing and engineered systems dramatically reducing the occurrence of problems triggered by unforeseen environmental situations in the software I am responsible for. I think this is an evolutionary game changer that will be a huge benefit to the reliability and consistent performance of the software for my customers, and may make “well, it works here” a well-forgotten phrase for future software developers. It may even impact the stress squeeze toy industry. Well, maybe at least for my group.

    Read the article

  • In an agile environment, how are bug tracking and iteration tracking consolidated?

    - by DXM
    This topic stemmed from my other question about a management-imposed waterfall-like schedule. From the responses in the other thread, I gathered this much about what is generally advised: each story should be completed with no bugs; a story is not closed until all bugs have been addressed. No news there, and I think we can all agree with this. If at a later date QA (or worse yet a customer) finds a bug, the report goes into a bug tracking database and also becomes a story which should be prioritized just like all other work. Does this sum up the general handling of bugs in an agile environment? If yes, the part I'm curious about is how teams handle tracking in two different systems (unless most teams don't have different systems). I've read a lot of advice (including Joel's blog) on software development in general and specifically on the importance of a good bug tracking tool. At the same time, when you read books on agile methodology, none of them seem to cover this topic, because in "pure" agile you finish the iteration with no bugs. Feels like there's a hole there somewhere. So how do real teams operate? To track iterations you'd use one set of tools (whiteboard, Rally...); to track bugs you'd use something from another set of products (if you are lucky enough, you might even get stuck with HP Quality Center). Should there be 2 separate systems? If they are separate, do teams spend time creating import/sync functionality between them? What have you done in your company? Is bug tracking software even used? Or do you just go straight to creating a story?

    Read the article

  • Can Clojure's thread-based agents handle c10k performance?

    - by elliot42
    I'm writing a c10k-style service and am trying to evaluate Clojure's performance. Can Clojure agents handle this scale of concurrency with its thread-based agents? Other high performance systems seem to be moving towards async-IO/events/greenlets, albeit at a seemingly higher complexity cost. Suppose there are 10,000 clients connected, sending messages that should be appended to 1,000 local files--the Clojure service is trying to write to as many files in parallel as it can, while not letting any two separate requests mangle the same single file by writing at the same time. Clojure agents are extremely elegant conceptually--they would allow separate files to be written independently and asynchronously, while serializing (in the database sense) multiple requests to write to the same file. My understanding is that agents work by starting a thread for each operation (assume we are IO-bound and using send-off)--so in this case is it correct that it would start 1,000+ threads? Can current-day systems handle this number of threads efficiently? Most of them should be IO-bound and sleeping most of the time, but I presume there would still be a context-switching penalty that is theoretically higher than async-IO/event-based systems (e.g. Erlang, Go, node.js). If the Clojure solution can handle the performance, it seems like the most elegant thing to code. However if it can't handle the performance then something like Erlang or Go's lightweight processes might be preferable, since they are designed to have tens of thousands of them spawned at once, and are only moderately more complex to implement. Has anyone approached this problem in Clojure or compared to these other platforms? (Thanks for your thoughts!)
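    For comparison only (this is Python, not Clojure, and the names are made up): the per-file serialization the question attributes to agents can be mimicked with one single-worker executor per file, which serializes appends to the same file while letting different files proceed in parallel, and caps the thread count at the number of files rather than the number of connected clients.

```python
# Rough analogue of "one agent per file": a single-worker executor per file
# serializes writes to that file; different files write in parallel.
# Assumes send() is called from a single dispatch thread.
from concurrent.futures import ThreadPoolExecutor

class FileAgents:
    def __init__(self):
        self._agents = {}                  # path -> single-worker executor

    def _agent_for(self, path):
        if path not in self._agents:
            self._agents[path] = ThreadPoolExecutor(max_workers=1)
        return self._agents[path]

    def send(self, path, data):
        """Queue an append to `path`; order is preserved per file."""
        return self._agent_for(path).submit(self._append, path, data)

    @staticmethod
    def _append(path, data):
        with open(path, "a") as f:
            f.write(data)

agents = FileAgents()
futures = [agents.send("/tmp/log-%d.txt" % (i % 10), "msg %d\n" % i)
           for i in range(100)]
for fut in futures:
    fut.result()
for ex in agents._agents.values():
    ex.shutdown()
```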

    Read the article

  • Oracle Linux Tips and Tricks: Using SSH

    - by Robert Chase
    Out of all of the utilities available to systems administrators, ssh is probably the most useful of them all. Not only does it allow you to log into systems securely, but it can also be used to copy files, tunnel IP traffic and run remote commands on distant servers. It’s truly the Swiss army knife of systems administration. Secure Shell, also known as ssh, was developed in 1995 by Tatu Ylönen after the University of Technology in Finland suffered a password sniffing attack. Back then it was common to use tools like rcp, rsh, ftp and telnet to connect to systems and move files across the network. The main problem with these tools is they provide no security and transmit data in plain text, including sensitive login credentials. SSH provides this security by encrypting all traffic transmitted over the wire to protect from password sniffing attacks. One of the more common use cases involving SSH is found when using scp. Secure Copy (scp) transmits data between hosts using SSH and allows you to easily copy all types of files. The syntax for the scp command is: scp /pathlocal/filenamelocal remoteuser@remotehost:/pathremote/filenameremote In the following simple example, I move a file named myfile from the system test1 to the system test2. I am prompted to provide valid user credentials for the remote host before the transfer will proceed. If I were only using ftp, this information would be unencrypted as it went across the wire. However, because scp uses SSH, my user credentials and the file and its contents are confidential and remain secure throughout the transfer.
        [user1@test1 ~]# scp /home/user1/myfile user1@test2:/home/user1
        user1@test2's password:
        myfile                                    100%    0     0.0KB/s   00:00
    You can also use ssh to send network traffic and utilize the encryption built into ssh to protect traffic over the wire. This is known as an ssh tunnel. In order to utilize this feature, the server that you intend to connect to (the remote system) must have TCP forwarding enabled within the sshd configuration. To enable TCP forwarding on the remote system, make sure AllowTcpForwarding is set to yes and enabled in the /etc/ssh/sshd_config file: AllowTcpForwarding yes Once you have this configured, you can connect to the server and set up a local port which you can direct traffic to that will go over the secure tunnel. The following command will set up a tunnel on port 8989 on your local system. You can then redirect a web browser to use this local port, allowing the traffic to go through the encrypted tunnel to the remote system. It is important to select a local port that is not being used by a service and is not restricted by firewall rules. In the following example the -D specifies a local dynamic application-level port forward and the -N specifies not to execute a remote command. ssh -D 8989 [email protected] -N You can also forward specific ports on both the local and remote host. The following example will set up a port forward on port 8080 and forward it to port 80 on the remote machine. ssh -L 8080:farwebserver.com:80 [email protected] You can even run remote commands via ssh, which is quite useful for scripting or remote system administration tasks. The following example shows how to log in remotely and execute the command ls -la in the home directory of the machine. Because ssh encrypts the traffic, the login credentials and output of the command are completely protected while they travel over the wire.
        [rchase@test1 ~]$ ssh rchase@test2 'ls -la'
        rchase@test2's password:
        total 24
        drwx------  2 rchase rchase 4096 Sep  6 15:17 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc
    You can execute any command contained in the quotation marks as long as you have permission with the user account that you are using to log in. This can be very powerful and useful for collecting information for reports, remote controlling systems and performing systems administration tasks using shell scripts. To make your shell scripts even more useful and to automate logins, you can use ssh keys for running commands remotely and securely without the need to enter a password. You can accomplish this with key-based authentication. The first step in setting up key-based authentication is to generate a public key for the system that you wish to log in from. In the following example you are generating an ssh key on a test system. In case you are wondering, this key was generated on a test VM that was destroyed after this article.
        [rchase@test1 .ssh]$ ssh-keygen -t rsa
        Generating public/private rsa key pair.
        Enter file in which to save the key (/home/rchase/.ssh/id_rsa):
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in /home/rchase/.ssh/id_rsa.
        Your public key has been saved in /home/rchase/.ssh/id_rsa.pub.
        The key fingerprint is:
        7a:8e:86:ef:59:70:ef:43:b7:ee:33:03:6e:6f:69:e8 rchase@test1
        The key's randomart image is:
        +--[ RSA 2048]----+
        |                 |
        |  . .            |
        |   o .           |
        |    . o o        |
        |   o o oS+       |
        |  +   o.= =      |
        |   o ..o.+ =     |
        |    . .+. =      |
        |     ...Eo       |
        +-----------------+
    Now that you have the key generated on the local system, you should copy it to the target server into a temporary location. The user’s home directory is fine for this.
        [rchase@test1 .ssh]$ scp id_rsa.pub rchase@test2:/home/rchase
        rchase@test2's password:
        id_rsa.pub
    Now that the file has been copied to the server, you need to append it to the authorized_keys file. This should be appended to the end of the file in the event that there are other authorized keys on the system.
        [rchase@test2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys
    Once the process is complete you are ready to log in. Since you are using key-based authentication you are not prompted for a password when logging into the system.
        [rchase@test1 ~]$ ssh test2
        Last login: Fri Sep  6 17:42:02 2013 from test1
    This makes it much easier to run remote commands. Here’s an example of the remote command from earlier. With no password it’s almost as if the command ran locally.
        [rchase@test1 ~]$ ssh test2 'ls -la'
        total 32
        drwx------  3 rchase rchase 4096 Sep  6 17:40 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc
    As a security consideration, it's important to note the permissions of .ssh and the authorized_keys file. .ssh should be 700 and authorized_keys should be set to 600. This prevents unauthorized access to ssh keys from other users on the system. An even easier way to move keys back and forth is to use ssh-copy-id.
    Instead of copying the file and appending it manually to the authorized_keys file, ssh-copy-id does both steps at once for you. Here’s an example of moving the same key using ssh-copy-id. The -i in the example is so that we can specify the path to the id file, which in this case is /home/rchase/.ssh/id_rsa.pub
        [rchase@test1]$ ssh-copy-id -i /home/rchase/.ssh/id_rsa.pub rchase@test2
    One of the last tips that I will cover is the ssh config file. By using the ssh config file you can set up host aliases to make logins to hosts with odd ports or long hostnames much easier and simpler to remember. Here’s an example entry in our .ssh/config file.
        Host dev1
            Hostname somereallylonghostname.somereallylongdomain.com
            Port 28372
            User somereallylongusername12345678
    Let’s compare the login process between the two. Which would you want to type and remember?
        ssh somereallylongusername12345678@somereallylonghostname.somereallylongdomain.com -p 28372
        ssh dev1
    I hope you find these tips useful. There are a number of tools used by system administrators to streamline processes and simplify workflows, and whether you are new to Linux or a longtime user, I'm sure you will agree that SSH offers useful features that can be used every day. Send me your comments and let us know the ways you use SSH with Linux. If you have other tools you would like to see covered in a similar post, send in your suggestions.

    Read the article

  • Building a linux system

    - by webyankee
    I am worried about hardware compatibility. I have several older PCs with various hardware and wish to install Linux onto them. I have several ideas about what I would like to do. First, I am a novice and know just enough to get me into trouble in a lot of areas. I cannot find adequate descriptions of the differences in usage between a desktop and a server version of Linux. When would you want to choose to build a server instead of a desktop, and can you change a desktop into a server if you need higher functionality? I wonder if I should use 32 or 64 bit? I believe 32-bit on older (P1 or P2) systems would be the safe way to go. To what extent can these systems be used? Can they be used to play high-end graphics online games, or just for simple browsing and word processing? How do I determine what programs the system can use? I have pondered the idea of linking several systems together to make one big computer. I know this can be done with some functionality improvement. Any ideas about this?

    Read the article

  • Scenes from OpenWorld Day One

    - by Larry Wake
    Sunday's the day that everything comes together, but there's always that last minute scramble. Here are a few peeks at what everyone's doing, and may still be doing far into the night. This is the team putting the final touches on the Hands-On Lab room for  HOL10201, "Reduce Risk with Oracle Solaris Access Control to Restrain Users and Isolate Applications". This should be a great learning experience--plus it's a chance to meet up with some of the top Solaris security people, including Glenn Faden and Darren Moffat. And here's the OTN Garage's own Rick Ramsey, working feverishly to help set up the Oracle Solaris Systems Pavilion. (Moscone South, Booth 733). Several of our featured partners will be demonstrating solutions running on Oracle Solaris systems -- plus, we'll be serving espresso, to help you power through the week. Another panorama shot, courtesy of iOS 6 -- come for the maps, stay for the photos.... Moscone South is also home once again this year to the systems and storage DEMOgrounds. Plenty to learn and see; you might even catch a glimpse of me there on Tuesday afternoon.

    Read the article

  • How to make and restore incremental snapshots of hard disk

    - by brunopereira81
    I use VirtualBox a lot for distro / application testing purposes. One of the features I simply love about it is virtual machine snapshots: it saves the state of a virtual machine and is able to restore it to its former glory if something you did went wrong, without any problems and without consuming all your hard disk space. On my live systems I know how to create a 1:1 image of the file system, but all the solutions I've known will create a new image of the complete file system. Are there any programs / file systems that are capable of taking a snapshot of the current file system and saving it to another location, but instead of making a complete new image, creating incremental backups? To describe what I want simply: it should be like dd images of a file system, but instead of only full backups it would also create incremental ones. I am not looking for clonezilla, etc. It should run within the system itself with no (or almost no) intervention from the user, but contain all the data of the file systems. I am also not looking for a "duplicity backup of your whole system excluding some folders" script + dd to save your MBR. I can do that myself; I'm looking for extra finesse. I'm looking for something I can do before making massive changes to a system, and then, if something went wrong or I burned my hard disk after spilling coffee on it, I can just boot from a live CD and restore a working snapshot to the hard disk. It does not need to be daily; it doesn't even need a schedule. Just run it once in a while and let it do its job, preferably RAW-based, not file-copy-based.
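    The usual answers here are block-level snapshots (LVM, Btrfs, ZFS) for the RAW-style behaviour, and hard-link-based tools such as rsnapshot or rsync with --link-dest for file-level incrementals. Purely as an illustration of the hard-link trick those file-level tools rely on (paths and names are made up, and this is not a substitute for the real tools):

```python
# Illustrative sketch of the hard-link snapshot trick used by tools such as
# rsnapshot / rsync --link-dest: unchanged files are hard-linked from the
# previous snapshot (costing almost no extra space), changed files are copied.
import os, shutil

def snapshot(source, prev_snap, new_snap):
    for root, _dirs, files in os.walk(source):
        rel = os.path.relpath(root, source)
        os.makedirs(os.path.join(new_snap, rel), exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            old = os.path.join(prev_snap, rel, name)
            new = os.path.join(new_snap, rel, name)
            st = os.stat(src)
            if (os.path.exists(old)
                    and os.stat(old).st_mtime == st.st_mtime
                    and os.stat(old).st_size == st.st_size):
                os.link(old, new)          # unchanged: share the old blocks
            else:
                shutil.copy2(src, new)     # changed or new: store a fresh copy

# hypothetical usage:
# snapshot("/home/data", "/backups/2014-01-01", "/backups/2014-01-02")
```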

    Read the article

  • Notification framework for object lifecycle

    - by rlandster
    I am looking for an application, framework, or library that would help us with "object life-cycle management". There are many things that are created for users, departments, and services that, all too often, are left unmanaged. Some examples: user accounts, groups, SSL certificates, access rights, databases, software license provisionings, storage, list-serve accounts. These objects are created and managed by a wide variety of applications and systems. Typically, a user (person) requests (either explicitly or implicitly) one of these objects. A centralized management tool would help us manage such administration chores as:
    - What objects does user X currently own/manage?
    - Move the ownership of object P to user X; move all objects owned by user X (who has just been fired) to user Y.
    - For all objects of type T that have expired, be sure the objects have been disabled or deleted by their provider.
    - How many active (expired, about-to-expire) objects of type P are there?
    - Send periodic notifications to all users who own active objects of type P reminding them of what they own.
    - There is a security alert for objects of type P; send a notification to all users who own these types of objects to take a specific remedial action.
    - Delete or disable a set of objects based on expiration (or some other criteria).
    These objects are directly managed through their own applications (Active Directory, MySql, file systems, etc.) and may even have their own notification systems, but I want to centralize this into an "object management system". The OMS should allow:
    - association with an external identity provider that defines who the users and groups are (e.g., LDAP, Active Directory)
    - creation of objects
    - association of an object to a specific user and/or group
    - association with an expiration date
    - creation of flexible reporting, including letting users know what objects they currently own and their expiration dates
    - integration with an external object "provider" via a plug-in
    We could write something from scratch, but I am hoping there is something already out there that will help, either an entire application or a set of libraries that provide much of what is needed. Any ideas?
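    A thumbnail of the kind of data model such an OMS implies (class and field names are entirely hypothetical; Python is used only to make the requirements concrete): enough structure to answer "what does user X own?", "what expires soon?", and to reassign ownership when someone leaves.

```python
# Hypothetical core of an object-lifecycle registry.
from dataclasses import dataclass
from datetime import date

@dataclass
class ManagedObject:
    name: str
    kind: str           # e.g. "ssl-cert", "db", "list-serve"
    owner: str          # identity from LDAP / Active Directory
    expires: date

class Registry:
    def __init__(self): self.objects = []
    def add(self, obj): self.objects.append(obj)
    def owned_by(self, user):
        return [o for o in self.objects if o.owner == user]
    def expiring_before(self, when):
        return [o for o in self.objects if o.expires < when]
    def transfer(self, from_user, to_user):
        for o in self.owned_by(from_user):
            o.owner = to_user

reg = Registry()
reg.add(ManagedObject("www cert", "ssl-cert", "userX", date(2013, 1, 1)))
reg.transfer("userX", "userY")      # userX has left; reassign everything
print(reg.owned_by("userY"), reg.expiring_before(date(2014, 1, 1)))
```

    The real work, of course, is in the provider plug-ins that create, disable and delete the objects in Active Directory, the databases, and so on; the registry above only captures the ownership and expiration bookkeeping those chores depend on.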

    Read the article
