Search Results

Search found 9471 results on 379 pages for 'technology tid bits'.


  • How to pass multiple params to a function in Python?

    - by user1322731
    I am implementing an 8-bit adder in Python. Here is the adder function definition:

        def add8(a0, a1, a2, a3, a4, a5, a6, a7,
                 b0, b1, b2, b3, b4, b5, b6, b7, c0):

    All function parameters are booleans. I have implemented a function that converts an int into binary:

        def intTObin8(num):
            bits = [False] * 8
            i = 7
            while num >= 1:
                if num % 2 == 1:
                    bits[i] = True
                else:
                    bits[i] = False
                i = i - 1
                num /= 2
            print bits
            return [bits[x] for x in range(0, 8)]

    I want this function to return 8 bits, and to use these two functions as follows: add8(intTObin8(1), intTObin8(1), 0). So the question is: how do I pass 8 parameters using one function?
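    One common answer, sketched below rather than prescribed, is argument unpacking with the * operator: build one flat list of all 17 boolean arguments and unpack it into the call. The add8 body is elided, and floor division (//) is used so the sketch runs under Python 2 (which the question's print statement suggests) as well as Python 3.

        def add8(a0, a1, a2, a3, a4, a5, a6, a7,
                 b0, b1, b2, b3, b4, b5, b6, b7, c0):
            pass  # adder body as in the question

        def intTObin8(num):
            bits = [False] * 8
            for i in range(7, -1, -1):
                bits[i] = (num % 2 == 1)
                num //= 2  # floor division works the same in Python 2 and 3
            return bits

        # Build one flat 17-element argument list: a0..a7, b0..b7, c0 ...
        args = intTObin8(1) + intTObin8(1) + [False]
        # ... and let * unpack the list into the 17 named parameters.
        add8(*args)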


  • SOA Governance Starts with People and Processes

    - by Jyothi Swaroop
    While we all agree that SOA governance is about people, processes, and technology, some experts are of the opinion that SOA governance begins with people and processes but needs to be empowered with technology to achieve the best results. Here's an interesting piece from David Linthicum on eBizQ:

    In the world of SOA, the concept of SOA governance is getting a lot of attention. However, how SOA governance is defined and implemented really depends on the SOA governance vendor who just left the building within most enterprises. Indeed, confusion is a huge issue when considering SOA governance, and the core issues are more about the fundamentals of people and processes than about the technology.

    SOA governance is, simply put, a concept used for activities related to exercising control over services in an SOA, including tracking the services, monitoring them, and controlling changes made to them. The trouble comes in when SOA governance vendors attempt to define SOA governance around their technology, all with different approaches. Thus, it's important that those building SOAs within the enterprise take a step back and understand what they really need to support the concept of SOA governance.

    The value of SOA governance is pretty simple. Since services make up the foundation of an SOA, and are at their essence the behavior and information of existing systems externalized, it's critical to make sure that those accessing, creating, and changing services do so using a well-controlled and orderly mechanism. Those of you who already have governance in place, typically around enterprise architecture efforts, will be happy to know that SOA governance does not replace those processes but becomes a mechanism within the larger enterprise governance concept.

    People and processes are the first things on the list to get under control before you begin to toss technology at this problem. This means establishing an understanding of SOA governance among the team members, including why it's important, who's involved, and the core processes that are to be followed to make SOA governance work. Indeed, the core SOA governance strategy should really be created independent of the technology. The technology will change over the years, but the core processes and discipline should be relatively durable over time.


  • A Better Way to Plan, Execute and Manage Enterprise Architecture

    - by JuergenKress
    IT Strategies from Oracle is an authorized library of guidelines and reference architectures that will help you better plan, execute, and manage your enterprise architecture and IT initiatives. The library offers two types of best-practice documents: practitioner guides containing pragmatic advice and approaches, and reference architectures containing proven technology patterns to jumpstart your initiative. The IT Strategies from Oracle library can help you establish a reliable set of principles and standards to guide your use of Oracle technology, and we will expand it over time across all of Oracle's technologies. Today, you can access:

    - Overview documents providing an introduction to all the resources available in the library, plus best-practice maturity models
    - Oracle Reference Architectures covering the application infrastructure foundation, management and monitoring, security, software engineering, service-oriented integration, service orientation, user interaction, engineered systems, and a master glossary
    - Enterprise Technology Strategies for Service-Oriented Architecture, offering practitioner guides on creating a SOA roadmap, frameworks for governance, determining ROI, identifying services, and software engineering, plus white papers
    - Enterprise Technology Strategies for Event-Driven Architecture, offering a practitioner guide on creating an EDA roadmap and reference architectures on an EDA foundation and EDA infrastructure
    - Enterprise Technology Strategies for Business Process Management, including practitioner guides on creating a BPM roadmap, business process engineering, and governance, and reference architectures on a BPM foundation and BPM infrastructure
    - Enterprise Technology Strategies for Cloud Computing, including reference architectures on a Cloud foundation and Cloud infrastructure
    - Enterprise Technology Strategies for Business Analytics, including a practitioner guide on creating a BA roadmap and reference architectures on a BA foundation and BA infrastructure

    Get the Oracle Enterprise Architecture content here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.


  • Domain Specific Software Engineering (DSSE)

    Domain Specific Software Engineering (DSSE) holds that creating every application from nothing is not advantageous when existing systems can be leveraged to create the same application in less time and at less cost. This belief is founded in the idea that forcing applications to recreate existing functionality is unnecessary. Why would we build a better wheel when we already have four really good and proven wheels? DSSE suggests that we take an existing wheel and just modify it to fit an existing need of a system. This allows developers to leverage existing codebases so that more time and expense are focused on creating more usable functionality rather than just more functionality. As an example, how many functions do we need to create to send an email, when one can be created and used by all other applications within the existing domain? (A minimal sketch of such a shared helper appears at the end of this entry.)

    Key factors of DSSE:

    - Domain
    - Technology
    - Business

    A Domain in DSSE is used to control the problem space for a project. This control allows applications to be developed within specific constraints that focus development in a specific direction.

    Technology in DSSE offers a variety of technological solutions to be applied within a domain. Technology examples:

    - Tools
    - Patterns
    - Architectures & styles
    - Legacy systems

    Business is the motivator for any organization to use DSSE in its software development process. Business reasons to use DSSE:

    - Minimize costs
    - Maximize markets and profits

    When these factors are used in combination, additional factors and benefits can be found. Results of combining the key factors of DSSE:

    - Domain + Business = Corporate Core Competencies: domain expertise improved by market and business expertise
    - Domain + Technology = Application Family Architectures: all possible technological solutions to problems in a domain, without any business constraints
    - Business + Technology = Domain-Independent Infrastructure: tools and techniques for building systems independent of all domains
    - Domain + Business + Technology = Domain-Specific Software Engineering: applies technology to domain-related goals in the context of business and market expertise
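    To make the reuse argument concrete, here is a minimal sketch of a single domain-level email helper that every application in the domain could call instead of re-creating its own. This is an illustration only; the function name and parameters are hypothetical, not from the original article, and it assumes Python 3.

        import smtplib
        from email.mime.text import MIMEText

        # Hypothetical domain-level helper: written once, reused by every
        # application in the domain instead of re-implemented per project.
        def send_email(smtp_host, sender, recipient, subject, body):
            msg = MIMEText(body)
            msg["Subject"] = subject
            msg["From"] = sender
            msg["To"] = recipient
            with smtplib.SMTP(smtp_host) as smtp:
                smtp.send_message(msg)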


  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
    In this post, I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which Red Hat released a couple of days ago. This article aims to help you evaluate the key points you should consider when choosing a Java EE application server. In the following sections, I will present some important aspects that customers most often ask us about when they are seriously evaluating a middleware infrastructure, especially if you are considering JBoss for some reason. I suggest you keep the following question in mind while reading: "Why should I choose JBoss instead of WebLogic?"

    1) Multi-Datacenter Deployment and Clustering

    - D/R ("Disaster & Recovery") architecture support is embedded in the WebLogic Server 12c product. JBoss EAP 6 on the other hand has no direct D/R support included; Red Hat relies on third-party tools at extra cost. When you consider a middleware solution to host your business-critical applications, you should weigh every architectural aspect of the solution. Failover support is only one small aspect of a truly reliable solution; if you do not plan for D/R, your solution will not be reliable. Having said that, with Red Hat and JBoss EAP 6 you have an extra cost that considerably increases the total cost of ownership of the solution. As we commonly hear from analysts, open source is not so cheap once you start seeing the big picture.

    - WebLogic Server 12c supports advanced LAN clustering, detection of dead servers, and a common alert framework. JBoss EAP 6 on the other hand has limited LAN clustering support with no server-death detection. It generates no alerts when servers go down (unless you buy JBoss ON, which is a separate technology and, as of this writing, does not support JBoss EAP 6), and manual intervention is required when servers fail. In most cases, administrators must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" to manage failures and clustering anomalies.

    - WebLogic Server 12c supports the concept of the Node Manager, a separate process that runs on the physical or virtual servers and extends administration of the cluster to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6 on the other hand has no equivalent technology; whole server instances must be managed individually.

    - The WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6 on the other hand has no similar technology; there is no way to coordinate JBoss instances using high-throughput, low-latency protocols like InfiniBand. The Node Manager also enables another very important feature that JBoss EAP lacks: secured administration. When using the WebLogic Node Manager, all administration tasks are sent to the managed servers through a secure tunnel protected by a certificate, which means that the transport layer between the WebLogic administration console and the managed servers is secured by SSL.

    - WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server").
    Using OTD, WebLogic instances are load-balanced by powerful software that knows how to handle SDP ("Sockets Direct Protocol") over InfiniBand, which boosts performance when used with engineered systems such as Oracle Exalogic Elastic Cloud. JBoss EAP 6 on the other hand only offers support for the Apache Web Server, with custom modules created to deal with JBoss clusters, and only across standard TCP/IP networks.

    2) Application and Runtime Diagnostics

    - WebLogic Server 12c has diagnostics capabilities embedded in the server, called WLDF ("WebLogic Diagnostic Framework"), so there is no need to rely on third-party tools. JBoss EAP 6 on the other hand has no diagnostics capabilities; its only diagnostics tool is the log generated by the application server, and administrators are left to analyse thousands of log lines to find out what is going on.

    - WebLogic Server 12c complements WLDF with JRockit MC ("Mission Control"), which gives administrators and developers complete insight into JVM performance, behavior, and possible bottlenecks. WebLogic Server 12c also has an embedded classloader analysis tool, and even a log analyzer tool that enables administrators and developers to view logs of multiple servers at the same time. JBoss EAP 6 on the other hand relies on third-party tools to do anything similar; again, only log searching is offered to find out what is going on.

    - WebLogic Server 12c offers end-to-end traceability and monitoring through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flow through web servers, ESBs, application servers, and database servers, all with deep JVM analysis and diagnostics. JBoss EAP 6 on the other hand, even with JBoss ON ("Operations Network"), which is a separate technology, does not support those features. Red Hat relies on third-party tools to provide direct Oracle database traceability across JVMs; one of those tools is Oracle EM for non-Oracle middleware, which manages JBoss, Tomcat, WebSphere, and IIS transparently.

    - WebLogic Server 12c, through its JRockit support, offers a tool called JRockit Flight Recorder, which gives developers complete visibility into a period of production monitoring with zero extra overhead. This automatic recording allows you to deeply analyse thread latency, memory leaks, thread contention, resource utilization, stack overflow damage, and GC ("Garbage Collection") cycles, and to observe in real time stop-the-world phenomena, generational, reference-count, and parallel collections, and mutator thread behavior. JBoss EAP 6 does not come close to supporting anything similar, not least because Red Hat does not have its own JVM.

    3) Application Server Administration

    - WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can manage up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6 on the other hand has a limited console and provides XML-centric administration. JBoss, after ten years, has only started developing a rudimentary centralized administration that still leaves many administration tasks aside, so administrators and developers must touch scripts and XML configuration files for most advanced, and even simple, administration tasks. This leads to error-prone and risky deployments.
    Even with JBoss ON, JBoss EAP is not able to offer decent administration features; administrators must be highly skilled in JBoss's internal architecture and its management capabilities.

    - Oracle EM is available to manage multiple domains, databases, application servers, operating systems, and virtualization, with complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done outside JBoss ON, which does not integrate well with software other than JBoss. As of this writing, JBoss ON does not support JBoss EAP 6, so even its minimal management support is unavailable for JBoss EAP 6, leaving customers uncovered and dependent on highly skilled JBoss administrators.

    - WebLogic Server 12c has the same administration model whatever topology the customer selects. JBoss EAP 6 on the other hand differentiates between two operational models, standalone mode and domain mode, which are not consistent with each other; depending on the mode used, different administration skills are required.

    - WebLogic Server 12c has no single points of failure among its processes and does not need any specialized servers to be defined. The domain model in WebLogic has been available for at least ten years and is production proven. JBoss EAP 6 on the other hand needs special processes to guarantee JBoss integrity, the PC ("Process Controller") and the HC ("Host Controller"). Unlike WebLogic's, the domain model in JBoss is quite new (a year old at most) and needs to mature considerably before it does what the WebLogic domain model already does.

    - WebLogic Server 12c supports a parallel deployment model, which enables artifacts to be deployed at the same time. JBoss EAP 6 on the other hand has no similar feature; every deployment is done atomically in the containers. This means that if you deploy a huge EAR (an EAR of 120 MB, for instance) onto JBoss EAP 6, it will take several minutes before it starts accepting requests. The same EAR deployed onto WebLogic Server 12c will cut deployment time by at least 2X compared to JBoss.

    4) Support and Upgrades

    - WebLogic Server 12c has patch management available. JBoss EAP 6 on the other hand has no patch management; each JBoss EAP instance must be patched manually. To get that feature, you need to buy a separate technology called JBoss ON ("Operations Network"), but as of this writing JBoss ON does not support JBoss EAP 6, so in practice JBoss EAP 6 does not have this feature.

    - WebLogic Server 12c supports previous WebLogic domains without any reconfiguration, since its kernel has been robust and maturing since its creation in 1995. JBoss EAP 6 on the other hand has a proven lack of compatibility between JBoss AS 4, 5, 6, and 7. Different kernels and messaging engines have been implemented in the JBoss stack over the last five years, revealing an inability to create a well-architected and proven middleware technology.

    - WebLogic Server 12c has patch prescription based on customer configuration. JBoss EAP 6 on the other hand has no such capability; people need to create support tickets and have their installations reviewed by Red Hat support staff to get a patch prescription from them.

    - Oracle WebLogic Server, regardless of version, has 8 years of support for new patches, and lifetime availability of existing patches beyond that.
    JBoss EAP 6 on the other hand provides patches for a specific application server version for up to 5 years after the release date; JBoss EAP 4 and earlier versions had only 4 years. A good question that Red Hat will struggle to answer is: "What happens when you find issues after year 5?"

    5) RAC ("Real Application Clusters") Support

    - WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast Application Notification, Transaction Affinity, Fast Connection Failover, etc.); the Oracle JDBC thin driver is also available. JBoss EAP 6 on the other hand ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC is not supported, manual intervention is necessary in case of planned or unplanned RAC downtime, and in JBoss EAP 6 the situation does not reestablish itself automatically after downtime.

    - WebLogic Server 12c has a feature called Active GridLink for Oracle RAC, which provides up to 3X performance on OLTP applications. This seamless integration between WebLogic and the Oracle database adds more value to critical business applications, leveraging investments in Oracle database technology and Oracle middleware. JBoss EAP 6 on the other hand shows no performance gains at all, even when administrators implement some kind of connection-pool tuning.

    - WebLogic Server 12c also supports transaction and web session affinity to Oracle RAC, which provides additional performance gains. This is particularly interesting if you are creating a reliable solution distributed not only across a LAN cluster but also across data centers. JBoss EAP 6 on the other hand has no such support.

    6) Standards and Technology Support

    - WebLogic Server 12c has been fully Java EE 6 compatible and production ready since December 2011. JBoss EAP 6 on the other hand became fully Java EE 6 compatible only in the community version three months later, and production ready only a few days ago, considering that this article was written in June 2012. Red Hat says they are the masters of innovation and technology proliferation, but compared with Oracle, and even other proprietary vendors like IBM, they have historically been slow to deliver the newest technologies and standards adherence.

    - Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat on the other hand does not have its own JVM and relies on third-party JVMs to complete its application server offering. 95% of Red Hat customers use Oracle HotSpot as their JVM, which means that without Oracle involvement, Red Hat's support is limited exclusively to the application server layer, and we all know that most problems happen in the JVM layer.

    - WebLogic Server 12c natively supports JDK 7, which empowers developers to get the maximum productivity out of the Java platform when writing code. This differentiates WebLogic from other application servers (except GlassFish, which is also managed by Oracle), because JDK 7 introduces remarkable productivity features such as the try-with-resources enhancement, catching multiple exceptions with one try block, Strings in switch statements, JVM improvements in terms of JDBC, I/O, networking, security, and concurrency, and of course the most important feature of Java 7: native support for multiple non-Java languages. More features of JDK 7 can be found here.
    JBoss EAP 6 on the other hand does not officially support JDK 7; the community documentation comments that "Java SE 7 can be used with JBoss 7", which does not give you any guarantee of enterprise support for JDK 7.

    - Oracle WebLogic Server 12c supports integration with the Spring framework, allowing Spring applications to use WebLogic's transaction manager and to expose bean interfaces as WebLogic MBeans, taking advantage of all of WebLogic's monitoring and administration benefits. JBoss EAP 6 on the other hand has no special integration with Spring. In fact, Red Hat offers a suspicious package called "JBoss Web Platform" that in theory supports Spring, but in practice offers no special integration; it is just a facility for Red Hat customers to get support for both JBoss and Spring technology through the same customer support channel.

    7) Lightweight Development

    - Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c version, WebLogic natively understands GlassFish deployment descriptors and specific configurations, in order to offer you a true and reliable migration path from a community Java EE application server to an enterprise middleware product like WebLogic. JBoss EAP 6 on the other hand has no support for natively reusing an existing (or still in development) application from the JBoss AS community server. JBoss users suffer critical issues at deployment time, including: changing the libraries and dependencies of the application, patching the DTD or XSD deployment descriptors, refactoring application layers due to classloading issues and anomalies, and rebuilding persistence, business, and web layers due to issues with "usage of the certified version of a certain dependency" or "frameworks that Red Hat potentially does not recommend". If your culture or enterprise IT directive is to develop Java EE applications on community middleware and, at some future point, transition to enterprise (vendor-supported) middleware, Oracle WebLogic plus Oracle GlassFish offers you a more sustainable solution.

    - WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). The JBoss EAP 6 ZIP is around 130 MB, but together with JBoss ON you have another 100 MB, resulting in a larger download footprint. This is particularly relevant if you plan to automate the setup of application server instances (for example, to rapidly set up a development or staging environment) using Maven or Hudson.

    - WebLogic Server 12c has complete integration with Maven, allowing developers to set up WebLogic domains with a few commands; tasks like downloading WebLogic, installation, domain creation, and data source deployment are completely integrated. JBoss EAP 6 on the other hand offers only limited integration with those tools.

    - WebLogic Server 12c has a startup mode called WLX that turns off the EJB, JMS, and JCA containers, leaving only the web container with the Java EE 6 web profile enabled. JBoss EAP 6 on the other hand has no such feature; you need to manually disable the containers you do not want to use.

    - WebLogic Server 12c supports FastSwap, which enables you to change classes without redeployment. This is particularly interesting if you are developing patches for an application that is already deployed and you do not want to redeploy the entire application.
    This is the same behavior that most application servers offer for JSP pages, but with WebLogic Server 12c you have the same feature for Java classes in general. JBoss EAP 6 on the other hand has no such support; even JBoss EAP 5 does not support this to date.

    8) JMS and Messaging

    - WebLogic Server 12c has had a proven and highly scalable JMS implementation since its initial release in 1995. JBoss EAP 6 on the other hand has a still-immature technology called HornetQ, which was introduced in JBoss EAP 5, replacing everything implemented in the previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing around with customers and their investments, and when asked why they changed the implementation and caused such a mess, their answer is always: "the previous implementation was inadequate and not aligned with the community strategy, so we are creating a new and improved one". This Red Hat practice leads to uncomfortable investments that in the near future (sometimes less than a year) will be affected in some way.

    - WebLogic Server 12c has troubleshooting and monitoring features included in the WebLogic console and WLDF. JBoss EAP 6 on the other hand has no direct monitoring in the console; activity is reflected only in the logs, and no debug logs are available in case of JMS issues.

    - WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6 on the other hand has a JMS storage mechanism that relies on an Oracle database or MySQL. This means that if an issue happens in production and Red Hat decides that a performance issue is due to database problems, they will not support you on the performance issue; they will direct you to call Oracle instead.

    - WebLogic Server 12c supports enterprise messaging features like SAF ("Store and Forward"), distributed queues/topics, and foreign JMS provider support, which leverage JMS implementations without compromising developer code, making things completely transparent. JBoss EAP 6 on the other hand does not come close to supporting such features.

    9) Caching and Grid

    - Coherence, the leading and most mature data grid technology from Oracle, has been available since the early 2000s and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can both be managed from the WebLogic administrative console; even the Node Manager supports Coherence. JBoss on the other hand discontinued JBoss Cache, its caching implementation, just as it did with its messaging implementation (JBossMQ), which was an issue for long-term customers. JBoss EAP 6 ships Infinispan version 1.0, which is immature and lacks a proven record of successful cases and reliability.

    - WebLogic Server 12c has a feature called ActiveCache, which uses Coherence to replicate HTTP sessions, without any code changes, from both WebLogic and other application servers like JBoss, Tomcat, WebSphere, GlassFish, and even Microsoft IIS. JBoss EAP 6 on the other hand has no such support, and even when it does in the future, it will probably support only its own application server.

    - Coherence can be used to manage both L1 and L2 cache levels, providing support for Oracle TopLink and other JPA-compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan on the other hand support only Hibernate. And most important of all: Infinispan has no successful case of L1 or L2 cache-level support using Hibernate, which leads us to question its viability.
    10) Performance

    - WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run unchanged applications on this engineered system. Customers benefit from Exalogic optimizations at both the kernel and JVM layers, boosting performance by up to 10X for web, OLTP, JMS, and grid applications. JBoss EAP 6 on the other hand has no investment in engineered systems: customers do not have the option to deploy on an ultra-fast Java system if their project becomes relevant and performance issues are detected.

    - WebLogic Server 12c maintains a performance gain across each new release: starting from WebLogic 5.1, the overall performance gain has been close to 4X, which is close to a 20% gain release over release. JBoss on the other hand does not publish SPECjAppServer or SPECjEnterprise performance benchmarks; their so-called "performance gains" remain hidden in their customers' environments, which leaves us wondering whether they are real, since we will never get access to those environments.

    - WebLogic Server 12c has industry performance benchmarks, with submissions across platforms and configurations leading SPECj. Oracle WebLogic leads SPECjAppServer performance in multiple categories, fitting all customer topologies: dual-node, single-node, multi-node, and multi-node with RAC. JBoss, again, does not provide any SPECjAppServer performance benchmarks.

    - WebLogic Server 12c has a feature called work managers, which allows your application to reach new performance levels based on CPU utilization. Work managers prioritize work and allocate threads based on an execution model that takes into account administrator-defined parameters and actual run-time performance and throughput. JBoss EAP 6 on the other hand has no comparable feature and probably never will. Without something like work managers, JBoss EAP 6 forces administrators, and especially developers, to pursue performance gains in an intrusive way, rewriting code and doing performance refactoring.

    11) Professional Services Support

    - WebLogic Server 12c, like any other technology sold by Oracle, gives customers the option of hiring OCS ("Oracle Consulting Services") for critical scenarios, deployment assistance for new applications, and highly skilled consulting on architecture, best practices, and staffing alongside customer teams. All OCS services are available without restrictions, whether the customer has bought software from Oracle or is just starting implementation before any purchase. JBoss EAP 6, or Red Hat to be more specific, only offers professional services if you buy subscriptions. If you are developing a new business-critical application and need Red Hat's help with a serious issue or architecture decision, they will probably say: "OK, I can help you, but only after you buy subscriptions." Red Hat also does not allow its professional services consultants to manage environments that use community-based software; they will probably force you to first buy a subscription, download the "enterprise" version, and then, optionally, hire their consultants.

    - Oracle provides Oracle University to educate your team in our technologies, including, of course, specialized WebLogic application server training. At any time and location, you can hire Oracle to train your team, so you get trustworthy knowledge tailored to your specific needs.
    Product certifications are also available if your technical people wish to differentiate themselves as professionals. Red Hat on the other hand has a limited pool of resources to train your team in its technologies. Basically, they sell training and certification for RHEL ("Red Hat Enterprise Linux"), but if you want more specialized training in JBoss middleware, they will probably refer you to some "certified" partner for localized training, since they are apparently discontinuing their education center, at least here in Brazil. They have not been able to repeat the success of their RHEL education in their middleware division, since they must first sell you subscriptions before giving specialized training. And again, they only offer specialized training based on their enterprise version (EAP, in the case of JBoss), which means the courses will be quite outdated. There are reports of developers who took official Red Hat training this year (2012) in which, in a certain advanced JBoss course, Red Hat supposedly covered JBossMQ as the messaging subsystem, and even the printed material provided was based on JBossMQ, since the training was created for JBoss EAP 4.3.

    12) Encouraging Transparency without Ulterior Motives

    - WebLogic Server 12c, like any other software from Oracle, can be downloaded at any time from anywhere; you need only an OTN ("Oracle Technology Network") credential, and you can download any enterprise software as many times as you want. And it is not some kind of "trial" version: it is the official binaries that will run forever in your data center. Oracle does not encourage the use of "special versions" of its software; the binaries you buy from Oracle are the same binaries anyone in the world can download and use for testing and personal education. JBoss EAP 6 on the other hand is not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn, or just start creating your application with Red Hat's middleware software, you must download it from the community website; you are not allowed to download the enterprise version which, according to Red Hat, is more secure, reliable, and robust. But none of us wants to start developing software on insecure, unreliable, and unscalable middleware, right? So what do you do? You are "invited" by Red Hat to buy subscriptions from them to get access to the "good" version of the software.

    - WebLogic Server 12c prices are publicly available on the Oracle website. If you want to know right now how much WebLogic will cost your organization, just click here and get access to our price list; in the case of WebLogic, check the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that could make the investment in our technology possible, but you are not required to do so unless you are interested in buying our technology or want to discuss discount scenarios. JBoss EAP 6 on the other hand does not have its cost publicly available on Red Hat's website or anywhere else, or at least such information is not easy to find; the only link you will likely find on their website is "Contact a Sales Representative". That is not a good relationship between a customer and a vendor, and it is not an example of transparency, especially when the software is sold as open.
    In these situations, customers expect to see software prices publicly available, so they have the chance to decide, based on the software's existing features, whether the cost is fair or not.

    Conclusion

    Oracle WebLogic is the most mature, secure, reliable, and scalable Java EE application server on the market, with a proven record of success around the globe to prove its maturity. Don't miss the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, whether or not that strategy is completely based on the cloud.


  • JavaOne 2012 Sunday Strategy Keynote

    - by Janice J. Heiss
    At the Sunday Strategy Keynote, held at the Masonic Auditorium, Hasan Rizvi, EVP, Middleware and Java Development, stated that the theme for this year's JavaOne is "Make the Future Java", meaning that Java continues in its role as the most popular, complete, productive, secure, and innovative development platform. But it also refers, he qualified, to the process by which we make the future Java: an open, transparent, collaborative, and community-driven evolution. "Many of you have bet your businesses and your careers on Java, and we have bet our business on Java," he said.

    Rizvi detailed the three factors they consider critical to the success of Java: technology innovation, community participation, and Oracle's leadership and stewardship. He offered a scorecard in these three realms over the past year, with OS X and Linux ARM support on Java SE, open sourcing of JavaFX by the end of the year, the release of the Java Embedded Suite 7.0 middleware platform, and multiple releases on the Java EE side. The JCP process continues, with new JSR activity, and JUGs show a 25% increase in participation since last year. Oracle, meanwhile, continues its commitment to both technology and community development and outreach, with four regional JavaOne conferences held in various parts of the world last year, as well as the release of Java Magazine, with over 120,000 current subscribers.

    Georges Saab, VP Development, Java SE, next reviewed features of Java SE 7, the first major revision to the platform under Oracle's stewardship, which has included near-monthly update releases offering hundreds of fixes, performance enhancements, and new features. Saab indicated that developers, ISVs, and hosting providers have all been rapid adopters of the platform. He also noted that Oracle's entire Fusion Middleware stack is supported on SE 7. The supported platforms for SE 7 have also increased, from Windows, Linux, and Solaris to OS X, Linux ARM, and the emerging ARM micro-server market. "In the last year, we've added as many new platforms for Java as were added in the previous decade," said Saab.

    Saab also explored the upcoming JDK 8 release, including Project Lambda, Project Nashorn (a modern implementation of JavaScript running on the JVM), and others. He noted that Nashorn functionality had already been used internally in NetBeans 7.3, and announced that they were planning to contribute the implementation to OpenJDK.

    Nandini Ramani, VP Development, Java Client, ME and Card, discussed the latest news pertaining to JavaFX 2.0: releases on Windows, OS X, and Linux, the release of the FX Scene Builder tool, the JavaFX WebView component in NetBeans 7.3, and an OpenJFX project in OpenJDK. Ramani announced, as of Sunday, the availability for download of JavaFX on Linux ARM (developer preview), as well as Scene Builder on Linux. She noted that for next year's JDK 8 release, JavaFX will offer 3D as well as third-party component integration. Avinder Brar, Senior Software Engineer, Navis, and Dierk König, Canoo Fellow, next took the stage and demonstrated all that JavaFX offers, with a feature-rich, animation-rich, real-time cargo management application that employs Canoo's just open-sourced Dolphin technology.

    Saab also explored Java SE 9 and beyond: Jigsaw modularity, the Penrose Project for interoperability with OSGi, improved multi-tenancy for Java in the cloud, and Project Sumatra.
    Phil Rogers, HSA Foundation President and AMD Corporate Fellow, explored heterogeneous computing platforms that combine the CPU and the parallel processor of the GPU into a single piece of silicon with shared memory, a hardware technology driven by such advanced functionality as HD video, face recognition, and cloud workloads. Project Sumatra is an OpenJDK project targeted at bringing Java to such heterogeneous platforms, with hardware and software experts working together to modify the JVM for these advanced applications and platforms.

    Ramani next discussed the latest with Java in the embedded space, "the Internet of things" and M2M, declaring this to be "the next IT revolution," with Java as the ideal technology for the ecosystem. Last week, Oracle released Java ME Embedded 3.2 (for microcontrollers and low-power devices) and Java Embedded Suite 7.0 (a middleware stack based on Java SE 7). Axel Hansmann, VP Strategy and Marketing, Cinterion, explored his company's use of Java in M2M and their new release of EHS5, the world's smallest 3G-capable M2M module, running Java ME Embedded. Hansmann explained that Java offers them the ability to create a "simple to use, scalable, coherent, end-to-end layer" for such diverse edge devices. Marc Brule, Chief Financial Officer, Royal Canadian Mint, also explored the fascinating use case of JavaCard in his country's MintChip e-cash technology, deployable on smartphones, USB devices, computers, tablets, or the cloud. In parting, Ramani encouraged developers to download the latest releases of Java Embedded and try them out.

    Cameron Purdy, VP, Fusion Middleware Development and Java EE, summarized the latest developments and announcements in the enterprise space: greater developer productivity in Java EE 6 (with more on the way in EE 7), and portability between platforms and vendors, even cloud-to-cloud portability. The earliest version of the Java EE 7 SDK is now available for download, in GlassFish 4, with WebSocket support, better JSON support, and more; the final release is scheduled for April 2013. Nicole Otto, Senior Director, Consumer Digital Technology, Nike, explored her company's Java-technology-driven enterprise ecosystem for all things sports, including the NikeFuel accelerometer wrist band. Looking beyond Java EE 7, Purdy mentioned NoSQL database functionality for EE 8, the concurrency utilities (possibly in EE 7), some of the Avatar projects in EE 7 and some in EE 8, multi-tenancy for the cloud, support for SaaS applications, and more.

    Rizvi ended by introducing Dr. Robert Ballard, oceanographer and National Geographic Explorer in Residence, part of Oracle's philanthropic relationship with the National Geographic Society to fund K-12 education around ocean science and conservation. Ballard is best known for having discovered the wreckage of the Titanic. He offered a fascinating video and overview of the cutting-edge technology used in such deep-sea explorations, noting that in his early days, high-bandwidth exploration meant that you’d go down in a submarine and "stick your face up against the window." Now, it's remotely operated telepresence technology: "I think of my Hercules vehicle as my equivalent of a Na'vi. When I go beneath the sea, I actually send my spirit." Using high-bandwidth satellite links, such amazing explorations can now occur via smartphone, laptop, or whatever platform. Ballard’s team regularly offers live feeds and programming to schools and the world, spanning 188 countries, with educators embedded as part of the expeditions. It's technology at its finest, inspiring the next generation of scientists and explorers!


  • SS7(M3UA, SCCP, TCAP, MAP) Stack

    - by Ammar Hameed
    I'm building an open source SMSC from scratch; it's almost finished. The SRI and forwardSM operations are working, but I still have a few things to do for the receiving part. I've built the SS7 stack already, but I'm using a DB for saving the TCAP transaction IDs, to be updated later to get/generate responses. My approach is this: I created a memory table (heap table), saved the TCAP TIDs in the database, then compared each received TCAP TID with the TIDs saved in the database and decided whether to end the TCAP session or continue. What is the best way to implement this? I'm thinking of a doubly linked list that holds the TCAP TIDs. Am I going in the right direction, or should I use another technique other than a database or doubly linked list? Should I leave it as it is, and let the database do the job of saving the TIDs? Please note that I'm using the SCTP implementation available on Linux (lsctp) as the transport protocol, the language I'm using is C, and the DB is MySQL.
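    For what it's worth, a hash table keyed by TID is the usual in-memory choice here: a doubly linked list makes every TID lookup O(n), while a hash table gives O(1) insert, lookup, and removal with no database round-trip. A minimal sketch of the idea, written in Python for brevity (the asker's stack is C, where a hash table library would play the same role; all names are hypothetical):

        # Hypothetical in-memory transaction table keyed by TCAP TID.
        transactions = {}

        def on_tcap_begin(tid, dialogue_state):
            transactions[tid] = dialogue_state   # O(1) insert

        def on_tcap_continue(tid):
            state = transactions.get(tid)        # O(1) lookup, no DB round-trip
            if state is None:
                return None                      # unknown TID: reject or abort
            return state

        def on_tcap_end(tid):
            transactions.pop(tid, None)          # O(1) removal when the dialogue ends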


  • Why has Javascript been (mostly) only a browser-side technology for more than 10 years?

    - by Gabriel Cuvillier
    Recently there have been a lot of projects pushing JavaScript in other directions: as a general-purpose scripting language (GLUEScript, Rhino), as an extension language (QTScript, Adobe Reader, OO Macros), for widgets (Yahoo Widgets, MS Gadgets, Dashboard), and even for server-side JS and web frameworks (CommonJS, Helma, Phobos, V8cgi), which seems obvious since it is already a language widely used for web development. But wait: everything is so new, and nothing is really mature. Yet JS has been around for almost 15 years, is as powerful as any other scripting language, is standardised by the ECMA, and is a mandatory technology for web development. Why did it take so much time for it to gain acceptance in domains other than web browsers?


  • Web-To-Print - What technology should I explore? Web-Content -> drag & drop to Templates -> PDF

    - by hamlin11
    I'm discussing some software design issues with a potential client, and the idea of web-to-print technology has come up. We need users to be able to drag images from an image library to various regions defined by a template. Example: images may go in boxes A, B, or C, and text may go in boxes D or E. These templates would be set up by administrators: boxes A through E would be defined inside a template using some sort of editor. The templates would serve as a mapping from web content to a PDF. Once users drag images and insert text into the appropriate regions of the template, the result will be converted into a PDF. Is this feasible these days with jQuery & ASP.NET? That would be nice. If not, what would be the ideal solution?
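    For the final template-to-PDF step, most PDF libraries can place images and text at template-defined coordinates. A sketch of the idea only, using Python's reportlab rather than the jQuery/ASP.NET stack the question mentions; the template structure, box names, and file names are all hypothetical:

        from reportlab.pdfgen import canvas

        # Hypothetical admin-defined template: box name -> kind and page coordinates.
        template = {
            "A": ("image", 50, 600, 200, 150),   # x, y, width, height
            "D": ("text", 50, 560),              # x, y
        }
        # Hypothetical user-supplied content dropped into each box.
        content = {"A": "logo.png", "D": "Caption for the brochure"}

        c = canvas.Canvas("output.pdf")
        for box, spec in template.items():
            if spec[0] == "image":
                _, x, y, w, h = spec
                c.drawImage(content[box], x, y, width=w, height=h)
            else:
                _, x, y = spec
                c.drawString(x, y, content[box])
        c.save()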


  • Higher order function « filter » in C++

    - by Red Hyena
    Hi all. I wanted to write a higher-order function filter in C++. The code I have come up with so far is as follows:

        #include <iostream>
        #include <string>
        #include <functional>
        #include <algorithm>
        #include <vector>
        #include <list>
        #include <iterator>

        using namespace std;

        bool isOdd(int const i) { return i % 2 != 0; }

        template <
            template <class, class> class Container,
            class Predicate,
            class Allocator,
            class A
        >
        Container<A, Allocator> filter(Container<A, Allocator> const & container,
                                       Predicate const & pred)
        {
            Container<A, Allocator> filtered(container);
            container.erase(remove_if(filtered.begin(), filtered.end(), pred),
                            filtered.end());
            return filtered;
        }

        int main()
        {
            int const a[] = {23, 12, 78, 21, 97, 64};
            vector<int const> const v(a, a + 6);
            vector<int const> const filtered = filter(v, isOdd);
            copy(filtered.begin(), filtered.end(),
                 ostream_iterator<int const>(cout, " "));
        }

    However, on compiling this code, I get the following error messages that I am unable to understand and hence get rid of:

        /usr/include/c++/4.3/ext/new_allocator.h: In instantiation of ‘__gnu_cxx::new_allocator<const int>’:
        /usr/include/c++/4.3/bits/allocator.h:84: instantiated from ‘std::allocator<const int>’
        /usr/include/c++/4.3/bits/stl_vector.h:75: instantiated from ‘std::_Vector_base<const int, std::allocator<const int> >’
        /usr/include/c++/4.3/bits/stl_vector.h:176: instantiated from ‘std::vector<const int, std::allocator<const int> >’
        Filter.cpp:29: instantiated from here
        /usr/include/c++/4.3/ext/new_allocator.h:82: error: ‘const _Tp* __gnu_cxx::new_allocator<_Tp>::address(const _Tp&) const [with _Tp = const int]’ cannot be overloaded
        /usr/include/c++/4.3/ext/new_allocator.h:79: error: with ‘_Tp* __gnu_cxx::new_allocator<_Tp>::address(_Tp&) const [with _Tp = const int]’
        Filter.cpp: In function ‘Container<A, Allocator> filter(const Container<A, Allocator>&, const Predicate&) [with Container = std::vector, Predicate = bool ()(int), Allocator = std::allocator<const int>, A = const int]’:
        Filter.cpp:30: instantiated from here
        Filter.cpp:23: error: passing ‘const std::vector<const int, std::allocator<const int> >’ as ‘this’ argument of ‘__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> > std::vector<_Tp, _Alloc>::erase(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, __gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >) [with _Tp = const int, _Alloc = std::allocator<const int>]’ discards qualifiers
        /usr/include/c++/4.3/bits/stl_algo.h: In function ‘_FIter std::remove_if(_FIter, _FIter, _Predicate) [with _FIter = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >, _Predicate = bool (*)(int)]’:
        Filter.cpp:23: instantiated from ‘Container<A, Allocator> filter(const Container<A, Allocator>&, const Predicate&) [with Container = std::vector, Predicate = bool ()(int), Allocator = std::allocator<const int>, A = const int]’
        Filter.cpp:30: instantiated from here
        /usr/include/c++/4.3/bits/stl_algo.h:821: error: assignment of read-only location ‘__result.__gnu_cxx::__normal_iterator<_Iterator, _Container>::operator* [with _Iterator = const int*, _Container = std::vector<const int, std::allocator<const int> >]()’
        /usr/include/c++/4.3/ext/new_allocator.h: In member function ‘void __gnu_cxx::new_allocator<_Tp>::deallocate(_Tp*, size_t) [with _Tp = const int]’:
        /usr/include/c++/4.3/bits/stl_vector.h:150: instantiated from ‘void std::_Vector_base<_Tp, _Alloc>::_M_deallocate(_Tp*, size_t) [with _Tp = const int, _Alloc = std::allocator<const int>]’
        /usr/include/c++/4.3/bits/stl_vector.h:136: instantiated from ‘std::_Vector_base<_Tp, _Alloc>::~_Vector_base() [with _Tp = const int, _Alloc = std::allocator<const int>]’
        /usr/include/c++/4.3/bits/stl_vector.h:286: instantiated from ‘std::vector<_Tp, _Alloc>::vector(_InputIterator, _InputIterator, const _Alloc&) [with _InputIterator = const int*, _Tp = const int, _Alloc = std::allocator<const int>]’
        Filter.cpp:29: instantiated from here
        /usr/include/c++/4.3/ext/new_allocator.h:98: error: invalid conversion from ‘const void*’ to ‘void*’
        /usr/include/c++/4.3/ext/new_allocator.h:98: error: initializing argument 1 of ‘void operator delete(void*)’
        /usr/include/c++/4.3/bits/stl_algobase.h: In function ‘_OI std::__copy_move_a(_II, _II, _OI) [with bool _IsMove = false, _II = const int*, _OI = const int*]’:
        /usr/include/c++/4.3/bits/stl_algobase.h:435: instantiated from ‘_OI std::__copy_move_a2(_II, _II, _OI) [with bool _IsMove = false, _II = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >, _OI = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >]’
        /usr/include/c++/4.3/bits/stl_algobase.h:466: instantiated from ‘_OI std::copy(_II, _II, _OI) [with _II = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >, _OI = __gnu_cxx::__normal_iterator<const int*, std::vector<const int, std::allocator<const int> > >]’
        /usr/include/c++/4.3/bits/vector.tcc:136: instantiated from ‘__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> > std::vector<_Tp, _Alloc>::erase(__gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >, __gnu_cxx::__normal_iterator<typename std::_Vector_base<_Tp, _Alloc>::_Tp_alloc_type::pointer, std::vector<_Tp, _Alloc> >) [with _Tp = const int, _Alloc = std::allocator<const int>]’
        Filter.cpp:23: instantiated from ‘Container<A, Allocator> filter(const Container<A, Allocator>&, const Predicate&) [with Container = std::vector, Predicate = bool ()(int), Allocator = std::allocator<const int>, A = const int]’
        Filter.cpp:30: instantiated from here
        /usr/include/c++/4.3/bits/stl_algobase.h:396: error: no matching function for call to ‘std::__copy_move<false, true, std::random_access_iterator_tag>::__copy_m(const int*&, const int*&, const int*&)’

    Please tell me what I am doing wrong here and what is the correct way to achieve the kind of higher-order polymorphism I want. Thanks.


  • W3C Web Content Accessibility Guidelines 1.0, which technology could I use?

    - by vtortola
    Hi, I have a project where one of the requirements is to fulfil the "W3C Web Content Accessibility Guidelines 1.0 (WCAG 1.0)". I'm now considering which technology I could use to accomplish this, but I'm a little bit confused. Silverlight would be the easiest way, but I cannot find conclusive information about whether Silverlight is compliant. I've seen control packs done in JavaScript that look very nice, like DHTMLX, but again the same problem: I don't know for sure. Besides, I've always read that a website should work without JavaScript, and use it just to improve the user experience. Thanks.


  • Which faces technology for use with GlassFish 2.1 and NetBeans 6.7?

    - by SteJav
    I'm running GlassFish 2.1 and using NetBeans 6.7. I'd like to create a web interface to my data using JSF 1.2. Trouble is, I'm not sure which 'faces' technology to learn (one that includes some good documentation). JBoss/RichFaces seems pretty good on documentation, but I'm using GlassFish. Any thoughts? The choices appear overwhelming: Tomahawk, Tobago, Trinidad, ICEfaces, RCFaces, Netadvantage, WebGalileoFaces, QuipuKit, BluePrints, Woodstock, JBoss RichFaces, Ajax4jsf, ILOG, Oracle ADF, G4JSF, Simplica, Backbase, jenia4faces, VisualWebPack, DynaFaces, IBM Impl, Dinamica, Mojarra, PrimeFaces, jQuery, OpenFaces, ZK, ExtJS. Has anybody had experience with any of the above and found the documentation to be clear to a beginner? Being a JSF/web beginner, I tried some ICEfaces and Mojarra tutorials and had a go at getting RichFaces working with NetBeans and GlassFish, but no luck: lots of XML complaints. I'm clearly missing some huge chunks of configuration, but I can't find any documentation to help me. Any suggestions would be much appreciated :-)


  • Which technology(s) / language(s) to write linux web application/service? [closed]

    - by Lee Tickett
    I am currently playing with some open source home automation software, www.domotiga.nl. The software is built in Gambas2 (a graphical programming language similar to Visual Basic). I am considering building something similar, or porting domotiga to a server-based application/service. The application would need a web front end, and I will likely be developing on Debian (ARM). But I'm not sure whether PHP or Python is suitable for server-based applications that need to be always running (collecting data, etc.) rather than just running when accessed. Which technologies or languages would you suggest I look into? I used to do a lot of Visual Basic, then VB.NET, now C#, and I played with PHP a few years back, but I don't really want this to sway the decision too much, as I should be able to pick up whatever language I need if I decide to proceed.


  • What in-memory database technology can do real-time materialized views?

    - by KA100
    What I'm looking for is something like materialized views on the front end that show my data in different ways without full recalculation. Let's say I have a stock watcher with many front-end views and dashboards, some based on aggregation, some on ordering, some just filters, with different criteria defined in real time by the user. Now, I receive online record updates from some web service, and it's not like a data warehouse: every single record can be updated at any time, and that actually happens every second. Is there any technology that can help me create something like a materialized view and update it without doing a full recalculation every time the data changes? Thank you.
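    What this describes is often called incremental view maintenance: on each record update, retract the record's old contribution from the aggregates and apply the new one, instead of recomputing the whole view. A minimal sketch of the idea for a sum-per-symbol dashboard (all names hypothetical):

        # Hypothetical incrementally maintained view: running sum per symbol.
        class SumBySymbolView:
            def __init__(self):
                self.totals = {}    # symbol -> running sum
                self.latest = {}    # record id -> (symbol, value) currently counted

            def upsert(self, record_id, symbol, value):
                # Retract the record's old contribution, if any...
                if record_id in self.latest:
                    old_symbol, old_value = self.latest[record_id]
                    self.totals[old_symbol] -= old_value
                # ...then apply the new one. No full recalculation needed.
                self.latest[record_id] = (symbol, value)
                self.totals[symbol] = self.totals.get(symbol, 0) + value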


  • Quick guide to Oracle IRM 11g: Classification design

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    This is the final article in the quick guide to Oracle IRM. If you've followed everything prior, you will now have a fully functional and tested Information Rights Management service. It doesn't matter if you've been following the 10g or 11g guide, as this next article is common to both.

    Contents:
    - Why this is the most important part...
    - Understanding the classification and standard rights model
    - Identifying business use cases
    - Creating an effective IRM classification model
    - One single classification across the entire business
    - A context for each and every possible granular use case
    - What makes a good context?
    - Deciding on the use of roles in the context
    - Reviewing the features and security for context roles
    - Summary

    Why this is the most important part...

    Now the real work begins. Installing and getting an IRM system running is as simple as following instructions. However, to have IRM technology easily protect your most sensitive information without interfering with your users' existing daily workflows, and to be able to scale IRM across the entire business, requires thought about how confidential documents are created, used, and distributed. This article is going to give you the information you need to ask the business the right questions so that you can deploy your IRM service successfully. The IRM team here at Oracle has over 10 years of experience in helping customers, and it is important you understand the following to be successful in securing access to your most confidential information. Whatever you are trying to secure, be it mergers and acquisitions information, engineering intellectual property, health care documentation, or financial reports, and no matter what type of user is going to access the information, be they employees, contractors, or customers, there are common goals you are always trying to achieve:

    - Secure the content at the earliest point possible, and do it automatically. Removing the dependency on the user to decide to secure the content significantly reduces the risk of mistakes and therefore results in a more secure deployment.
    - K.I.S.S. (Keep It Simple, Stupid). Reduce complexity in the rights/classification model. Oracle IRM lets you make changes to access to documents even after they are secured, which allows you to start with a simple model and then introduce complexity once you've understood how the technology is going to be used in the business. After an initial learning period you can review your implementation and start to make informed decisions based on user feedback and administration experience.
    - Clearly communicate to the user, when appropriate, any changes to their existing work practice. You must make every effort to make the transition to sealed content as simple as possible. For external users you must help them understand why you are securing the documents, and explain the value of the technology to both your business and them.

    Before getting into the detail, I must pay homage to Martin White, Vice President of client services at SealedMedia, the company that created Oracle IRM and that Oracle acquired. In the SealedMedia years, Martin was involved with every single customer and was key to the design of certain aspects of the IRM technology, specifically the context model we will be discussing here. Listening carefully to customers and understanding the flexibility of the IRM technology, Martin taught me all the skills of helping customers build scalable, effective, and simple-to-use IRM deployments.
No matter how well the engineering department designed the software, badly designed and poorly executed projects can result in difficult to use and manage, and ultimately insecure solutions. The advice and information that follows was born with Martin and he's still delivering IRM consulting with customers and can be found at www.thinkers.co.uk. It is from Martin and others that Oracle not only has the most advanced, scalable and usable document security solution on the market, but Oracle and their partners have the most experience in delivering successful document security solutions. Understanding the classification and standard rights model The goal of any successful IRM deployment is to balance the increase in security the technology brings without over complicating the way people use secured content and avoid a significant increase in administration and maintenance. With Oracle it is possible to automate the protection of content, deploy the desktop software transparently and use authentication methods such that users can open newly secured content initially unaware the document is any different to an insecure one. That is until of course they attempt to do something for which they don't have any rights, such as copy and paste to an insecure application or try and print. Central to achieving this objective is creating a classification model that is simple to understand and use but also provides the right level of complexity to meet the business needs. In Oracle IRM the term used for each classification is a "context". A context defines the relationship between.A group of related documents The people that use the documents The roles that these people perform The rights that these people need to perform their role The context is the key to the success of Oracle IRM. It provides the separation of the role and rights of a user from the content itself. Documents are sealed to contexts but none of the rights, user or group information is stored within the content itself. Sealing only places information about the location of the IRM server that sealed it, the context applied to the document and a few other pieces of metadata that pertain only to the document. This important separation of rights from content means that millions of documents can be secured against a single classification and a user needs only one right assigned to be able to access all documents. If you have followed all the previous articles in this guide, you will be ready to start defining contexts to which your sensitive information will be protected. But before you even start with IRM, you need to understand how your own business uses and creates sensitive documents and emails. Identifying business use cases Oracle is able to support multiple classification systems, but usually there is one single initial need for the technology which drives a deployment. This need might be to protect sensitive mergers and acquisitions information, engineering intellectual property, financial documents. For this and every subsequent use case you must understand how users create and work with documents, to who they are distributed and how the recipients should interact with them. 
A successful IRM deployment should start with one well identified use case (we go through some examples towards the end of this article) and then after letting this use case play out in the business, you learn how your users work with content, how well your communication to the business worked and if the classification system you deployed delivered the right balance. It is at this point you can start rolling the technology out further. Creating an effective IRM classification model Once you have selected the initial use case you will address with IRM, you need to design a classification model that defines the access to secured documents within the use case. In Oracle IRM there is an inbuilt classification system called the "context" model. In Oracle IRM 11g it is possible to extend the server to support any rights classification model, but the majority of users who are not using an application integration (such as Oracle IRM within Oracle Beehive) are likely to be starting out with the built in context model. Before looking at creating a classification system with IRM, it is worth reviewing some recognized standards and methods for creating and implementing security policy. A very useful set of documents are the ISO 17799 guidelines and the SANS security policy templates. First task is to create a context against which documents are to be secured. A context consists of a group of related documents (all top secret engineering research), a list of roles (contributors and readers) which define how users can access documents and a list of users (research engineers) who have been given a role allowing them to interact with sealed content. Before even creating the first context it is wise to decide on a philosophy which will dictate the level of granularity, the question is, where do you start? At a department level? By project? By technology? First consider the two ends of the spectrum... One single classification across the entire business Imagine that instead of having separate contexts, one for engineering intellectual property, one for your financial data, one for human resources personally identifiable information, you create one context for all documents across the entire business. Whilst you may have immediate objections, there are some significant benefits in thinking about considering this. Document security classification decisions are simple. You only have one context to chose from! User provisioning is simple, just make sure everyone has a role in the only context in the business. Administration is very low, if you assign rights to groups from the business user repository you probably never have to touch IRM administration again. There are however some obvious downsides to this model.All users in have access to all IRM secured content. So potentially a sales person could access sensitive mergers and acquisition documents, if they can get their hands on a copy that is. You cannot delegate control of different documents to different parts of the business, this may not satisfy your regulatory requirements for the separation and delegation of duties. Changing a users role affects every single document ever secured. Even though it is very unlikely a business would ever use one single context to secure all their sensitive information, thinking about this scenario raises one very important point. Just having one single context and securing all confidential documents to it, whilst incurring some of the problems detailed above, has one huge value. 
Once secured, IRM protected content can ONLY be accessed by authorized users. Just think of all the sensitive documents in your business today, imagine if you could ensure that only everyone you trust could open them. Even if an employee lost a laptop or someone accidentally sent an email to the wrong recipient, only the right people could open that file. A context for each and every possible granular use case Now let's think about the total opposite of a single context design. What if you created a context for each and every single defined business need and created multiple contexts within this for each level of granularity? Let's take a use case where we need to protect engineering intellectual property. Imagine we have 6 different engineering groups, and in each we have a research department, a design department and manufacturing. The company information security policy defines 3 levels of information sensitivity... restricted, confidential and top secret. Then let's say that each group and department needs to define access to information from both internal and external users. Finally add into the mix that they want to review the rights model for each context every financial quarter. This would result in a huge amount of contexts. For example, lets just look at the resulting contexts for one engineering group. Q1FY2010 Restricted Internal - Engineering Group 1 - Research Q1FY2010 Restricted Internal - Engineering Group 1 - Design Q1FY2010 Restricted Internal - Engineering Group 1 - Manufacturing Q1FY2010 Restricted External- Engineering Group 1 - Research Q1FY2010 Restricted External - Engineering Group 1 - Design Q1FY2010 Restricted External - Engineering Group 1 - Manufacturing Q1FY2010 Confidential Internal - Engineering Group 1 - Research Q1FY2010 Confidential Internal - Engineering Group 1 - Design Q1FY2010 Confidential Internal - Engineering Group 1 - Manufacturing Q1FY2010 Confidential External - Engineering Group 1 - Research Q1FY2010 Confidential External - Engineering Group 1 - Design Q1FY2010 Confidential External - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret Internal - Engineering Group 1 - Research Q1FY2010 Top Secret Internal - Engineering Group 1 - Design Q1FY2010 Top Secret Internal - Engineering Group 1 - Manufacturing Q1FY2010 Top Secret External - Engineering Group 1 - Research Q1FY2010 Top Secret External - Engineering Group 1 - Design Q1FY2010 Top Secret External - Engineering Group 1 - Manufacturing Now multiply the above by 6 for each engineering group, 18 contexts. You are then creating/reviewing another 18 every 3 months. After a year you've got 72 contexts. What would be the advantages of such a complex classification model? You can satisfy very granular rights requirements, for example only an authorized engineering group 1 researcher can create a top secret report for access internally, and his role will be reviewed on a very frequent basis. Your business may have very complex rights requirements and mapping this directly to IRM may be an obvious exercise. The disadvantages of such a classification model are significant...Huge administrative overhead. Someone in the business must manage, review and administrate each of these contexts. If the engineering group had a single administrator, they would have 72 classifications to reside over each year. From an end users perspective life will be very confusing. Imagine if a user has rights in just 6 of these contexts. 
They may be able to print content from one but not another, be able to edit content in 2 contexts but not the other 4. Such confusion at the end user level causes frustration and resistance to the use of the technology. Increased synchronization complexity. Imagine a user who after 3 years in the company ends up with over 300 rights in many different contexts across the business. This would result in long synchronization times as the client software updates all your offline rights. Hard to understand who can do what with what. Imagine being the VP of engineering and as part of an internal security audit you are asked the question, "What rights to researchers have to our top secret information?". In this complex model the answer is not simple, it would depend on many roles in many contexts. Of course this example is extreme, but it highlights that trying to build many barriers in your business can result in a nightmare of administration and confusion amongst users. In the real world what we need is a balance of the two. We need to seek an optimum number of contexts. Too many contexts are unmanageable and too few contexts does not give fine enough granularity. What makes a good context? Good context design derives mainly from how well you understand your business requirements to secure access to confidential information. Some customers I have worked with can tell me exactly the documents they wish to secure and know exactly who should be opening them. However there are some customers who know only of the government regulation that requires them to control access to certain types of information, they don't actually know where the documents are, how they are created or understand exactly who should have access. Therefore you need to know how to ask the business the right questions that lead to information which help you define a context. First ask these questions about a set of documentsWhat is the topic? Who are legitimate contributors on this topic? Who are the authorized readership? If the answer to any one of these is significantly different, then it probably merits a separate context. Remember that sealed documents are inherently secure and as such they cannot leak to your competitors, therefore it is better sealed to a broad context than not sealed at all. Simplicity is key here. Always revert to the first extreme example of a single classification, then work towards essential complexity. If there is any doubt, always prefer fewer contexts. Remember, Oracle IRM allows you to change your mind later on. You can implement a design now and continue to change and refine as you learn how the technology is used. It is easy to go from a simple model to a more complex one, it is much harder to take a complex model that is already embedded in the work practice of users and try to simplify it. It is also wise to take a single use case and address this first with the business. Don't try and tackle many different problems from the outset. Do one, learn from the process, refine it and then take what you have learned into the next use case, refine and continue. Once you have a good grasp of the technology and understand how your business will use it, you can then start rolling out the technology wider across the business. Deciding on the use of roles in the context Once you have decided on that first initial use case and a context to create let's look at the details you need to decide upon. 
For each context, identify; Administrative rolesBusiness owner, the person who makes decisions about who may or may not see content in this context. This is often the person who wanted to use IRM and drove the business purchase. They are the usually the person with the most at risk when sensitive information is lost. Point of contact, the person who will handle requests for access to content. Sometimes the same as the business owner, sometimes a trusted secretary or administrator. Context administrator, the person who will enact the decisions of the Business Owner. Sometimes the point of contact, sometimes a trusted IT person. Document related rolesContributors, the people who create and edit documents in this context. Reviewers, the people who are involved in reviewing documents but are not trusted to secure information to this classification. This role is not always necessary. (See later discussion on Published-work and Work-in-Progress) Readers, the people who read documents from this context. Some people may have several of the roles above, which is fine. What you are trying to do is understand and define how the business interacts with your sensitive information. These roles obviously map directly to roles available in Oracle IRM. Reviewing the features and security for context roles At this point we have decided on a classification of information, understand what roles people in the business will play when administrating this classification and how they will interact with content. The final piece of the puzzle in getting the information for our first context is to look at the permissions people will have to sealed documents. First think why are you protecting the documents in the first place? It is to prevent the loss of leaking of information to the wrong people. To control the information, making sure that people only access the latest versions of documents. You are not using Oracle IRM to prevent unauthorized people from doing legitimate work. This is an important point, with IRM you can erect many barriers to prevent access to content yet too many restrictions and authorized users will often find ways to circumvent using the technology and end up distributing unprotected originals. Because IRM is a security technology, it is easy to get carried away restricting different groups. However I would highly recommend starting with a simple solution with few restrictions. Ensure that everyone who reasonably needs to read documents can do so from the outset. Remember that with Oracle IRM you can change rights to content whenever you wish and tighten security. Always return to the fact that the greatest value IRM brings is that ONLY authorized users can access secured content, remember that simple "one context for the entire business" model. At the start of the deployment you really need to aim for user acceptance and therefore a simple model is more likely to succeed. As time passes and users understand how IRM works you can start to introduce more restrictions and complexity. Another key aspect to focus on is handling exceptions. If you decide on a context model where engineering can only access engineering information, and sales can only access sales data. Act quickly when a sales manager needs legitimate access to a set of engineering documents. Having a quick and effective process for permitting other people with legitimate needs to obtain appropriate access will be rewarded with acceptance from the user community. 
These use cases can often be satisfied by integrating IRM with a good Identity & Access Management technology which simplifies the process of assigning users the correct business roles. The big print issue... Printing is often an issue of contention, users love to print but the business wants to ensure sensitive information remains in the controlled digital world. There are many cases of physical document loss causing a business pain, it is often overlooked that IRM can help with this issue by limiting the ability to generate physical copies of digital content. However it can be hard to maintain a balance between security and usability when it comes to printing. Consider the following points when deciding about whether to give print rights. Oracle IRM sealed documents can contain watermarks that expose information about the user, time and location of access and the classification of the document. This information would reside in the printed copy making it easier to trace who printed it. Printed documents are slower to distribute in comparison to their digital counterparts, so time sensitive information in printed format may present a lower risk. Print activity is audited, therefore you can monitor and react to users abusing print rights. Summary In summary it is important to think carefully about the way you create your context model. As you ask the business these questions you may get a variety of different requirements. There may be special projects that require a context just for sensitive information created during the lifetime of the project. There may be a department that requires all information in the group is secured and you might have a few senior executives who wish to use IRM to exchange a small number of highly sensitive documents with a very small number of people. Oracle IRM, with its very flexible context classification system, can support all of these use cases. The trick is to introducing the complexity to deliver them at the right level. In another article i'm working on I will go through some examples of how Oracle IRM might map to existing business use cases. But for now, this article covers all the important questions you need to get your IRM service deployed and successfully protecting your most sensitive information.

    Read the article

  • ADF TaskFlows Communications

    - by raghu.yadav
    Here is the list of various ADF Taskflows communication examples. http://www.oracle.com/technology/products/jdev/tips/fnimphius/CtxEvent/CtxEvent.html http://thepeninsulasedge.com/frank_nimphius/2008/02/07/adf-faces-rc-refreshing-a-table-ui-from-a-contextual-event/ http://www.oracle.com/technology/products/jdev/tips/fnimphius/generictreeselectionlistener/index.html http://www.oracle.com/technology/products/jdev/tips/fnimphius/syncheditformwithtree/index.html http://biemond.blogspot.com/2009/01/passing-adf-events-between-task-flow.html http://www.oracle.com/technology/products/jdev/tips/fnimphius/opentaskflowintab/index.html http://lucbors.blogspot.com/2010/03/adf-11g-contextual-event-framework.html http://thepeninsulasedge.com/blog/?cat=2 http://www.ora600.be/news/adf-contextual-events-11g-r1-ps1

    Read the article

  • Network authentication + roaming home directory - which technology should I look into using?

    - by Brian
    I'm looking into software which provides a user with a single identity across multiple computers. That is, a user should have the same permissions on each computer, and the user should have access to all of his or her files (roaming home directory) on each computer. There seem to be many solutions for this general idea, but I'm trying to determine the best one for me. Here are some details along with requirements: The network of machines are Amazon EC2 instances running Ubuntu. We access the machines with SSH. Some machines on this LAN may have different uses, but I am only discussing machines for a certain use (running a multi-tenancy platform). The system will not necessarily have a constant amount of machines. We may have to permanently or temporarily alter the amount of machines running. This is the the reason why I'm looking into centralized authentication/storage. The implementation of this effect should be a secure one. We're unsure if users will have direct shell access, but their software will potentially be running (under restricted Linux user names, of course) on our systems, which is as good as direct shell access. Let's assume that their software could potentially be malicious for the sake of security. I have heard of several technologies/combinations to achieve my goal, but I'm unsure of the ramifications of each. An older ServerFault post recommended NFS & NIS, though the combination has security problems according to this old article by Symantec. The article suggests moving to NIS+, but, as it is old, this Wikipedia article has cited statements suggesting a trending away from NIS+ by Sun. The recommended replacement is another thing I have heard of... LDAP. It looks like LDAP can be used to save user information in a centralized location on a network. NFS would still need to be used to cover the 'roaming home folder' requirement, but I see references of them being used together. Since the Symantec article pointed out security problems in both NIS and NFS, is there software to replace NFS, or should I heed that article's suggestions for locking it down? I'm tending toward LDAP because another fundamental piece of our architecture, RabbitMQ, has a authentication/authorization plugin for LDAP. RabbitMQ will be accessible in a restricted manner to users on the system, so I would like to tie the security systems together if possible. Kerberos is another secure authentication protocol that I have heard of. I learned a bit about it some years ago in a cryptography class but don't remember much about it. I have seen suggestions online that it can be combined with LDAP in several ways. Is this necessary? What are the security risks of LDAP without Kerberos? I also remember Kerberos being used in another piece of software developed by Carnegie Mellon University... Andrew File System, or AFS. OpenAFS is available for use, though its setup seems a bit complicated. At my university, AFS provides both requirements... I can log in to any machine, and my "AFS folder" is always available (at least when I acquire an AFS token). Along with suggestions for which path I should look into, does anybody have any guides which were particularly helpful? As the bold text pointed out, LDAP looks to be the best choice, but I'm particularly interested in the implementation details (Keberos? NFS?) with respect to security.

    Read the article

  • Innovation Java : 8ème édition du Duke's Choice Awards - Les candidatures sont ouvertes, quels sont

    Bonjour, Pour la 8ème année consécutive sont organisés les Duke's Choice Awards. Le principe : récompenser les innovations dans le monde Java (rapport innovation / moyens mis en oeuvre) dans différentes catégories. Quelques catégories présentes les années passées :Java Technology in Education Java Technology for the Environment Java Technology for the Open Source Community Java Everywhere! Java Technology Tools ... Les résultats seront probablement donnés lors de JavaOne du 19 au 23 septembre 2010. Des pronostics ? Voir également :

    Read the article

  • Downloading stuff from Oracle: an example

    - by user12587121
    Introduction Oracle has a lot of software on offer.  Components of the stack can evolve at different rates and different versions of the components may be in use at any given time.  All this means that even the process of downloading the bits you need can be somewhat daunting.  Here, by way of example, and hopefully to convince you that there is method in the downloading madness,  we describe how to go about downloading the bits for Oracle Identity Manager  (OIM) 11.1.1.5.Firstly, a couple of preliminary points: Folks with Oracle products already installed and looking for bug fixes, patch bundles or patch sets would go directly to the Oracle support website. This Oracle document is a comprehensive description of the Oracle FMW download process and the licensing that applies to downloaded software.   Downloading Oracle Identity Manager 11.1.1.5     To be sure we download the right versions, first locate the Certification Matrix for OIM 11.1.1.5: first go to the Fusion Certification Page then go to the “System Requirements and Supported Platforms for Oracle Identity and Access Management 11gR1” link. Let’s assume you have a 64 bit Linux Machine and an Oracle database already.  Then our  goal is to end up with a list of files like the following: jdk-6u29-linux-x64.bin                    (Java JDK)V26017-01.zip                             (the Repository Creation Utility to create the DB schemas)wls1035_generic.jar                       (the Weblogic Application Server)ofm_iam_generic_11.1.1.5.0_disk1_1of1.zip (the Identity Managament bits)ofm_soa_generic_11.1.1.5.0_disk1_1of2.zip (the SOA bits)ofm_soa_generic_11.1.1.5.0_disk1_2of2.zip jdevstudio11115install.exe                (optional: JDeveloper IDE)soa-jdev-extension.zip                    (optional: SOA extensions for JDeveloper) Downloading the bits 1.    Download the Java JDK, 64 bit version 1.6.0_24+.2.    Download the RCU: here you will see that the RCU is mentioned on the Identity Management home page but no link is provided.  Do not panic.  Due to the amount and turnover of software available only the latest versions are available for download from the main Oracle site.  Over time software gets moved on to the Oracle edelivery site and it is here that we find the RCU version we require: a.    Go to edelivery: https://edelivery.oracle.com b.    Choose Pack ‘ Oracle Fusion Middleware’ and ‘Linux x86-64’ c.    Click on ‘Oracle Fusion Middleware 11g Media Pack for Linux x86-64’ d.    Download: ‘Oracle Fusion Middleware Repository Creation Utility 11g (11.1.1.5.0) for Linux x86’ (V26017.zip) 3.    Download the Weblogic Application Server: WLS 10.3.54.    Download the Oracle Identity Manager bits: one point to clarify here is that currently  the Identity Management bits come in two trains, essentially one for the Directory Services piece and the other for the Access Management and Identity Management parts.  We need to be careful not to confuse the two, in particular to be clear which of the trains is being referred to by  the documentation: a.   So, with this in mind, go to ‘ Oracle Identity and Access Management (11.1.1.5.0)’ and download Disk1. 5.    Download the SOA bits: a.    Go to the edelivery area as for the RCU and download: i.    Oracle SOA Suite 11g Patch Set 4 (11.1.1.5.0) (Part 1 of 2) ii.    Oracle SOA Suite 11g Patch Set 4 (11.1.1.5.0) (Part 2 of 2) 6.    You will want to download some development tooling (for plugins or BPEL workflow development): a.    
Download Jdeveloper 11.1.1.5 (11.1.1.6 may work but best to stick to the versions that correspond to the WLS version we are using) b.    Go to the site for  SOA tools and download the SOA Composite Editor 11.1.1.5 That’s it, you may proceed to the installation. 

    Read the article

  • Building the Bootsector of BIOSLOADER

    - by Kate Moss' Open Space
    Windows CE is a 32 bits OS since day one, so it makes sense tools shipped with PB, compiler, linker, assembler and etc, are for targeting to 32 bits system. But occasionally, if you are developing x86 based system and especially working on some boot code, such as boot sector of BIOSLOADER, that will be a problem. Normally, as PB provides the prebuilt boot sector image but if you ever need to rebuilt it, what should you do? You may say as it's an x86, perhaps you can use VS or Windows SDK to build it. But unfortunately, today's desktop Windows tool chains are also 32 or even 64 bits only, you need to find something older. VC++ 6.0, but how can you find one? This Website http://thestarman.pcministry.com/asm/masm.htm arranges some useful resources. Basically, you need 2 thing, the 16 bits MASM and 16 bits linker. Just make it even easier for you Download http://download.microsoft.com/download/vb60ent/Update/6/W9X2KXP/EN-US/vcpp5.exe for Assembler (MASM). Download http://download.microsoft.com/download/vc15/Update/1/WIN98/EN-US/Lnk563.exe for the Linker. And then just extract the archives and what you need is ml.exe, ml.err and link.exe

    Read the article

  • FlasCC requirements and limitations?

    - by Arthur Wulf White
    It is now available for download. It says you need twice* as many bits as I have. Why would you need more bits to compile code? Does that mean you need more bits to run flash games writtes with flasCC Did anyone try it out and happens to know the answers? http://gaming.adobe.com/technologies/flascc/ Minimum system requirements Flash Player 11 or higher Flex SDK 4.6 or higher Java Virtual Machine (64-bit) Windows Microsoft® Windows® 7 (64-bit edition) Cygwin (included) *This is meant as a joke. however I do own a 32-bit laptop and I am wondering why you need 64-bit. Afaik - You only need 64-bit if you want to run a system that has more than 4gigs of memory. Why would any flash game require more than 4 gigs of memory. The only system that is 64-bits and does not have 4gigs of memory that I can quickly recall is that hilarious Nintendo that came ages ago with a Motorola CPU.

    Read the article

  • Wireless technology - which is better: more single radio APs or less dual radio APs?

    - by gert_78
    We are currently talking to vendors of wireless solutions for a wireless deployment in a university campus with some 5000 students. One vendor is offering us a Cisco solution with a WLC 5508 controller and 69 2x2 MIMO Dual-band/Dual radio APs (Aironet AP 1042 model) The other vendor is offering us an Aruba solution with a 3600 controller and 96 2x2 MIMO dual band BUT single radio APs (Aruba AP93) Both vendors are charging 82.000 US$ (support, 3y service contracts, switches and additional required options all included of course) The Aruba vendor is trying to convince me that 96 single radio APs will give us more connection/users/capacity then the 69 dual radio APs. I have my doubts about that and since it is my core competence-domain I wanted to ask here the opinion of people that have a more profound knowledge and experience in this area. When you talk to vendors it's often hard to get objective information. So try to answer only if you are sure and please mention it if you are affiliated with one of the vendors. I appreciate all useful help and want to thank you in advance for the effort!

    Read the article

  • how to record mic input and pipe the output to another program

    - by acrs
    Hi everyone Im trying to follow a tutorial on generating truly random bits How To Generate Truly Random Bits This is the command from the tutorial but it does not work rec -c 1 -d /dev/dsp -r 8000 -t wav -s w - | ./noise-filter >bits I know i can record my mic input using rec -c 1 no.wav this is the command i tried using rec -c 1 -r 8000 -t wav -s noise.wav | ./noise-filter >bits but i get root@xxc:~/cc# rec -c 1 -r 8000 -t wav -s noise.wav - | ./noise-filter >bits rec WARN formats: can't set sample rate 8000; using 48000 rec FAIL sox: Input files must have the same sample-rate I have complied noise-filter noise-filter I think the tutorial is using an older version of SOX and REC I'm using sox: SoX v14.3.2 on Ubuntu 12.04 server Can someone please help me ?

    Read the article

  • Can I animate render targets or the swap chain?

    - by Eric F.
    I want to animate some synthetic video bits to fullscreen w/o tearing. Can I set up D3D 9/10/11 in exclusive mode, and have it present a series of buffers that I'm writing to? I know how to copy system memory bits into a texture, then draw that texture as a fullscreen quad, but it seems like overkill. Why should I use the triangle rasterizer when I want to do something so simple? All I want to do is set up a long (4-8 buffer) swapchain and set the bits of the back buffer that is about to be displayed. Or, I want to allocate 4-8 RenderTargets, and on each frame, copy the bits from system memory to the RenderTarget, then set it as the next thing to display. I've never seen or heard about anybody doing this, but it seems so dead simple!

    Read the article

< Previous Page | 55 56 57 58 59 60 61 62 63 64 65 66  | Next Page >