Search Results

Search found 8266 results on 331 pages for 'distributed systems'.


  • Announcing the latest update to Oracle VM Server for x86 2.2 Release

    - by Honglin Su
    More and more customers have discovered that Oracle delivers more value with Oracle VM compared with other server virtualization solutions, and we've seen the momentum build as customers succeed with Oracle VM and leading partners support it. Recently, Oracle VM Server for x86 with Windows PV Drivers passed the Microsoft SVVP requirements for Windows servers, which gives customers more confidence to deploy Microsoft Windows guest OSes onto Oracle VM Server for x86. Today I'm pleased to announce the Oracle VM Server for x86 2.2.2 release. The new features introduced in the 2.2.2 release:
    - Expanded guest OS support to include Oracle Linux 6.x and Oracle Solaris 11 Express. For a complete list of supported guest OSes, please refer to the Oracle VM Server for x86 release notes.
    - The VMPInfo system information and cluster troubleshooting utility. Additional information on VMPInfo is available in the following My Oracle Support notes: 1263293.1 (Post-installation check list for new Oracle VM Server) and 1290587.1 (Performing Site Reviews and Cluster Troubleshooting with VMPInfo).
    - A new storage repository option to provide NFS mount options when creating a storage repository. For more information on this new parameter, see the Oracle VM Server User Guide, "Adding a Storage Repository".
    - Updated OCFS2 cluster file system, libdhcp, and device drivers. See details and additional enhancements in the Oracle VM Server User Guide: New Features in Release 2.2.2.
    The Oracle VM Server for x86 ISO image is available for download at Oracle's E-Delivery site. If you've subscribed to Oracle's Unbreakable Linux Network (ULN), you can simply run the up2date command to update the server; please refer to the Oracle VM Upgrade Guide: Upgrading Oracle VM Server. There's no change to Oracle VM Manager, which remains at 2.2.0 with patch 2.2-16. If you have any questions about Oracle VM Server for x86, you can post them on the OTN discussion forum, or purchase support for Oracle Unbreakable Linux and Oracle VM. For Oracle's x86 systems, Oracle VM support as well as Oracle Linux and Oracle Solaris support are included in Oracle Premier Support for Systems. For more information about Oracle's virtualization offerings, visit oracle.com/virtualization.

    Read the article

  • AIIM Best Practice Awards to Two Oracle Customers

    - by [email protected]
    On Tuesday night at the AIIM Awards Banquet, two Oracle customers and their implementation partners won awards for their Oracle Enterprise 2.0 implementations. The Bureau of Indian Affairs, a division of the Department of the Interior, won a Carl E. Nelson Best Practices Award for its implementation of Oracle WebCenter and Oracle Content Management to provide an interactive social media environment that engages and informs its constituent communities. The BIA Citizen Portal provides all the services of the Bureau of Indian Affairs to the community of 564 federally recognized tribes that include over 1.9 million American Indians and Alaska Natives. This implementation was achieved with the support of Oracle partner Mythics. The Charles Town Police Department deployed Oracle Content Management to integrate with and support its police evidence system, in partnership with Oracle partner EDAC Systems Inc. Diane Hoppe of EDAC Systems Inc. was on hand to receive the award for the Charles Town Police Department. Pictures of the award winners: Linus Chow (Oracle), John Mancini (President of AIIM), and Diane Hoppe (EDACS) for the Charles Town Police; and John Mancini, Linus Chow, Chris Baker (Mythics), and the Bureau of Indian Affairs team for the BIA. You can read more in the AIIM press release.

    Read the article

  • Oracle VM Blade Cluster Reference Configuration

    - by Ferhat Hatay
    Today we are happy to announce the availability of the Oracle VM blade cluster reference configuration for Sun Blade 6000 modular systems. The new Oracle VM blade cluster reference configuration can help reduce the time to deploy virtual infrastructure by up to 98 percent when compared to multi-vendor configurations. Oracle's virtualization strategy is to simplify the deployment, management, and support of the enterprise stack from application to disk. The Oracle VM blade cluster reference configuration is a single-vendor solution that addresses every layer of the virtualization stack with Oracle hardware and software components. It enables quick and easy deployment of the virtualized infrastructure using components that have been tested together and are all supported together by one vendor — Oracle. All components listed in the reference configuration have been tested together by Oracle, reducing the need for customer testing and the time-consuming and complex effort of designing and deploying a stable configuration. The configuration benefits from pre-installed Oracle VM Server for x86 software on Oracle's highly scalable and reliable Sun Blade servers with built-in networking and Oracle's Sun ZFS Storage Appliance product line. It provides high availability via the blade cluster, as well as a documented best practice guide that helps reduce deployment time and cost for customers implementing highly virtualized applications or private cloud Infrastructure as a Service (IaaS) architectures. To further support easier, faster and lower-cost deployments, Oracle Linux, Oracle Solaris and Oracle VM are available pre-installed on select Sun x86 systems, and Oracle VM Templates are available for download for Oracle Applications, Oracle Fusion Middleware, Oracle Database, Oracle Real Application Clusters, and many other Oracle products. Key benefits of the Oracle VM blade cluster reference configuration include:
    - Faster time to value – Begin deploying applications immediately because the optimized software stack is pre-configured for best practices and is ready to run on the recommended hardware platforms.
    - Reduced deployment cost and risk – The entire hardware and software stack has been tested and is supported together by Oracle.
    - Elastic scalability – As capacity needs grow, the system can be easily scaled in multiple dimensions with the ability to add compute, storage, and networking resources independently.
    For more information, see:
    - Oracle white paper: Accelerating deployment of virtualized infrastructures with the Oracle VM blade cluster reference configuration
    - Oracle technical white paper: Best Practices and Guidelines for Deploying the Oracle VM Blade Cluster Reference Configuration

    Read the article

  • The Other "C" in CRM

    - by Brian Dayton
    Folks who know me know that I rarely, if ever, talk politics. And I never talk politicians. Having grown up in a household with one parent leaning left and the other leaning to the right, it was the best way to keep the peace. This isn't about politics. It's about "constituents" and the need to improve services and service levels for people at the city, county, and state/province level, all the way up to national governments. As a citizen and taxpayer, it's also important to me that these services be provided at a reasonable cost. If there's a better and more efficient way to do something, then it's my hope that a public sector organization takes advantage of technology the same way private sector companies do. Social services organizations have a complex job. They provide the services that people need, from healthcare and children's assistance to helping people find jobs. But many of these organizations are still managing these processes manually or with outdated, home-grown applications, some of which could have been written up to 30 years ago. A lot has changed in technology. On the (this is as political as I'm going to get) political front, stakeholders like you and me are expecting greater transparency on where and how funds are spent. I'll admit that most of the time, when I think about CRM systems, I think about my experience as a customer of my bank, utilities company or cable operator. But now that I'm older and have children and a house, I find myself interacting more and more with agencies and services organizations. My experiences are sometimes good and sometimes not so good. Along those lines, last week's announcement of Siebel CRM 8.2 for Public Sector caught my eye. You may not work in the public sector, but you are a constituent of some--actually a lot--of public sector organizations. I don't know which CRM systems my city and county use, but I'm going to start paying closer attention.

    Read the article

  • Profile of Scott L Newman

    - by Ratman21
    To: Whom It May Concern
    From: Scott L Newman
    Date: 4/23/2010
    Re: Profile
    Who is he, what can he do? Two very good questions. #1. I am an Information Technology Professional with 20+ years of experience (hold on, don't hit delete yet!), who is not over the hill (I am on top of it) and still knows how to do (and can still do) that thing called work! #2. A can-do attitude that does not allow problems to sit unfixed. I have a broad range of skills, including:
    - Certified CompTIA A+, Security+ and Network+ Technician
    - 2.5 years of (NOC) network experience on a large Cisco-based WAN (UK to Austria)
    - 20 years of MIS/DP experience (yes, I can do IBM mainframes and Tandem NonStops too)
    - 18 years of experience as technical help desk support (panicking users, no problem)
    - 18 years of experience with PC/server-based systems, intranet and internet systems
    - 10+ years of experience with Microsoft Office, Windows XP and data network fundamentals (YES, I do Windows)
    - Strong troubleshooting skills for software, hardware and circuit issues (and I can tell you what kind of horrors I had to face on all of them)
    - Very experienced in working with customers on problems (again, panicking users, no problem)
    - Working experience with remote access (VPN/SecurID); I didn't just study them, I worked on/with them
    - Skilled in getting info for and creating documentation for operation procedures (I do not just wait for them to give it to me, I go out and get it; waiting for info on working applications is, well, dumb)
    - Multiple software languages (hey, I have done some programming)
    - And much more experience in "IT" (mortgage, stocks and financial information systems experience, and I have worked "IT" in a hospital)
    I can multitask, and I have the ability to adapt to change and learn quickly. (I was once put in charge of a system that I had not worked with for over two years. Talk about having to relearn and adapt to changes fast. But I did it.) The summary is that I know what to do, how to keep things going, and how to fix it when it breaks.
    Scott L. Newman
    Confidential

    Read the article

  • Build a database from MS Word list information...

    - by Jayron Soares
    Can someone please advise me on how to approach this problem? I have a sequential list of metadata in a document in MS Word. The basic idea is to create a Python algorithm to iterate over the information, retrieving just the name of each PROCESS, which is then queued into a database. For example:
    Process: Process Walker (1965)
    Exact reference: Walker Process Equipment, Inc. v. Food Machinery Corp.
    Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol=
    Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit.
    Parties: Walker Process Equipment, Inc.
    Sector: Systems is …
    Start Date: October 12-13 Arguedas, 1965
    Summary: Food Machinery Company has initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment.
    Report of the evolution process: petitioner, in answer to respond ..
    Importance: a) First case which established an analysis for the diagnosis of dispute…
    There are about 200 pages containing information like the above. I have in mind the idea of creating an algorithm in Python to be able to break up this sequenced information and store it in a web database [an open source application that I'm looking for] in order to allow free consultations ...
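    One possible starting point (a minimal sketch of my own, not from the question; it assumes the python-docx library, a file saved in .docx format, and that each record begins with a "Process:" paragraph):

      # Hypothetical sketch: pull the "Process:" names out of a .docx file
      # and queue them into a SQLite table for later consultation.
      import sqlite3
      from docx import Document  # pip install python-docx

      doc = Document("metadata.docx")  # hypothetical file name
      names = [p.text.split(":", 1)[1].strip()
               for p in doc.paragraphs
               if p.text.strip().startswith("Process:")]

      conn = sqlite3.connect("processes.db")
      conn.execute("CREATE TABLE IF NOT EXISTS process (name TEXT)")
      conn.executemany("INSERT INTO process (name) VALUES (?)",
                       [(n,) for n in names])
      conn.commit()
      conn.close()
      print(len(names), "process names queued")

    The same loop is a natural place to capture the other fields (Exact reference, Link, and so on) into additional columns once the record boundaries are reliable.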

    Read the article

  • Into Orbit (OBIEE 11g Launch)

    - by Darryn.Hinett
    After much anticipation, it appears that OBIEE 11g is about to hit the streets. Join Charles Phillips, President, and Thomas Kurian, Executive Vice President, Product Development, for the launch of the latest release of Oracle's business intelligence software. Be the first to hear about Oracle Business Intelligence Enterprise Edition 11g, the new, industry-leading technology platform for business intelligence, which offers:
    - A powerful end-user experience with rich visualisation, search, and actionable collaboration
    - Advancements in analytics, OLAP, and enterprise reporting, with unmatched performance and scalability
    - Simplified system configuration, life-cycle management, and performance optimisation
    As well as the keynote and technical general session, breakout sessions will cover the following topics:
    - Business Intelligence: From Insight to Action. In this session, you will learn about an exciting, industry-first innovation that connects business intelligence directly to your business processes. You can spot an opportunity or issue, and immediately initiate appropriate action directly from your dashboard.
    - Oracle Business Intelligence Enterprise Edition 11g Systems Management and Deployment. Learn how you can streamline the process of configuring your system, provisioning users, and monitoring and optimising query performance. Attend this session to hear how new integration with Oracle Enterprise Manager provides unique systems management, superior scalability, and high availability and security benefits, while making upgrades effortless.
    - Extending Business Intelligence Analytics with Online Analytical Processing (OLAP). Learn how you can enhance the analytical power and business value of your BI solution with a unified environment for navigating and querying both OLAP and relational data sources. This session will focus on how Oracle Business Intelligence Enterprise Edition 11g, used with Oracle Essbase, can deliver insight at the speed of thought.
    - Integrated Performance Management. If your organisation is using or considering performance management applications such as Oracle's Hyperion Planning and Hyperion Financial Management, you will not want to miss this session. See how you can leverage Oracle's BI solution for accessing performance management applications and performing extended financial reporting and analysis.
    - Visualisation and End-user Experience. The latest release of Oracle Business Intelligence provides an unrivalled end-user experience, including rich interactive dashboards, a vast range of animated charting options, integrated search, and more. This session will also include a close look at how you can leverage location data to visualise geo-spatial information.

    Read the article

  • Running a simple integration scenario using the Oracle Big Data Connectors on a Hadoop/HDFS cluster

    - by hamsun
    Between the elephant (the traditional image of the Hadoop framework) and the Oracle Iron Man (Big Data...), an English setter could be seen as the link to the right data. Data, data, data: we are living in a world where data technologies based on popular applications (search engines, web servers, rich SMS messages, email clients, weather forecasts and so on) have a predominant role in our lives. More and more technologies are used to analyze/track our behavior, trying to detect patterns and to propose "the best/right user experience", from the Google Ad services to telco companies or large consumer sites (like Amazon :) ). The more we use all these technologies, the more data we generate, and thus there is a need for huge data marts and specific hardware/software servers (such as the Exadata servers) in order to treat/analyze/understand the trends and offer new services to the users. Some of these "data feeds" are raw, unstructured data, and cannot be processed effectively by normal SQL queries. Large-scale distributed processing was an emerging infrastructure need, and the solution seemed to be the "collocation of compute nodes with the data", which in turn led to MapReduce parallel patterns and the development of the Hadoop framework, which is based on MapReduce and a distributed file system (HDFS) that runs on large clusters of rather inexpensive servers. Several Oracle products use the distributed/aggregation pattern for data calculation (Coherence, NoSQL Database, TimesTen), so once you are familiar with one of these technologies, let's say Coherence aggregators, you will find the whole Hadoop MapReduce concept very similar. Oracle Big Data Appliance is based on the Cloudera Distribution (CDH), and the Oracle Big Data Connectors can be plugged into a Hadoop cluster running the CDH distribution or equivalent Hadoop clusters. In this paper, a "lab-like" implementation of this concept is done on a single Linux x64 server, running an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 and a single-node Apache hadoop-1.2.1 HDFS cluster, using the SQL connector for HDFS. The whole setup is fairly simple:
    - Install on a Linux x64 server (or VirtualBox appliance) an Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 server.
    - Get the Apache Hadoop distribution from: http://mir2.ovh.net/ftp.apache.org/dist/hadoop/common/hadoop-1.2.1.
    - Get the Oracle Big Data Connectors from: http://www.oracle.com/technetwork/bdc/big-data-connectors/downloads/index.html?ssSourceSiteId=ocomen.
    - Check the Java version of your Linux server with the command:
      java -version
      java version "1.7.0_40"
      Java(TM) SE Runtime Environment (build 1.7.0_40-b43)
      Java HotSpot(TM) 64-Bit Server VM (build 24.0-b56, mixed mode)
    - Decompress the hadoop-1.2.1.tar.gz file to /u01/hadoop-1.2.1.
    - Modify your .bash_profile:
      export HADOOP_HOME=/u01/hadoop-1.2.1
      export PATH=$PATH:$HADOOP_HOME/bin
      export HIVE_HOME=/u01/hive-0.11.0
      export PATH=$PATH:$HADOOP_HOME/bin:$HIVE_HOME/bin
      (also see my sample .bash_profile)
    - Set up ssh trust for the Hadoop process. This is a mandatory step; in our case we have to establish a "local trust", as we are using a single-node configuration: copy the new public keys to the list of authorized keys, then connect and test the ssh setup to your localhost.
    We will run a "pseudo-Hadoop cluster" in what is called "local standalone mode": all the Hadoop Java components run in one Java process, which is enough for our demo purposes.
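    The next step is fine-tuning the Hadoop configuration files. For reference, a minimal core-site.xml for such a single-node lab might look like the sketch below (my illustration, not from the original article; the namenode URI matches the hdfs://localhost:19000 paths that appear in the listings later, and the data directory matches the layout described next, but your values may differ):

      <?xml version="1.0"?>
      <!-- Hypothetical minimal core-site.xml for a single-node Hadoop 1.x lab -->
      <configuration>
        <property>
          <name>fs.default.name</name>          <!-- the HDFS namenode URI -->
          <value>hdfs://localhost:19000</value>
        </property>
        <property>
          <name>hadoop.tmp.dir</name>           <!-- base for the local HDFS/MapReduce layout -->
          <value>/u01/hadoop-1.2.1/data</value>
        </property>
      </configuration>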
    We need to fine-tune some Hadoop configuration files: go to $HADOOP_HOME/conf and modify the files core-site.xml, hdfs-site.xml and mapred-site.xml. Check that the hadoop binaries are referenced correctly from the command line by executing: hadoop -version. As Hadoop manages our "clustered HDFS" file system, we have to create "the mount point" and format it; the mount point is declared in core-site.xml as shown in the sketch above. The layout under /u01/hadoop-1.2.1/data will be created and used by the other Hadoop components (MapReduce uses the /mapred/... layout, HDFS the /dfs/... layout). Format the HDFS Hadoop file system, then start the Java components for the HDFS system. As an additional check, you can use the GUI Hadoop browsers to check the content of your HDFS configuration. Once our HDFS Hadoop setup is done, you can use the HDFS file system to store data (big data :) ) and plug it back and forth to Oracle databases by means of the Big Data Connectors (which is the next configuration step). You can create/use a Hive db, but in our case we will make a simple integration of "raw data" through the creation of an external table in a local Oracle instance (on the same Linux box, we run the Hadoop HDFS one-node cluster and one Oracle DB). Download some public "big data"; I use the site http://france.meteofrance.com/france/observations, from where I can get *.csv files for my big data simulations :). Here is the data layout of my example file: Download the Big Data Connector from OTN (oraosch-2.2.0.zip) and unzip it to your local file system (see picture below). Modify your environment in order to access the connector libraries, and make the following test:
    [oracle@dg1 bin]$ ./hdfs_stream
    Usage: hdfs_stream locationFile
    [oracle@dg1 bin]$
    Load the data into the Hadoop HDFS file system:
    hadoop fs -mkdir bgtest_data
    hadoop fs -put obsFrance.txt bgtest_data/obsFrance.txt
    hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
    [oracle@dg1 bg-data-raw]$ hadoop fs -ls /user/oracle/bgtest_data/obsFrance.txt
    Found 1 items
    -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
    [oracle@dg1 bg-data-raw]$ hadoop fs -ls hdfs:///user/oracle/bgtest_data/obsFrance.txt
    Found 1 items
    -rw-r--r-- 1 oracle supergroup 54103 2013-10-22 06:10 /user/oracle/bgtest_data/obsFrance.txt
    Check the content of the HDFS with the browser UI. Start the Oracle database, and run the following script in order to create the Oracle database user and the Oracle directories for the Oracle Big Data Connector (dg1 is my own DB SID; replace it with yours):
    #!/bin/bash
    export ORAENV_ASK=NO
    export ORACLE_SID=dg1
    . oraenv
    sqlplus /nolog <<EOF
    CONNECT / AS sysdba;
    CREATE OR REPLACE DIRECTORY osch_bin_path AS '/u01/orahdfs-2.2.0/bin';
    CREATE USER BGUSER IDENTIFIED BY oracle;
    GRANT CREATE SESSION, CREATE TABLE TO BGUSER;
    GRANT EXECUTE ON sys.utl_file TO BGUSER;
    GRANT READ, EXECUTE ON DIRECTORY osch_bin_path TO BGUSER;
    CREATE OR REPLACE DIRECTORY BGT_LOG_DIR as '/u01/BG_TEST/logs';
    GRANT READ, WRITE ON DIRECTORY BGT_LOG_DIR to BGUSER;
    CREATE OR REPLACE DIRECTORY BGT_DATA_DIR as '/u01/BG_TEST/data';
    GRANT READ, WRITE ON DIRECTORY BGT_DATA_DIR to BGUSER;
    EOF
    Put the following in a file named t3.sh and make it executable:
    hadoop jar $OSCH_HOME/jlib/orahdfs.jar \
    oracle.hadoop.exttab.ExternalTable \
    -D oracle.hadoop.exttab.tableName=BGTEST_DP_XTAB \
    -D oracle.hadoop.exttab.defaultDirectory=BGT_DATA_DIR \
    -D oracle.hadoop.exttab.dataPaths="hdfs:///user/oracle/bgtest_data/obsFrance.txt" \
    -D oracle.hadoop.exttab.columnCount=7 \
    -D oracle.hadoop.connection.url=jdbc:oracle:thin:@//localhost:1521/dg1 \
    -D oracle.hadoop.connection.user=BGUSER \
    -D oracle.hadoop.exttab.printStackTrace=true \
    -createTable --noexecute
    Then test the creation of the external table with it:
    [oracle@dg1 samples]$ ./t3.sh
    ./t3.sh: line 2: /u01/orahdfs-2.2.0: Is a directory
    Oracle SQL Connector for HDFS Release 2.2.0 - Production
    Copyright (c) 2011, 2013, Oracle and/or its affiliates. All rights reserved.
    Enter Database Password:
    The create table command was not executed. The following table would be created.
    CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081035-74-1' ) ) PARALLEL REJECT LIMIT UNLIMITED;
    The following location files would be created.
    osch-20131022081035-74-1 contains 1 URI, 54103 bytes
    54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
    Then remove the --noexecute flag and create the external Oracle table for the Hadoop data. Check the results:
    The create table command succeeded.
    CREATE TABLE "BGUSER"."BGTEST_DP_XTAB" ( "C1" VARCHAR2(4000), "C2" VARCHAR2(4000), "C3" VARCHAR2(4000), "C4" VARCHAR2(4000), "C5" VARCHAR2(4000), "C6" VARCHAR2(4000), "C7" VARCHAR2(4000) ) ORGANIZATION EXTERNAL ( TYPE ORACLE_LOADER DEFAULT DIRECTORY "BGT_DATA_DIR" ACCESS PARAMETERS ( RECORDS DELIMITED BY 0X'0A' CHARACTERSET AL32UTF8 STRING SIZES ARE IN CHARACTERS PREPROCESSOR "OSCH_BIN_PATH":'hdfs_stream' FIELDS TERMINATED BY 0X'2C' MISSING FIELD VALUES ARE NULL ( "C1" CHAR(4000), "C2" CHAR(4000), "C3" CHAR(4000), "C4" CHAR(4000), "C5" CHAR(4000), "C6" CHAR(4000), "C7" CHAR(4000) ) ) LOCATION ( 'osch-20131022081719-3239-1' ) ) PARALLEL REJECT LIMIT UNLIMITED;
    The following location files were created.
    osch-20131022081719-3239-1 contains 1 URI, 54103 bytes
    54103 hdfs://localhost:19000/user/oracle/bgtest_data/obsFrance.txt
    This is the view from SQL Developer. And finally, the number of lines in the Oracle table, imported from our Hadoop HDFS cluster:
    SQL> select count(*) from "BGUSER"."BGTEST_DP_XTAB";
    COUNT(*)
    ----------
    1151
    In a later post we will integrate data from a Hive database and try some ODI integrations with the ODI Big Data connector. Our simple approach is just a first step to show you how this unstructured-data world can be integrated into an Oracle infrastructure. Hadoop, Big Data and NoSQL are great technologies; they are widely used, and Oracle offers a large integration infrastructure based on these services. Oracle University presents a complete curriculum on all the related Oracle technologies:
    NoSQL:
    - Introduction to Oracle NoSQL Database
    - Using Oracle NoSQL Database
    Big Data:
    - Introduction to Big Data
    - Oracle Big Data Essentials
    - Oracle Big Data Overview
    Oracle Data Integrator:
    - Oracle Data Integrator 12c: New Features
    - Oracle Data Integrator 11g: Integration and Administration
    - Oracle Data Integrator: Administration and Development
    - Oracle Data Integrator 11g: Advanced Integration and Development
    Oracle Coherence 12c:
    - Oracle Coherence 12c: New Features
    - Oracle Coherence 12c: Share and Manage Data in Clusters
    Oracle GoldenGate 11g:
    - Oracle GoldenGate 11g: Fundamentals for Oracle
    - Oracle GoldenGate 11g: Fundamentals for SQL Server
    - Oracle GoldenGate 11g: Fundamentals for DB2
    - Oracle GoldenGate 11g: Fundamentals for Teradata
    - Oracle GoldenGate 11g: Fundamentals for HP NonStop
    - Oracle GoldenGate 11g: Management Pack Overview
    - Oracle GoldenGate 11g: Troubleshooting and Tuning
    - Oracle GoldenGate 11g: Advanced Configuration for Oracle
    Other resources:
    - Apache Hadoop: http://hadoop.apache.org/ is the home page for these technologies.
    - "Hadoop: The Definitive Guide, 3rd Edition" by Tom White is a classic read for people who want to know more about Hadoop, and some active "googling" will also give you more references.
    About the author: Eugene Simos is based in France and joined Oracle through the BEA-WebLogic acquisition, where he worked in Professional Services, Support, and Education for major accounts across the EMEA region. He has worked in the banking sector and with AT&T and telco companies, giving him extensive experience of production environments. Eugene currently specializes in Oracle Fusion Middleware, teaching an array of courses on WebLogic/WebCenter, Content, BPM/SOA/Identity-Security/GoldenGate/Virtualisation/Unified Communications Suite throughout the EMEA region.

    Read the article

  • Slide-decks from recent Adelaide SQL Server UG meetings

    - by Rob Farley
    The UK has been well represented this summer at the Adelaide SQL Server User Group, with presentations from Chris Testa-O’Neill (isn’t that the right link? Maybe try this one) and Martin Cairney. The slides are available here and here. I thought I’d particularly mention Martin’s, and how it’s relevant to this month’s T-SQL Tuesday. Martin spoke about Policy-Based Management and the Enterprise Policy Management Framework – something which is remarkably under-used, and yet which can really impact your ability to look after environments. If you have policies set up, then you can easily test each of your SQL instances to see if they still satisfy the set of policies as defined. Automation (the topic of this month’s T-SQL Tuesday) should mean that your life is made easier, thereby enabling you to do more. It shouldn’t remove the human element, but it should remove (most of) the human errors. People still need to manage the situation and work out what needs to be done, etc. We haven’t reached a point where computers can replace people, but they are very good at replacing the mundaneness and monotony of our jobs. They’ve made our lives more interesting (although many would rightly argue that they have also made our lives more complex) by letting us focus on the stuff that changes. Martin named his talk Put Your Feet Up, which nicely expresses the fact that managing systems shouldn’t be about running around checking things all the time. It must be about having systems in place which tell you when things aren’t going well. It’s never quite as simple as being able to actually put your feet up, but certainly no system should require constant attention. It’s definitely a policy we at LobsterPot adhere to, whether it’s an alert to let us know that an ETL package has run successfully, or a script that generates some code for a report. If things can be automated, it reduces the chance of error, reduces the repetitive nature of work, and in general keeps both consultants and clients much happier.

    Read the article

  • A BYOD World in Mobile Enterprise Brings the Need to Adapt

    - by Webgui
    Yesterday brought a lot of news coverage that Cisco has stopped funding and planning its Cius enterprise-grade tablet. Citing “market transitions” in which an increasing number of people bring their own smartphones and tablets to work, Cisco General Manager OJ Winge said in a post on the company's official blog that “Cisco will no longer invest in the Cisco Cius tablet form factor, and no further enhancements will be made to the current Cius endpoint beyond what’s available today.” Employees are “bringing their preferences to work” and collaboration “has to happen beyond a walled garden,” he said. The blog post also cited a recently released Cisco study which found that 95% of organizations surveyed allow employee-owned devices in some way, shape or form in the office, and 36% of surveyed enterprises provide full support for employee-owned devices. How is Cisco planning to move forward and adapt to this changing business environment? Instead of focusing on tablets for enterprise customers, Cisco will “double down” on software that works across a variety of operating systems, smartphones and tablets, Winge said. See the post from the Cisco blog here: http://blogs.cisco.com/collaboration/empowering-choice-in-collaboration/ We at Gizmox recognize this need to adapt to the changing environment. Our Enterprise Mobile solution is designed and built for that post-PC, BYOD business world. We recognized the importance of providing a cross-platform solution that can easily target different devices and operating systems, so we went with a web-based mobile application approach built on the new open web standard, HTML5. Our solution provides both the client-side and server-side programming, and its uniqueness is that it allows you to build those cross-platform HTML5 mobile applications within Visual Studio using classic visual form-based development. As a result, .NET developers can build secure, efficient, data-centric enterprise mobile applications for cross-platform mobile devices with their existing skills and tools. See our new video about our Enterprise Mobile solution. Enterprise applications today need to work on all devices, across different platforms and OSs. It’s just a fact of life. How about you: do you bring your own device to work? What’s your company’s BYOD policy?

    Read the article

  • Ameristar Wins with Oracle GoldenGate’s Heterogeneous Real-Time Data Integration

    - by Irem Radzik
    Today we issued a press release about another successful project with Oracle GoldenGate, this time at Ameristar. Ameristar is a casino gaming company that needed a single data integration solution to connect multiple heterogeneous systems to its Teradata data warehouse. The project involves integrating Ameristar’s promotional and gaming data from 14 data sources across its 7 casino hotel properties, in real time, into a central Teradata data warehouse. The source systems include the Aristocrat gaming and MGT promotional management platforms running on Microsoft SQL Server 2000 databases. As you may notice, there was no Oracle Database involved in this project, but Ameristar’s IT leadership knew that GoldenGate’s strong heterogeneous and real-time data integration capabilities made it the right technology for their data warehousing project. With GoldenGate, Ameristar was able to reduce data latency to the enterprise data warehouse and use this real-time customer information to help its marketing teams improve the overall customer experience. Ameristar customers receive more targeted and timely campaign offers, and the company has more up-to-date visibility into its financial metrics. One other key benefit the company experienced with GoldenGate is in operational costs. The previous data capture solution Ameristar used was trigger-based and required a lot of effort to manage; dedicated IT staff were needed to maintain it. With GoldenGate, the solution runs seamlessly without needing fully dedicated staff, giving the IT team at Ameristar more resources for their other IT projects. If you want to learn more about GoldenGate and the latest features for Oracle Database and non-Oracle databases, please watch our on-demand webcast about Oracle GoldenGate 11g Release 2.

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160-bit (20-byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and the Solaris kernel module sha1. The optimized code is used automatically on systems with an x86 CPU supporting SSSE3 (Intel Supplemental SSSE3). Intel microprocessor architectures that support SSSE3 include the Nehalem, Westmere, and Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge). Although SHA-1 is considered obsolete because of weaknesses found in the algorithm (NIST recommends using at least SHA-256), SHA-1 is still widely used and will be with us for a while longer. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is used heavily in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS.
    Optimizations Review
    SHA-1 operates by reading an arbitrary amount of data. The data is read in 512-bit (64-byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64-byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to it. Each round uses a 32-bit intermediate value, called W[i], obtained as follows: The first 16 rounds, rounds 0 to 15, read the 512-bit block 32 bits at a time, and those 32 bits are used as input to the round. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input: specifically, for round i the algorithm XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round are a series of AND, XOR, and ROTATE-LEFT operations on the 32-bit input and some constants, and the 32-bit result is saved as W[i] for round i. (After all 80 rounds of a block, the five 32-bit state words are updated; the concatenation of those five words after the final block is the 160-bit SHA-1 checksum.)
    Optimization: Vectorization
    The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds: because of step 2 above, computing round i depends on the result of round i-3, W[i-3], so one can vectorize only 3 rounds at a time. Max Locktyukhin found, through simple factoring explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with:
    W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1
    initialize W[i] as follows:
    W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2
    That means 6 rounds can be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture (although the microprocessor has to support vectorization to use it), and it exploits one of the weaknesses of SHA-1.
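    If you'd like to convince yourself of that factoring without working through the algebra, here is a quick self-check (my sketch, not from Intel's paper): it builds the standard SHA-1 message schedule for one block, then verifies the factored recurrence for every round where 32 rounds of history exist.

      # Self-check of the factored SHA-1 schedule recurrence:
      # for i >= 32, ROL1(W[i-3]^W[i-8]^W[i-14]^W[i-16])
      #           == ROL2(W[i-6]^W[i-16]^W[i-28]^W[i-32])
      import random

      def rol(x, n):
          # 32-bit rotate left
          return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

      W = [random.getrandbits(32) for _ in range(16)]   # rounds 0-15: message words
      for i in range(16, 80):                           # standard schedule
          W.append(rol(W[i-3] ^ W[i-8] ^ W[i-14] ^ W[i-16], 1))

      for i in range(32, 80):                           # factored form needs i >= 32
          assert W[i] == rol(W[i-6] ^ W[i-16] ^ W[i-28] ^ W[i-32], 2)
      print("factored recurrence matches for rounds 32-79")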
    Optimization: SSSE3
    Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], and W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions:
    movdqa W_minus_04, W_TMP
    pxor W_minus_28, W            // W equals W[i-32:i-29] before XOR
                                  // W = W[i-32:i-29] ^ W[i-28:i-25]
    palignr $8, W_minus_08, W_TMP // W_TMP = W[i-6:i-3], combined from
                                  // W[i-4:i-1] and W[i-8:i-5] vectors
    pxor W_minus_16, W            // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
    pxor W_TMP, W                 // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
    movdqa W, W_TMP               // 4 dwords in W are rotated left by 2
    psrld $30, W                  // rotate left by 2: W = (W >> 30) | (W << 2)
    pslld $2, W_TMP
    por W, W_TMP
    movdqa W_TMP, W               // four new W values W[i:i+3] are now calculated
    paddd (K_XMM), W_TMP          // adding 4 current round's values of K
    movdqa W_TMP, (WK(i))         // storing for downstream GPR instructions to read
    A window of the 32 previous results, W[i-1] to W[i-32], is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, the rounds are computed sequentially (each "R" represents 1 round of SHA-1 computation):
    RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR
    With vectorization, 4 rounds can be computed in parallel:
    RRRRRRRRRRRRRRRRRRRR
    RRRRRRRRRRRRRRRRRRRR
    RRRRRRRRRRRRRRRRRRRR
    RRRRRRRRRRRRRRRRRRRR
    Optimization: AVX
    The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, an input and an output. AVX allows three operands: two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above:
    pxor W_minus_16, W            // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13]
    pxor W_TMP, W                 // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
    With AVX they can be combined into one instruction:
    vpxor W_minus_16, W, W_TMP    // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]
    This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader: AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of the 128-bit %xmm0 - %xmm15). Can %ymm registers be used to parallelize the code even more?
    Optimization: Solaris-specific
    In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code:
    - Increased the digest(1) and mac(1) commands' buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well.
    - Optimized the encode functions, which byte-swap the input and output data, to copy/byte-swap 4 or 8 bytes at a time instead of 1 byte at a time.
    - Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-).
    - Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bit lengths).
    Performance
    I measured performance on a Sun Ultra 27 (which has a Nehalem-class Xeon 5500 Intel W3570 microprocessor @3.2GHz). Turbo mode is disabled for consistent performance measurement.
    Graphs are better than words and numbers, so here they are: The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half-GByte file in swapfs (/tmp), and execution time decreased from 1.35 seconds to 0.98 seconds. The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128-byte buffer with 10,000 iterations. The results show operations increased from 320,000 to 416,000 operations per second. Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64-Kbyte buffer with 100 iterations. The results show that for 1 kernel thread, throughput increased from 410 to 600 MBytes/second; for 8 kernel threads, from 1540 to 1940 MBytes/second.
    Availability
    This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, the Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example:
    nehalem $ isainfo -v
    64-bit amd64 applications
    sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
    32-bit i386 applications
    sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
    If the output also shows "avx", Solaris executes the even-more-optimized 3-operand AVX instructions for SHA-1 mentioned above:
    sandybridge $ isainfo -v
    64-bit amd64 applications
    avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
    32-bit i386 applications
    avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
    No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine whether they are running on an SSSE3- or AVX-capable machine and execute the correctly tuned code for that microprocessor.
    Summary
    The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporates a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, it comes automatically "under the hood" with the current Solaris release.
    References
    - "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010): the source for the SHA-1 optimizations used in Solaris
    - "SHA-1", Wikipedia: a good overview of SHA-1
    - FIPS 180-1, the SHA-1 standard (FIPS, 1995)
    - NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006)

    Read the article

  • Tuning Red Gate: #5 of Multiple

    - by Grant Fritchey
    In the Tuning Red Gate series I've shown you how to look at the current load on the system and how to drill down into a historical analysis of the system. I've also shown how you can see the top queries and other information from the current status of the system. I have one more thing to show you before we start fixing things and seeing how that affects the data collected: historical moments in time. For example, back in Post #3 I was looking at some spikes in some of the monitored resources that had taken place a couple of weeks earlier. Once I identify a moment in time that I'm interested in, I can go back to the first page of Monitor, the Global Overview, and click on the icon: From this you can select the date and time you're interested in. For example, I saw some serious CPU queues last week: This then rolls back the time for all the information that's available in the Global Overview and in the drill-down to the server and the SQL Server instance there. This allows me to look at the top queries running at that point, sort them by CPU, and identify the query that was potentially causing the problem right when I saw the CPU queuing. This ability to correlate a moment in time with the information available to you in the Analysis window makes for an excellent tool for investigating your systems going backwards in time. It really makes a huge difference in your knowledge. It's not enough to know that something happened at a particular time. You need to know what it was that was occurring. Remember, the key to tuning your systems is having enough knowledge about them. I'll post more on Tuning Red Gate as soon as I can get some queries rewritten. I'm working on that.

    Read the article

  • San Joaquin County, California Wins AIIM 2012 Carl E. Nelson Best Practice Award

    - by Peggy Chen
    Last month AIIM, the global community of information professionals, announced the winners of the 2012 Carl E. Nelson Best Practices Awards, and San Joaquin County, California, won in the small-organization category (1-100 employees). The Carl E. Nelson Best Practices Award was established to recognize excellence in the area of information management. "Best practice" denotes a standard of excellence that has been achieved within an organization, and refers to a process that can be quantified, adapted and repeated. Like many counties, San Joaquin County, California, was faced with huge challenges due to decreasing funds and staff. It needed to streamline processes, cut costs per activity, modernize and strengthen its infrastructure, and adopt new technology and standards such as the National Information Exchange Model (NIEM). The Integrated Justice Information System (IJIS) provides a Web-based system to link more than 650,000 residents, 18 agencies countywide, and other law enforcement systems nationwide. The county's modernization initiative focused on replacing its outdated warrant system, implementing service-oriented architecture (SOA) to simplify integration between county law and justice systems, and deploying Business Process Management (BPM), Case Management with content management, and Web technologies from Oracle. A critical part of the county's success has been the proper alignment of its strategic vision with the way the organization was enabled to plan and execute (and continues to execute) its modernization project. Congratulations to San Joaquin County!

    Read the article

  • What is the best way to build a database from an MS Word document?

    - by Jayron Soares
    Please advise me on how to approach this problem: I have a sequential list of metadata in a document in MS Word. The basic idea is to create a Python algorithm to iterate over the information, retrieving just the name of each PROCESS, which is then queued into a database. Example metadata:
    Process: Process Walker (1965)
    Exact reference: Walker Process Equipment, Inc. v. Food Machinery Corp.
    Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol=
    Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit.
    Parties: Walker Process Equipment, Inc.
    Sector: Systems is ...
    Start Date: October 12-13 Arguedas, 1965
    Summary: Food Machinery Company has initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment.
    Report of the evolution process: petitioner, in answer to respond...
    Importance: a) First case which established an analysis for the diagnosis of dispute…
    There are about 200 pages containing information like the above. I have in mind the idea of implementing an algorithm in Python to be able to break up this information sequence and store it in a web database (an open source application that I'm looking for) in order to allow for free consultations.

    Read the article

  • HTG Explains: Do Non-Windows Platforms Like Mac, Android, iOS, and Linux Get Viruses?

    - by Chris Hoffman
    Viruses and other types of malware seem largely confined to Windows in the real world. Even on a Windows 8 PC, you can still get infected with malware. But how vulnerable are other operating systems to malware? When we say “viruses,” we’re actually talking about malware in general. There’s more to malware than just viruses, although the word virus is often used to talk about malware in general.
    Why Are All the Viruses For Windows?
    Not all of the malware out there is for Windows, but most of it is. We’ve tried to cover why Windows has the most viruses in the past. Windows’ popularity is definitely a big factor, but there are other reasons, too. Historically, Windows was never designed for security in the way that UNIX-like platforms were — and every popular operating system that’s not Windows is based on UNIX. Windows also has a culture of installing software by searching the web and downloading it from websites, whereas other platforms have app stores and Linux has centralized software installation from a secure source in the form of its package managers.
    Do Macs Get Viruses?
    The vast majority of malware is designed for Windows systems, and Macs don’t get Windows malware. While Mac malware is much more rare, Macs are definitely not immune to malware. They can be infected by malware written specifically for Macs, and such malware does exist. At one point, over 650,000 Macs were infected with the Flashback Trojan. [Source] It infected Macs through the Java browser plugin, which is a security nightmare on every platform. Macs no longer include Java by default. Apple also has locked down Macs in other ways. Three things in particular help:
    - Mac App Store: Rather than getting desktop programs from the web and possibly downloading malware, as inexperienced users might on Windows, users can get their applications from a secure place. It’s similar to a smartphone app store or even a Linux package manager.
    - Gatekeeper: Current releases of Mac OS X use Gatekeeper, which only allows programs to run if they’re signed by an approved developer or if they’re from the Mac App Store. This can be disabled by geeks who need to run unsigned software, but it acts as additional protection for typical users.
    - XProtect: Macs also have a built-in technology known as XProtect, or File Quarantine. This feature acts as a blacklist, preventing known-malicious programs from running. It functions similarly to Windows antivirus programs, but works in the background and checks applications you download.
    Mac malware isn’t coming out nearly as quickly as Windows malware, so it’s easier for Apple to keep up. Macs are certainly not immune to all malware, and someone going out of their way to download pirated applications and disable security features may find themselves infected. But Macs are much less at risk of malware in the real world.
    Android is Vulnerable to Malware, Right?
    Android malware does exist, and companies that produce Android security software would love to sell you their Android antivirus apps. But that isn’t the full picture. By default, Android devices are configured to only install apps from Google Play. They also benefit from antimalware scanning — Google Play itself scans apps for malware. You could disable this protection and go outside Google Play, getting apps from elsewhere (“sideloading”). Google will still help you if you do this, asking if you want to scan your sideloaded apps for malware when you try to install them.
    In China, where many, many Android devices are in use, there is no Google Play Store. Chinese Android users don’t benefit from Google’s antimalware scanning and have to get their apps from third-party app stores, which may contain infected copies of apps. The majority of Android malware comes from outside Google Play. The scary malware statistics you see primarily involve users who get apps from outside Google Play, whether by pirating infected apps or acquiring them from untrustworthy app stores. As long as you get your apps from Google Play — or even another secure source, like the Amazon App Store — your Android phone or tablet should be secure.
    What About iPads and iPhones?
    Apple’s iOS operating system, used on its iPads, iPhones, and iPod Touches, is more locked down than even Macs and Android devices. iPad and iPhone users are forced to get their apps from Apple’s App Store. Apple is more demanding of developers than Google is — while anyone can upload an app to Google Play and have it available instantly while Google does some automated scanning, getting an app onto Apple’s App Store involves a manual review of that app by an Apple employee. The locked-down environment makes it much more difficult for malware to exist. Even if a malicious application could be installed, it wouldn’t be able to monitor what you typed into your browser and capture your online-banking information without exploiting a deeper system vulnerability. Of course, iOS devices aren’t perfect either. Researchers have proven it’s possible to create malicious apps and sneak them past the app store review process. [Source] However, if a malicious app were discovered, Apple could pull it from the store and immediately uninstall it from all devices. Google and Microsoft have this same ability with Android’s Google Play and the Windows Store for new Windows 8-style apps.
    Does Linux Get Viruses?
    Malware authors don’t tend to target Linux desktops, as so few average users use them. Linux desktop users are more likely to be geeks who won’t fall for obvious tricks. As with Macs, Linux users get most of their programs from a single place — the package manager — rather than downloading them from websites. Linux also can’t run Windows software natively, so Windows viruses just can’t run. Linux desktop malware is extremely rare, but it does exist. The recent “Hand of Thief” Trojan supports a variety of Linux distributions and desktop environments, running in the background and stealing online banking information. It doesn’t have a good way of infecting Linux systems, though — you’d have to download it from a website or receive it as an email attachment and run the Trojan. [Source] This just confirms how important it is to only run trusted software on any platform, even supposedly secure ones.
    What About Chromebooks?
    Chromebooks are locked-down laptops that only run the Chrome web browser and some bits around it. We’re not really aware of any form of Chrome OS malware. A Chromebook’s sandbox helps protect it against malware, but it also helps that Chromebooks aren’t very common yet. It would still be possible to infect a Chromebook, if only by tricking a user into installing a malicious browser extension from outside the Chrome web store. The malicious browser extension could run in the background, steal your passwords and online banking credentials, and send them over the web.
    Such malware could even run on the Windows, Mac, and Linux versions of Chrome, but it would appear in the Extensions list, would require the appropriate permissions, and you'd have to agree to install it manually.

    And Windows RT?

    Microsoft's Windows RT only runs desktop programs written by Microsoft. Users can only install "Windows 8-style apps" from the Windows Store. This means that Windows RT devices are as locked down as an iPad: an attacker would have to get a malicious app into the store and trick users into installing it, or possibly find a security vulnerability that allowed them to bypass the protection.

    Malware is definitely at its worst on Windows. This would probably be true even if Windows had a shining security record and a history of being as secure as other operating systems, but you can definitely avoid a lot of malware just by not using Windows. Of course, no platform is a perfect malware-free environment. You should exercise some basic precautions everywhere. Even if malware were eliminated, we'd have to deal with social-engineering attacks like phishing emails asking for credit card numbers.

    Image Credit: stuartpilbrow on Flickr, Kansir on Flickr

    Read the article

  • Virtualization in Solaris 11 Express

    - by lynn.rohrer(at)oracle.com
    In Oracle Solaris 10 we introduced Oracle Solaris Containers -- lightweight virtual application environments that allow you to consolidate your Oracle Solaris applications onto a single Oracle Solaris server and make the most of your system resources. The majority of our customers are now using Oracle Solaris Containers on their enterprise systems for applications ranging from web servers to Oracle Database installations. We can also make these Containers highly available with Oracle Solaris Cluster, the industry's first virtualization-aware enterprise cluster product. Using Oracle Solaris Cluster you can fail over applications in a Container to another Container on a single system or across systems for additional availability.

    We've added significant features in Oracle Solaris 11 Express to improve and extend the Oracle Solaris Zone model:

    - Integration of Zones with our new Solaris 11 packaging system (aka Image Packaging System) to provide easy software updates within a zone
    - Support for Oracle Solaris 10 Zones to run your Solaris 10 applications unaltered on an Oracle Solaris 11 Express system
    - Integration with the new Oracle Solaris 11 network stack architecture (more on this in a future blog post)
    - Improved observability with the zonestat management interface and commands
    - Delegated administration rights for owners of individual non-global zones
    - Tight integration with Oracle Solaris ZFS to allow dedicated datasets per zone
    - With ZFS as the default file system, easy-to-manage Boot Environments for zones

    This quick summary is just to whet your appetite to learn more about the Oracle Solaris 11 Express Zones enhancements. Fortunately we can serve a full meal at the Oracle Solaris 11 Express Technology Spotlight on Virtualization page on the Oracle Technology Network.

    Read the article

  • A new version of the Oracle Enterprise Manager Ops Center Doctor (OCDoctor) utility released

    - by Anand Akela
    In February, we posted a blog about the Oracle Enterprise Manager Ops Center Doctor, aka the OCDoctor utility. This utility assists in various stages of an Ops Center deployment and can be a real lifesaver. It is updated on a regular basis with additional knowledge (similar to an antivirus subscription) to help you identify and resolve known issues or suggest ways to improve performance.

    A new version (4.00) of the OCDoctor is now available. It adds full support for the recently announced Oracle Enterprise Manager Ops Center 12c, including prerequisite checks, troubleshooting tests, log collection, tuning, and product metadata updates. In addition, it brings several bug fixes and enhancements to the OCDoctor utility.

    To download the OCDoctor for new installations: https://updates.oracle.com/OCDoctor/OCDoctor-latest.zip

    For existing installations, simply run:
    # /var/opt/sun/xvm/OCDoctor/OCDoctor.sh --update

    Tip: If you have the Oracle Enterprise Manager Ops Center 12c EC installed, your OCDoctor will automatically update overnight.

    Join the Oracle launch webcast "Total Cloud Control for Systems" on April 12th at 9 AM PST to learn more about Oracle Enterprise Manager Ops Center 12c from Oracle Senior Vice President John Fowler, Oracle Vice President of Systems Management Steve Wilson, and a panel of Oracle executives.

    Stay connected with Oracle Enterprise Manager: Twitter | Facebook | YouTube | Linkedin | Newsletter

    Read the article

  • Partner Webcast - Is your Application Ready? Prove it with the Oracle Exastack Program

    - by Thanos
    At Oracle we design Engineered Systems that are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. Oracle innovates and optimizes performance at every IT layer to simplify business operations, drive down costs and accelerate business innovation.

    As the Engineered System foundation platform, Oracle Exadata and Oracle Exalogic run all of Oracle Cloud's services across a range of global data centers, delivering extreme performance, massive scalability, and fault tolerance with no single point of failure.

    The Oracle Exastack Program enables you as an ISV to leverage Oracle's scalable, integrated infrastructure to test, tune and optimize your applications for high performance. By getting Exastack Ready and Exastack Optimized, your applications gain formal recognition from Oracle and additional visibility, while you as an ISV receive an additional set of OPN benefits. Don't miss this opportunity to learn how you can optimize your applications to run faster and more reliably by leveraging Oracle Exastack, and to become more competitive by letting everybody know you are ready.

    Agenda:
    - Oracle Engineered Systems Strategy
    - OPN Exastack Program Benefits & Objectives
    - Value for You
    - Oracle is resourced for your success
    - How to Apply - Demo
    - Next Steps & Useful contacts

    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend.

    Thursday 06 December 2012, 10.00 CET (GMT+1). Duration: 1 hour. Register Now!

    For any questions please contact us at [email protected]. Visit our ISV Migration Center blog or follow us @oracleimc to learn more about Oracle Technologies, upcoming partner webcasts and events. Existing content is available on YouTube - SlideShare - Oracle Mix

    Read the article

  • WebCenter .NET Accelerator - Microsoft SharePoint Data via WSRP

    - by john.brunswick
    Platforms in the enterprise will never be homogeneous. As much as any vendor would enjoy having their single development or application technology be exclusively adopted by customers, too much legacy, time, education, innovation and vertical business need exists to make a single platform practical. Java and .NET are the two industry application platform heavyweights, and more often than not, business users are leveraging various systems in their day-to-day activities that incorporate applications developed on top of both platforms. BEA Systems acquired Plumtree Software to complete their "liquid" view of data, stressing that regardless of the source system, heterogeneous data could interoperate not only through layers that allowed for data aggregation, but also at the "glass," or UI, layer. The technical components that allowed integration at the glass thrive today at Oracle, helping WebCenter provide a rich composite application framework. Oracle Ensemble and the Oracle .NET Application Accelerator allow WebCenter to consume and interact with the UI layers provided by .NET applications and a series of other technologies. The beauty of the .NET Accelerator is that it can consume any .NET application and act as a Web Services for Remote Portlets (WSRP) producer. I recently had a chance to leverage the .NET Accelerator to expose an ASP.NET 2.0 (C#) application in the WebCenter UI (pictured above) and wanted to share a few tips to help others get started with similar integrations. I used two virtual machines for the exercise: one with Windows Server 2003 running SharePoint, and the other running WebCenter Spaces 11g. For my sample application data I ended up using SharePoint 2007 lists and calendars (MOSS 2007) to supply results using a .NET API for SharePoint.

    Read the article

  • Unleash AutoVue on Your Unmanaged Data

    - by [email protected]
    Over the years, I've spoken to hundreds of customers who use AutoVue to collaborate on their "managed" data stored in content management systems, product lifecycle management systems, etc. via our many integrations. Through these conversations I've also learned a harsh reality: we will never fully move away from unmanaged data (desktops, file servers, emails, etc.). If you use AutoVue today, you already know that even if your primary use is viewing content stored in a content management system, you can still open files stored locally on your computer. But did you know that AutoVue also has, built in, a great solution for viewing, printing and redlining your data stored on file servers? Using the "Server protocol" you can point AutoVue directly to a top-level location on any networked file server and provide your users with a link or shortcut to access an interface similar to the sample page shown below. Many customers link to pages just like this one from their internal company intranets. Through this webpage, users can easily search and browse through file server data with a "click-and-view" interface to find the specific image, document, drawing or model they're looking for. Any markups created on a document will be accessible to everyone else viewing that document, and of course real-time collaboration is supported as well. Customers on maintenance can consult the AutoVue Admin guide or My Oracle Support Doc ID 753018.1 for an introduction to the server protocol. Contact your local AutoVue Solutions Consultant for help setting up the sample shown above.

    Read the article

  • Announcing Berkeley DB Java Edition Major Release

    - by Eric Jensen
    Berkeley DB Java Edition 5.0 was just released. There are a number of new features, enhancements, and options in there that our users have been asking for. Chief among them is a new class called DiskOrderedCursor, which greatly increases performance on systems using spinning-platter magnetic hard drives. A number of users expressed interest in this feature, including Alex Feinberg of LinkedIn. Berkeley DB Java Edition is used as a storage engine by Project Voldemort, a distributed key/value database used by LinkedIn. There have been many other improvements and optimizations. Concurrency is significantly improved, as is the performance of update and delete operations. New and interesting methods include Environment.preload, which allows multiple databases to be preloaded simultaneously. New Cursor methods enable more effective searching through the database. We continue to enhance Berkeley DB Java Edition's High Availability as well. One new feature is the ability to open a replicated node read-only when the master is unavailable, which can allow critical systems to continue offering some functionality even during a network or master node failure. There's a lot more in release 5.0. I encourage you to take a look at the extensive changelog yourself. As always, you can download the new release and try it out here: http://www.oracle.com/technetwork/database/berkeleydb/downloads/index.html
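    As a quick illustration of the new class, here is a minimal sketch of scanning a database with DiskOrderedCursor, which returns records in their on-disk order to avoid random seeks on spinning drives. It is a sketch only, written against the JE 5.0 API described above; the environment path and database name are placeholders, and the environment is assumed to already hold some data.

        import java.io.File;
        import com.sleepycat.je.Database;
        import com.sleepycat.je.DatabaseConfig;
        import com.sleepycat.je.DatabaseEntry;
        import com.sleepycat.je.DiskOrderedCursor;
        import com.sleepycat.je.DiskOrderedCursorConfig;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;
        import com.sleepycat.je.LockMode;
        import com.sleepycat.je.OperationStatus;

        public class DiskOrderedScan {
            public static void main(String[] args) {
                File envDir = new File("/tmp/je-env"); // placeholder path
                envDir.mkdirs();

                EnvironmentConfig envConfig = new EnvironmentConfig();
                envConfig.setAllowCreate(true);
                Environment env = new Environment(envDir, envConfig);

                DatabaseConfig dbConfig = new DatabaseConfig();
                dbConfig.setAllowCreate(true);
                Database db = env.openDatabase(null, "sampleDb", dbConfig);

                // Scan records in disk order rather than key order; on spinning
                // drives this avoids random seeks and can be dramatically faster.
                DiskOrderedCursor cursor =
                        db.openCursor(new DiskOrderedCursorConfig());
                try {
                    DatabaseEntry key = new DatabaseEntry();
                    DatabaseEntry data = new DatabaseEntry();
                    while (cursor.getNext(key, data, LockMode.READ_UNCOMMITTED)
                            == OperationStatus.SUCCESS) {
                        System.out.println(key.getSize() + "-byte key, "
                                + data.getSize() + "-byte value");
                    }
                } finally {
                    cursor.close();
                    db.close();
                    env.close();
                }
            }
        }

    Note that a disk-ordered scan trades key ordering for throughput, so it suits bulk reads and analytics passes rather than range queries.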

    Read the article

  • Oracle VM VirtualBox 4.0 Now Available

    - by Paulo Folgado
    Delivering on Oracle's commitment to open source, Oracle VM VirtualBox 4.0 is now available, further enhancing the popular, open source, cross-platform virtualization software.

    "Oracle VM VirtualBox 4.0 is the third major product release in just over a year, and adds to the many new product releases across the Oracle Virtualization product line, illustrating the investment and importance that Oracle places on providing a comprehensive desktop to datacenter virtualization solution," says Wim Coekaerts, senior vice president, Linux and Virtualization Engineering, Oracle. "With an improved user interface and added virtual hardware support, customers will find Oracle VM VirtualBox 4.0 provides a richer user experience."

    Part of Oracle's comprehensive portfolio of virtualization solutions, Oracle VM VirtualBox enables desktop or laptop computers to run multiple guest operating systems simultaneously, allowing users to get the most flexibility and utilization out of their PCs, and it supports a variety of host operating systems, including Windows, Mac OS X, most popular flavors of Linux (including Oracle Linux), and Oracle Solaris. Oracle VM VirtualBox 4.0 delivers increased capacity and throughput to handle greater workloads, enhanced virtual appliance capabilities, and significant usability improvements. Support for the latest in virtual hardware, including chipsets supporting PCI Express, further extends the value delivered to customers, partners, and developers.

    Highlights of Oracle VM VirtualBox 4.0 include:

    - New Open Architecture: Oracle and community developers can now create extensions that customize Oracle VM VirtualBox and add features not previously available.
    - Enhanced Usability: A new scalable display mode enables users to view more virtual displays on their existing monitors. Improvements to VM management, including visual VM previews, an optional attributes display, and easy launch shortcut creation, enable administrators and power users to customize the interface to make it as simple or as comprehensive as required.
    - Increased Capacity and Throughput: A new asynchronous I/O model for networked (iSCSI) and local storage delivers significant storage-related performance improvements, while new optimizations allow larger datacenter-class workloads, such as Oracle's middleware, to be run on 32-bit Windows hosts for testing and demo purposes.
    - Powerful Virtual Appliance Sharing Capabilities: Enhanced support for standards-compliant OVF appliances and added support for OVA format descriptors. All information about a VM may be stored in a single folder to facilitate easier direct sharing among VMs.
    - Support for Latest Virtual Hardware: A new, modern virtual chipset supporting PCI Express and other hardware enhancements, including high-definition audio devices, helps ensure support for the most demanding virtual workloads.

    Read the article

  • IoT end-to-end demo – Remote Monitoring and Service, by Harish Doddala

    - by JuergenKress
    Historically, data was generated from predictable sources, stored in storage systems and accessed for further processing. This data was correlated, filtered and analyzed to derive insights and/or drive well-constructed processes. There was little ambiguity in the kinds of data, the sources it would originate from and the routes it would follow. The Internet of Things (IoT) creates many opportunities to extract value from data that result in significant improvements across industries such as Automotive, Industrial Manufacturing, Smart Utilities, Oil and Gas, High Tech and Professional Services. This demo showcases how the health of remotely deployed machinery can be monitored, illustrating how data coming from devices can be analyzed in real time, integrated with back-end systems and visualized to initiate action as necessary.

    Use-case: Remote Service and Maintenance

    Critical machinery, once deployed in the field, is expected to work with minimal failures while delivering high performance and reliability. In typical remote monitoring and industrial automation scenarios, although many physical objects from machinery to equipment may already be "smart and connected," they are typically operated in a standalone fashion and not integrated into existing business processes. IoT adds an interesting dynamic to remote monitoring in industrial automation solutions in that it allows equipment to be monitored, upgraded, maintained and serviced in ways not possible before. Read the complete article here, and see the sketch after this entry for a concrete feel of the device-data check involved.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Blog Twitter LinkedIn Facebook Wiki

    Technorati Tags: IoT,Iot demo,sales,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress
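    As referenced above, here is a small, hypothetical Java sketch of the kind of check such a remote-monitoring demo performs on incoming device data: each reading from a remote machine is compared against an operating threshold, and an alert is raised so a service action can be initiated. The Reading record, the threshold value, and the alert handling are all invented for illustration; they are not part of the actual demo or of any Oracle IoT product.

        import java.util.List;

        public class RemoteMonitor {
            // Hypothetical telemetry sample from a remote machine.
            record Reading(String machineId, double temperatureC) {}

            private static final double MAX_TEMP_C = 90.0; // illustrative threshold

            // Inspect each incoming reading and flag machines that need service.
            static void process(List<Reading> incoming) {
                for (Reading r : incoming) {
                    if (r.temperatureC() > MAX_TEMP_C) {
                        // A real deployment would integrate with back-end systems
                        // here (e.g., open a service ticket) instead of printing.
                        System.out.println("ALERT: " + r.machineId()
                                + " running hot at " + r.temperatureC() + " C");
                    }
                }
            }

            public static void main(String[] args) {
                process(List.of(
                        new Reading("pump-07", 72.5),
                        new Reading("pump-12", 95.1))); // triggers an alert
            }
        }

    In a production pipeline the same decision logic would sit behind a streaming layer that filters and correlates events before visualization, which is the pattern the demo walks through.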

    Read the article

< Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >