Search Results

Search found 41254 results on 1651 pages for 'oracle database 12c'.


  • What are Information Centers?

    - by user12244613
    Information Centers are similar to product pages in the Oracle Sun System Handbook. Many customers like the Oracle Sun System Handbook concept of a home page with all the product attributes, troubleshooting, etc. accessible from a single home page. This concept is now available for a range of Oracle Solaris, Systems, and Storage products. The Information Center for each product covers areas such as Overview, Hot Topics, and Patching and Maintenance. The Information Center pages are dynamically generated each night to ensure the latest content is available to you. Here are the top Solaris, Systems, and Storage Information Centers:

        Oracle Explorer Data Collector
        Oracle Solaris 10 Live Upgrade
        Oracle Solaris 11 Booting Information Center
        Oracle Solaris 11 Desktop and Graphics Information Center
        Oracle Solaris 11 Image Packaging System (IPS) Information Center
        Oracle Solaris 11 Installation Information Center
        Oracle Solaris 11 Product Information Center
        Oracle Solaris 11 Security Information Center
        Oracle Solaris 11 System Administration Information Center
        Oracle Solaris 11 Zones Information Center
        Oracle Solaris Crash Analysis Tool (SCAT) Information Center
        Oracle Solaris Cluster Information Center
        Oracle Solaris Internet Protocol Multipathing (IPMP) Information Center
        Oracle Solaris Live Upgrade Information Center
        Oracle Solaris ZFS Information Center
        Oracle Solaris Zones Information Center
        CMT T1000/T2000 and Netra T2000
        CMT T5120/T5120/T5140/T5220/T5240/T5440 Systems
        M3000/M4000/M5000/M8000/M9000-32/M9000-64
        Management and Diagnostic Tools for Oracle Sun Systems
        Netra CT410/810 and Netra CT900
        Network-Attached Storage (NAS)
        Oracle Explorer Data Collector
        Oracle VM Server for SPARC (LDoms)
        Pillar Axiom 600
        SL3000 Tape Library
        Sun Disk Storage Patching and Updates
        Sun Fire 3800/4800/4810/6800/E2900/E4900/E6900/V1280 - Netra 1280/1290
        Sun Fire 12K/15K/E20K/E25K
        Sun Fire X4270 M2 Server
        Sun x86 Servers
        T3 and T4 Systems
        Tape Domain Firmware
        V210/V240/V440/V215/V245/V445 Servers
        VSM (VTSS/VLE/VTCS)

    Read the article

  • Should we have a database independent SQL like query language in Django?

    - by Yugal Jindle
    Note: I know we already have the Django ORM, which keeps things database independent and converts to database-specific SQL queries. Once things start getting complicated, it is preferred to write raw SQL queries for better efficiency. When you write raw SQL queries, your code gets tied to the database you are using. I also understand it's important to use the full power of your database, which cannot be achieved with the Django ORM alone. My question: until I use any database-specific feature, why should one be tied to the database? For instance: we have a query with multiple joins and we decided to write a raw SQL query. Now, that makes my website Postgres-specific, even though I have not used any Postgres-specific feature. I feel there should be some fake SQL language which can translate to any database's SQL query. Even Django's ORM could be built over it. So that if you go outside the ORM but avoid database-specific features, you can still remain database independent. I asked the same question of Jacob Kaplan-Moss (in person): he advised me to stay with the database that I like and use its whole power, with which I agree. But my point was not that we should be database independent. My point is that we should be database independent until we use a database-specific feature. Please explain: why should there be a fake SQL layer over the actual SQL?
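
    For illustration, the kind of raw query at issue is plain ANSI SQL that any backend accepts; the tables and columns here are hypothetical:

        SELECT o.id, c.name, p.title
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
        JOIN products p  ON p.id = o.product_id
        WHERE o.status = 'paid';

    Nothing in it is Postgres-specific, which is exactly the poster's point: the coupling comes from the decision to bypass the ORM, not from the SQL itself.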

    Read the article

  • SQL SERVER - World Shapefile Download and Upload to Database - Spatial Database

    During my recent training, I was asked by a student if I knew a place where he could download spatial files for all the countries around the world, as well as whether there is a way to upload shape files to a database. Here is a quick tutorial for it. VDS Technologies has all the spatial [...]

    Read the article

  • Database design and performance impact

    - by Craige
    I have a database design issue that I'm not quite sure how to approach, nor whether the benefits outweigh the costs. I'm hoping some P.SE members can give some feedback on my suggested design, as well as any similar experiences they may have come across. As it goes, I am building an application that has large reporting demands. Speed is an important issue, as there will be peak usages throughout the year. This application/database has a multi-level, many-to-many relationship, e.g.:

        object a
        object b
        object c
        object d

        object b has a relationship to object a
        object c has a relationship to objects b, a
        object d has a relationship to objects c, b, a

    Theoretically, this could go on for unlimited levels, though logic dictates it could only go so far. My idea here, to speed up reporting, would be to create a syndicate table that acts as a global many-to-many join table. In this table (with the given example), one might see:

        +----------+-----------+---------+
        | child_id | parent_id | type_id |
        +----------+-----------+---------+
        | b        | a         | 1       |
        | c        | b         | 2       |
        | c        | a         | 3       |
        | d        | c         | 4       |
        | d        | b         | 5       |
        | d        | a         | 6       |
        +----------+-----------+---------+

    where a, b, c and d would translate to their respective IDs in their respective tables. So, for ease of reporting all of a which exist on object d, one could query

        SELECT * FROM `syndicates`
        ... JOINS TO child and parent tables ...
        WHERE parent_id=a AND type_id=6;

    rather than having a query with a join to each level up the chain.

    The Problem: This table grows exponentially, and in a given year could easily grow past 20,000 records for one client. Given multiple clients over multiple years, this table will VERY quickly explode to millions of records and beyond. Now, the database will, in time, be partitioned across multiple servers, but I would like (as most would) to keep the number of servers as low as possible while still offering flexibility. Also, writes and updates would take exponentially longer (though possibly not noticeably to the end user), as there would be multiple inserts/updates/scans on this table to keep it in sync. Am I going in the right direction here, or am I way off track? What would you do in a similar situation? This solution seems overly complex, but allows the greatest flexibility and fastest read operations. Sidenote 1 - This structure allows me to add new levels to the tree easily. Sidenote 2 - The database querying for this database is done through an ORM framework.
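
    For what it's worth, a minimal sketch of the proposed syndicate table in generic SQL (names and types are illustrative only). This pattern is often called a closure table, and the trade-off described above - fast reads at the cost of write amplification - is inherent to it:

        CREATE TABLE syndicates (
            child_id  INT NOT NULL,
            parent_id INT NOT NULL,
            type_id   INT NOT NULL,
            PRIMARY KEY (child_id, parent_id)
        );

        -- Every ancestor of every object is materialized, so one indexed
        -- lookup replaces a chain of joins up the hierarchy:
        SELECT s.child_id
        FROM syndicates s
        WHERE s.parent_id = :a_id
          AND s.type_id = 6;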

    Read the article

  • Oracle Identity Manager Role Management With API

    - by mustafakaya
    As an administrator, you use roles to create and manage the records of a collection of users to whom you want to permit access to common functionality, such as access rights, roles, or permissions. Roles can be independent of an organization, span multiple organizations, or contain users from a single organization. Using roles, you can:

        View the menu items that users can access through the Oracle Identity Manager Administration web interface.
        Assign users to roles.
        Assign a role to a parent role.
        Designate status to users so that they can specify defined responses for process tasks.
        Modify permissions on data objects.
        Designate role administrators to perform actions on roles, such as enabling members of another role to assign users to the current role, revoking members from the current role, and so on.
        Designate provisioning policies for a role. These policies determine if a resource object is to be provisioned to or requested for a member of the role.
        Assign or remove membership rules to or from the role. These rules determine which users can be assigned to or removed from direct membership in the role.

    In this post, I will share some examples of role management with the Oracle Identity Manager API. For role operations you can use the Thor.API.Operations.tcGroupOperationsIntf interface:

        tcGroupOperationsIntf service = getClient().getService(tcGroupOperationsIntf.class);

    Assign a user to a role:

        public void assignRoleByUsrKey(String roleName, String usrKey) throws Exception {
            // Look up the role (group) by its name
            Map<String, String> filter = new HashMap<String, String>();
            filter.put("Groups.Role Name", roleName);
            tcResultSet role = service.findGroups(filter);
            String groupKey = role.getStringValue("Groups.Key");
            service.addMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
        }

    Revoke a user from a role:

        public void revokeRoleByUsrKey(String roleName, String usrKey) throws Exception {
            Map<String, String> filter = new HashMap<String, String>();
            filter.put("Groups.Role Name", roleName);
            tcResultSet role = service.findGroups(filter);
            String groupKey = role.getStringValue("Groups.Key");
            service.removeMemberUser(Long.parseLong(groupKey), Long.parseLong(usrKey));
        }

    Get all members of a role:

        public List<User> getRoleMembers(String roleName) throws Exception {
            List<User> userList = new ArrayList<User>();
            Map<String, String> filter = new HashMap<String, String>();
            filter.put("Groups.Role Name", roleName);
            tcResultSet role = service.findGroups(filter);
            String groupKey = role.getStringValue("Groups.Key");
            // Iterate the member result set and resolve each user by key
            tcResultSet members = service.getAllMemberUsers(Long.parseLong(groupKey));
            for (int i = 0; i < members.getRowCount(); i++) {
                members.goToRow(i);
                long userKey = members.getLongValue("Users.Key");
                User member = oimUserManager.findUserByUserKey(String.valueOf(userKey));
                userList.add(member);
            }
            return userList;
        }

    About me: Mustafa Kaya is a Senior Consultant on the Oracle Fusion Middleware team, living in Istanbul. Before coming to Oracle, he worked on teams developing web applications and backend services at a telco company. He is a Java technology enthusiast and software engineer, always eager to learn new technologies and develop new ideas. Follow Mustafa on Twitter, connect on LinkedIn, and visit his site for Oracle Fusion Middleware related tips.

    Read the article

  • Developing Schema Compare for Oracle (Part 1)

    - by Simon Cooper
    SQL Compare is one of Red Gate's most successful SQL Server tools; it allows developers and DBAs to compare and synchronize the contents of their databases. Although similar tools exist for Oracle, they are quite noticeably lacking in the usability and stability that SQL Compare is known for in the SQL Server world. We could see a real need for a usable schema comparison tool for Oracle, and so the Schema Compare for Oracle project was born. Over the next few weeks, as we come up to the release of v1, I'll be doing a series of posts on the development of Schema Compare for Oracle. For the first post, I thought I would start with the main pitfalls that we stumbled across when developing the product, especially from a SQL Server background.

    1. Schemas and Databases

    The most obvious difference is that the concept of a 'database' is quite different between Oracle and SQL Server. On SQL Server, one server instance has multiple databases, each with separate schemas. There is typically little communication between separate databases, and most databases are no more than about 1000-2000 objects. This means SQL Compare can register an entire database in a reasonable amount of time, and cross-database dependencies probably won't be an issue. It is quite a different scene under Oracle, however. The terms 'database' and 'instance' are used interchangeably (although technically 'database' refers to the datafiles on disk, and 'instance' to the running Oracle process that reads and writes to the database), and a database is a single conceptual entity. This immediately presents problems, as it is infeasible to register an entire database as we do in SQL Compare; in my Oracle install, using the standard recommended options, there are 63975 system objects. If we tried to register all those, not only would it take hours, but the client would probably run out of memory before we finished. As a result, we had to allow people to specify which schemas they wanted to register. This decision had quite a few knock-on effects for the design, which I will cover in a future post.

    2. Connecting to Oracle

    The next obvious difference is in actually connecting to Oracle - in SQL Server, you can specify a server and database, and off you go. On Oracle things are slightly more complicated.

    SIDs, Service Names, and TNS

    A database (the files on disk) must have an identifier, unique among the databases on the system, called the SID. It also has a global database name, which consists of a name (which doesn't have to match the SID) and a domain. Alternatively, you can identify a database using a service name, which normally has a 1-to-1 relationship with instances, but may not if, for example, you are using RAC (Real Application Clusters) for redundancy and failover. You specify the computer and instance you want to connect to using TNS (Transparent Network Substrate). The user-visible part is a config file (tnsnames.ora) on the client machine that specifies how to connect to an instance. For example, the entry for one of my test instances is:

        SC_11GDB1 =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = simonctest)(PORT = 1521))
            )
            (CONNECT_DATA =
              (SID = 11gR1db1)
            )
          )

    This gives the hostname, port, and SID of the instance I want to connect to, and associates it with a name (SC_11GDB1). The tnsnames syntax also allows you to specify failover, multiple descriptions and address lists, and client load balancing. You can then specify this TNS identifier as the data source in a connection string.
    Although using ODP.NET (the .NET dlls provided by Oracle) was fine for internal prototype builds, once we released the EAP we discovered that this simply wasn't an acceptable solution for installs on other people's machines. Due to .NET assembly strong naming, users had to have installed on their machines the exact same version of the ODP.NET dlls as we had on our build server. We couldn't ship the ODP.NET dlls with our installer, as the Oracle license agreement prohibited this, and we didn't want to force users to install another Oracle client just so they could run our program. To be able to list the TNS entries in the connection dialog, we also had to locate and parse the tnsnames.ora file, which was complicated by users with several Oracle client installs and intricate TNS entries. After much swearing at our computers, we eventually decided to use a third-party Oracle connection library from Devart that we could ship with our program; this could use whatever client version was installed, parse the TNS entries for us, and also had the nice feature of being able to connect to an Oracle server without having any client installed at all. Unfortunately, their current license agreement prevents us from shipping an Oracle SDK, but that's a bridge we'll cross when we get to it.

    3. Running synchronization scripts

    The most important difference is that in Oracle, DDL is non-transactional; you cannot roll back DDL statements like you can on SQL Server. Although we considered various solutions to this, including using the flashback archive or recycle bin, or generating an undo script, no reliable method of completely undoing a half-executed sync script has yet been found; so in this case we simply have to trust that the DBA or developer will check and verify the script before running it. However, before we got to that stage, we had to get the scripts to run in the first place... To run a synchronization script from SQL Compare, we essentially pass the script over to the SqlCommand.ExecuteNonQuery method. However, when we tried to do the same for an OracleConnection we got a very strange error - 'ORA-00911: invalid character' - even when running the most basic CREATE TABLE command. After much hair-pulling and Googling, we discovered that Oracle has some very strange behaviour with semicolons at the end of statements. To understand what's going on, we need to take a quick foray into SQL and PL/SQL.

    PL/SQL is not T-SQL

    In SQL Server, T-SQL is the language used to interface with the database. It has DDL, DML, control flow, and many other nice features (like Turing-completeness) that you can mix and match in the same script. In Oracle, DDL SQL and PL/SQL are two completely separate languages, with different syntax, different datatypes, and different execution engines within the instance. Oracle SQL is much more like 'pure' ANSI SQL, with no state, no control flow, and only the basic DML commands. PL/SQL is the Turing-complete language, but can only do DML and DCL (i.e. BEGIN TRANSACTION commands). Any DDL or SQL commands that aren't recognised by the PL/SQL engine have to be passed back to the SQL engine via an EXECUTE IMMEDIATE command. In PL/SQL, a semicolon is a valid token used to delimit the end of a statement. In SQL, a semicolon is not a valid token (even though the Oracle documentation shows them at the end of the syntax diagrams).
    When you execute the command

        CREATE TABLE table1 (COL1 NUMBER);

    in SQL*Plus, the semicolon on the end is a command to SQL*Plus to execute the preceding statement on the server; it strips off the semicolon before passing it on. SQL Developer does a similar thing. When executing a PL/SQL block, however, the syntax is like so:

        BEGIN
          INSERT INTO table1 VALUES (1);
          INSERT INTO table1 VALUES (2);
        END;
        /

    In this case, the semicolon is accepted by the PL/SQL engine as a statement delimiter, and instead the / is the command to SQL*Plus to execute the current block. This explains the ORA-00911 error we got when trying to run the CREATE TABLE command - the server was complaining about the semicolon on the end. This also means that there is no SQL syntax to execute more than one DDL command in the same OracleCommand. Therefore, we would have to do a round-trip to the server for every command we want to execute. Obviously, this would cause lots of network traffic and be very slow on slow or congested networks. Our first attempt at a solution was to wrap every SQL statement (without semicolon) inside an EXECUTE IMMEDIATE command in a PL/SQL block and pass that to the server to execute (sketched below). One downside of this solution is that we get no feedback as to how the script execution is going; we're currently evaluating better solutions to this thorny issue. Next up: Dependencies; how we solved the problem of being unable to register the entire database, and the knock-on effects to the whole product.
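
    A sketch of that wrapping approach - one PL/SQL block batching several DDL statements via EXECUTE IMMEDIATE; the statements themselves are illustrative:

        BEGIN
          -- No trailing semicolons inside the strings: the SQL engine rejects them
          EXECUTE IMMEDIATE 'CREATE TABLE table1 (col1 NUMBER)';
          EXECUTE IMMEDIATE 'CREATE INDEX ix_table1 ON table1 (col1)';
        END;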

    Read the article

  • Oracle Enterprise Pack for Eclipse 12.1.1 update on OTN

    - by gstachni
    Oracle Enterprise Pack for Eclipse (OEPE) 12.1.1.0.1 was released to OTN last week with support for new standards and features, including:

        Support for Eclipse Indigo SR2 (3.7.2)
        Updated server plugins for GlassFish 3.1.2
        SSL configuration support for WebLogic Server deployment and debugging

    The SSL configuration option can be found when configuring the domain for a new WebLogic Server connection. For Eclipse early adopters, an OEPE 12c update based on Eclipse Juno M6 will be available soon.

    Read the article

  • It was worth the wait… Welcome Oracle GoldenGate 11g Release 2

    - by Irem Radzik
    It certainly was worth the wait to meet Oracle GoldenGate 11gR2, because it is full of new features on multiple fronts. In fact, this release has the longest and strongest list of new features in Oracle GoldenGate's history. The new release brings GoldenGate closer to the Oracle Database while expanding support for global implementations and heterogeneous systems. It is more secure, more flexible, and faster. We announced the availability of Oracle GoldenGate 11gR2 via a press release. If you haven't seen it yet, please check it out. As covered in this announcement, there are a variety of improvements in the product:

        Integrated Capture for Oracle Database: brings Oracle GoldenGate's Capture process closer to the Oracle Database engine and enables support for Advanced Compression, among other benefits.
        Enhanced Conflict Detection & Resolution: speeds and simplifies the conflict detection and resolution process for active-active deployments.
        Globalization: Oracle GoldenGate can be deployed for databases that use multi-byte/Unicode character sets.
        Security and Performance Improvements: includes support for the Federal Information Processing Standard (FIPS).
        Increased Extensibility: actions can be kicked off based on an event record in the transaction log or in the Trail file.
        Integration with Oracle Enterprise Manager 12c, in addition to the Oracle GoldenGate Monitor product.
        Expanded Heterogeneity: including capture from IBM DB2 for i on iSeries (AS/400) and delivery to Postgres.

    We will explain these new features in more detail at our upcoming launch webcast: Harness the Power of the New Release of Oracle GoldenGate 11g (Sept 12, 8am/10am PT). In addition to learning more about these new features, the webcast will allow you to ask product management your questions via a live Q&A section. So, I hope you will not miss this opportunity to explore the new release of Oracle GoldenGate 11g and see how it can deliver enterprise-class real-time data integration solutions. I look forward to a great webcast unveiling GoldenGate's new capabilities.

    Read the article

  • MySql for Visual Studio 1.0.2 GA has been released

    - by fernando
    MySQL for Visual Studio is a new product including all of the Visual Studio integration previously available as part of Connector/Net. The product is now released as GA and is appropriate for use in production environments. It is compatible with MySQL Server versions 5.0-5.7 and Visual Studio versions 2008-2012. It is now available as part of MySQL Installer for Windows (http://dev.mysql.com/tech-resources/articles/mysql-installer-for-windows.html).

    The 1.0 version of MySQL for Visual Studio brings the following new features:

        Workbench launching.
        MySQL Utilities launching.
        Table script generation.
        The functionality of the core libraries (ADO.NET, EF, and the ASP.NET providers) is available as the separate download Connector/NET 6.7.

    Features available from previous versions:

        Server Explorer connections
        Design-time support
        Entity Framework designer (Database First & Model First)
        Stored routines debugger
        IntelliSense
        ASP.NET Website Configuration Tool

    Workbench Launching: a context menu for connections in Server Explorer allows launching Workbench (if Workbench is installed).

    MySQL Utilities Launching: a context menu for connections in Server Explorer allows launching a prompt for MySQL Utilities (if MySQL Utilities is installed).

    Table Script Generation: a context menu for tables is available in Server Explorer to generate the script for a table.

    The full list of bug fixes for MySQL for Visual Studio 1.0 follows:

    1.0.2
        - Fix for documentation not found (Oracle bug #6915712).
        - Fix for IntelliSense completion: views are now displayed together with tables when IntelliSense is invoked (Oracle bug #16881451).
        - Fix for parser syntax: the parser now supports the clause ALTER TABLE table_name RENAME {INDEX|KEY} old_index_name TO new_index_name introduced in MySQL 5.7 (Oracle bug #16881481).
        - Fix for debugging a routine producing an error when the binary log is enabled (Oracle bug #16941181).
        - Fix for WorkItem 552: MySQL for Visual Studio installer fails when installing against VS2008.
        - Fix for bug: VS plugin installer is not working (Oracle bug #16973339).
        - Fix for bug: release notes file has no notes (Oracle bug #16973326).

    1.0.1
        - Fix for "README" file and "Release Notes" file referring to Connector 6.6.
        - Fix for parser failing to recognize a complex view (Oracle bug #16815427).
        - Fix for altering a table's primary key in the designer not working (Oracle bug #16866053).
        - Fix for web configuration tool not shown in MySQL for Visual Studio (Oracle bug #16902696).
        - Fix for Model First not supported using MySQL for Visual Studio (Oracle bug #16902743).
        - Fix so MySQL for VS is not installed with Connector/Net versions < 6.7 (Oracle bug #16902774).
        - Fix to resolve assembly dependencies between MySql.Data (Connector/Net version) and MySql.Data (WI #460).
        - Fix for an exception related to resources (Oracle bug #16903039).

    1.0.0
        - Added a new option on the connection node in the Server Explorer window in Visual Studio to open the MySQL Utilities console window when Workbench is installed.
        - Added a new option on the connection node in the Server Explorer window in Visual Studio to open the SQL Editor window using the same connection when Workbench is installed.
        - Implemented a menu option to generate a table script from the Server Explorer context menu (Tracker task 433).
        - Fix for bug: if using the repair option at the installer, VS2010 doesn't allow connecting to the DB (Oracle bug #16238242).
        - Fix for bug "Can't change the name for a view in view editor" (Oracle bug #13805346).
        - Fix for debugger cannot debug stored procedures with a labeled main BEGIN and DECLARE statements included (Oracle bug #16002371).
        - Fix for "Cannot change the name for a Foreign Key in table designer" (Oracle bug #16238068).
        - Fix for error when trying to set a primary key for a column with the same name as a MySQL keyword (like INT) in the table designer (Oracle bug #16238102).
        - Fix for databases not displayed in the connect dialog for a MySQL script when correcting credentials after entering a bad password (Oracle bug #13805337).
        - Fix for debugger failing to debug a stored routine on a MySQL server hosted on Linux without the lower_case_table_names option enabled (MySQL bug #69065, Oracle bug #16770384).
        - Fix for debugger issue: values in the watch tab should not be modifiable (Oracle bug #14545448).
        - Fix for Visual Studio MySQL editor colors cannot be customized (Oracle bug #16453324, MySQL bug #67994).

    The documentation is available as part of Connector/NET at http://dev.mysql.com/doc/refman/5.7/en/connector-net.html

    Enjoy and thanks for the support!

    -- Fernando Gonzalez Sanchez | Software Engineer | Oracle MySQL Windows Experience Team, Connector/NET
    Guadalajara | Jalisco | Mexico

    Read the article

  • RightNow CX @ OpenWorld: What to Experience

    - by Tony Berk
    We want to welcome our RightNow CX customers to Oracle OpenWorld next week. Get ready for a great week and a whole new experience! For a high level overview of what is going on during the week, please review these previous posts: Is There a Cloud Over OpenWorld? and What to "CRM" in San Francisco? CRM Highlights for OpenWorld '12. Also, don't forget you can add on the Customer Experience Summit @ OpenWorld to make your week even more complete and get involved with the Experience Revolution! Below is a highlight of only some of the RightNow related sessions at OpenWorld. Please use OpenWorld Schedule Builder or check the OpenWorld Content Catalog for all of the session details and any time or location changes. Tip: Pre-enrolled session registrants via Schedule Builder are allowed into the session rooms before anyone else, so Schedule Builder will guarantee you a seat. Many of the sessions below will likely be at capacity. No better way to start off than hearing where Oracle RightNow is going! Oracle RightNow CX Cloud Service Vision and Roadmap (CON9764) - Oct 1, 10:45 AM. Oracle RightNow CX Cloud Service combines Web, social, and contact center experiences for a unified, cross-channel service solution in the cloud, enabling organizations to increase sales and adoption, build trust, strengthen relationships, and reduce costs and effort. Come to this session to hear from David Vap and his team of Oracle experts about where the product is going and how Oracle is committed to accelerating the pace of innovation and value to its customers. Interested in the Cloud and want to know why some leading CIOs are moving to the cloud? You can hear first hand from CIOs from Emerson, Intuit and Overstock.com: CIOs and Governance in the Cloud (CON9767) - Oct 3, 11:45 AM.   And of course there are a number of sessions that drill down into more specific areas. Here are just a few: Deliver Outstanding Customer Experiences: Oracle RightNow Dynamic Agent Desktop Cloud Service (CON9771) - Oct 1, 4:45 PM. This session covers how companies have delivered exceptional customer experiences and how the Oracle RightNow Dynamic Agent Desktop Cloud Service roadmap will evolve in the future. The Oracle RightNow Contact Center Experience suite includes incident management, knowledge, guided processes, and other service capabilities to unify the customer experience across channels. Come learn about the powerful tools that enable even your junior agents to consistently provide outstanding service across all customer interaction channels. Self-Service in the Age of Data Intimacy (CON11516) - Oct 1, 3:15. Even though businesses are generating more and more data around their relationships and interactions with customers, very little of the information a business generates ends up available to the contact center and even less is made available to the online service experience. The generic one-size-fits-all approach that typifies most online service experiences ultimately fails to address all user needs, and that failure ultimately leads to the continued use of high-cost agent-assisted channels for low-value interactions. This session introduces Oracle RightNow Web Experience’s Virtual Assistant and discusses how you can deliver rich, engaging, highly personalized experiences with the quality of agent-assisted service at a much lower cost. Improve Chat Experiences: Best Practices for Chat Pilots and Deployments (CON11517) - Oct 1, 4:45 PM. 
Today’s organizations are challenged to grow revenue and retain customers with fewer resources, and many have turned to chat as an approach to improving the customer experience, increasing sales conversions, and reducing costs at the same time. From setting goals and metrics and training staff to customizing and tuning the solution, this session provides best practices and lessons learned from a broad set of implementations to help you get the most out of your chat solution. Differentiated Experience with Web Service (CON9770) - Oct 2, 1:15 PM. A reputation for excellent customer service can differentiate your brand and drive revenue. In this session, learn how to develop that reputation by transforming your online self-service into a highly interactive, branded customer experience. See live examples of how Oracle RightNow Web Experience has helped customers deliver on their Web service strategies. Unifying the Agent’s Engagement Console (CON11518) - Oct 2, 1:15 PM. Does your customer experience suffer because your agents are toggling between multiple tools? Do your agent productivity and morale suffer as well? Come to this session to learn how Oracle RightNow CX Cloud Service seamlessly unifies these disparate systems into a single engagement console. Regardless of channel, powerful adaptive tools consistently guide agents across contextually aware personalized workflows. Great agent experiences drive great customer experiences. Oracle RightNow CX Cloud Service and the Oracle Customer Experience Portfolio (CON9775) - Oct 3, 10:15 AM. This session covers how Oracle’s integrated suite of customer experience (CX) products fits with the Oracle CX portfolio of products (Oracle Fusion Customer Relationship Management; the Oracle ATG, Oracle Endeca, and Oracle Knowledge product families; and Oracle Business Intelligence) to increase revenues, strengthen customer relationships, and reduce costs across the entire end-to-end customer lifecycle for companies that sell to consumers and those that sell to businesses. Greater Insights from Customer Engagements (CON9773) Oct 4, 12:45 PM. In this session, hear how to leverage service interaction insights, customer feedback, and segmented service engagements to improve the customer experience. Discover how customers, such as J&P Cycles, learn and take action based on business insights gained through their customer engagements. Again, these are just some of the sessions, so check out the Content Catalog for details on Knowledge Management, Customization, Integration and more in the Oracle Develop stream for Customer Experience. Be sure to visit the Oracle DEMOgrounds in the Moscone West Exhibit Hall. If this is your first OpenWorld, welcome! If you are returning, hi again and enjoy!

    Read the article

  • Oracle EE 11.2g: how to generate fresh new redo logs

    - by Aikanaro
    Hi. In the company I work for we are heavy users of VMware machines; almost all our projects are developed inside a virtual environment up to the point where we have to deploy them onto a production system. While in development, some colleagues of mine deleted the redo log files for Oracle in the hope of gaining some free space. Now they are unable to start the database instance. Is there a way of generating fresh new redo logs so that the instance can be started? This is urgent, and even though I'm currently googling for an answer I have yet to find it. Thanks in advance.
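
    For reference, the usual starting point when inactive redo log groups have been deleted is to clear and recreate them with the database mounted - a sketch, assuming the lost groups are INACTIVE (the group number is illustrative):

        -- Mount the database without opening it
        STARTUP MOUNT;

        -- Identify the damaged groups and their status
        SELECT group#, status FROM v$log;

        -- Recreate the files for a lost inactive group (repeat per group)
        ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;

        ALTER DATABASE OPEN;

    If the CURRENT redo log group is among the lost files, incomplete recovery (RECOVER DATABASE UNTIL CANCEL followed by ALTER DATABASE OPEN RESETLOGS) may be required, with some data loss.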

    Read the article

  • Oracle Advanced Security Options is Blank

    - by mak4pi
    I just installed Oracle DB 10gR2 with Oracle Advanced Security, but cannot see the algorithms.

        [user@db-1] adapters

        Installed Oracle Net transport protocols are:
            IPC
            BEQ
            TCP/IP
            SSL
            RAW

        Installed Oracle Net naming methods are:
            Local Naming (tnsnames.ora)
            Oracle Directory Naming
            Oracle Host Naming
            Oracle Names Server Naming

        Installed Oracle Advanced Security options are:

    Where are all the algorithms for Oracle Advanced Security options, please? I checked the $ORACLE_HOME/bin/adapters file and it's looking for naea256i, naemd5i, etc. in the naetab.so file, but none of these are listed there. What's wrong with the naetab.so file? Thanks.
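
    For context (not a confirmed fix for the missing naetab.so entries): once the Advanced Security algorithms are present, they are enabled through sqlnet.ora; a typical fragment looks like this, with illustrative parameter values:

        SQLNET.ENCRYPTION_SERVER = REQUIRED
        SQLNET.ENCRYPTION_TYPES_SERVER = (AES256, AES192, AES128)
        SQLNET.CRYPTO_CHECKSUM_SERVER = REQUIRED
        SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER = (SHA1, MD5)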

    Read the article

  • List of resources for database continuous integration

    - by David Atkinson
    Because there is so little information on database continuous integration out in the wild, I've taken it upon myself to aggregate as much as possible and post the links to this blog. Because it's my area of expertise, this will focus on SQL Server and Red Gate tooling, although I am keen to include any quality articles that discuss the topic in general terms. Please let me know if you find a resource that I haven't listed!

    General database continuous integration
        · What is Database Continuous Integration? (David Atkinson)
        · Continuous Integration for SQL Server Databases (Troy Hunt)
        · Installing NAnt to drive database continuous integration (David Atkinson)
        · Continuous Integration Tip #3 - Version your Databases as part of your automated build (Doug Rathbone)
        · How the "migrations" approach makes database continuous integration possible (David Atkinson)
        · Continuous Integration for the Database (Keith Bloom)

    Setting up continuous integration with Red Gate tools
        · Continuous integration for databases using Red Gate tools - A technical overview (white paper, Roger Hart and David Atkinson)
        · Continuous integration for databases using Red Gate SQL tools (product pages)
        · Database continuous integration step by step (David Atkinson)
        · Database Continuous Integration with Red Gate Tools (video, David Atkinson)
        · Database schema synchronisation with RedGate (Vincent Brouillet)
        · Database continuous integration and deployment with Red Gate tools (David Duffett)
        · Automated database releases with TeamCity and Red Gate (Troy Hunt)
        · How to build a database from source control (David Atkinson)
        · Continuous Integration Automated Database Update Process (Lance Lyons)

    Other
        · Evolutionary Database Design (Martin Fowler)
        · Recipes for Continuous Database Integration: Evolutionary Database Development (book, Pramod J Sadalage)
        · Recipes for Continuous Database Integration (book, Pramod Sadalage)
        · The Red Gate Guide to SQL Server Team-based Development (book, Phil Factor, Grant Fritchey, Alex Kuznetsov, Mladen Prajdic)
        · Using SQL Test Database Unit Testing with TeamCity Continuous Integration (Dave Green)
        · Continuous Database Integration (covers MySQL; Pearson Education)

    Technorati Tags: SQL Server, Continuous Integration

    Read the article

  • SQL SERVER – Repair a SQL Server Database Using a Transaction Log Explorer

    - by Pinal Dave
    In this blog, I'll show how to use ApexSQL Log, a SQL Server transaction log viewer. You can download it for free, install it, and play along. But first, let's describe some disaster recovery scenarios where it's useful.

    About SQL Server disaster recovery

    Along with database development and administration, you must work on a good recovery plan. Disasters do happen and no one's immune. What you can do is take all the actions needed to be ready for a disaster and go through it with minimal data loss and downtime. Besides creating a recovery plan, it's necessary to have a list of steps that will be executed when a disaster occurs, and to test them before a disaster. This way, you'll know that the plan is good and viable. Testing can also be used as training for all team members, so they can all understand and execute it when the time comes. It will show how much time is needed to have your servers fully functional again and how much data you can lose in a real-life situation. If these don't meet recovery-time and recovery-point objectives, the plan needs to be improved. Keep in mind that all major changes in environment configuration, business strategy, and recovery objectives require the recovery plan to be tested anew, as these changes will most probably require the plan itself to be changed and tweaked.

    What is a good SQL Server disaster recovery plan?

    A good SQL Server disaster recovery strategy starts with planning SQL Server database backups. An efficient strategy is to create a full database backup periodically. Between two successive full database backups, you can create differential database backups. It is essential to create transaction log backups regularly between full database backups. Keep in mind that transaction log backups can be created only on databases in the full recovery model. In other words, a simple but efficient backup strategy would be a full database backup every night and a transaction log backup every hour, or every 15 minutes. The frequency depends on how much data you can afford to lose and how busy the database is. Another option, instead of creating a full database backup every night, is to create a full database backup once a week (e.g. on Friday at midnight) and differential database backups every night until next Friday, when you will create a full database backup again. Once you create your SQL Server database backup strategy, schedule the backups. You can do that easily using SQL Server maintenance plans.

    Why are transaction logs important?

    Transaction log backups contain the transactions executed on a SQL Server database. They provide enough information to undo and redo the transactions and roll the database back or forward to a point in time. In SQL Server disaster recovery situations, transaction logs make it possible to repair a SQL Server database and bring it to the state before the disaster. Be aware that even with regular backups, there will be some data missing: the transactions made between the last transaction log backup and the time of the disaster. In some situations, to repair your SQL Server database it's not necessary to re-create the database from its last backup. The database might still be online, and all you need to do is roll back several transactions, such as a wrong update, insert, or delete. The restore-to-a-point-in-time feature is available in SQL Server, but for large databases it is very time-consuming, as SQL Server first restores a full database backup and then restores transaction log backups, one after another, up to the recovery point.
    During that time, the database is unavailable. This is where a SQL Server transaction log viewer can help. For optimal recovery, besides having a database in the full recovery model, it's important that you haven't manually truncated the online transaction log. This ensures that all transactions made after the last transaction log backup are still in the online transaction log. All you have to do is read and replay them.

    How to read a SQL Server transaction log?

    SQL Server doesn't provide an option to read transaction logs. There are several SQL Server commands and functions that read the content of a transaction log file (fn_dblog, fn_dump_dblog, and DBCC PAGE), but they are undocumented. They require T-SQL knowledge and return a large number of columns that are not easy to read and understand, sometimes in binary or hexadecimal format. Another challenge is reading UPDATE statements, as it's necessary to match them to values in the MDF file. When you have finally read the transactions executed, you have to create a script for them.

    How to easily repair a SQL database?

    The easiest solution is to use a transaction log reader that will not only read the transactions in the transaction log files, but also automatically create scripts for the transactions it reads. In the following example, I will show how to use ApexSQL Log to repair a SQL database after a crash. If a database has crashed and both MDF and LDF files are lost, you have to rely on the full database backup and all subsequent transaction log backups. In another scenario, the MDF file is lost, but the LDF file is available.

    First, restore the last full database backup on SQL Server using SQL Server Management Studio. I'll name it Restored_AW2014. Then, start ApexSQL Log. It will automatically detect all local servers. If not, click the icon to the right of the Server drop-down list, or just type in the SQL Server instance name. Select the Windows or SQL Server authentication type and select the Restored_AW2014 database from the database drop-down list. When all options are set, click Next. ApexSQL Log will show the online transaction log file. Now, click Add and add all transaction log backups created after the full database backup I used to restore the database. In case you don't have transaction log backups, but the LDF file hasn't been lost during the SQL Server disaster, add it using Add.

    To repair a SQL database to a point in time, ApexSQL Log needs to read and replay all the transactions in the transaction log backups (or the LDF file saved after the disaster). That's why I selected the Whole transaction log option in the Filter setup. ApexSQL Log offers a range of filters, which are useful when you need to read just specific transactions. You can filter transactions by the time of the transaction, operation type (e.g. to read only data inserts), table name, the SQL Server login that made the transaction, etc. In this scenario, to repair a SQL database, I'll check all filters and make sure that all transactions are included. In the Operations tab, select all schema operations (DDL). If you omit these, only the data changes will be read, so if there were any schema changes, such as a new function created or an existing table modified, they will be ignored and the database will not be properly repaired; the data repair for modified tables will fail. In the Tables tab, I'll make sure all tables are selected. I will uncheck the Show operations on dropped tables option to reduce the number of transactions. Click Next. ApexSQL Log offers three options.
    Select Open results in grid to get a user-friendly presentation of the transactions. As you can see, details are shown for every transaction, including the old and new values for updated columns, which are clearly highlighted. Now, select them all and then create a redo script by clicking the Create redo script icon in the menu. For a large number of transactions, and in a critical situation when acting fast is a must, I recommend using the Export results to file option. It will save some time, as the transactions will be scripted directly into a redo file, without being shown in the grid first. Select Generate reconstruction (REDO) script, change the output path if you want, and click Finish. After the redo T-SQL script is created, ApexSQL Log shows the redo script summary. The third option will create a command-line statement for a batch file that you can use to schedule execution, which is not really applicable when you repair a SQL database, but quite useful in daily auditing scenarios. To repair your SQL database, all you have to do is execute the generated redo script, using an integrated development environment tool such as SQL Server Management Studio or any other, against the restored database. You can find more information about how to read SQL Server transaction logs and repair a SQL database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered or restored, or transactions rolled back.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
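
    As a side note, the nightly-full-plus-frequent-log backup strategy described above can be expressed in plain T-SQL. This is a minimal sketch, assuming a database named YourDB in the full recovery model; the database name and paths are illustrative:

        -- Full database backup (e.g. scheduled every night)
        BACKUP DATABASE YourDB TO DISK = N'D:\Backup\YourDB_full.bak' WITH INIT;

        -- Optional differential backup between full backups
        BACKUP DATABASE YourDB TO DISK = N'D:\Backup\YourDB_diff.bak' WITH DIFFERENTIAL;

        -- Transaction log backup (e.g. scheduled every hour or every 15 minutes)
        BACKUP LOG YourDB TO DISK = N'D:\Backup\YourDB_log.trn';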

    Read the article

  • Representation of data in application versus database

    - by user1815201
    I'm going to make an application that will be given data to put in a database. The data will for the most part be the same, but the way it is formatted will vary a lot (it could be anything from text files to .xls to .doc). I'm not a very experienced developer, but I can see some potential issues and I want to minimize them. First off, I have decided to use the DAO pattern, so that I can easily support new file formats or files suddenly formatted in different ways. What I really wonder about, though, is how I should manage the data itself within my application. I'm thinking that the database DAO should have models representing each table of the database, with the same relations between them, to make the uploading process easy. But should the filesystem DAOs have to use the same models? I can imagine that when the database changes, the change will suddenly propagate throughout the entire system, all DAOs and models alike. And that is obviously a bad thing. I'm a little bit tired and out of time. Will update with whatever questions you have. Thanks!

    Read the article

  • Designing a Database Application with OOP

    - by Tim C
    I often develop SQL database applications using LINQ, and my methodology is to build model classes to represent each table; each table that needs inserting or updating gets a Save() method (which does either an InsertOnSubmit() or SubmitChanges(), depending on the state of the object). Often, when I need to represent a collection of records, I'll create a class that inherits from a List-like object of the atomic class, e.g.:

        public class CustomerCollection : CoreCollection<Customer> { }

    Recently, I was working on an application where end-users were experiencing slowness, where each of the objects needed to be saved to the database if they met certain criteria. My Save() method was slow, presumably because I was making all kinds of round-trips to the server and calling DataContext.SubmitChanges() after each atomic save. So, the code might have looked something like this:

        foreach(Customer c in customerCollection)
        {
            if(c.ShouldSave())
            {
                c.Save();
            }
        }

    I worked through multiple strategies to optimize, but ultimately settled on passing a big string of data to a SQL stored procedure, where the string has all the data that represents the records I was working with - it might look something like this:

        CustomerID:34567;CurrentAddress:23 3rd St;CustomerID:23456;CurrentAddress:123 4th St

    So, SQL Server parses the string, performs the logic to determine the appropriateness of the save, and then inserts, updates, or ignores. With C#/LINQ doing this work, I saved 5-10 records/s; when SQL does it, I get 100 records/s, so there is no denying the stored proc is more efficient. However, I hate the solution because it doesn't seem nearly as clean or safe. My real concern is that I don't have any better solutions that hold a candle to the performance of the stored proc solution. Am I doing something obviously wrong in how I'm thinking about designing database applications? Are there better ways of designing database applications?
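
    To make the trade-off concrete, here is a minimal sketch of what the stored-procedure side might look like. It assumes a dbo.Customers(CustomerID, CurrentAddress) table and the exact CustomerID:...;CurrentAddress:... payload format shown above; the procedure and table names are hypothetical, not the poster's actual code:

        CREATE PROCEDURE dbo.SaveCustomerBatch
            @payload NVARCHAR(MAX)
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @rows TABLE (CustomerID INT, CurrentAddress NVARCHAR(200));
            DECLARE @pos INT = 1, @next INT, @token NVARCHAR(400), @curId INT;

            -- Walk the semicolon-delimited key:value tokens
            WHILE @pos <= LEN(@payload)
            BEGIN
                SET @next = CHARINDEX(';', @payload, @pos);
                IF @next = 0 SET @next = LEN(@payload) + 1;
                SET @token = SUBSTRING(@payload, @pos, @next - @pos);

                IF @token LIKE 'CustomerID:%'
                    SET @curId = CAST(SUBSTRING(@token, 12, 50) AS INT);
                ELSE IF @token LIKE 'CurrentAddress:%'
                    INSERT INTO @rows (CustomerID, CurrentAddress)
                    VALUES (@curId, SUBSTRING(@token, 16, 200));

                SET @pos = @next + 1;
            END;

            -- One set-based upsert replaces thousands of per-row round trips
            MERGE dbo.Customers AS t
            USING (SELECT CustomerID, CurrentAddress FROM @rows) AS s
                ON t.CustomerID = s.CustomerID
            WHEN MATCHED THEN
                UPDATE SET t.CurrentAddress = s.CurrentAddress
            WHEN NOT MATCHED THEN
                INSERT (CustomerID, CurrentAddress)
                VALUES (s.CustomerID, s.CurrentAddress);
        END;

    The ShouldSave() check described in the question would live either in the USING query or before the INSERT into @rows; the speedup comes from doing the writes as one set-based statement rather than one round trip per record.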

    Read the article

  • Flashback Database

    - by Sebastian Solbach (DBA Community)
    Flashback Database is the Oracle Database feature that lets you rewind the database to a specific point in time, or more precisely to a specific System Change Number (SCN) - comparable to the rewind button on a cassette recorder or the back button on a CD player. While this approach will rarely be used on production systems, since rewinding discards all data created after the chosen point in time (unless it is exported beforehand), there are many possible uses for test and standby systems:

        Rewinding the system after a failed application upgrade
        An alternative point-in-time recovery (PITR) with subsequent roll forward (particularly suitable for standby systems)
        Test databases with a defined, reproducible starting point (e.g. for Real Application Testing)
        Database upgrade tests

    Some existing database features use Flashback Database implicitly:

        Snapshot Standby
        Reinstantiation of a standby (e.g. with Fast-Start Failover)

    Although this functionality is ideally suited to standby and test systems, there is a certain reluctance to use Flashback Database. One cause is often the fear of the additional load that writing the flashback logs generates, along with the extra disk space required. In practice, the load is usually relatively small (around 5%), and the additional space needed for the flashback logs can be estimated quite precisely. It is also often overlooked that, even without explicitly enabling flashback logging, it is possible to set a guaranteed restore point (GRP) and later reset the database to that restore point. Setting a guaranteed restore point works in 11gR2 while the database is running. How exactly this works, how it differs from generally enabling flashback logs, how to monitor Flashback Database, and what else needs to be considered is what this tip is about (a short sketch of the workflow follows below).
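
    As an illustration of the guaranteed restore point workflow described above - a sketch, with an illustrative restore point name:

        -- Create a guaranteed restore point (works without flashback logging enabled)
        CREATE RESTORE POINT before_app_upgrade GUARANTEE FLASHBACK DATABASE;

        -- ... run the application upgrade or tests ...

        -- Rewind: Flashback Database requires the database to be mounted
        SHUTDOWN IMMEDIATE;
        STARTUP MOUNT;
        FLASHBACK DATABASE TO RESTORE POINT before_app_upgrade;
        ALTER DATABASE OPEN RESETLOGS;

        -- Clean up so the flashback logs can be reclaimed
        DROP RESTORE POINT before_app_upgrade;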

    Read the article

  • SQL Server Database Settings

    - by rbishop
    For those using Data Relationship Management on Oracle DB this does not apply, but for those using Microsoft SQL Server it is highly recommended that you run with snapshot isolation mode. The Data Governance module will not function correctly without this mode enabled. All new Data Relationship Management repositories are created with this mode enabled by default. This mode makes SQL Server (2005+) behave more like Oracle DB, where readers simply see older versions of rows while a write is in progress, instead of readers being blocked by locks while a write takes place. Many common sources of deadlocks are eliminated. For example, if one user starts a 5-minute transaction updating half the rows in a table, without snapshot isolation everyone else reading the table will be blocked waiting. With snapshot isolation, they will see the rows as they were before the write transaction started. Conversely, if the readers had started first, the writer won't be stuck waiting for them to finish reading; the writes can begin immediately without affecting the current transactions. To make this change, make sure no one is using the target database (e.g. put it into single-user mode), then run these commands (a full sequence is sketched below):

        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON

    Please make sure you coordinate with your DBA team to ensure tempdb is appropriately set up to support snapshot isolation mode, as the extra row versions are stored in tempdb until the transactions are committed. Let me take this opportunity to very strongly recommend that you use solid-state storage for your databases, with appropriate iSCSI, Fibre Channel, or SAN bandwidth. The performance gains are significant and there is no excuse for not using 100% solid-state storage in 2013. Actually, unless you need to store petabytes of archival data, there is no excuse for using hard drives in any systems, whether laptops, desktops, application servers, or database servers. The productivity benefits alone are tremendous, not to mention power consumption, heat, etc.
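
    A sketch of that full sequence, including the single-user toggle ([DB] stands in for the repository database name):

        ALTER DATABASE [DB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
        ALTER DATABASE [DB] SET ALLOW_SNAPSHOT_ISOLATION ON;
        ALTER DATABASE [DB] SET READ_COMMITTED_SNAPSHOT ON;
        ALTER DATABASE [DB] SET MULTI_USER;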

    Read the article

  • Service Catalogs for Database as a Service

    - by B R Clouse
    At the end of last month, I had the opportunity to present a speaking session at Oracle OpenWorld: Database as a Service: Creating a Database Cloud Service Catalog. The session was well attended, which would have surprised me several months ago when I started researching this topic. At that time, I thought of service catalogs as something trivial that could be explained in a few simple slides. But while looking at all the different options and approaches available, I came to learn that designing a succinct and effective catalog is not a trivial task, and mistakes can lead to confusion and unintended side effects. And when the room filled up, my new point of view was confirmed. In case you missed the session, or were able to attend but would like more details, I've posted a white paper that covers the topics from the session, and more. We start with an overview of the components of a service catalog, and then look at several customer case studies of service catalogs for DBaaS. Synthesizing those examples, we summarize the main options for defining the service categories and their levels. We end with a template for defining Bronze | Silver | Gold service tiers for Oracle Database Services. The paper is now available here - watch for updates as we work to expand some sections and incorporate readers' feedback (hint - that includes your feedback). Visit our OTN page for additional Database Cloud collateral.

    Read the article

  • The new version of Enterprise Manager 12c, Release 4!

    - by grantunez-Oracle
    On June 3, 2014, the new version of Enterprise Manager was announced, and this release contains new functionality that really makes you passionate about the product. I won't discuss all the new features here, just a few of the ones of most interest for the database.

    Advanced Threshold Management

    This feature gives you greater flexibility in the thresholds for all of your targets:

        Time-based thresholds: automatically adjust static thresholds based on changes in your environment's workload. For example, the workload your environment has on weekends is not the same as during a month-end close.
        Adaptive thresholds: thresholds that are calculated automatically to alert you if a target, be it a database or an Exadata system, deviates from the expected behavior of its normal working environment.

    Interface for tracking your scheduled jobs

    Previously, if you had several scheduled jobs, you had to go into each of your targets to check how they were progressing. This new version adds an interface that lets you see all the scheduled jobs in your environment, reducing the number of clicks needed to reach them. It also introduces the ability to export and import your scheduled jobs through emcli:

        emcli export_jobs
        emcli import_jobs

    AWR Warehouse

    This new version introduces a warehouse for AWR snapshots. By default, this warehouse or repository has infinite retention, meaning it will never delete AWR snapshots. The repository can be a database on 11.2.0.4 plus the latest PSU, or a newer version. The following AWR features can be used against this repository:

        Performance page
        AWR report
        ASH Analytics
        Compare an ADDM period
        Compare a time period

    In-Memory Store Central

    Enterprise Manager 12.1.0.4 comes ready for the new 12c version of the database, where we will be able to see a heat map of the objects in the in-memory store. Here we can see the compression applied to the in-memory segments. We will also have an advisor to guide the decision process of which segments to define in our in-memory store.

    More information:

        Enterprise Manager
        Enterprise Manager Cloud Control Documentation
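
    As a point of reference, the in-memory attributes that this heat map and advisor work with are set on segments with SQL like the following - a sketch; the table name is illustrative, and it assumes the Database In-Memory option is licensed and INMEMORY_SIZE is configured:

        -- Place a table in the in-memory column store with a chosen compression level
        ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH;

        -- Remove it again
        ALTER TABLE sales NO INMEMORY;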

    Read the article

  • Fast Data: Go Big. Go Fast.

    - by Dain C. Hansen
    For those of you who may have missed it, today's second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci, from EMC, outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes, as well as folks just trying to get a better haircut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle's Cloud and, more specifically, Oracle's expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle's Cloud applications. What this means is that Oracle's Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It's an important element of our strategy at Oracle: whether your requirements are for private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any vendor. If this wasn't enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi took an important step in defining this term, explaining that he believes it's a convergence of four essential technology elements: event processing for event filtering and business rules, with Oracle Event Processing; data transformation and loading, with Oracle Data Integrator; real-time replication and integration, with Oracle GoldenGate; and analytics and data discovery, with Oracle Business Intelligence. Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results.  Fast data processing (and especially real-time) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to get even faster with the recent 11gR2 release, which brings even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we're seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we're seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using event processing and big data together. If you missed any of these sessions and keynotes, not to worry: on-demand versions are available on the Oracle OpenWorld website. You can also check out our upcoming webcast, where we will outline some of these new breakthroughs in data integration technologies for Big Data, Cloud, and real-time in more detail.

    Read the article

  • Try the Oracle Database Appliance Manager Configurator - For Fun!

    - by pwstephe-Oracle
    If you would like to get a first-hand glimpse of how easy it is to configure an ODA, even if you don't have access to one, you can download the Appliance Manager Configurator from the Oracle Technology Network and run it standalone on your PC or Linux/UNIX workstation. The configurator is packaged in a zip file that contains the complete Java environment to run standalone. Once the package is downloaded and unzipped, it's simply a matter of launching it using the config command or shell script, depending on your runtime environment. Oracle Appliance Manager Configurator is a Java-based tool that enables you to input your deployment plan and validate your network settings before an actual deployment, or you can just preview and experiment with it. Simply download and run the configurator on a local client system, which can be a Windows, Linux, or UNIX system (for Windows, launch the batch file config.bat; for Linux/UNIX environments, run ./config.sh). You will be presented with the very same dialogs and options used to configure a production ODA, but on your workstation. At the end of a configurator session, you may save your deployment plan in a configuration file. If you were actually ready to deploy, you could copy this configuration file to a real ODA, where the online Oracle Appliance Manager Configurator would use its contents to deploy your plan in production. You may also print the file's contents and use the printout as a checklist for setting up your production external network configuration. Be sure to use the actual production network addresses you intend to use, as this will only work correctly if your client system is connected to the same network that will be used for the ODA (this step is not necessary if you are just previewing the Configurator). This is a great way to get an introductory look at the simple and intuitive Database Appliance configuration interface and the steps to configure a system.
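
    As a quick illustration, a standalone session on a Linux workstation could look like the sketch below. The archive name and target directory are assumptions; use whatever file OTN actually serves you.

        # Minimal sketch, assuming the configurator zip has already been downloaded from OTN.
        unzip oda_configurator.zip -d ~/oda-config   # archive name is an assumption
        cd ~/oda-config
        ./config.sh        # on Windows, run config.bat instead
        # Walk through the dialogs, then save the deployment plan as a
        # configuration file that a real ODA could later consume.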

    Read the article

  • Can I use imp/exp tools to migrate a database from Oracle 9 to Oracle 10?

    - by Karol Kolenda
    I'm a subcontractor and my client wants to upgrade an Oracle database from 9 to 10. Another vendor is going to perform the upgrade process, and I was asked to create whatever backup I need before the upgrade and then recreate the environment in Oracle 10. All my data is stored in a separate database in a single schema. No fancy relations, scripts, or anything like that (the actual app supports different databases: Oracle, SQL Server, Postgres, so we want to avoid any DB-specific code). I was hoping to use imp/exp, but I'm not sure whether imp/exp are backward compatible (exp from 9i and imp into 10g)? If there is a better/recommended way of dealing with a similar situation, I'll be grateful for any advice.
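
    For what it's worth, the classic pattern here is to export with the exp utility of the source (9i) database and import with the imp utility of the target (10g). A minimal sketch follows; the schema name, password, TNS aliases, and file names are all placeholders.

        # Minimal sketch, assuming a single schema called APPOWNER; all names are placeholders.
        # On the Oracle 9i side: export the schema with the 9i exp utility.
        exp userid=appowner/secret@orcl9 owner=APPOWNER file=appowner.dmp log=exp_appowner.log
        # On the Oracle 10g side: import the dump with the 10g imp utility.
        imp userid=appowner/secret@orcl10 fromuser=APPOWNER touser=APPOWNER file=appowner.dmp log=imp_appowner.log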

    Read the article

  • Oracle - Are there any effects of not having a primary key on a table?

    - by Sathya
    We use sequence numbers for primary keys on our tables. There are some tables where we don't really use the primary key for any querying purpose, but we have indexes on other columns. These are non-unique indexes, and the queries use these non-primary-key columns in their WHERE conditions, so I don't really see any benefit of having a primary key on such tables. My experience with SQL Server 2000 was that it would only replicate tables that had a primary key. I am using Oracle 10gR2, and I would like to know if there are any such side effects of having tables that don't have a primary key.
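
    Not an answer to the replication question, but if you want to take stock of which tables currently lack a primary key, the standard dictionary views make that easy. A minimal sketch; the credentials and TNS alias are placeholders:

        # Minimal sketch: list tables in the current schema that have no primary key.
        echo "SELECT table_name FROM user_tables
              WHERE table_name NOT IN (SELECT table_name FROM user_constraints
                                       WHERE constraint_type = 'P');" |
          sqlplus -s appowner/secret@orcl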

    Read the article

  • WebSphere/Oracle 11 - much more heap usage than with Oracle 10

    - by swalkner
    Hi all, while testing our application with Oracle 11 (previously we had Oracle 10), we saw that our server uses much more heap space. It seems to have something to do with T4CConnection; there are 500 T4CConnection objects allocated. Someone told me that Oracle 11 uses SoftReferences to keep the connection pool, but we don't need that. Is that correct? Could that be the cause of the increased heap usage? If yes, how can we avoid connection pooling? Thanks a lot!!
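
    As a first diagnostic step (not from the original question), a class histogram of the running JVM can confirm whether T4CConnection instances really dominate the heap. A minimal sketch, assuming a JDK is available on the server:

        # Minimal sketch: count live T4CConnection instances in the WebSphere JVM.
        # <pid> is the JVM process id (e.g. from `jps -l`); jmap ships with the JDK.
        jmap -histo <pid> | grep -i T4CConnection
        # If the count stays high, look at the data source / driver connection
        # caching settings rather than the driver classes themselves (assumption:
        # verify against your JDBC driver's documentation).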

    Read the article

< Previous Page | 86 87 88 89 90 91 92 93 94 95 96 97  | Next Page >