Search Results

Search found 41329 results on 1654 pages for 'oracle database vault'.


  • ASP.NET Website Administration Tool: Unable to connect to SQL Server database

    - by MedicineMan
    I am trying to get authentication and authorization working with my ASP MVC project. I've run the aspnet_regsql.exe tool without any problem and see the aspnetdb database on my server (using the Management Studio tool). My connection string in my web.config is: <connectionStrings> <add name="ApplicationServices" connectionString="data source=MYSERVERNAME;Integrated Security=SSPI;AttachDBFilename=|DataDirectory|aspnetdb.mdf;User Instance=true" providerName="System.Data.SqlClient" /> The error I get is: "There is a problem with your selected data store. This can be caused by an invalid server name or credentials, or by insufficient permission. It can also be caused by the role manager feature not being enabled. Click the button below to be redirected to a page where you can choose a new data store. The following message may help in diagnosing the problem: Unable to connect to SQL Server database." In the past, I have had trouble connecting to my database because I've needed to add users. Do I have to do something similar here?
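    One thing worth checking first: the connection string above mixes a named server (data source=MYSERVERNAME) with AttachDBFilename / User Instance=true, which targets a local SQL Express user instance rather than the server-based aspnetdb; it generally needs to be one or the other. If the server-based aspnetdb is the goal, the site's Windows identity also needs access to it. A hedged T-SQL sketch (the account name is hypothetical; the aspnet_* roles are created by aspnet_regsql.exe):

        -- Create a login for the identity the site runs as, map it into
        -- aspnetdb, and grant it the membership/role-provider roles.
        CREATE LOGIN [MYDOMAIN\MyAppPoolAccount] FROM WINDOWS;
        GO
        USE aspnetdb;
        GO
        CREATE USER [MYDOMAIN\MyAppPoolAccount]
            FOR LOGIN [MYDOMAIN\MyAppPoolAccount];
        EXEC sp_addrolemember 'aspnet_Membership_FullAccess',
                              'MYDOMAIN\MyAppPoolAccount';
        EXEC sp_addrolemember 'aspnet_Roles_FullAccess',
                              'MYDOMAIN\MyAppPoolAccount';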

    Read the article

  • Firebird database corruption causes

    - by Rytis
    I am running several different Firebird versions (2.0, 2.1) on multiple entry-level Windows-based servers with wildly varying hardware. The only thing they have in common is that they run the same home-built application with the same database structure. Lately I've been seeing massive slowdowns on multiple servers. It turns out that the database gets corrupted, so each time it breaks, I get to mend, back up, and restore the database, and then all is fine for some time (1-2 weeks), and then it repeats once again. Thankfully, I haven't seen any data loss or damage... yet. The thing is that every such downtime results in lost productivity, and often quite some driving for me, as some of the databases are in remote locations. I've been trying to find out what's causing the corruption, but I haven't been able to. The fact that it happens on different hardware hints that it should not be a hardware-based problem. If we rule out hardware issues, I have a bad feeling that it's a bug in Firebird, as I'm not doing anything fancy via SQL. Do you have any idea how to find out exactly what's causing the corruption and hopefully fix the problem?
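    One classic culprit worth ruling out on Windows before blaming Firebird itself is running with forced (synchronous) writes disabled, which can corrupt a database on power loss or a hard server stop. A hedged check, assuming Firebird 2.1's monitoring tables (they do not exist in 2.0):

        -- Firebird 2.1+: MON$FORCED_WRITES = 0 means asynchronous writes,
        -- a well-known corruption risk on Windows servers.
        SELECT MON$DATABASE_NAME, MON$FORCED_WRITES, MON$PAGE_BUFFERS
        FROM MON$DATABASE;

    On the command line, gfix -v -full <database> validates the file and reports structural errors, and gfix -write sync re-enables forced writes; both are standard Firebird tools, so this is a low-risk first diagnostic.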

    Read the article

  • Oracle Partner Network Specialized

    - by luca.maghernino(at)oracle.com
    Specialization events. The list price for the training is EUR 2,700 per attendee. For our partners joining this initiative, the cost is EUR 700 per attendee. Each session is limited to a maximum of 15 participants. To enroll, click on the date you are interested in:
    - D64292GC10: OPN Oracle BI EE 10.1.3 Implementation Boot Camp Ed 1 (5 days), February 28, Milan
    - D50102GC10: Oracle Database 11g: Administration Workshop Ed 2 PRV, March 21, Empoli
    - D64735GC10: OPN Oracle ECM 10g R3 Implementation Boot Camp Ed 1 PRV (3 days), March 28, Milan
    - D50317GC20: Oracle Database 11g: Performance Tuning Ed 1 PRV (5 days), April 4, Milan
    - D53946GC10: Oracle SOA Suite 11g: Build Composite Applications (5 days), April 18, Milan
    - D50081GC20: Oracle Database 11g: New Features for Administrators DBA Release 2 (5 days)*, May 9, Milan
    *Oracle Database 11g: New Features for Administrators DBA Release 2: this course is aimed at database administrators who hold the Oracle Certified Professional 10g certification and want to upgrade to Oracle Certified Professional 11g; it prepares for exam 1Z0_050, Oracle Database 11g: New Features for Administrators. It also suits database administrators with a good knowledge of release 10g who want to bring their skills up to release 11g.

    Read the article

  • Sun Ray Hardware Last Order Dates & Extension of Premier Support for Desktop Virtualization Software

    - by Adam Hawley
    In light of the recent announcement to end new feature development for Oracle Virtual Desktop Infrastructure Software (VDI), Oracle Sun Ray Software (SRS), Oracle Virtual Desktop Client (OVDC) Software, and Oracle Sun Ray Client hardware (3, 3i, and 3 Plus), there have been questions and concerns regarding what this means for customers with new or existing deployments. The following updates clarify some of these commonly asked questions.
    Extension of Premier Support for Software: Though there will be no new feature additions to these products, customers will have access to maintenance update releases for Oracle Virtual Desktop Infrastructure and Sun Ray Software, including Oracle Virtual Desktop Client and Sun Ray Operating Software (SROS), until Premier Support ends. To ensure that customer investments in these products are protected, Oracle Premier Support for them has been extended by 3 years, to the following dates:
    - Sun Ray Software: November 2017
    - Oracle Virtual Desktop Infrastructure: March 2017
    Note that OVDC support is also extended to the above dates, since OVDC is licensed by default as part of the SRS and VDI products. As a reminder, this only affects the products listed above. Oracle Secure Global Desktop and Oracle VM VirtualBox will continue to be enhanced with new features from time to time and, as a result, are not affected by the changes detailed in this message. The extension of support means that customers under a support contract will still be able to file service requests through Oracle Support, and Oracle will continue to provide the utmost level of support to our customers, as expected, until the published Premier Support end date. Following the end of Premier Support, Sustaining Support continues for an indefinite period of time.
    Sun Ray 3 Series Clients, Last Order Dates: Customers can continue to purchase Sun Ray Client devices until the following last order dates:
    - Sun Ray 3 Plus (TC3-P0Z-00; TC3-PTZ-00 TAA): last order September 13, 2013; last ship February 28, 2014
    - Sun Ray 3 Client (TC3-00Z-00): last order February 28, 2014; last ship August 31, 2014
    - Sun Ray 3i Client (TC3-I0Z-00): last order February 28, 2014; last ship August 31, 2014
    - Payflex Smart Cards (X1403A-N, X1404A-N): last order February 28, 2014; last ship August 31, 2014
    Note the difference in the Last Order Date for the Sun Ray 3 Plus (September 13, 2013) compared to the other products, which have a Last Order Date of February 28, 2014. The rapidly approaching date for the Sun Ray 3 Plus is due to a supplier phasing out production of a key component of the 3 Plus. Given September 13 is unfortunately quite soon, we strongly encourage you to place your last-time buy as soon as possible to maximize Oracle's ability to fulfill your order. Keep in mind you can schedule shipments to be delivered as late as the end of February 2014, but the last day to order is September 13, 2013. Customers wishing to purchase other models (Sun Ray 3 Clients and/or Sun Ray 3i Clients) have additional time, until February 28, 2014, to assess their needs and to allow fulfillment of last-time orders. Please note that availability of supply cannot be absolutely guaranteed up to the last order dates, and we strongly recommend placing last-time buys as early as possible. Warranty replacements for Sun Ray Client hardware for customers covered by Oracle Hardware Systems Support contracts will be available beyond the last order dates, per Oracle's policy found on Oracle.com here.
    Per that policy, Oracle intends to provide replacement hardware for up to 5 years beyond the last ship date, but hardware may not be available beyond that 5-year period for reasons beyond Oracle's control. In any case, by design, Sun Ray Clients have an extremely long lifespan and mean time between failures (MTBF), much longer than PCs, and over the years we have continued to see first- and second-generation Sun Rays still in daily use. This is no different for the Sun Ray 3, 3i, and 3 Plus. Because of this, and in addition to Oracle's continued support for SRS, VDI, and SROS, Sun Ray and Oracle VDI deployments can continue to expand and exist as a viable solution for some time in the future.
    Continued Availability of Product Licenses and Support: Oracle will continue to offer all existing software licenses, and software and hardware support, including:
    - Product licenses and Premier Support for Sun Ray Software and Oracle Virtual Desktop Infrastructure
    - Premier Support for Operating Systems (for Sun Ray Operating Software maintenance upgrades/support)
    - Premier Support for Systems (for Sun Ray Operating Software maintenance upgrades/support and hardware warranty)
    - Support renewals
    For More Information: Please refer to the following documents for specific dates and policies associated with the support of these products:
    - Document 1478170.1: Oracle Desktop Virtualization Software and Hardware Lifetime Support Schedule
    - Document 1450710.1: Sun Ray Client Hardware Lifetime Schedule
    - Document 1568808.1: Support Policies for Discontinued Oracle Virtual Desktop Infrastructure, Sun Ray Software and Hardware, and Oracle Virtual Desktop Client Development
    For Sales Orders and Questions: Please contact your Oracle Sales Representative or Saurabh Vijay ([email protected])

    Read the article

  • creating tables on remote database

    - by raj
    I created a database link: create public database link REMOTEDB connect to REMOTEUSER identified by REMOTEPWD using 'REMOTEDB'; Then I tried to create a table on the remote db: create table MYTABLE@REMOTEDB (name varchar2(20)); It fails with: ORA-02021: DDL operations are not allowed on a remote database. Will this not work at any cost, or am I just missing some permission needed to create the table?
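    ORA-02021 means exactly what it says: DDL cannot be issued directly across a database link, regardless of privileges. A commonly used workaround, sketched here under the poster's names, is to run the DDL on the remote site, either through Oracle's DBMS_UTILITY package or through a small helper procedure created on the remote database:

        -- Option 1: Oracle's own helper, invoked through the link.
        BEGIN
          DBMS_UTILITY.EXEC_DDL_STATEMENT@REMOTEDB(
            'CREATE TABLE mytable (name VARCHAR2(20))');
        END;
        /

        -- Option 2: a helper procedure that lives on the remote database...
        CREATE OR REPLACE PROCEDURE run_ddl (p_sql IN VARCHAR2) AS
        BEGIN
          EXECUTE IMMEDIATE p_sql;
        END;
        /
        -- ...called from the local database through the link.
        BEGIN
          run_ddl@REMOTEDB('CREATE TABLE mytable (name VARCHAR2(20))');
        END;
        /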

    Read the article

  • Consolidation in a Database Cloud

    - by B R Clouse
    Consolidation of multiple databases onto a shared infrastructure is the next step after Standardization. The potential consolidation density is a function of the extent to which the infrastructure is shared. The three models provide increasing degrees of sharing:
    - Server: each database is deployed in a dedicated VM. Hardware is shared, but most of the software infrastructure is not. Standardization is often applied incompletely since operating environments can be moved as-is onto the shared platform. The potential for VM sprawl is an additional downside.
    - Database: multiple database instances are deployed on a shared software/hardware infrastructure. This model is very efficient and easily implemented with the features in the Oracle Database and supporting products. Many customers have moved to this model and achieved significant, measurable benefits.
    - Schema: multiple schemas are deployed within a single database instance. The most efficient model, it places constraints on the environment. Usually this model will be implemented only by customers deploying their own applications. (Note that a single deployment can combine Database and Schema consolidations.)
    Customer value: lower costs, better system utilization. In this phase of the maturity model, under-utilized hardware can be used to host more workloads, or retired and those workloads migrated to consolidation platforms. Customers benefit from higher utilization of the hardware resources, resulting in reduced data center floor space and lower power and cooling costs. And the OpEx savings from Standardization are multiplied, since there are fewer physical components (both hardware and software) to manage.
    Customer value: higher productivity. The OpEx benefits from Standardization are compounded since not only are there fewer types of things to manage, now there are fewer entities to manage. In this phase, customers discover that their IT staff has time to move away from "day-to-day" tasks and start investing in higher-value activities. Database users benefit from consolidating onto shared infrastructures by relieving themselves of the requirement to maintain their own dedicated servers. Also, if the shared infrastructure offers capabilities such as High Availability / Disaster Recovery, which are often beyond the budget and skillset of a standalone database environment, then moving to the consolidation platform can provide access to those capabilities, resulting in less downtime.
    Capabilities / Characteristics: In this phase, customers will typically deploy fixed-size clusters and consolidate on a cluster until that cluster is deemed "full," at which point a new cluster is built. Customers will define one or a few cluster architectures that are used wherever possible; occasionally there may be deployments which must be handled as exceptions. The "full" policy may be based on the number of databases deployed on the cluster, or on observed peak workload, etc. IT will own the provisioning of new databases on a cluster, making the decision of when and where to place new workloads. Resources may be managed dynamically, e.g., as a priority workload increases, it may be given more CPU and memory to handle the spike. Users will be charged at a fixed, relatively coarse level; or in some cases, no charging will be applied.
    Activities / Tasks: Oracle offers several tools to plan a successful consolidation. Real Application Testing (RAT) has a feature to help plan and validate database consolidations. Enterprise Manager 12c's Cloud Management Pack for Database includes a planning module. Looking ahead, customers should start planning for the Services phase by defining the Service Catalog that will be made available for database services.

    Read the article

  • What is the best way to test a database (MySQL specific)

    - by justjoe
    Right now I'm testing something in a database. It's a WordPress database: I have to write, delete, and do other operations on it. As you know, it has an auto-increment mechanism that always gives every new post the next highest available ID. Please consider that this database is a copy of a database in use; it has been written to before, so I need to make sure that when I finish my testing, it is the same as before. Right now, my only solution is making backups: when I finish a planned section of testing, I back up the database and start the next test on another copy of it. Fortunately, the database is a small one, so deleting, copying, and backing it up is easy. But I know this way of testing a database is only a partial solution: it forces me to create too many backup copies, and I don't know what I would do if the database were bigger. It would be a very long testing nightmare. So I wonder: is there any solution that works just like rollback, so it locks the database and keeps new entries as a kind of cache that I can either discard or write into the database? I use MySQL and phpMyAdmin to develop custom solutions. EDIT: How do you effectively test against a database when developing a PHP solution?
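    A hedged sketch of the rollback-style approach the poster is asking about: if the copied WordPress tables use (or are converted to) the InnoDB engine, a transaction can absorb the test writes and then discard them. The statements below are illustrative, using stock WordPress table and column names:

        -- Assumes InnoDB tables; MyISAM (the default in many older
        -- WordPress installs) ignores ROLLBACK, so convert first if needed:
        -- ALTER TABLE wp_posts ENGINE=InnoDB;
        START TRANSACTION;

        -- ... run the test operations (illustrative; real inserts may
        -- need more columns depending on the SQL mode) ...
        INSERT INTO wp_posts (post_title, post_status)
        VALUES ('test post', 'draft');
        DELETE FROM wp_posts WHERE post_title = 'test post';

        -- Discard everything the test wrote; the database returns to its
        -- pre-test state (note: auto-increment counters do not roll back).
        ROLLBACK;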

    Read the article

  • Need help choosing database server

    - by The Pretender
    Good day everyone. Recently I was given a task to develop an application to automate some aspects of stocks trading. While working on the initial architecture, a database dilemma emerged. What I need is a fast database engine which can process huge amounts of data coming in very fast. I'm fairly experienced in general programming, but I have never faced the task of developing a high-load database architecture. I developed a simple MSSQL database schema with several many-to-many relationships during one of my projects, but that's it. What I'm looking for is some advice on choosing the most suitable database engine, and some pointers to various manuals or books which describe high-load database development. Specifics of the project are as follows: OS: Windows NT family (Server 2008 / 7). Primary platform: .NET with C#. Database structure: one table to hold primary items and two or three tables with foreign keys to the first table to hold additional information. Database SELECT requirements: need super-fast selection by foreign keys and by a combination of foreign key and one of the columns (presumably DATETIME). Database INSERT requirements: the faster the better :) If there'll be significant performance gain, some parts can be written in C++ with managed interfaces to the rest of the system. So once again: given all that stuff I just typed, please give me some advice on what the best database for my project is. Links or references to some manuals and books on the subject are also greatly appreciated. EDIT: I'll need to insert 3-5 rows in 2 tables approximately once every 30-50 milliseconds, and I'll need to run SELECT queries with 0-2 WHERE clauses at a similar rate.
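    For scale: 3-5 rows per 30-50 ms works out to roughly 60-170 inserts per second, which is well within reach of any mainstream engine when the tables stay narrow and are indexed for the stated lookups. A hedged T-SQL sketch of the structure described above (all names hypothetical):

        -- One primary table plus a detail table keyed to it, with a
        -- composite index chosen so both lookup patterns are covered.
        CREATE TABLE Quote (
            QuoteId    INT IDENTITY(1,1) PRIMARY KEY,
            Symbol     VARCHAR(16) NOT NULL,
            CreatedAt  DATETIME    NOT NULL
        );

        CREATE TABLE QuoteDetail (
            DetailId   INT IDENTITY(1,1) PRIMARY KEY,
            QuoteId    INT      NOT NULL REFERENCES Quote(QuoteId),
            RecordedAt DATETIME NOT NULL,
            Price      DECIMAL(18,6) NOT NULL
        );

        -- Serves both WHERE QuoteId = ? and
        -- WHERE QuoteId = ? AND RecordedAt BETWEEN ? AND ?
        CREATE INDEX IX_QuoteDetail_Quote_Time
            ON QuoteDetail (QuoteId, RecordedAt);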

    Read the article

  • SQL SERVER – How to Recover SQL Database Data Deleted by Accident

    - by Pinal Dave
    In Repair a SQL Server database using a transaction log explorer, I showed how to use ApexSQL Log, a SQL Server transaction log viewer, to recover a SQL Server database after a disaster. In this blog, I’ll show you how to use another SQL Server disaster recovery tool from ApexSQL in a situation when data is accidentally deleted. You can download ApexSQL Recover here, install, and play along. With a good SQL Server disaster recovery strategy, data recovery is not a problem. You have a reliable full database backup with valid data, a full database backup and subsequent differential database backups, or a full database backup and a chain of transaction log backups. But not all situations are ideal. Here we’ll address some sub-optimal scenarios where you can still successfully recover data. If you have only a full database backup: This is the least optimal SQL Server disaster recovery strategy, as it doesn’t ensure minimal data loss. For example, data was deleted on Wednesday. Your last full database backup was created on Sunday, three days before the records were deleted. By using the full database backup created on Sunday, you will be able to recover the SQL database records that existed in the table on Sunday. If there were any records inserted into the table on Monday or Tuesday, they will be lost forever. The same goes for records modified in this period: this method will not bring back modified records, only the old records that existed on Sunday. If you restore this full database backup, all your changes (intentional and accidental) will be lost and the database will be reverted to the state it had on Sunday. What you have to do is compare the records that were in the table on Sunday to the records on Wednesday, create a synchronization script, and execute it against the Wednesday database. If you have a full database backup followed by differential database backups: Let’s say the situation is the same as in the example above, only you create a differential database backup every night. Use the full database backup created on Sunday and the last differential database backup (created on Tuesday). In this scenario, you will lose only the data inserted and updated after the differential backup created on Tuesday. If you have a full database backup and a chain of transaction log backups: This is the SQL Server disaster recovery strategy that provides minimal data loss. With a full chain of transaction logs, you can recover the SQL database to an exact point in time. To provide optimal results, you have to know exactly when the records were deleted, because restoring to a later point will not bring back the records. This method requires restoring the full database backup first. If you have any differential database backup created after the last full database backup, restore the most recent one. Then restore the transaction log backups, one by one, in the order they were created, starting with the first one created after the restored differential database backup. Now the table will be in the state it was in before the records were deleted. You have to identify the deleted records, script them, and run the script against the original database. Although this method is reliable, it is time-consuming and requires a lot of space on disk. How to easily recover deleted records? The following solution enables you to recover SQL database records even if you have no full or differential database backups and no transaction log backups.
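    As a hedged aside, the point-in-time, log-chain restore described above looks roughly like this in T-SQL (the database name, file paths, and STOPAT timestamp are all hypothetical):

        -- Restore the last full backup without recovering the database.
        RESTORE DATABASE MyDb FROM DISK = 'C:\backup\mydb_full.bak'
            WITH NORECOVERY;
        -- Restore the most recent differential, if one exists.
        RESTORE DATABASE MyDb FROM DISK = 'C:\backup\mydb_diff.bak'
            WITH NORECOVERY;
        -- Apply log backups in order; stop just before the delete.
        RESTORE LOG MyDb FROM DISK = 'C:\backup\mydb_log1.trn'
            WITH NORECOVERY;
        RESTORE LOG MyDb FROM DISK = 'C:\backup\mydb_log2.trn'
            WITH STOPAT = '2013-06-12 10:15:00', RECOVERY;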
    To understand how ApexSQL Recover works, I’ll explain what happens when table data is deleted. Table data is stored in data pages. When you delete table records, they are not immediately deleted from the data pages, but marked to be overwritten by new records. Such records are no longer shown as existing, but ApexSQL Recover can read them and create an undo script for them. How long will deleted records stay in the MDF file? It depends on many factors; the longer you wait, the more likely it is that the records will be overwritten. The more transactions occur after the deletion, the more chances the records will be overwritten and permanently lost. Therefore, it’s recommended to create a copy of the database MDF and LDF files immediately (if you cannot take your database offline until the issue is solved) and run ApexSQL Recover on them. Note that a full database backup will not help here, as the records marked for overwriting are not included in the backup. First, I’ll delete some records from the Person.EmailAddress table in the AdventureWorks database. I can delete these records in SQL Server Management Studio, or execute a script such as DELETE FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80. Then, I’ll start ApexSQL Recover and select From DELETE operation in the Recovery tab. In the Select the database to recover step, first select the SQL Server instance. If it’s not shown in the drop-down list, click the Server icon to the right of the Server drop-down list and browse for the SQL Server instance, or type the instance name manually. Specify the authentication type and select the database in the Database drop-down list. In the next step, you’re prompted to add additional data sources. As this can be a tricky step, especially for new users, ApexSQL Recover offers help via the Help me decide option. The Help me decide option guides you through a series of questions about the database transaction log and advises what files to add. If you know that you have no transaction log backups or detached transaction logs, or the online transaction log file has been truncated after the data was deleted, select No additional transaction logs are available. If you know that you have transaction log backups that contain the delete transactions you want to recover, click Add transaction logs. The online transaction log is listed and selected automatically. Click Add to add transaction log backups. It is best if you have a full transaction log chain, as explained above. The next step for this option is to specify the time range. Selecting a small time range around the time of deletion will create the recovery script just for the accidentally deleted records. A wide time range might script records deleted on purpose, and you don’t want that. If needed, you can check the generated script and manually remove such records. After that, for all data source options, the next step is to select the tables. Be careful here: if you deleted some data from other tables on purpose and don’t want to recover it, don’t select all tables, as ApexSQL Recover will create the INSERT script for those records too. The next step offers two options: to create a recovery script that will insert the deleted records back into the Person.EmailAddress table, or to create a new database, create the Person.EmailAddress table in it, and insert the deleted records. I’ll select the first one. The recovery process is completed and 11 records are found and scripted, as expected.
To see the script, click View script. ApexSQL Recover has its own script editor, where you can review, modify, and execute the recovery script. The INSERT INTO statements look like: INSERT INTO Person.EmailAddress( BusinessEntityID, EmailAddressID, EmailAddress, rowguid, ModifiedDate) VALUES( 70, 70, N'[email protected]' COLLATE SQL_Latin1_General_CP1_CI_AS, 'd62c5b4e-c91f-403f-b630-7b7e0fda70ce', '20030109 00:00:00.000' ); To execute the script, click Execute in the menu. If you want to check whether the records are really back, execute SELECT * FROM Person.EmailAddress WHERE BusinessEntityID BETWEEN 70 AND 80. As shown, ApexSQL Recover recovers SQL database data after accidental deletes even without a database backup that contains the deleted data and the relevant transaction log backups. ApexSQL Recover reads the deleted data from the database data file, so this method can be used even for databases in the Simple recovery model. Besides recovering SQL database records from a DELETE statement, ApexSQL Recover can help when records are lost due to a DROP TABLE or TRUNCATE statement, as well as repair a corrupted MDF file that cannot be attached to a SQL Server instance. You can find more information about how to recover lost SQL database data and repair a SQL Server database on the ApexSQL Solution center. There are solutions for various situations when data needs to be recovered. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Curing the Database-Application mismatch

    - by Phil Factor
    If an application requires access to a database, then you have to be able to deploy it so as to be version-compatible with the database, in phase. If you can deploy both together, then the application and database must normally be deployed at the same version in which they, together, passed integration and functional testing.  When a single database supports more than one application, then the problem gets more interesting. I’ll need to be more precise here. It is actually the application-interface definition of the database that needs to be in a compatible ‘version’.  Most databases that get into production have no separate application-interface; in other words they are ‘close-coupled’.  For this vast majority, the whole database is the application-interface, and applications are free to wander through the bowels of the database scot-free.  If you’ve spurned the perceived wisdom of application architects to have a defined application-interface within the database that is based on views and stored procedures, any version-mismatch will be as sensitive as a kitten.  A team that creates an application that makes direct access to base tables in a database will have to put a lot of energy into keeping Database and Application in sync, to say nothing of having to tackle issues such as security and audit. It is not the obvious route to development nirvana. I’ve been in countless tense meetings with application developers who initially bridle instinctively at the apparent restrictions of being ‘banned’ from the base tables or routines of a database.  There is no good technical reason for needing that sort of access that I’ve ever come across.  Everything that the application wants can be delivered via a set of views and procedures, and with far less pain for all concerned: This is the application-interface.  If more than zero developers are creating a database-driven application, then the project will benefit from the loose-coupling that an application interface brings. What is important here is that the database development role is separated from the application development role, even if it is the same developer performing both roles. The idea of an application-interface with a database is as old as I can remember. The big corporate or government databases generally supported several applications, and there was little option. When a new application wanted access to an existing corporate database, the developers, and myself as technical architect, would have to meet with hatchet-faced DBAs and production staff to work out an interface. Sure, they would talk up the effort involved for budgetary reasons, but it was routine work, because it decoupled the database from its supporting applications. We’d be given our own stored procedures. One of them, I still remember, had ninety-two parameters. All database access was encapsulated in one application-module. If you have a stable defined application-interface with the database (Yes, one for each application usually) you need to keep the external definitions of the components of this interface in version control, linked with the application source,  and carefully track and negotiate any changes between database developers and application developers.  Essentially, the application development team owns the interface definition, and the onus is on the Database developers to implement it and maintain it, in conformance.  Internally, the database can then make all sorts of changes and refactoring, as long as source control is maintained.  
If the application interface passes all the comprehensive integration and functional tests for the particular version they were designed for, nothing is broken. Your performance-testing can ‘hang’ on the same interface, since databases are judged on the performance of the application, not an ‘internal’ database process. The database developers have responsibility for maintaining the application-interface, but not its definition, as they refactor the database. This is easily tested on a daily basis since the tests are normally automated. In this setting, the deployment can proceed if the more stable application-interface, rather than the continuously-changing database, passes all tests for the version of the application. Normally, if all goes well, a database with a well-designed application interface can evolve gracefully without changing the external appearance of the interface, and this is confirmed by integration tests that check the interface, and which hopefully don’t often need to be altered. If the application is rapidly changing its ‘domain model’ in the light of an increased understanding of the application domain, then it can change the interface definitions and the database developers need only implement the interface rather than refactor the underlying database. The test team will also have to redo the functional and integration tests which are, of course, ‘written to’ the definition. The database developers will find it easier if these tests are done before their re-wiring job to implement the new interface. If, at the other extreme, an application receives no further development work but survives unchanged, the database can continue to change and develop to keep pace with the requirements of the other applications it supports, and needs only to take care that the application interface is never broken. Testing is easy since your automated scripts to test the interface do not need to change. The database developers will, of course, maintain their own source control for the database, and will be likely to maintain versions for all major releases. However, this will not need to be shared with the applications that the database serves. On the other hand, the definition of the application interfaces should be within the application source. Changes in it have to be subject to change-control procedures, as they will require a chain of tests. Once you allow, instead of an application-interface, an intimate relationship between application and database, we are in the realms of impedance mismatch, over and above the obvious security problems. Part of this impedance problem is a difference in development practices. Whereas the application has to be regularly built and integrated, this isn’t necessarily the case with the database. An RDBMS is inherently multi-user and self-integrating. If the developers work together on the database, then a subsequent integration of the database on a staging server doesn’t often bring nasty surprises. A separate database-integration process is only needed if the database is deliberately built in a way that mimics the application development process, but which hampers the normal database-development techniques. This process is like demanding an official walking with a red flag in front of a motor car. In order to closely coordinate databases with applications, entire databases have to be ‘versioned’, so that an application version can be matched with a database version to produce a working build without errors.
There is no natural process to ‘version’ databases.  Each development project will have to define a system for maintaining the version level. A curious paradox occurs in development when there is no formal application-interface. When the strains and cracks happen, the extra meetings, bureaucracy, and activity required to maintain accurate deployments looks to IT management like work. They see activity, and it looks good. Work means progress.  Management then smile on the design choices made. In IT, good design work doesn’t necessarily look good, and vice versa.
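    To make the idea concrete, here is a hedged T-SQL sketch of such an application-interface (schema, object, and role names are all hypothetical): the application's role can see only the views and procedures, never the base tables.

        -- The interface lives in its own schema; base tables stay in dbo.
        CREATE SCHEMA app;
        GO
        CREATE VIEW app.CustomerSummary AS
            SELECT CustomerId, Name, Region
            FROM dbo.Customer;
        GO
        CREATE PROCEDURE app.AddCustomer
            @Name nvarchar(100), @Region nvarchar(50)
        AS
            INSERT INTO dbo.Customer (Name, Region) VALUES (@Name, @Region);
        GO
        -- The application connects under a role granted rights only on the
        -- interface objects; no permissions are granted on dbo.Customer.
        CREATE ROLE app_user;
        GRANT SELECT  ON app.CustomerSummary TO app_user;
        GRANT EXECUTE ON app.AddCustomer     TO app_user;

    Behind this surface, the database developers are free to refactor dbo.Customer however they like, as long as the view and procedure keep their contracts.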

    Read the article

  • New Oracle University Courses for June 2014 - Just Released!

    - by user12601713
    - Written by the Oracle University Marketing team. At Oracle University, we work hard to make sure our training and certification portfolio is current. Here's a quick summary of our hottest new courses. Top course releases for June:
    1. Oracle BI 11g R1: Build Repositories
    2. Oracle WebCenter Sites 11g for System Administrators
    3. Oracle Database 12c: Clusterware Administration
    4. Oracle Database 12c: ASM Administration
    5. Oracle Enterprise Manager Ops Center 12c R2 Virtualizing Systems
    6. XML Fundamentals Ed 1.1
    7. PeopleSoft Global Payroll Rel 9.2
    8. Oracle Agile 9.3.3 Administrator
    9. Oracle Agile 9.3.3 Product Collaboration
    This is just a sample of what we expect to be popular new releases. Explore all of Oracle's training and certifications at education.oracle.com. Courses can be taken in the classroom or online. Choose your learning method based on your schedule.

    Read the article

  • 10gR2 10.2.0.4 Certified with EBS 12 on Windows Itanium x64

    - by Steven Chan
    Oracle Database 10g Release 2 version 10.2.0.4 is now certified with Oracle E-Business Suite Release 12 (12.0.4 or higher, 12.1.1 or higher) on the Windows Itanium x64 (64-bit) platform. The operating system supported on this platform is Windows Server 2003. This is a 'database-tier only' certification (previously known as a 'split configuration database tier' certification) where the application tier must be on a different fully certified E-Business Suite R12 platform.This 'database-tier only' platform was previously certified with 12.0 and 10.2.0.3 - customers can now apply the 12.1.1 Maintenance Pack to upgrade their application tier to 12.1.1 while running the 10gR2 database on this platform.

    Read the article

  • Don’t miss the live FY12 Oracle PartnerNetwork Kickoff event - 28/Jun/11

    - by pfolgado
    Register now for the live, interactive FY12 OPN Kickoff event on June 28th! Hosted by Judson Althoff, Oracle senior vice president of WW Alliances & Channels, this hour-long event will outline the opportunities for partners to increase revenue with Oracle in FY12. Oracle President Mark Hurd will update you on his focus for partners in FY12. You will also hear from Stein Surlien, senior vice president, EMEA Alliances & Channels, and have the opportunity to ask him questions in a special Q&A session. In addition, we will be making a special announcement for our ISV partners, highlighting some exciting new offerings on how we will go to market together. You will also hear the latest from Oracle product executives, who will outline their priorities for the upcoming year. Please register for the OPN Partner Kickoff on Tuesday, June 28th at 2:00 pm UK / 3:00 pm CET! Don't be left out; mark your calendar and register now!

    Read the article

  • IDC: Oracle Doubles Down on Life Sciences Sales & Marketing

    - by charles.knapp
    "This past week Oracle held its 5th annual Life Sciences Forum in Princeton, NJ. The conference provided a wide range of content focused on their products and partnerships across the life science spectrum. But this year's conference placed strong emphasis on Oracle's new CRM On Demand Life Sciences Edition R17, and deservedly so. R17 is the largest, and most impressive, CRM On Demand release that Oracle has had to date, and it provides many significant upgrades over earlier versions." Read more here.

    Read the article

  • OTN Developer Day: Oracle Database 11g Application Development

    - by stephen.garth
    When and Where: Tuesday, June 15, 2010, from 8:00 am - 5:30 pm, Hyatt Regency Reston, Reston, VA. This full-day FREE event offers a great learning and networking opportunity. With support from Oracle database application development experts, you'll get valuable hands-on experience developing database-backed apps with the latest Oracle tools and frameworks. Oh yeah, you get to use your own notebook and download some cool and very useful materials. Find out more and register here.

    Read the article

  • Upcoming Conferences to Showcase Oracle's Latest Procurement Applications

    - by Paul Homchick
    The 2010 conference season is kicking off with a series of events featuring executive updates and demos of Oracle's newest procurement products. Attendees will also have the chance to meet with Oracle customers and technical representatives to discuss best practices for optimizing procurement processes. New Procurement Technologies: Oracle will use the events to showcase a number of procurement applications introduced since last year's Oracle OpenWorld:
    - Oracle Supplier Lifecycle Management: a supplier-development application released this year to simplify the qualification, assessment, and performance monitoring of vendors (see related story).
    - Oracle Supplier Hub: another 2010 introduction, the Oracle Supplier Hub unifies and shares critical information about all the suppliers in an organization's stable (see related story).
    - Oracle Spend Classification: an intelligence-based application that improves spend and performance visibility.
    - Oracle Procurement On Demand: the adaptive solution that enables and accelerates procurement transformation.
    - Oracle Procurement and Spend Analytics 7.9.6.1: the latest release of Oracle Business Intelligence extends new content and integration capabilities to additional platforms and languages.
    Click here to find an event near you: List of conferences by location.

    Read the article

  • News about Oracle Documaker Enterprise Edition

    - by Susanne Hale
    Updates come from the Documaker front on two counts:
    Oracle Documaker Awarded XCelent Award for Best Functionality. Celent has published a new report entitled Document Automation Solution Vendors for Insurers 2011. In the evaluation, Oracle received the XCelent award for Functionality, which recognizes its solution as the leader in this category of the evaluation. According to Celent, "Insurers need to address issues related to the creation and handling of all sorts of documents. Key issues in document creation are complexity and volume. Today, most document automation vendors provide an array of features to cope with the complexity and volume of documents insurers need to generate." The report ranks ten solution providers on Technology, Functionality, Market Penetration, and Services. Each profile provides detailed information about the vendor and its document automation system, the professional services and support staff it offers, product features, insurance customers and reference feedback, its technology, implementation process, and pricing. A summary of the report is available at Celent's web site.
    Documaker User Group in Wisconsin Holds First Meeting. Oracle Documaker users in Wisconsin made the first Documaker User Group meeting a great success, with representation from eight companies. On April 19, over 25 attendees got together to share information, best practices, experiences, and concepts related to Documaker and enterprise document automation; they were also able to share feedback with Documaker product management. One insurer shared how they publish and deliver documents to both internal and external customers as quickly and cost-effectively as possible, since providing point-of-sale documents to the sales force in real time is crucial to obtaining and maintaining the book of business. They outlined best practices that ensure consistent development and testing processes are in place to maximize performance and reliability. And they gave an overview of the supporting applications they developed to monitor and improve performance as well as monitor and track each transaction. Wisconsin User Group meeting photos are posted on the Oracle Insurance Facebook page http://www.facebook.com/OracleInsurance. The Wisconsin User Group will meet again on October 26. If you and other Documaker customers in your area are interested in setting up a user group, please contact Susanne Hale ([email protected]), (703) 927-0863.

    Read the article

  • What is the best database design and/or software to model a thesaurus?

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. Wikipedia defines it as: "In Information Science, Library Science, and Information Technology, specialized thesauri are designed for information retrieval. They are a type of controlled vocabulary, for indexing or tagging purposes. Such a thesaurus can be used as the basis of an index for online material. The Art and Architecture Thesaurus, for example, is used to index the Canadian [...]. Information retrieval thesauri are formally organized so that existing relationships between concepts are made explicit." What database software, design, or model would best fit this? Are PHP and MySQL good technologies to handle it?
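    One straightforward relational shape for this, sketched below under stated assumptions (all names hypothetical), is a term table plus a self-referencing relation table recording the standard thesaurus links (broader/narrower/related); MySQL with InnoDB handles it comfortably, and PHP works fine as the front end.

        -- Terms in the controlled vocabulary.
        CREATE TABLE term (
            term_id INT AUTO_INCREMENT PRIMARY KEY,
            label   VARCHAR(255) NOT NULL UNIQUE
        ) ENGINE=InnoDB;

        -- Typed links between terms; a thesaurus is essentially a typed
        -- many-to-many self-join on the term table.
        CREATE TABLE term_relation (
            from_term INT NOT NULL,
            to_term   INT NOT NULL,
            relation  ENUM('broader', 'narrower', 'related', 'use_for') NOT NULL,
            PRIMARY KEY (from_term, to_term, relation),
            FOREIGN KEY (from_term) REFERENCES term (term_id),
            FOREIGN KEY (to_term)   REFERENCES term (term_id)
        ) ENGINE=InnoDB;

        -- Example lookup: all terms linked to 'architecture'.
        SELECT t2.label, r.relation
        FROM term t1
        JOIN term_relation r ON r.from_term = t1.term_id
        JOIN term t2         ON t2.term_id  = r.to_term
        WHERE t1.label = 'architecture';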

    Read the article

  • Winners of the Oracle Excellence Award—Eco-Enterprise Innovation

    - by Evelyn Neumayr
    Did you get a chance to attend Oracle OpenWorld in San Francisco? With 60,000 attendees and hundreds of sessions to choose from, there was a lot going on. One of my favorite sessions was the Eco-Enterprise Awards and Sustainability Executive Panel Discussion. During this session, Jeff Henley, Oracle Chairman of the Board, announced the winners of the 2013 Oracle Excellence Award—Eco-Enterprise Innovation. It was an enlightening session, as we heard several of the winning customers discuss the importance of sustainability to their company and how they're using various Oracle products to help with their sustainability initiatives. The winning customers include: Centennial Coal, Indaver nv, Korea Enterprise Data, National Guard Health Affairs, Schneider National, SThree, Telstra International Group, Trex Company, University of Salzburg, Walmart, and Yeoncheon County Office. Stay tuned for additional blogs where you'll learn more about these winning companies' environmental best practices and why they won this award. Several partners were also recognized for helping these customers with their sustainability initiatives. Those partners include: CSS International, Daesang Information Technology, i4BI, Infosys, Knowledge Global, Solutions for Retails Brands Limited, and SysGen. During this same session, Jeff Henley also awarded Robert Kaplan, Director of Sustainability at Walmart, with Oracle's Chief Sustainability Officer of the Year award. Robert was honored for helping improve Walmart's supply chain efficiency with their Sustainability Hub. The Sustainability Hub, powered by Oracle Service Cloud, is a central location for Walmart suppliers, associates, and business partners to learn, connect, inspire, and drive sustainability through collaboration. While at Oracle OpenWorld, I also got a chance to hear Robert Kaplan discuss their Sustainability Hub during an Oracle OpenWorld Live taping.

    Read the article

  • Oracle releases Java SE 7 Update 7, and Java SE 6 Update 35

    - by Henrik Stahl
    This morning, Oracle released updates to JDK 6 and 7. For more information on these releases, see: Security Alert for CVE-2012-4681 Released; Release notes. Oracle recommends that users apply these updates as soon as possible. Users of Oracle JRE 6 and 7 for Windows (32-bit) and the recently released JRE 7 for Mac OS X (64-bit) will be updated automatically. For more information, see this blog entry.

    Read the article

  • Ameristar Wins with Oracle GoldenGate’s Heterogeneous Real-Time Data Integration

    - by Irem Radzik
    Today we announced a press release about another successful project with Oracle GoldenGate, this time at Ameristar. Ameristar is a casino gaming company that needed a single data integration solution to connect multiple heterogeneous systems to its Teradata data warehouse. The project involves integrating Ameristar's promotional and gaming data from 14 data sources across its 7 casino hotel properties in real time into a central Teradata data warehouse. The source systems include the Aristocrat gaming and MGT promotional management platforms running on Microsoft SQL Server 2000 databases. As you may have noticed, there was no Oracle Database involved in this project, but Ameristar's IT leadership knew that GoldenGate's strong heterogeneous and real-time data integration capabilities made it the right technology for their data warehousing project. With GoldenGate, Ameristar was able to reduce data latency to the enterprise data warehouse and use this real-time customer information to help marketing teams improve the overall customer experience. Ameristar customers receive more targeted and timely campaign offers, and the company has more up-to-date visibility into its financial metrics. One other key benefit the company experienced with GoldenGate is in operational costs. The previous data capture solution Ameristar used was trigger-based and required a lot of effort to manage; they needed dedicated IT staff to maintain it. With GoldenGate, the solution runs seamlessly without needing fully dedicated staff, giving the IT team at Ameristar more resources for their other IT projects. If you want to learn more about GoldenGate and the latest features for Oracle Database and non-Oracle databases, please watch our on-demand webcast about Oracle GoldenGate 11g Release 2.

    Read the article

  • Why Cornell University Chose Oracle Data Masking

    - by Troy Kitch
    One of the eight Ivy League schools, Cornell University found itself in the unfortunate position of having to inform over 45,000 University community members that their personal information had been breached when a laptop was stolen. To ensure this wouldn’t happen again, Cornell took steps to ensure that data used for non-production purposes is de-identified with Oracle Data Masking. A recent podcast highlights why organizations like Cornell are choosing Oracle Data Masking to irreversibly de-identify production data for use in non-production environments. Organizations often copy production data, that contains sensitive information, into non-production environments so they can test applications and systems using “real world” information. Data in non-production has increasingly become a target of cyber criminals and can be lost or stolen due to weak security controls and unmonitored access. Similar to production environments, data breaches in non-production environments can cost millions of dollars to remediate and cause irreparable harm to reputation and brand. Cornell’s applications and databases help carry out the administrative and academic mission of the university. They are running Oracle PeopleSoft Campus Solutions that include highly sensitive faculty, student, alumni, and prospective student data. This data is supported and accessed by a diverse set of developers and functional staff distributed across the university. Several years ago, Cornell experienced a data breach when an employee’s laptop was stolen.  Centrally stored backup information indicated there was sensitive data on the laptop. With no way of knowing what the criminal intended, the university had to spend significant resources reviewing data, setting up service centers to handle constituent concerns, and provide free credit checks and identity theft protection services—all of which cost money and took time away from other projects. To avoid this issue in the future Cornell came up with several options; one of which was to sanitize the testing and training environments. “The project management team was brought in and they developed a project plan and implementation schedule; part of which was to evaluate competing products in the market-space and figure out which one would work best for us.  In the end we chose Oracle’s solution based on its architecture and its functionality.” – Tony Damiani, Database Administration and Business Intelligence, Cornell University The key goals of the project were to mask the elements that were identifiable as sensitive in a consistent and efficient manner, but still support all the previous activities in the non-production environments. Tony concludes,  “What we saw was a very minimal impact on performance. The masking process added an additional three hours to our refresh window, but it was well worth that time to secure the environment and remove the sensitive data. I think some other key points you can keep in mind here is that there was zero impact on the production environment. Oracle Data Masking works in non-production environments only. 
Additionally, the risk of exposure has been significantly reduced and the impact to business was minimal.” With Oracle Data Masking organizations like Cornell can: Make application data securely available in non-production environments Prevent application developers and testers from seeing production data Use an extensible template library and policies for data masking automation Gain the benefits of referential integrity so that applications continue to work Listen to the podcast to hear the complete interview.  Learn more about Oracle Data Masking by registering to watch this SANS Institute Webcast and view this short demo.

    Read the article
