Search Results

Search found 40059 results on 1603 pages for 'database management'.


  • SQL SERVER – Three Methods to Insert Multiple Rows into Single Table – SQL in Sixty Seconds #024 – Video

    - by pinaldave
    One of the most frequent requests I receive from developers is whether there is any way to insert multiple rows into a single table with a single statement. Without it, developers have to write a separate INSERT statement for every value they add to a table. Not only is this boring, it is also very time consuming; one has to repeat the same syntax so many times that the word boring becomes an understatement. In the following quick video we demonstrate three different methods to insert multiple values into a single table.
    -- Insert Multiple Values into SQL Server
    CREATE TABLE #SQLAuthority (ID INT, Value VARCHAR(100));
    Method 1: Traditional INSERT…VALUES
    -- Method 1 - Traditional Insert
    INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First');
    INSERT INTO #SQLAuthority (ID, Value) VALUES (2, 'Second');
    INSERT INTO #SQLAuthority (ID, Value) VALUES (3, 'Third');
    -- Clean up
    TRUNCATE TABLE #SQLAuthority;
    Method 2: INSERT…SELECT
    -- Method 2 - Select Union Insert
    INSERT INTO #SQLAuthority (ID, Value)
    SELECT 1, 'First'
    UNION ALL
    SELECT 2, 'Second'
    UNION ALL
    SELECT 3, 'Third';
    -- Clean up
    TRUNCATE TABLE #SQLAuthority;
    Method 3: SQL Server 2008+ Row Construction
    -- Method 3 - SQL Server 2008+ Row Construction
    INSERT INTO #SQLAuthority (ID, Value) VALUES (1, 'First'), (2, 'Second'), (3, 'Third');
    -- Clean up
    DROP TABLE #SQLAuthority;
    Related Tips in SQL in Sixty Seconds: SQL SERVER – Insert Multiple Records Using One Insert Statement – Use of UNION ALL; SQL SERVER – 2008 – Insert Multiple Records Using One Insert Statement – Use of Row Constructor. I encourage you to submit your ideas for SQL in Sixty Seconds. We will try to accommodate as many as we can, and if we like your idea we promise to share educational material with you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is Database as a Service, and EM12c R4 provides a significant update to it. The key themes are: Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard); Additional Storage Options for Snap Clone (includes support for the database feature CloneDB); Improved Rapid Start Kits; Extensible Metering and Chargeback; Miscellaneous Enhancements.
    1. Comprehensive Database Service Catalog
    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are: Service Catalogs: Defining Standardized Database Service, and High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]. EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits: present a collection of standardized database service definitions, define standardized pools of hardware and software for provisioning, role based access to cater to different classes of users, automated procedures to provision the predefined database definitions, and setup of chargeback plans based on service tiers and database configuration sizes. Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability: Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:
    Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    The standby databases can be single instance, RAC, or RAC One Node databases
    Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    The standby databases can be in either mount or read only (requires the Active Data Guard option) mode
    All database versions 10g to 12c are supported (as certified with EM 12c)
    All 3 protection modes can be used: Maximum Availability, Maximum Performance, and Maximum Protection
    Log apply can be set to sync or async along with the required apply lag
    The different service levels or service tiers are popularly represented using metals: Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below (supported in EM 12cR4):
    Primary | Standby [1 or more]
    SI      | -
    SI      | SI
    RAC     | -
    RAC     | SI
    RAC     | RAC
    RON     | -
    RON     | RON
    where RON = RAC One Node, supported via custom post-scripts in the service template. A sample service catalog would look like the image below.
    Here we have defined 4 service levels, deployed across 2 data centers, with 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.
    2. Additional Storage Options for Snap Clone
    In my previous blog posts, I have described the Snap Clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%), all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on. This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all the low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions, thus delivering the benefits of database thin cloning without requiring you to drastically change the infrastructure or IT's operating style. In release 4, we expand the scope of options supported by Snap Clone with the addition of database CloneDB. While CloneDB is not a new feature (it was first introduced in the 11.2.0.2 patchset), it has over the years become more stable and mature. CloneDB leverages a combination of the database's Direct NFS (dNFS) feature, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the Snap Clone feature. For more information on CloneDB, I highly recommend reading the following sources: the blog by Tim Hall, Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2, and the Oracle OpenWorld presentation by CERN, Efficient Database Cloning using Direct NFS and CloneDB. The advantages of the new CloneDB integration with EM12c Snap Clone are:
    Space and time savings
    Ease of setup - no additional software is required other than the Oracle database binary
    Works on all platforms
    Reduced dependence on storage administrators
    Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.
    3. Improved Rapid Start Kits
    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the Cloud artifacts like Roles, Administrators, Credentials, Database Profiles, PaaS Infrastructure Zone, Database Pools and Service Templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools and service templates. It also supports standby databases and the use of RMAN image backups. The Rapid Start Kit is in reality a simple emcli script which takes a bunch of xml files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. This kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and also on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software. Steps to use the kit:
    The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    It can be run from this default location or from any server which has the emcli client installed
    For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py
    The database_cloud_setup.py script takes two inputs:
    Cloud boundary xml: This file defines the cloud topology in terms of the zones and pools, along with host names, Oracle Home locations or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    Input xml: This file captures inputs for users, roles, profiles, service templates, etc. Essentially, all inputs required to define the DB services and other settings of the self service portal.
    Once all the xml files have been prepared, invoke the script as follows for PDBaaS:
    emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml
    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.
    4. Extensible Metering and Chargeback
    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, and system integrators to:
    Extend chargeback to any target type managed in EM
    Promote any metric in EM as a chargeback entity
    Extend the list of charge items via metric or configuration extensions
    Model abstract entities like the number of backup requests, job executions, support requests, etc.
    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.
    5. Miscellaneous Enhancements
    There are other miscellaneous, yet important, enhancements that are worth a mention.
    Most of these have been requested by customers like you. They are:
    Custom naming of DB Services: Self service users can provide custom names for the DB SID, DB service, schemas, and tablespaces. Every custom name is validated for uniqueness in EM.
    'Create like' of Service Templates: Creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    Profile viewer: View the details of a profile (datafiles, control files, snapshot ids, export/import files, etc.) prior to its selection in the service template.
    Cleanup automation for failed and successful requests: A single emcli command cleans up all remnant artifacts of a failed request. Cleanup can be performed on a per request basis or for the entire pool, and as an extension you can also delete successful requests.
    Improved delete user workflow: Allows administrators to reassign cloud resources to another user or delete all of them.
    Support for multiple tablespaces for Schema as a Service: In addition to multiple schemas, users can also specify multiple tablespaces per request.
    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck! References: Cloud Management Page on OTN; Cloud Administration Guide [Documentation] -- Adeesh Fulay (@adeeshf)

    Read the article

  • Oracle E-Business Suite Plug-in 4.0 Released for OEM 11g (11.1.0.1)

    - by Steven Chan
    [Feb. 25, 12:40 PM Update: Removed incorrect references to RHEL 3, SLES 9, HP-UX 11.11, Solaris 8] We're very pleased to announce the release of Oracle E-Business Suite Plug-in 4.0, an integral part of Application Management Suite for Oracle E-Business Suite. The management suite combines features of the standalone Application Management Pack (AMP) for Oracle E-Business Suite and Application Change Management Pack (ACMP) for Oracle E-Business Suite with Oracle's real user monitoring and configuration management capabilities. The features that were available in the standalone Application Management Pack and Application Change Management Pack for Oracle E-Business Suite are now packaged into the Oracle E-Business Suite Plug-in 4.0. The Oracle E-Business Suite Plug-in 4.0 is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. The Oracle E-Business Suite Plug-in is released via patch 8333939. For the AMP and ACMP 4.0 installation guide, see: Getting Started with Oracle E-Business Suite Plug-in Release 4.0 (Note 1224313.1).
    General AMP & ACMP improvements
    Oracle Enterprise Manager 11g Grid Control Support: Application Management Pack 4.0 and Application Change Management Pack 4.0 for Oracle E-Business Suite are certified with Oracle Enterprise Manager 11g Grid Control Release 1 (11.1.0.1.0).
    Built-in Diagnostic Ability: Release 4.0 has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that validate whether the appropriate patches, privileges, setups, and profile options have been configured. This feature improves the setup and configuration process.

    Read the article

  • Which software should I use to keep track of my project?

    - by Exa
    I'm about to start the first real phase of my game development, which will consist of gathering information and resources and defining where I want to go and what I will need to get there. I just want to make sure that I'm as well prepared as possible before I actually start development. I don't like the thought of using Microsoft Word or Excel for my project management... I have already worked with MS Project but I don't think it fits my needs. I need software where I can easily maintain project steps, milestones, important issues, information about technologies and engines I use, as well as simple notes and thoughts I just want to write down. I usually prefer a whiteboard for stuff like that, but unfortunately it's not a persistent way of storing things. ;) Writing it down the old-school way is also something I can imagine, but only for quick notes... Which software do you use for that? Are there commonly used programs? Is there any free software at all?

    Read the article

  • Automated backups for Windows Azure SQL Database

    - by Greg Low
    One of the questions that I've often been asked is how you can back up databases in Windows Azure SQL Database. What we have had access to was the ability to export a database to a BACPAC. A BACPAC is basically just a zip file that contains a bunch of metadata along with a set of bcp files for each of the tables in the database. Each table in the database is exported one after the other, so this does not produce a transactionally-consistent backup at a specific point in time. To get a transactionally-consistent copy, you need a database that isn't in use. The easiest way to get a database that isn't in use is to use CREATE DATABASE AS COPY OF. This creates a new database as a transactionally-consistent copy of the database that you are copying. You can then use the export options to get a consistent BACPAC created. Previously, I've had to automate this process by myself. Given there was also no SQL Agent in Azure, I used a job in my on-premises SQL Server to do this, using a linked server configuration. Now there's a much simpler way. Windows Azure SQL Database now supports an automated export function. On the Configuration tab for the database, you need to enable the Automated Export function. You can configure how often the operation is performed for you, and which storage account will be used for the backups. It's important to consider the cost impact of this as well. You are charged for however many databases are on your server on a given day. So if you enable a daily backup, you will double your database costs. Do not schedule the backups just before midnight UTC, as that could cause you to have three databases each day instead of one. This is a much needed addition to the capabilities. Scott Guthrie also posted about some other notable changes today, including a preview of a new premium offering for SQL Database. In addition to the Web and Business editions, there will now be a Premium edition that has reserved (rather than shared) resources. You can read about it all in Scott's post here: http://weblogs.asp.net/scottgu/archive/2013/07/23/windows-azure-july-updates-sql-database-traffic-manager-autoscale-virtual-machines.aspx
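    For reference, here is a minimal T-SQL sketch of the manual copy-then-export approach described above. The database names are illustrative, and the copy statement is issued against the logical server (connected to master):
    -- Create a transactionally-consistent copy of the live database
    CREATE DATABASE MyDatabase_Copy AS COPY OF MyDatabase;
    -- Once the copy completes, export it to a BACPAC (via the portal or tooling),
    -- then drop the copy so you are not billed for an extra database
    DROP DATABASE MyDatabase_Copy;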

    Read the article

  • Planning milestones and time

    - by Ignas
    I was hired by a marketing company a year ago, initially for link building / SEO stuff, but I'm actually a Web developer and took the job just out of desperation to have one (I'm still quite young and just finished my 2nd year of University). From the 3rd day my boss realised that I'm not into that stuff at all, and since he had an idea for a web based app we started to plan it. I estimated that it shouldn't take me longer than two months to do it, but as I was making it we soon realised that we wanted to add more and more stuff to make it even better. So the development on my own lasted about 4 months, but then it became an enterprise size app and we hired another programmer to work alongside me. The guy was awesome at what he did, but because I was assigned to be programmer/project manager I had to set up milestones with deadlines, and we missed most of them, because most of the time it was too much work and my lack of experience kept me setting really optimistic deadlines. We still kept adding features and changed the architecture of the application twice. My boss is a great guy and he gets that when we add features it expands the time frame in which things should be done, so he wasn't angry at me or the other guy. But I was feeling bad (I still am) that I suck at planning. I gained loads of experience on the programming side, but I still lack the management/planning skills, which makes me go nuts. So over the last year I have dedicated probably about 8 months of work to this app (obviously my studies affected it) and we're launching as a closed beta this month. So my question is: how do I get better at planning/managing a project, and how do you estimate the time required? What do you take into consideration when setting goals? I'm working alone again because the other guy moved from the city. But I'm sure we'll be hiring to help me maintain it, so I need to get better at it. Any hints, pointers or anything on the topic are appreciated.

    Read the article

  • Who should have full visibility of all (non-data) requirements information?

    - by ebyrob
    I work at a smallish mid-size company where requirements are sometimes nothing more than an email or brief meeting with a subject matter manager requiring some new feature. Should a programmer working on a feature reasonably expect to have access to such "request emails" and other requirements information? Is it more appropriate for a "program manager" (PGM) to rewrite all requirements before sharing with programmers? The company is not technology-centric and has between 50 and 250 employees. (fewer than 10 programmers in sum) Our project management "software" consists of a "TODO.txt" checked into source control in "/doc/". Note: This is nothing to do with "sensitive data access". Unless a particular subject matter manager's style of email correspondence is top secret. Given the suggested duplicate, perhaps this could be a turf war, as the PGM would like to specify HOW. Whereas WHY is absent and WHAT is muddled by the time it gets through to the programmer(s)... Basically. Should specification be transparent to programmers? Perhaps a history of requirements might exist. Shouldn't a programmer be able to see that history of reqs if/when they can tell something is hinky in the spec? This isn't a question about organizing requirements. It is a question about WHO should have full VISIBILITY of requirements. I'd propose it should be ALL STAKEHOLDERS. Please point out where I'm wrong here.

    Read the article

  • What’s the Difference Between Succession Management and Talent Reviews?

    - by HCM-Oracle
    By Marcie Van Houten Is there a difference or are they pieces of one holistic strategic talent process? And can you have one without the other?  First, let me give a quick definition of each.  Succession planning (or management) is about creating succession slates or talent pools in support of a critical job or position or sets thereof. And then using those plans to help mitigate risk and plan talent needs for the organization.  Talent reviews (known by other names often) are sets of meetings where managers and executives come together to review, discuss and often heatedly debate the merits and potential of their employees, and then place and sometimes calibrate that talent on a performance to potential matrix.  These are some of the most strategic conversations happening in conference rooms across the globe. I speak with a lot of organizations about their practices in this area and the answers to these questions are as varied and nuanced as there are organizations thinking about them.  Some are passionate about their talent review processes and have a very evolved and thoughtful approach.  They really know their people, where their talent is, and the opportunities they plan to offer them.  And to them that is their succession process.  They may never create a slate of named candidates for a job or assign employees to formal talent pools.   On the flip side there are other organizations that create slates and slates and often multiple talent pools to support their strategic positions.  Through these, they are able to mitigate the risk associated with having a key player leave their organization.  And for them, that is their succession process.  Some will start from the lower levels of their organization and roll up their succession plans, while other organizations only cover their top 200 executives and key positions with plans.  And then there are organizations that leverage some of all of these.  Ultimately, the goals are to increase employee engagement, reduce talent-related risk, ensure the right talent is aligned to the strategic initiatives and to drive business value.  The approaches are as unique as the organizations they represent and the business opportunities they are looking to seize upon.   And that's ok.  It's great in fact. Because one thing that is common is the recognition that the need to know your people and align your top talent to the future needs of the organization is mission critical. Sure, there are a set of commonly recognized best practices and guiding principles for all of this.  There is no one right or perfect answer.  And that is what makes this all so much darn fun.  With Talent Review and Succession Management from Oracle HCM Cloud, we’ve blended the ability to support your strategic talent review conversations with both succession plans and talent pools allowing for one very seamless and interactive process. So whether you create a lot of succession plans, only focus on talent pools, have a robust talent review process, or all of the above, Oracle has you covered. I’m looking forward to spending time with our customers at the upcoming OHUG Global Conference 2014 happening June 9-13 in Las Vegas.  It’s an opportunity for me to talk to customers about their business and how they are doing strategic talent processes like talent reviews and succession.  I hope to see you there. Marcie Van Houten brings over 20 years of management consulting, information systems and human capital management experience to her role as director of product strategy at Oracle. 
Ms. Van Houten has spent the past several years at Oracle working closely with customers to help drive the direction of the company's talent and succession management applications. Additionally, she spent nine years at PeopleSoft as Director of Information Systems leading human capital management implementation projects. Marcie Van Houten lives in Walnut Creek, California, and holds a MBA from Southern Methodist University in Dallas, Texas.  You can follow her on Twitter: @MarcieVH

    Read the article

  • creating tables on remote database

    - by raj
    I created a database link as follows: create public database link REMOTEDB connect to REMOTEUSER identified by REMOTEPWD using 'REMOTEDB'; Then I tried to create a table in the remote db like this: create table MYTABLE@REMOTEDB (name varchar2(20)); It says: ORA-02021 DDL operations are not allowed on a remote database. Will this not work under any circumstances, or am I just missing some permission needed to create it?
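    One workaround that is commonly suggested (a sketch, not part of the original question) is to have the remote database execute the DDL itself, for example through DBMS_UTILITY.EXEC_DDL_STATEMENT invoked over the link:
    BEGIN
      DBMS_UTILITY.EXEC_DDL_STATEMENT@REMOTEDB(
        'CREATE TABLE MYTABLE (name VARCHAR2(20))');
    END;
    /
    The DDL then runs entirely on the remote side under REMOTEUSER's privileges, which is what ORA-02021 is effectively asking for.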

    Read the article

  • SQL SERVER – Display Datetime in Specific Format – SQL in Sixty Seconds #033 – Video

    - by pinaldave
    A very common requirement of developers is to format datetime values to their specific needs. Every geographic location has different date format needs: some countries follow the standard of mm/dd/yy and some countries dd/mm/yy, so the developer's needs change as the geographic location changes. In SQL Server there are various functions to aid this requirement. There is the function CAST, which developers have been using for a long time, as well as the function CONVERT, which is a more enhanced version of CAST. In the latest version, SQL Server 2012, a new function FORMAT has been introduced as well. In this SQL in Sixty Seconds video we cover two different methods to display the datetime in a specific format: 1) the CONVERT function and 2) the FORMAT function. Let me know what you think of this video. Here is the script which is used in the video:
    -- http://blog.SQLAuthority.com
    -- SQL Server 2000/2005/2008/2012 onwards
    -- Datetime
    SELECT CONVERT(VARCHAR(30),GETDATE()) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),10) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),110) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),5) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),105) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),113) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),114) AS DateConvert;
    GO
    -- SQL Server 2012 onwards
    -- Various formats of Datetime
    SELECT CONVERT(VARCHAR(30),GETDATE(),113) AS DateConvert;
    SELECT FORMAT ( GETDATE(), 'dd mon yyyy HH:m:ss:mmm', 'en-US' ) AS DateConvert;
    SELECT CONVERT(VARCHAR(30),GETDATE(),114) AS DateConvert;
    SELECT FORMAT ( GETDATE(), 'HH:m:ss:mmm', 'en-US' ) AS DateConvert;
    GO
    -- Specific usage of the FORMAT function
    SELECT FORMAT(GETDATE(), N'"Current Time is "dddd MMMM dd, yyyy', 'en-US') AS CurrentTimeString;
    This video discusses CONVERT and FORMAT in a simple manner, but the subject is much deeper and there is a lot of information to cover along with it. I strongly suggest that you go over the related blog posts in the next section, as there is a wealth of knowledge discussed there. Related Tips in SQL in Sixty Seconds: Get Date and Time From Current DateTime – SQL in Sixty Seconds #025; Retrieve – Select Only Date Part From DateTime – Best Practice; Get Time in Hour:Minute Format from a Datetime – Get Date Part Only from Datetime; DATE and TIME in SQL Server 2008; Function to Round Up Time to Nearest Minutes Interval; Get Date Time in Any Format – UDF – User Defined Functions; Retrieve – Select Only Date Part From DateTime – Best Practice – Part 2; Difference Between DATETIME and DATETIME2; Saturday Fun Puzzle with SQL Server DATETIME2 and CAST. What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel
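    As a quick illustration of the geographic point above, here is a small hedged example (SQL Server 2012 or later, since it relies on FORMAT; the output values in the comments are only indicative):
    -- The same instant rendered as a short date for two different cultures
    SELECT FORMAT(GETDATE(), 'd', 'en-US') AS US_Style,     -- e.g. 7/17/2013
           FORMAT(GETDATE(), 'd', 'de-DE') AS German_Style; -- e.g. 17.07.2013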

    Read the article

  • SQL SERVER – Select and Delete Duplicate Records – SQL in Sixty Seconds #036 – Video

    - by pinaldave
    Developers often face situations where they find that a column has duplicate records and they want to delete them. A good developer will never delete any data without inspecting it and making sure that what is being deleted is absolutely fine to delete. Before deleting duplicate data, one should select it and check that the data really is duplicated. In this video we demonstrate two scripts: 1) one that selects duplicate records and 2) one that deletes duplicate records. We are assuming that the table has a unique incremental id. Additionally, we are assuming that in the case of duplicate records we would like to keep the latest record. If there is really a business need to keep only unique records, one should consider creating a unique index on the column. A unique index will prevent users from entering duplicate data into the table from the beginning. That would be the best solution. However, deleting duplicate data is also a very valid request. If users realize that they need to keep only unique records in the column and they are willing to create a unique constraint, the very first requirement for creating a unique constraint is to delete the duplicate records. Let us see how it works in Sixty Seconds. Here is the script which is used in the video.
    USE tempdb
    GO
    CREATE TABLE TestTable (ID INT, NameCol VARCHAR(100))
    GO
    INSERT INTO TestTable (ID, NameCol)
    SELECT 1, 'First'
    UNION ALL
    SELECT 2, 'Second'
    UNION ALL
    SELECT 3, 'Second'
    UNION ALL
    SELECT 4, 'Second'
    UNION ALL
    SELECT 5, 'Second'
    UNION ALL
    SELECT 6, 'Third'
    GO
    -- Selecting Data
    SELECT * FROM TestTable
    GO
    -- Detecting Duplicates
    SELECT NameCol, COUNT(*) TotalCount
    FROM TestTable
    GROUP BY NameCol
    HAVING COUNT(*) > 1
    ORDER BY COUNT(*) DESC
    GO
    -- Deleting Duplicates
    DELETE FROM TestTable
    WHERE ID NOT IN (
        SELECT MAX(ID) FROM TestTable
        GROUP BY NameCol)
    GO
    -- Selecting Data
    SELECT * FROM TestTable
    GO
    DROP TABLE TestTable
    GO
    Related Tips in SQL in Sixty Seconds: SQL SERVER – Delete Duplicate Records – Rows; SQL SERVER – Count Duplicate Records – Rows; SQL SERVER – 2005 – 2008 – Delete Duplicate Rows; Delete Duplicate Records – Rows – Readers Contribution; Unique Nonclustered Index Creation with IGNORE_DUP_KEY = ON – A Transactional Behavior. What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel
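    To round out the point about preventing duplicates at the source, here is a small hedged sketch (the index name is illustrative) of adding a unique index once the duplicate rows have been removed:
    -- Run after the duplicates are deleted; new duplicate NameCol values
    -- will then be rejected at insert time
    CREATE UNIQUE NONCLUSTERED INDEX UX_TestTable_NameCol
        ON TestTable (NameCol);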

    Read the article

  • Best Practices For Database Consolidation On Exadata - New Whitepapers

    - by Javier Puerta
    Best Practices For Database Consolidation On Exadata Database Machine (Nov. 2011): Consolidation can minimize idle resources, maximize efficiency, and lower costs when you host multiple schemas, applications or databases on a target system. Consolidation is a core enabler for deploying Oracle database on public and private clouds. This paper provides the Exadata Database Machine (Exadata) consolidation best practices to set up and manage systems and applications for maximum stability and availability. Download here
    Oracle Exadata Database Machine Consolidation: Segregating Databases and Roles (Sep. 2011): This paper focuses on the aspects of segregating databases from each other in a platform consolidation environment on an Oracle Exadata Database Machine. Platform consolidation is the consolidation of multiple databases onto a single Oracle Exadata Database Machine. When multiple databases are consolidated on a single Database Machine, it may be necessary to isolate certain database components or functions in order to meet business requirements and provide best practices for a secure consolidation. In this paper we outline the use of Oracle Exadata database-scoped security to securely separate database management and provide a detailed case study that illustrates the best practices. Download here

    Read the article

  • Which design is better: using a foreign key instead of a string to store a list of ids?

    - by Kien Thanh
    I'm building an online examination system. I have designed two tables, Question and GeneralExam. The table GeneralExam contains info about the exam like name, description, duration, etc. Now I would like to design a table GeneralQuestion, which will contain the ids of the questions belonging to a general exam. Currently, I have two ideas for designing the GeneralQuestion table:
    1. It will have two columns: general_exam_id, question_id.
    2. It will have two columns: general_exam_id, list_question_ids (string/text).
    I would like to know which design is better, or the pros and cons of each design. I'm using a Postgresql database.
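    A minimal PostgreSQL sketch of the first option, assuming the exam and question tables use integer primary keys (table and column names here are only illustrative):
    CREATE TABLE general_question (
        general_exam_id integer NOT NULL REFERENCES general_exam (id),
        question_id     integer NOT NULL REFERENCES question (id),
        PRIMARY KEY (general_exam_id, question_id)
    );
    -- Fetching all questions for one exam is then an ordinary indexed join:
    SELECT q.*
    FROM question q
    JOIN general_question gq ON gq.question_id = q.id
    WHERE gq.general_exam_id = 42;
    With the comma-separated string of option 2, the database can no longer enforce that every listed id actually exists in the question table, and queries have to parse the string instead of joining.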

    Read the article

  • Database-as-a-Service on Exadata Cloud

    - by Gagan Chawla
    Note: Oracle Enterprise Manager 12c DBaaS is platform agnostic and is designed to work on Exadata/non-Exadata, physical/virtual, and Oracle/non-Oracle platforms; it is not mandatory to use Exadata as the base platform. Database-as-a-Service (DBaaS) is an important trend these days, and the top business drivers motivating customers towards a private database cloud model include constant pressure to reduce IT costs and complexity, and the need to improve agility and quality of service. The first step many enterprises take in their journey towards cloud computing is to move to a consolidated and standardized environment, and with Exadata already a proven, best-in-class consolidation platform, we are now seeing more and more customers start to evolve from an Exadata based platform into an agile, self service driven private database cloud using Oracle Enterprise Manager 12c. Together, Exadata Database Machine and Enterprise Manager 12c provide the industry's most comprehensive and integrated solution to transform a typical silo'ed environment into an enterprise class database cloud with self service, rapid elasticity and pay-per-use capabilities. In today's post, I'll list the important steps to enable DBaaS on Exadata using Enterprise Manager 12c. These steps are drawn from a recent DBaaS implementation in a real customer engagement:
    Project Planning - The first step involves defining the scope of implementation, mapping functional requirements and objectives to use cases, defining high availability, network and security requirements, and delivering the project plan. In a Cloud project you plan around technology, business and processes all together, so ensure you engage your actual end users and stakeholders early on in the project, right from the scoping and planning stage.
    Setup your EM 12c Cloud Control Site - Once project plan approval and sign off from stakeholders is achieved, refer to the EM 12c Install guide. Some important tips to follow during the site setup phase: review the new EM 12c Sizing paper before you get started with the install; the Cloud, Chargeback and Trending, and Exadata plug-ins should be selected for deployment during install; refer to the EM 12c Administrator's guide for High Availability, Security, and Network/Firewall best practices and options; your management and managed infrastructure should not be combined, i.e. the EM 12c repository should not be hosted on the same Exadata where the target Database Cloud is to be set up.
    Setup Roles and Users - Cloud Administrator (EM_CLOUD_ADMINISTRATOR), Self Service Administrator (EM_SSA_ADMINISTRATOR), and Self Service User (EM_SSA_USER) are the important roles required for cloud lifecycle management. Roles and users are managed by the Super Administrator via the Setup menu -> Security option. For Self Service/SSA users, custom role(s) based on EM_SSA_USER should be created, and the EM_USER and PUBLIC roles should be revoked during SSA user account creation.
    Configure Software Library - The Cloud Administrator logs in and configures the software library via the Enterprise menu -> Provisioning and Patching option; the storage location is the OMS shared filesystem. The Software Library is the centralized repository that stores all software entities and is often termed the 'local store'.
    Setup Self Update - Self Update is one of the most innovative and cool new features in the EM 12c framework.
    Self Update can be accessed by the Super Administrator via the Setup -> Extensibility option, and is the unified delivery mechanism to get all new and updated entities (Agent software, plug-ins, connectors, gold images, provisioning bundles, etc.) in EM 12c.
    Deploy Agents on all Compute nodes, and discover Exadata targets - Refer to the Exadata discovery cookbook for a detailed walkthrough to ensure successful discovery of Exadata targets.
    Configure Privilege Delegation Settings - This step involves deployment of the privilege setting template on all the nodes by the Super Administrator via the Setup menu -> Security option, with the option to define whether to use sudo or PowerBroker for all provisioning and patching operations.
    Provision Grid Infrastructure with RAC Database on Compute Nodes - Software is provisioned in this step via a provisioning profile using EM 12c database provisioning. In the case of Exadata, Grid Infrastructure and RAC Database software is already deployed on the compute nodes via OneCommand from Oracle, so the SSA Administrator just needs to discover the Oracle Homes and Listener as EM targets. Databases will be created as and when users request databases from the cloud.
    Customize the Create Database Deployment Procedure - The actual database creation steps are "templatized" in this step by the Self Service Administrator, and the newly saved deployment procedure will be used during service template creation in the next step. This is an important step, so make sure you have locked all the required variables marked as locked 'Y' in this table.
    Setup Self Service Portal - This step involves setting up the zones, user quotas, service templates, and chargeback plan. The SSA portal is set up by the Self Service Administrator via the Setup menu -> Cloud -> Database option, following the guided workflow. Refer to the DBaaS cookbook for details. You also have an option to customize the SSA login page via steps documented in the EM 12c Cloud Administrator's guide.
    Final Checks - Define and document process guidelines for SSA users and administrators. Get your SSA users trained on the Self Service Portal features and the overall DBaaS model, and make sure SSA administrators are familiar with the Self Service Portal setup pieces, EM 12c database lifecycle management capabilities and the overall EM 12c monitoring framework.
    GO LIVE - Announce the rollout of Database-as-a-Service to your SSA users. Users can log in to the Self Service Portal and request/monitor/view their databases in the Exadata based database cloud. Congratulations! You just delivered a successful database cloud implementation project!
    In future posts, we will cover these additional useful topics around database cloud: DBaaS implementation tips and tricks, right from setup to self service to managing the cloud lifecycle; 'How to' enable copies of real production databases in DBaaS with rapid provisioning in the database cloud; and a case study of a customer who recently achieved success in their transformational journey from a traditional silo'ed environment to an Exadata based database cloud using Enterprise Manager 12c. More Information: Podcast on Database as a Service using Oracle Enterprise Manager 12c; Oracle Enterprise Manager 12c Installation and Administration guide, Cloud Administration guide; DBaaS Cookbook; Exadata Discovery Cookbook; Screenwatch: Private Database Cloud: Set Up the Cloud Self-Service Portal; Screenwatch: Private Database Cloud: Use the Cloud Self-Service Portal. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Design Pattern for Complex Data Modeling

    - by Aaron Hayman
    I'm developing a program that has a SQL database as a backing store. As a very broad description, the program itself allows a user to generate records in any number of user-defined tables and make connections between them. As for specs: Any record generated must be able to be connected to any other record in any other user table (excluding itself...the record, not the table). These "connections" are directional, and the list of connections a record has is user ordered. Moreover, a record must "know" of connections made from it to others as well as connections made to it from others. The connections are kind of the point of this program, so there is a strong possibility that the number of connections made is very high, especially if the user is using the software as intended. A record's field can also include aggregate information from it's connections (like obtaining average, sum, etc) that must be updated on change from another record it's connected to. To conserve memory, only relevant information must be loaded at any one time (can't load the entire database in memory at load and go from there). I cannot assume the backing store is local. Right now it is, but eventually this program will include syncing to a remote db. Neither the user tables, connections or records are known at design time as they are user generated. I've spent a lot of time trying to figure out how to design the backing store and the object model to best fit these specs. In my first design attempt on this, I had one object managing all a table's records and connections. I attempted this first because it kept the memory footprint smaller (records and connections were simple dicts), but maintaining aggregate and link information between tables became....onerous (ie...a huge spaghettified mess). Tracing dependencies using this method almost became impossible. Instead, I've settled on a distributed graph model where each record and connection is 'aware' of what's around it by managing it own data and connections to other records. Doing this increases my memory footprint but also let me create a faulting system so connections/records aren't loaded into memory until they're needed. It's also much easier to code: trace dependencies, eliminate cycling recursive updates, etc. My biggest problem is storing/loading the connections. I'm not happy with any of my current solutions/ideas so I wanted to ask and see if anybody else has any ideas of how this should be structured. Connections are fairly simple. They contain: fromRecordID, fromTableID, fromRecordOrder, toRecordID, toTableID, toRecordOrder. Here's what I've come up with so far: Store all the connections in one big table. If I do this, either I load all connections at once (one big db call) or make a call every time a user table is loaded. The big issue here: the size of the connections table has the potential to be huge, and I'm afraid it would slow things down. Store in separate tables all the outgoing connections for each user table. This is probably the worst idea I've had. Now my connections are 'spread out' over multiple tables (one for each user table), which means I have to make a separate DB called to each table (or make a huge join) just to find all the incoming connections for a particular user table. I've avoided making "one big ass table", but I'm not sure the cost is worth it. Store in separate tables all outgoing AND incoming connections for each user table (using a flag to distinguish between incoming vs outgoing). 
This is the idea I'm leaning towards, but it will essentially double the total DB storage for all the connections (as each connection will be stored in two tables). It also means I have to make sure connection information is kept in sync in both places. This is obviously not ideal but it does mean that when I load a user table, I only need to load one 'connection' table and have all the information I need. This also presents a separate problem, that of connection object creation. Since each user table has a list of all connections, there are two opportunities for a connection object to be made. However, connections objects (designed to facilitate communication between records) should only be created once. This means I'll have to devise a common caching/factory object to make sure only one connection object is made per connection. Does anybody have any ideas of a better way to do this? Once I've committed to a particular design pattern I'm pretty much stuck with it, so I want to make sure I've come up with the best one possible.
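    To make option 1 concrete, here is a rough SQL sketch of the single connections table built from the fields listed above; the names, types, and indexes are assumptions for illustration only:
    CREATE TABLE connection (
        from_record_id    integer NOT NULL,
        from_table_id     integer NOT NULL,
        from_record_order integer NOT NULL,
        to_record_id      integer NOT NULL,
        to_table_id       integer NOT NULL,
        to_record_order   integer NOT NULL
    );
    -- One index per direction, so a record can load its outgoing and incoming
    -- connections with two index range scans instead of scanning the table
    CREATE INDEX connection_from_idx ON connection (from_table_id, from_record_id);
    CREATE INDEX connection_to_idx   ON connection (to_table_id, to_record_id);
    With those two indexes in place, the size of the table matters much less for the "load only what this record needs" access pattern, which is the main worry raised about the single-table option.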

    Read the article

  • Channel Revenue Management and General Ledger Integration

    - by LuciaC-Oracle
    Back in February of this year, we told you about the EBS Business Process Advisor: CRM Channel Revenue Management document which has detailed information about the Channel Revenue Management application business flow and explains integration points with other applications.  But we thought that you might like to have even more information on exactly how Channel Revenue Management passes data to General Ledger. Take a look at Integration Troubleshooting: Oracle Channel Revenue Management to GL via Subledger Accounting (Doc ID 1604094.2).  This note includes comprehensive information about the data flow between Channel Revenue Management and GL, offers troubleshooting tips and explains some key setups. Let us know what you think - start a discussion in the My Oracle Support Channel Revenue Management Community!

    Read the article

  • Is the "One Description Table to rule them all" approch good?

    - by DavRob60
    Long ago, I worked (as a client) with a piece of software which used a centralized table for its codified elements. Here, as far as I remember, is how the table looked:
    Table_Name (PK)
    Field_Name (PK)
    Code (PK)
    Sort_Order
    Description
    So, instead of creating a table every time they needed a codified field, they were just adding rows to this table with the new Table_Name and Field_Name. I'm sometimes tempted to use this pattern in the databases I design, but I have resisted it so far; I think there's something wrong with it, but I cannot put my finger on it. Is it just because you end up with some of the structure logic inside the data, or is it something else?
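    For illustration, a sketch of the centralized lookup table as described (column types are guesses):
    CREATE TABLE code_description (
        table_name  VARCHAR(30)  NOT NULL,
        field_name  VARCHAR(30)  NOT NULL,
        code        VARCHAR(10)  NOT NULL,
        sort_order  INT,
        description VARCHAR(255),
        PRIMARY KEY (table_name, field_name, code)
    );
    -- A common objection to this pattern: the database cannot declare a real
    -- foreign key from, say, orders.status_code to only those rows where
    -- table_name = 'ORDERS' and field_name = 'STATUS_CODE', so referential
    -- integrity for every codified field has to be enforced in application code.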

    Read the article

  • UK Oracle User Group Event: Trends in Identity Management

    - by B Shashikumar
    As threat levels rise and new technologies such as cloud and mobile computing gain widespread acceptance, security is occupying more and more mindshare among IT executives. To help prepare for the rapidly changing security landscape, the Oracle UK User Group community and our partners at Enline/SENA have put together an User Group event in London on Apr 19 where you can learn more from your industry peers about upcoming trends in identity management. Here are some of the key trends in identity management and security that we predicted at the beginning of last year and look how they have turned out so far. You have to admit that we have a pretty good track record when it comes to forecasting trends in identity management and security. Threat levels will grow—and there will be more serious breaches:   We have since witnessed breaches of high value targets like RSA and Epsilon. Most organizations have not done enough to protect against insider threats. Organizations need to look for security solutions to stop user access to applications based on real-time patterns of fraud and for situations in which employees change roles or employment status within a company. Cloud computing will continue to grow—and require new security solutions: Cloud computing has since exploded into a dominant secular trend in the industry. Cloud computing continues to present many opportunities like low upfront costs, rapid deployment etc. But Cloud computing also increases policy fragmentation and reduces visibility and control. So organizations require solutions that bridge the security gap between the enterprise and cloud applications to reduce fragmentation and increase control. Mobile devices will challenge traditional security solutions: Since that time, we have witnessed proliferation of mobile devices—combined with increasing numbers of employees bringing their own devices to work (BYOD) — these trends continue to dissolve the traditional boundaries of the enterprise. This in turn, requires a holistic approach within an organization that combines strong authentication and fraud protection, externalization of entitlements, and centralized management across multiple applications—and open standards to make all that possible.  Security platforms will continue to converge: As organizations move increasingly toward vendor consolidation, security solutions are also evolving. Next-generation identity management platforms have best-of-breed features, and must also remain open and flexible to remain viable. As a result, developers need products such as the Oracle Access Management Suite in order to efficiently and reliably build identity and access management into applications—without requiring security experts. Organizations will increasingly pursue "business-centric compliance.": Privacy and security regulations have continued to increase. So businesses are increasingly look for solutions that combine strong security and compliance management tools with business ready experience for faster, lower-cost implementations.  If you'd like to hear more about the top trends in identity management and learn how to empower yourself, then join us for the Oracle UK User Group on Thu Apr 19 in London where Oracle and Enline/SENA product experts will come together to share security trends, best practices, and solutions for your business. Register Here.

    Read the article

  • 24 Hours of PASS

    - by andyleonard
    I am honored to participate in 24 Hours of PASS starting at 8:00 AM 19 May 2010! My presentation is titled Database Development Patterns and is the second session - starting at 9:00 AM EDT 19 May 2010. It's free, but you have to register to attend - register today! :{> Andy

    Read the article

  • Where is a postgresql 9.1 database stored in ubuntu 12.04?

    - by celenius
    I installed PostgreSQL and created a database on Ubuntu. I created the database using the following commands:
    sudo su postgres
    createdb mydatabase
    However, I can't figure out where the database was initialized. I would like to be able to edit the pg_hba.conf and postgresql.conf files. When I view the database using pgAdmin I see the following information:
    CREATE DATABASE mydatabase
    WITH OWNER = postgres
    ENCODING = 'UTF8'
    TABLESPACE = pg_default
    LC_COLLATE = 'en_US.UTF-8'
    LC_CTYPE = 'en_US.UTF-8'
    CONNECTION LIMIT = -1;
    Any thoughts on how I can find the database cluster location?
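    One quick way to find these paths (a sketch; the locations shown in the comments are the usual Ubuntu package defaults, so treat them as assumptions for this machine) is to ask the running server from psql:
    SHOW data_directory;   -- typically /var/lib/postgresql/9.1/main
    SHOW config_file;      -- typically /etc/postgresql/9.1/main/postgresql.conf
    SHOW hba_file;         -- typically /etc/postgresql/9.1/main/pg_hba.conf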

    Read the article

  • How to determine if someone is accessing our database remotely?

    - by Vednor
    I own a content publishing website developed using CakePHP(tm) v2.1.2 and MySQL 5.1.63. It was developed by a freelance developer who kept remote access to the database, which I wasn't aware of. One day he accessed the site and overwrote all the data. After the attack, my hosting provider disabled remote access to our database and changed the password. But somehow he accessed the site database again and overwrote some information. We managed to stop the attack the second time by taking the site down immediately, but now we suspect that he'll attack again. What we could identify is that he's running a query and changing all the information in the database in a matter of seconds. Is there any possible way to detect how he's accessing our database without remote access or knowing our cPanel password? Or to identify whether he has left something inside the site that grants him access to our database?
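    As a first check (a sketch, not a complete audit), the MySQL console can show which accounts are allowed to connect from remote hosts and which sessions are open right now:
    -- Accounts permitted to connect from somewhere other than the local machine
    SELECT user, host FROM mysql.user
    WHERE host NOT IN ('localhost', '127.0.0.1', '::1');
    -- Sessions currently connected, including the host they come from
    SHOW FULL PROCESSLIST;
    Beyond that, reviewing the application code for anything that proxies queries and enabling the general query log while you investigate are the usual next steps.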

    Read the article

  • Oracle Database 12c ?????????????????? (8/1 ????)

    - by OTN-J Master
    [Japanese-language post about Oracle Database 12c; the original text is garbled in this copy. Related coverage referenced: @IT (ITmedia), ZDNet Japan.]

    Read the article

  • Oracle's PeopleSoft Customer Advisory Boards Convene to Discuss Roadmap at Pleasanton Campus

    - by john.webb(at)oracle.com
    Last week we hosted all of the PeopleSoft CABs (Customer Advisory Boards) at our Pleasanton Development Center to review our detailed designs for future Feature Packs, PeopleSoft 9.2, and beyond. Over 150 customers from 79 companies attended representing a variety of industries, geographies, and company sizes. The PeopleSoft team relies heavily on this group to provide key input on our roadmap for applications as well as technology direction. A good product strategy is one part well thought out idea with many handfuls of customer validation, and very often our best ideas originate from these customer discussions. While the individual CABs have frequent interactions with our teams, it's always great to have all of them in one place and in person. Our attendance was up from last year which I attribute to two things: (1) More interest as a result of PeopleSoft 9.1 upgrade; (2) An improving economy allowing for more travel. Maybe we should index the second item meeting-to-meeting and use it as a market indicator - we'll see! We kicked off the day one session with an overview of the PeopleSoft Roadmap and I outlined our strategy around Feature Packs and PeopleSoft 9.2. Given the high adoption rate of PeopleSoft 9.1 (over 4x that of 9.0 given the same time lapse since the release date), there was a lot of interest around the 9.1 Feature Packs as a vehicle for continuous value. We provided examples of our 3 central design themes: Simplicity, Productivity, and lower TCO, including those already delivered via Feature Packs in 2010. A great example of this is the Company Directory feature in PeopleSoft HCM. The configuration capabilities and the new actionable links our CAB advised us on last Spring were made available to all customers late last year. We reviewed many more future Navigation changes that will fundamentally change the way users interact with PeopleSoft. Our old friend, the menu tree, is being relegated from center stage to a bit part, with new concepts like Activity Guides, Train Stops, Related Actions, Work Centers, Collaborative Workspaces, and Secure Enterprise Search bringing users what they need in a contextual, role based manner with fewer clicks. Paco Aubrejuan, our PeopleSoft GM, and Steve Miranda, the SVP for Fusion Applications, then discussed our plans around Oracle's Application Investment Strategy.  This included our continued investment in developing both PeopleSoft and Fusion as well as the co-existence strategy with new Fusion Apps integrating to PeopleSoft Apps. Should you want to view this presentation, a recording is available. Jeff Robbins, our lead PeopleTools Strategist, provided the roadmap for PeopleTools and discussed our continuing plan to deliver annual releases to further evolve the user experience. Numerous examples were highlighted with the Navigation techniques I mentioned previously. Jeff also provided a lot of food for thought around Lifecycle Management topics and how to remain current on releases with a  lower cost of ownership. Dennis Mesler, from Boise, was the guest speaker in this slot, who spoke about the new PeopleSoft Test Framework (PTF). Regression Testing is a key cost component when product updates are applied. This new tool (which is free to all PeopleSoft customers as part of PeopleTools 8.51) provides a meta data driven approach to recording and executing test scripts. 
Coupled with what our Usage Monitor enables, PTF provides our customers a powerful tool to lower costs and manage product updates more efficiently and at the time of their choosing. Beyond the general session, we broke out into the individual CABs: HCM, Financials, ESA/ALM, SRM, SCM, CRM, and PeopleTools/ Technology. A day and half of very engaging discussions around our plans took place for each product pillar. More about that to follow in future posts.      We capped the first day with a reception sponsored by our partners: InfoSys, SmartERP (represented by Doris Wong), and Grey Sparling  Solutions (represented by Chris Heller and Larry Grey). Great to see these old friends actively engaged in the very busy PeopleSoft ecosystem!   Jeff Robbins previews the roadmap for PeopleTools with the PeopleSoft CAB  

    Read the article
