Search Results

Search found 1027 results on 42 pages for 'msps dba'.


  • ArchBeat Link-o-Rama Top 10 for August 2012

    - by Bob Rhubart
    The Top 10 most popular items shared via the OTN ArchBeat Facebook page for the month of August 2012.

    Now Available: Oracle SQL Developer 3.2 (3.2.09.23)
    New features include APEX listener, UI enhancements, and 12c database support.

    The Role of Oracle VM Server for SPARC in a Virtualization Strategy
    In this article, Matthias Pfutzner discusses hardware, desktop, and operating system virtualization, along with various Oracle virtualization technologies, including Oracle VM Server for SPARC.

    How to Manually Install Flash Player Plugin to see the Oracle Enterprise Manager Performance Page | Kai Yu
    So, you're a DBA and you want to check the Performance page in Oracle Enterprise Manager (11g or 12c). You click the Performance tab and… nothing. Zip. Nada. The Flash plugin is a no-show. Relax! Oracle ACE Director Kai Yu shows you what you need to do to see all the pretty colors instead of that dull grey screen.

    Relationally Challenged (CX - CRM - EQ/RQ/CRQ) | Chris Warticki
    Self-proclaimed Oracle Support "spokesmodel" Chris Warticki has some advice for those interested in Customer Relationship Management: "How about we just dumb it down, strip it to the core, keep it simple and LISTEN?! No more focus groups, no more surveys, and no need to gather more data. We have plenty of that. Why not just provide the customer what they are asking for?"

    Free WebLogic Server Course | Middleware Magic
    So you want to sharpen your Oracle WebLogic Server skills, but you prefer to skip the whole classroom bit and don't want to be bothered with dealing with an instructor? No problem! Oracle ACE Rene van Wijk, a prolific Middleware Magic blogger, has information on an Oracle WebLogic course you can take on your own time, at your own pace.

    Oracle VM VirtualBox 4.1.20 released
    Oracle VM VirtualBox 4.1.20 was just released at the community and Oracle download sites, reports the Fat Bloke. This is a maintenance release containing bug fixes and stability improvements.

    Optimizing OLTP Oracle Database Performance using Dell Express Flash PCIe SSDs | Kai Yu
    Oracle ACE Director Kai Yu shares resources based on "several extensive performance studies on a single node Oracle 11g R2 database as well as a two node 11gR2 Oracle Real Application clusters (RAC) database running on Dell PowerEdge R720 servers with Dell Express Flash PCIe SSDs on Oracle Enterprise Linux 6.2 platform."

    Oracle ACE sessions at Oracle OpenWorld
    With so many great sessions at this year's event, building your Oracle OpenWorld schedule can involve making a lot of tough choices. But you'll find that the sessions led by Oracle ACEs just might be the icing on the cake for your OpenWorld experience.

    MySQL Update: The Cleveland MySQL Meetup (Independence, OH)
    Oracle MySQL team member Benjamin Wood, a MySQL engineer and five-year veteran of the MySQL organization, will speak at the Cleveland MySQL Meetup event on September 12. The presentation will include a MySQL 5.5 overview, Oracle's roadmap for MySQL (including specifics on MySQL 5.6), best practices and how to overcome development and operational MySQL challenges, and the new MySQL commercial extensions. Click the link for time and location information.

    Parsing XML in Oracle Database | Martijn van der Kamp
    Martijn van der Kamp's post deals with processing XML in PL/SQL code and loading the data into the database.

    Thought for the Day
    "Walking on water and developing software from a specification are easy if both are frozen." — Edward V. Berard
    Source: SoftwareQuotes.com

    Read the article

  • SQL SERVER – BACKUPIO, BACKUPBUFFER – Wait Type – Day 14 of 28

    - by pinaldave
    Backup is the most important task for any database admin. Your data is at risk if you are not performing database backups. Honestly, I have seen many DBAs who know how to take backups but do not know how to restore them. (Sigh!) In this blog post we are going to discuss one of my real experiences with one of my clients: BACKUPIO. When I started to deal with it, I really had no idea how to fix the issue. However, after fixing it at two places, I think I know why this is happening, but at the same time, I am not sure the fix is the best solution. In reality the fix is not a solution but a workaround (which is not optimal, but gets things done).

    From Books Online:
    BACKUPIO - Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data. This type is not typical, except when a task is waiting for a tape mount.
    BACKUPBUFFER - Occurs when a backup task is waiting for data, or is waiting for a buffer in which to store data. This type is not typical, except when a task is waiting for a tape mount.

    BACKUPIO and BACKUPBUFFER explanation: These wait stats occur when you are taking a backup to tape or to some other extremely slow backup system.

    Reducing BACKUPIO and BACKUPBUFFER waits: In a recent consultancy, backup to tape was very slow, probably because the tape system was very old. When I explained the reason for this wait type during the consultancy, the owners immediately decided to replace the tape drive with an alternate system. They had a small SAN enclosure sitting unused on the side, which they decided to repurpose. After a week, I received an email from their DBA saying that the wait stats had reduced drastically.

    At another location, my client was using a third-party tool (please don't ask me the name of the tool) to take backups. This tool compressed the backup while taking it. I have had a very good experience with this tool almost all the time, except in this one case. When I took a backup using the native SQL Server compressed backup, there was a very small value for this wait type and the backup was much faster. However, when I attempted it with the third-party backup tool, this value was very high again and the backup took much more time. The third-party tool had many other features, but the client was not using them. We ended up using the native SQL Server compressed backup and it worked very well. If I see this value high again in a future consultancy, I will try to understand this wait type in much more detail, so that I can hopefully come to a solid solution.

    Read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Books Online for further clarification. All the discussion of wait stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
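    As a companion to the post, here is a generic way to check these two wait types on your own server; this is a minimal sketch against the standard DMV, not the exact script from the series:

        SELECT wait_type,
               waiting_tasks_count,
               wait_time_ms,
               max_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER')
        ORDER BY wait_time_ms DESC;

    Comparing these counters before and after a change of backup target (tape vs. SAN, third-party vs. native compressed backup) is one way to confirm the kind of improvement described above.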

    Read the article

  • Best of OTN - Week of Oct 21st

    - by CassandraClark-OTN
    This week's Best of OTN, for you, the best devs, DBAs, sysadmins, and architects out there! In these weekly posts the OTN team will highlight the top content from each community: Architect, Database, Systems, and Java. Since we'll be publishing this on Fridays, we'll also mix in a little fun!

    Architect Community Top Content

    The Road Ahead for WebLogic 12c | Edwin Biemond
    Oracle ACE Edwin Biemond shares his thoughts on announced new features in Oracle WebLogic 12.1.3 and 12.1.4 and compares those upcoming releases to Oracle WebLogic 12.1.2.

    A Roadmap for SOA Development and Delivery | Mark Nelson
    Do you know the way to S-O-A? Mark Nelson does. His latest blog post, part of an ongoing series, will help to keep you from getting lost along the way.

    Updated ODI Statement of Direction | Robert Schweighardt
    Heads up, Oracle Data Integrator fans! A new statement of product direction document is available, offering an overview of the strategic product plans for Oracle's data integration products for bulk data movement and transformation, specifically Oracle Data Integrator (ODI) and Oracle Warehouse Builder (OWB).

    Bob Rhubart, Architect Community Manager
    Friday Funny: "Some people approach every problem with an open mouth." — Adlai E. Stevenson (October 23, 1835 – June 14, 1914), 23rd Vice President of the United States

    Database Community Top Content

    Pre-Built Developer VMs (for Oracle VM VirtualBox)
    Heard all the chatter about Oracle VirtualBox? Over 1 million downloads per week, and look: pre-built virtual appliances designed specifically for developers.

    Video: Big Data, or BIG DATA?
    Oracle ACE Director Ben Prusinski explains the differences.

    Webcast Series - Developing Applications in Oracle's Public Cloud
    Time to get started on developing and deploying cloud applications by moving to the cloud. Good friend Gene Eun from Oracle's Cloud team posted this two-part webcast series that has an overview and demonstration of the Oracle Database Cloud Service. Check out the demos on how to migrate your data to the cloud, extend your application with interactive reporting, and create and access RESTful Web services. Registration required, but so worth it!

    Laura Ramsey, Database Community Manager
    Friday Funny -

    Systems Community Top Content

    Video: What Kind of Scalability is Better, Horizontal or Vertical?
    Rick Ramsey asks the question "Is Oracle's approach to large vertically scaled servers at odds with today's trend of combining lots and lots of small, low-cost server systems with networking to build a cloud, or is it a better approach?" Michael Palmeter, Director of Solaris Product Management, and Renato Ribeiro, Director of Product Management for SPARC Servers, discuss.

    Video: An Engineer Takes a Minute to Explain Cloud
    Bart Smaalders, long-time Oracle Solaris core engineer, takes a minute to explain cloud from a sysadmin point of view.

    Hands-On Lab: How to Deploy and Manage a Private IaaS Cloud
    Soup to nuts. This lab shows you how to set up and manage a private cloud with Oracle Enterprise Manager Cloud Control 12c in an Infrastructure as a Service (IaaS) model. You will first configure the IaaS cloud as the cloud administrator and then deploy guest virtual machines (VMs) as a self-service user.

    Rick Ramsey, Systems Community Manager
    Friday Funny: Video: Drunk Airline Pilot - Dean Martin - Foster Brooks

    Java Community Top Content

    Video: NightHacking Interview with James Gosling
    James Gosling, the Father of Java, discusses robotics, Java, and how to keep his autonomous WaveGliders in the ocean for weeks at a time. Live from Hawaii.

    Video: Raspberry Pi Developer Challenge: Remote Controller
    A developer who knew nothing about Java Embedded or Raspberry Pi shows how he can now control a robot with his phone. The project was built during the Java Embedded Challenge for Raspberry Pi at JavaOne 2013.

    Java EE 7 Certification Survey - Participants Needed
    Help us define how to serve your training and certification needs for Java EE 7.

    Tori Wieldt, Java Community Manager
    Friday Funny: Programmers have a strong sensitivity to Yak's pheromone. Causes irresistible desire to shave said Yak. Thanks, @rickasaurus!

    To follow and take part in the conversation, follow/like etc. at one or all of the resources below:
    OTN TechBlog, The Java Source Blog, The OTN Garage Blog, The OTN ArchBeat Blog, @oracletechnet, @java, @OTN_Garage, @OTNArchBeat, @OracleDBDev, OTN, I Love Java, OTN Garage, OTN ArchBeat, Oracle DB Dev, OTN Java

    Read the article

  • PASS Summit 2011 – Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011.  I had really wanted to attend Gail Shaw's (blog|twitter) and Grant Fritchey's (blog|twitter) pre-conference seminar "All About Execution Plans" on Monday, but that would have meant flying out on Sunday, which I couldn't do.  On Tuesday, I attended Allan Hirt's (blog|twitter) pre-conference seminar entitled "A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups".  Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012.  Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD.  Hmpf!

    On Wednesday, I attended Gail Shaw's "Bad Plan! Sit!", Andrew Kelly's (blog|twitter) "SQL 2008 Query Statistics", Dan Jones' (blog|twitter) "Improving your PowerShell Productivity", and Brent Ozar's (blog|twitter) "BLITZ! The SQL – More One Hour SQL Server Takeovers".  In Gail's session, she went over how to fix bad plans and bad query patterns.  Update your stale statistics!

    How to fix bad plans:
    - Use local variables – the optimizer can't sniff them, so it'll optimize for the "average" value
    - Use RECOMPILE (at the query or stored procedure level) – CPU hit
    - OPTIMIZE FOR hint – the most common value you'll pass

    How to fix bad query patterns:
    - Don't use them – ha!
    - Catch-all queries: use dynamic SQL or OPTION (RECOMPILE)
    - Multiple execution paths: split into multiple stored procedures or OPTION (RECOMPILE)
    - Modifying parameter values: use local variables, split into outer and inner procedures, or OPTION (RECOMPILE)

    She also went into "last resort" and "very last resort" options, but those are risky unless you know what you are doing.  For the average Joe, she wouldn't recommend these.  Examples are query hints and plan guides.

    While I enjoyed Andrew's session, I didn't take any notes as it was familiar material.  Andrew is a great speaker though, and I'd highly recommend attending his sessions in the future.

    Next up was Dan's PowerShell session.  I need to look into profiles, manifests, function modules, and function import scripts more, as I just didn't quite grasp these concepts.  I am attending a PowerShell training class at the end of November, so maybe that'll help clear it up.  I really enjoyed the Excel integration demo.  It was very cool watching PowerShell build the spreadsheet in real time.  I must look into this more!  On a side note, I am jealous of Dan's hair.  Fabulous hair!

    Brent's session showed us how to quickly gather information about a server whose database administration duties you will be taking over.  He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz.  I can't wait to use this at my work, even on systems where I've been the primary DBA for years; maybe there's something I've overlooked.  We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out.  He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don't have to constantly update it when he makes a change.  I think I'll utilize his update code for some other challenges that we face at my work.
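    To illustrate the parameter-sniffing fixes from Gail's session, here is a minimal hedged sketch; the procedure, table, and parameter names are made up for illustration, not taken from her talk:

        -- Hypothetical procedure; names are illustrative.
        CREATE PROCEDURE dbo.GetOrdersByStatus @Status int
        AS
        BEGIN
            -- Fix 1: copy the parameter into a local variable so the
            -- optimizer can't sniff it and optimizes for the "average" value.
            DECLARE @LocalStatus int = @Status;
            SELECT OrderID, OrderDate
            FROM dbo.Orders
            WHERE Status = @LocalStatus;

            -- Fix 2: recompile this statement on every execution (CPU hit).
            SELECT OrderID, OrderDate
            FROM dbo.Orders
            WHERE Status = @Status
            OPTION (RECOMPILE);

            -- Fix 3: always optimize for the most common value you'll pass.
            SELECT OrderID, OrderDate
            FROM dbo.Orders
            WHERE Status = @Status
            OPTION (OPTIMIZE FOR (@Status = 1));
        END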

    Read the article

  • Protecting Consolidated Data on Engineered Systems

    - by Steve Enevold
    In this time of reduced budgets and cost-cutting measures in Federal, State, and Local governments, the requirement to provide services continues to grow. Many agencies are looking at consolidating their infrastructure to reduce cost and meet budget goals. Oracle's engineered systems are ideal platforms for accomplishing these goals. These systems provide unparalleled performance and are ideal for running applications and databases that traditionally run on separate dedicated environments. However, putting multiple critical applications and databases in a single architecture makes security more critical. You are putting a concentrated set of sensitive data on a single system, making it a more tempting target. The environments were previously separated by iron, so now you need to provide assurance that one group, department, or application's information is not visible to other personnel or applications resident in the Exadata system. Administration of the environments requires formal separation of duties, so that an administrator of one application environment cannot view or negatively impact others. Also, these systems need to be in protected environments just like other critical production servers. They should be in a data center protected by physical controls, network firewalls, intrusion detection and prevention, etc.

    Exadata also provides unique security benefits, including a reduced attack surface achieved by minimizing packages and services to only those required. In addition to reducing the possible system areas someone may attempt to infiltrate, Exadata has the following features:

    1. InfiniBand, which functions as a secure private backplane
    2. IPTables, to perform stateful packet inspection for all nodes (Cellwall implements firewall services on each cell using IPTables)
    3. Hardware-accelerated encryption for data at rest on storage cells

    Oracle is uniquely positioned to provide the security necessary for implementing Exadata because security has been a core focus since the company's beginning. In addition to the security capabilities inherent in Exadata, Oracle security products are all certified to run in an Exadata environment.

    Database Vault: Oracle Database Vault helps organizations increase the security of existing applications and address regulatory mandates that call for separation of duties, least privilege, and other preventive controls to ensure data integrity and data privacy. Oracle Database Vault proactively protects application data stored in the Oracle database from being accessed by privileged database users. A unique feature of Database Vault is the ability to segregate administrative tasks, including when a command can be executed, so that the DBA can manage the health of the database and its objects but may not see the data.

    Advanced Security: helps organizations comply with privacy and regulatory mandates by transparently encrypting all application data or specific sensitive columns, such as credit cards, social security numbers, or personally identifiable information (PII). By encrypting data at rest and whenever it leaves the database over the network or via backups, Oracle Advanced Security provides the most cost-effective solution for comprehensive data protection.

    Label Security: a powerful and easy-to-use tool for classifying data and mediating access to data based on its classification. Designed to meet public-sector requirements for multi-level security and mandatory access control, Oracle Label Security provides a flexible framework that both government and commercial entities worldwide can use to manage access to data on a "need to know" basis in order to protect data privacy and achieve regulatory compliance.

    Data Masking: reduces the threat of someone in the development organization taking data that has been copied from production to the development environment for testing, upgrades, etc., by irreversibly replacing the original sensitive data with fictitious data so that production data can be shared safely with IT developers or offshore business partners.

    Audit Vault and Database Firewall: Oracle Audit Vault and Database Firewall serves as a critical detective and preventive control across multiple operating systems and database platforms to protect against the abuse of legitimate access to databases, which is responsible for almost all data breaches and cyber attacks.

    Consolidation, cost savings, and performance can now be achieved without sacrificing security. The combination of built-in protection and Oracle's industry-leading data protection solutions makes Exadata an ideal platform for Federal, State, and Local governments and agencies.
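    To make the Advanced Security point concrete, here is a minimal sketch of transparent data encryption (TDE) applied to a sensitive column. The table and column names are hypothetical, and this assumes a TDE keystore/wallet has already been configured and opened:

        -- Assumes an open TDE keystore; names are illustrative.
        CREATE TABLE customers (
          customer_id  NUMBER PRIMARY KEY,
          name         VARCHAR2(100),
          ssn          VARCHAR2(11) ENCRYPT USING 'AES192'  -- column-level TDE
        );

        -- Encrypting an existing column works the same way:
        ALTER TABLE customers MODIFY (ssn ENCRYPT USING 'AES192');

    The encryption is transparent to the application: queries and DML are unchanged, while the data is encrypted on disk and in backups.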

    Read the article

  • To SYNC or not to SYNC – Part 4

    - by AshishRay
    This is Part 4 of a multi-part blog article where we are discussing various aspects of setting up Data Guard synchronous redo transport (SYNC). In Part 1 of this article, I debunked the myth that Data Guard SYNC is similar to a two-phase commit operation. In Part 2, I discussed the various ways that network latency may or may not impact a Data Guard SYNC configuration. In Part 3, I talked in detail about why Data Guard SYNC is a good thing, and the distance implications you have to keep in mind. In this final article of the series, I will talk about how you can nicely complement Data Guard SYNC with the ability to fail over in seconds. Wait - Did I Say "Seconds"? Did I just say that some customers do Data Guard failover in seconds? Yes, Virginia, there is a Santa Claus. Data Guard has an automatic failover capability, aptly called Fast-Start Failover. Initially available with Oracle Database 10g Release 2 for Data Guard SYNC transport mode (and enhanced in Oracle Database 11g to support Data Guard ASYNC transport mode), this capability, managed by Data Guard Broker, lets your Data Guard configuration automatically fail over to a designated standby database. Yes, this means no human intervention is required to do the failover. This process is controlled by a low-footprint Data Guard Broker client called Observer, which makes sure that the primary database and the designated standby database are behaving like good kids. If something bad were to happen to the primary database, the Observer, after a configurable threshold period, tells that standby, "Your time has come, you are the chosen one!" The standby dutifully follows the Observer directives by assuming the role of the new primary database. The DBA or the Sys Admin doesn't need to be involved. And - in case you are following this discussion very closely, and are wondering … "Hmmm … what if the old primary is not really dead, but just network isolated from the Observer or the standby - won't this lead to a split-brain situation?" The answer is No - It Doesn't. With respect to why-it-doesn't, I am sure there are some smart DBAs in the audience who can explain the technical reasons. Otherwise - that will be the material for a future blog post. So - this combination of SYNC and Fast-Start Failover is the nirvana of lights-out, integrated HA and DR, as practiced by some of our advanced customers. They have observed failover times (with no data loss) ranging from single-digit seconds to tens of seconds. With this, they support operations in industry verticals such as manufacturing, retail, telecom, Internet, etc. that have the most demanding availability requirements. One of our leading customers with massive cloud deployment initiatives tells us that they know about server failures only after Data Guard has automatically completed the failover process and the app is back up and running! Needless to mention, Data Guard Broker has the integration hooks for interfaces such as JDBC and OCI, or even for custom apps, to ensure the application gets automatically rerouted to the new primary database after the database-level failover completes. Net Net? To sum up this multi-part blog article, Data Guard with SYNC redo transport mode, plus Fast-Start Failover, gives you the ideal triple-combo - that is, it gives you the assurance that for critical outages, you can fail over your Oracle databases: very fast, without human intervention, and without losing any data. In short, it takes the element of risk out of critical IT operations. 
It does require you to be more careful with your network and systems planning, but as far as HA is concerned, the benefits outweigh the investment costs. So, this is what we in the MAA Development Team believe in. What do you think? How has your deployment experience been? We look forward to hearing from you!
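    If you want to see where a Fast-Start Failover configuration currently stands, one generic way (not something prescribed in this article, just a standard dictionary query) is to ask the database itself:

        -- Run on the primary; these V$DATABASE columns exist in
        -- Oracle Database 10g Release 2 and later.
        SELECT fs_failover_status,
               fs_failover_current_target,
               fs_failover_observer_present
        FROM   v$database;

    A status of SYNCHRONIZED with the Observer present indicates the configuration is ready to fail over automatically.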

    Read the article

  • SQL Authority News – FalafelCON 2014: 2 days with the Best Developers in the World

    - by Pinal Dave
    I love presenting at various forums on various technologies. I am extremely excited that I got invited to speak at Falafel Conference 2014 in San Francisco. I will present two technology sessions on SQL Server. If you are into web development, or if you just want to attend a conference with the best of the industry's speakers, this may be the right conference for you. What sets this conference apart from other conferences is the technology presented as well as the speakers. Usually one has to attend a very expensive, large-scale event to hear good speakers. At this conference, you will find quite a few industry legends presenting on bleeding-edge technology. Here are a few of the reasons why I believe you should attend this conference:

    - Choose from four tracks covering Web, Mobile development and testing, Sitefinity, and Automated Testing, or attend sessions from all four!
    - Learn from the best developers and testers in the business in an intimate setting.
    - Surround yourself with your peers and the opportunity to network.
    - Learn about the latest platforms and technologies including Kendo UI, AngularJS, ASP.NET MVC, WebAPI, and more!

    Here are the details for the sessions which I am going to present at Falafel Conference.

    Secrets of SQL Server: Database Worst Practices
    Abstract: Chances are you have heard, or even uttered, this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and could even bring back memories of your own early DBA days. The goal of this session is to expose the small details that can be dangerous to the production environment and SQL Server as a whole, as well as to talk about worst practices and how to avoid them. Shedding light on some of these perils and the tricks to avoid them may even save your current job. After attending this session, developers will only need 60 seconds to improve the performance of their database server in their SharePoint implementation. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Additionally, all attendees of the session will have access to the learning material presented in the session.

    The Unsung Hero
    Abstract: Slow-running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.

    Register Now! I have learned from the Falafel Team that they are running out of tickets and soon they will close registration. For the next 10 days the price for registration is only USD 149. Trust me, you can't get such world-class training and networking opportunities at such a low price. Click to Register Here!

    Reference: Pinal Dave (http://blog.sqlauthority.com)
    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL

    Read the article

  • How to automate a monitoring system for ETL runs

    - by Jeffrey McDaniel
    Upon completion of the Primavera ETL process there are a few ways to determine whether the process finished successfully. First, in the <installation directory>\log folder, there are staretlprocess.log and staretl.html files. These files give the output results of the ETL run. The staretl.html file gives a detailed summary of each step of the process, its run time, and its status. The .log file, based on the logging level set in the Configuration tool, can give extensive information about the ETL process. The log file can be used as validation for process completion.

    To automate the monitoring of these log files, perform the following steps:

    1. Write a custom application to parse through the log file and search for [ERROR]. In most cases, a major [ERROR] could cause the ETL process to fail. Searching the log and finding this value is worthy of an alert.

    2. Determine the total number of steps in the ETL process, and validate that the log file recorded an entry for the final step. For example, validate that your log file contains an entry for Step 39/39 (this could be different based on the version you are running). If there is no Step 39/39, then either the process is taking longer than expected or it didn't make it to the end. Either way this would be good cause for an alert.

    3. Check the last line in the log file. The last line of the log file should contain an indication that the ETL run completed successfully. For example, the last line of a log file will say (results could be different based on Reporting Database versions): [INFO] (Message) Finished Writing Report

    4. You could write an Ant script to execute the ETL process, set it to failonerror="true", and from there send results to an external tool to monitor the jobs, send them to email, or send them to a database.

    With each ETL run, the log file appends to the existing log file by default. Because of this behavior, I would recommend renaming the existing log files before running a new ETL process. By doing this, only log entries for the currently running ETL process are recorded in the new log files. Based on these log entries, alerts can be set up to notify the administrator or DBA.

    Another way to determine whether the ETL process has completed successfully is to monitor the etl_processmaster table. Depending on the Reporting Database version this could be in the Stage or Star database; as of Reporting Database 2.2 and higher it is in the Star database. The etl_processmaster table records entries for the ETL run along with a start and finish time. If the ETL process has failed, the finish date should be null. This table can be queried at a time when the ETL process is expected to be finished, and an alert sent if the value is null. These are just some options. There are additional ways this can be accomplished based around these two areas - log files or the database.
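    A minimal sketch of that null-finish check follows. The start/finish column names here are assumptions for illustration; verify the actual column names in your Reporting Database version before wiring this into an alert:

        -- Connect as Staruser; column names are assumed, check your schema.
        SELECT processid, starttime
        FROM   etl_processmaster
        WHERE  processid = (SELECT MAX(processid) FROM etl_processmaster)
        AND    finishtime IS NULL;
        -- A returned row means the latest run has not finished (or failed): alert.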
    Here is an additional query to gather more information about your ETL run (connect as Staruser):

        SELECT SYSDATE,
               test_script,
               DECODE(loc, 0, processname, TRIM(SUBSTR(processname, loc + 1))) processname,
               duration duration
        FROM  (SELECT (e.endtime - b.starttime) * 1440 duration,
                      TO_CHAR(b.starttime, 'hh24:mi:ss') starttime,
                      TO_CHAR(e.endtime, 'hh24:mi:ss') endtime,
                      b.processname,
                      INSTR(b.processname, ']') loc,
                      b.infotype test_script
               FROM  (SELECT processid, infodate starttime, processname, infomsg, infotype
                      FROM   etl_processinfo
                      WHERE  processid = (SELECT MAX(processid) FROM etl_processinfo)
                      AND    infotype = 'BEGIN') b
               INNER JOIN
                     (SELECT processid, infodate endtime, processname, infomsg, infotype
                      FROM   etl_processinfo
                      WHERE  processid = (SELECT MAX(processid) FROM etl_processinfo)
                      AND    infotype = 'END') e
               ON  b.processid = e.processid
               AND b.processname = e.processname
               ORDER BY b.starttime)

    Read the article

  • Make your TSQL easier to read during a presentation

    - by Jonathan Allen
    SQL Server Management Studio 2012 has some neat settings that you can use to make your presentations at a SQL event better for the attendees, if you are willing to spend a few minutes making some settings changes. Historically, I have been reluctant to make changes to my SSMS settings, as it is such a tedious process and it's not 100% clear that what you think you are changing is actually what gets changed. With SSMS 2012 this has become a lot easier and a lot less risky.

    In any session that involves T-SQL there is a trade-off between the speaker having all the code on screen and the attendees being able to read any of what is on screen. You (the speaker) might be able to read the code while you are working on it, but plenty of your audience won't be able to make head or tail of it. SSMS 2012 has a zoom facility that can help, but don't go nuts: having the font too big means you will be scrolling a lot and the code will again be rendered unreadable.

    There is more, though, but you need to take a deep breath, open the Tools menu, and delve into the SSMS options. In previous versions of SSMS this is a deep, dark and scary place where changing values can be obscure and sometimes catastrophic to the UI when you get back to the code editor. First things first: we set out as a good DBA and save our current (and presumably acceptable) SSMS configuration. From Import and Export Settings you can set up a file to hold all of the settings that you currently have. The wizard will open and ask you to pick an option; this time around, choose to export settings. Hit Next and Next again, name your settings profile in the final step of the wizard, and then click Finish. Once this is done you can change whatever you like and always get back to this configuration in a couple of clicks.

    So what can you change to make for a good experience? Well, there are plenty of things that can be altered, but don't go too mad and change too many things without taking a look at the results. For every item in the list you can change font, size, weight, colour, background colour, etc., but consider what you are trying to achieve and take it slowly. I have seen presenters with their settings set to have a yellow highlight and black font rather than the default pale blue background and slightly darker font. To achieve that, select Text Editor and then select "Selected Text" in the Display Items listbox. As you change things, the Sample area gives you an idea of the effect you are going to have. Black and yellow is the colour combination with the highest contrast - that's why bees and wasps* are that colour.

    What next? How about increasing the default font for your demo scripts? This means that any script you open, and any new ones that you start, will take on this font. No more zooming (or forgetting to) in the middle of sessions.

    Now don't forget to save this profile: follow the same steps as above but give the profile a different name; something like PresentationBigFontHighContrast might be appropriate. Once you are done making changes, export the settings once more, then go into the Import Export wizard and import settings from the first profile you created. Everything will be back to normal. Now making changes to suit your environment can be done very easily and with confidence.

    * - and warning tape and safety signs and so forth - Health and Safety officers simply copy nature!

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end-user data updates, incremental data loads, many lock-and-send operations, and/or many calculations executed. If an Essbase database starts to experience performance slowdowns, this is an indication that there may be too much fragmentation. See Chapter 54, "Improving Essbase Performance," in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html

    Fragmentation is likely to occur in the following situations:

    - Read/write databases where users are constantly updating data
    - Databases that execute calculations around the clock
    - Databases that frequently update and recalculate dense members
    - Data loads that are poorly designed
    - Databases that contain a significant number of Dynamic Calc and Store members
    - Databases that use an isolation level of uncommitted access with commit block set to zero

    There are two types of data block fragmentation:

    - Free space tracking, which is measured using the Average Fragmentation Quotient statistic.
    - Block order on disk, which is measured using the Average Cluster Ratio statistic.

    Average Fragmentation Quotient
    The Average Fragmentation Quotient ratio measures free space in a given database. As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and will either be appended at the end of the file or fit into another empty space that is large enough. These empty spaces take up space in the .PAG files. The higher the number, the more empty spaces you have, and therefore the bigger the .PAG file and the longer it takes to traverse through the .PAG file to get to a particular record. An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space.

    Average Cluster Ratio
    The Average Cluster Ratio describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are ordered in the correct sequence, in the order of the outline. As you load data and calculate data blocks, the sequence can start to go out of order. This is because when you write to a block it may not be possible to place it back in the exact same spot in the database where it existed before. The lower this number, the more out of order the blocks become, and the more it affects performance. An Average Cluster Ratio value of 1 means no fragmentation. Any value lower than 1, e.g. 0.01032828, means the data blocks are getting further out of order relative to the outline order.

    Eliminating Data Block Fragmentation
    Both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data. There are two types of dense restructure:

    1. Implicit Restructures
    An implicit dense restructure happens when outline changes are made using the EAS Outline Editor or Dimension Build. Essbase restructures create new .PAG files, restructuring the data blocks within them. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed with implicit restructures.

    2. Explicit Restructures
    An explicit dense restructure happens when a database restructure is initiated manually. An explicit dense restructure is a full restructure, which comprises a dense restructure as outlined above plus the removal of empty blocks.

    Empty Blocks vs. Fragmentation
    The existence of empty blocks is not considered fragmentation. Empty blocks can be created through calc scripts or formulas. An empty block will add to the existing database block count and will be included in the block counts of the database properties. There are no statistics for empty blocks. The only way to determine whether empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database, then import the exported data. If the block count decreased, the difference is the number of empty blocks that had existed in the database.

    Read the article

  • OTN Japan Events in November

    - by OTN-J Master
    Here is the lineup of OTN Japan events for November. Details and registration for each event are posted at: https://blogs.oracle.com/otnjp/category/Event

    OTN-hosted seminars:
    - [11/14 (Wed)] 93rd evening seminar: WebLogic Server internals and hands-on
    - [11/21 (Wed)] Oracle Database Appliance workshop
    - [11/22 (Thu)] 30th WebLogic Server study group

    In addition, DBA & Developer Day 2012 will be held on November 20; OTN members can find registration details on oracle.com.

    Supported community events:
    - [11/9 (Fri)] JavaOne 2012 San Francisco debriefing (Japan Java community)
    - [11/10 (Sat)] JJUG Cross Community Conference 2012 Fall (Japan Java community)
    - [11/28 (Wed)] "Learn Oracle Database in 90 Minutes" session (database community)

    JavaOne 2012 San Francisco debriefing
    Date: 11/9 (Fri) 13:00-19:00. Venue: Tokyo. A report session covering JavaOne 2012, held in San Francisco from September 30 to October 4, 2012.

    JJUG Cross Community Conference 2012 Fall
    Date: 11/10 (Sat) 10:00-19:15. Venue: Tokyo. The Japan Java User Group's conference (JJUG CCC 2012 Fall), bringing together Java developers and community sessions.

    93rd evening seminar: WebLogic Server internals and hands-on
    Date: 11/14 (Wed) 18:30-20:30. Venue: Tokyo. An evening deep dive into WebLogic Server and JRockit, JDBC connectivity from WebLogic Server, and application development with Oracle JDeveloper, Oracle Application Development Framework (ADF), and WebCenter Framework on a Java EE 6 foundation.

    Oracle Database Appliance workshop
    Date: 11/21 (Wed) 15:30-17:00. Venue: Tokyo (13F seminar room). An introduction to the Oracle Database Appliance and its setup and operation.

    30th WebLogic Server study group
    Date: 11/22 (Thu) 18:30-20:40. Venue: Tokyo. Two sessions: operational tips for WebLogic Server, and an introduction to JSF 2.0, the Java EE 6 standard for building rich internet applications (RIA).

    Learn Oracle Database in 90 Minutes (database community)
    Date: 11/28 (Wed) 19:00-20:30. Venue: Tokyo. A rapid tour of Oracle Database configuration and troubleshooting topics, including the network configuration tools (NetCA, Net Manager).

    Read the article

  • Oracle Support Webcasts (July 2013)

    - by Steve He
    Live webcasts this month (delivered in Mandarin, typically about 25 minutes of presentation plus a 5-minute Q&A in which Oracle support engineers answer questions):

    - July 11, 14:00 - My Oracle Support essentials: what My Oracle Support offers and how to get the most out of it (about 30 minutes)
    - July 16, 14:00 - My Oracle Support session
    - July 17, 14:00 - Exadata diagnostics: Diagnostic Assistant (DA). Covers how DA unifies the common diagnostic data collection tools (ADR, RDA, OCM, Explorer) and how to use it to collect diagnostic information on Exadata.
    - July 18, 14:00 - E-Business Suite: a 60-minute session for E-Business Suite administrators and DBAs
    - July 25, 14:00 - WebLogic Server: an hour-long session on WebLogic Server fundamentals and troubleshooting

    Logistics: My Oracle Support lists webcast times in a fixed time zone; use the world clock to convert to your local time. For more about the Oracle support webcast program, see note 603505.1. Internet Explorer is recommended for attending the webcasts through My Oracle Support.

    Recorded training sessions available on demand:

    - Oracle Support Best Practices (Mandarin)
    - GetProactive: Resolve Fast (Mandarin)
    - WebLogic GC & OutOfMemory Diagnostic (Mandarin)
    - Creating Customer Value
    - Oracle Support Basics
    - An Introduction to My Oracle Support
    - Service Request Management
    - Customer User Administration
    - Managing Favorite
    - Quick Search
    - Hot Topic Email
    - Patch and Update
    - Site Alert
    - Search and Browse Features in My Oracle Support
    - Why Use Configuration Manager In The My Oracle Support
    - Enterprise Manager 11g and My Oracle Support
    - Oracle Collaborative Support
    - How to Escalate a Service Request within Oracle Support

    For more recordings and upcoming sessions, visit the Support Training Community.

    Read the article

  • Parsing XML in a non-XML column

    - by slugster
    Hi, I am reasonably proficient with SQL Server, but I'm not a DBA so I'm not sure how to approach this. I have an XML chunk stored in an ntext column. Due to it being a legacy database and the requirements of the project, I cannot change the table (yet). This is an example of the data I need to manipulate: <XmlSerializableHashtable xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <Entries> <Entry> <key xsi:type="xsd:string">CurrentYear</key><value xsi:type="xsd:string">2010</value> </Entry> <Entry> <key xsi:type="xsd:string">CurrentMonth</key><value xsi:type="xsd:string">4</value> </Entry> </Entries> </XmlSerializableHashtable> Each row will have a chunk like this, but obviously with different keys/values in the XML. Is there any clever way I can parse this XML into a name/value-pairs style view? Or should I be using SQL Server's XML querying abilities even though it isn't an XML column? If so, how would I query a specific value out of that column? (Note: adding a computed XML column at the end of the table is a possibility, if that helps.) Thanks for any assistance!
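    One plausible approach, sketched against hypothetical table and column names (dbo.LegacyTable, XmlData): cast the ntext to nvarchar(max), then to xml, and shred it with nodes()/value():

        -- ntext cannot be cast to xml directly, so go via nvarchar(max).
        SELECT t.Id,
               e.value('(key/text())[1]',   'nvarchar(200)') AS [key],
               e.value('(value/text())[1]', 'nvarchar(200)') AS [value]
        FROM dbo.LegacyTable AS t
        CROSS APPLY (SELECT CAST(CAST(t.XmlData AS nvarchar(max)) AS xml)) AS x(doc)
        CROSS APPLY x.doc.nodes('/XmlSerializableHashtable/Entries/Entry') AS n(e);

    Wrapping this in a view would give the name/value-pairs view asked about; a computed xml column persisting the double cast could avoid repeating the conversion on every query.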

    Read the article

  • no statement parsed and wrong number or types of arguments - cfstoredproc

    - by Travis
    I have an Oracle procedure, editBacklog, which I'm calling from a CFM page via cfstoredproc. After several changes to the procedure I started getting ORA-06550: line 1, column 7: PLS-00306: wrong number or types of arguments in call to 'EDITBACKLOG'. I've gotten this before and found that if I changed the name of the procedure it started working again. I changed the name to editBacklog2 and it worked as I expected it to. I changed the name back to editBacklog and got the same error. I changed the name back to editBacklog2 again and started getting ORA-01003: no statement parsed. NOTHING had changed at this point except for the names. I changed the name yet again to editBacklog3 and it works as expected. As of right now:

    editBacklog = ORA-06550
    editBacklog2 = ORA-01003
    editBacklog3 = works (kinda)

    This whole thing started when I was trying to fix an ORA-01821: date format not recognized error. I fear that when I start changing things I'll get the same lame behavior described above. Either Oracle or CF is messing with me, and I'll end up liking one of them less because of it. I assume it's probably cfstoredproc caching metadata or something, but neither Google, the Livedocs, nor OTN have much to say about my situation. I'm not the SA or DBA. Anyone have any ideas?
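    One diagnostic step worth trying here (a generic sketch, not from the original post): since PLS-00306 means the argument list the client sends does not match the procedure's signature, check the compile status and the actual signature Oracle has on record:

        -- Check compile status and any compilation errors for the procedure
        SELECT object_name, status FROM user_objects WHERE object_name = 'EDITBACKLOG';
        SELECT line, position, text FROM user_errors WHERE name = 'EDITBACKLOG';

        -- List the argument names, types, and order that cfstoredproc must match
        SELECT argument_name, data_type, in_out, position
        FROM   user_arguments
        WHERE  object_name = 'EDITBACKLOG'
        ORDER  BY position;

    If the signature in USER_ARGUMENTS differs from what the CFM page passes, that would explain why renaming (which forces the client to re-describe the procedure) appears to "fix" things.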

    Read the article

  • What are some strategies for maintaining a common database schema with a team of developers and no DBA?

    - by Mahmoud Abdelkader
    I'm curious about how others have approached the problem of maintaining and synchronizing database changes across many (10+) developers without a DBA. What I mean, basically, is: if someone wants to make a change to the database, what are some strategies for doing that? (I.e., I've created a 'Car' model and now I want to apply the appropriate DDL to the database, etc.) We're primarily a Python shop and our ORM is SQLAlchemy. Previously, we had written our models in such a way as to create the schema using our ORM, but we recently ditched this because:

    - We couldn't track changes using the ORM
    - The state of the ORM wasn't in sync with the database (e.g., lots of differences, primarily related to indexes and unique constraints)
    - There was no way to audit database changes unless the developer documented the change via email to the team.

    Our solution to this problem was basically to have a "gatekeeper" individual who checks every change into the database and applies all accepted database changes to an accepted_db_changes.sql file, whereby the developers who need to make any database changes put their requests into a proposed_db_changes.sql file (a sketch of such an entry follows below). We check this file in, and, when it's updated, we all apply the change to our personal database on our development machine. We don't create indexes or constraints on the models; they are applied explicitly on the database. I would like to know what are some strategies to maintain database schemas and whether ours seems reasonable. Thanks!
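    To make the gatekeeper workflow concrete, here is a sketch of what one entry in the proposed file might look like; the file naming is the poster's, while the specific change, author, and naming conventions are made up for illustration:

        -- proposed_db_changes.sql (illustrative entry)
        -- 2010-05-03, requested by: jdoe
        -- Reason: new Car model needs a backing table; index applied
        -- explicitly here, not via the ORM.
        CREATE TABLE car (
            id     INTEGER PRIMARY KEY,
            make   VARCHAR(50) NOT NULL,
            model  VARCHAR(50) NOT NULL
        );
        CREATE UNIQUE INDEX ux_car_make_model ON car (make, model);

    Keeping each entry dated and attributed gives the audit trail the poster says was missing from the ORM-driven approach.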

    Read the article

  • Cannot connect Linux XAMPP PHP to SQL Server database.

    - by Jim
    I've searched many sites without success. I'm using XAMPP 1.7.3a on Ubuntu 9.1. I have used the methods found at http://www.webcheatsheet.com/PHP/connect_mssql_database.php; they all fail. I am able to "connect" with a linked database through MS Access; however, that is not an acceptable solution, as not all users will have Access. The first method (at webcheatsheet) uses mssql_connect, et al., but I get this error from the mssql_connect() call:

    Warning: mssql_connect() [function.mssql-connect]: Unable to connect to server: [my server] in [my code]

    [my server] is the server address; I have used both the host name and the IP address. [my code] is a reference to the file and line number in my .php file. Is there a log file somewhere that would have more information about the failure, both on my machine and on SQL Server? We do not have a bona fide DBA, so I will need specific information to pass on if the issue seems to be on the server side. All assistance is appreciated, including RTFM when the location of the M is provided! Thanks

    Read the article

  • Using HttpClient with SSL and certificates

    - by ChrisCM
    While I've been familiar with HTTPS and the concept of SSL, I have recently begun some development and found I am a little confused. The requirement was that I write a small Java application that runs on a machine attached to a scanner. When a document is scanned, this is picked up and the file (usually PDF) is sent over the internet to our application server, which will then process it. I've written the application using the Apache Commons libraries and HttpClient. The second requirement was to connect over SSL, requiring a certificate. Following guidance on the HttpClient page, I am using AuthSSLProtocolSocketFactory from the contributions page. The constructor can take a keystore, keystore password, truststore, and truststore password. As an initial test our DBA enabled SSL on one of our development webservers and provided me with a .p12 file which, when imported into IE, allows me to connect successfully. I am a bit confused about the difference between keystores and truststores and what steps I need to take using the keytool. I tried importing the p12 into a keystore file but get the error:

    keytool error: java.lang.Exception: Input not an X.509 certificate

    I followed a suggestion of importing the p12 into Internet Explorer and exporting it as a .cer, which I can then successfully import into a keystore. When I supply this as the keystore argument of AuthSSLProtocolSocketFactory I get a meaningless error, but if I try it as a truststore it seems to be read fine, though ultimately I get:

    Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate

    I am unsure whether I have missed some steps, whether I am misunderstanding SSL and mutual authentication altogether, or whether this is a misconfiguration on the server side. Can anyone provide suggestions or point me towards resources that might help me figure this out, please?

    Read the article

  • Service php-fpm does not support chkconfig

    - by ychian
    Everything is working fine, except that when I run chkconfig --add php-fpm it throws me an error:

    Service php-fpm does not support chkconfig

    php-5.2.13
    php-5.2.13-fpm-0.5.13.diff.gz

    Below is the configuration I use:

    ./configure --enable-fastcgi --enable-fpm --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=x86_64-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info --cache-file=../config.cache --with-libdir=lib64 --with-config-file-path=/etc --with-config-file-scan-dir=/etc/php.d --disable-debug --with-pic --disable-rpath --with-pear --with-bz2 --with-curl --with-exec-dir=/usr/bin --with-freetype-dir=/usr --with-png-dir=/usr --enable-gd-native-ttf --without-gdbm --with-gettext --with-gmp --with-iconv --with-jpeg-dir=/usr --with-openssl --with-png --with-expat-dir=/usr --with-pcre-regex=/usr --with-zlib --with-layout=GNU --enable-exif --enable-ftp --enable-magic-quotes --enable-sockets --enable-sysvsem --enable-sysvshm --enable-sysvmsg --enable-track-vars --enable-trans-sid --enable-yp --enable-wddx --with-kerberos --enable-ucd-snmp-hack --with-unixODBC=shared,/usr --enable-memory-limit --enable-shmop --enable-calendar --enable-dbx --enable-dio --with-mime-magic=/usr/share/file/magic.mime --without-sqlite --with-libxml-dir=/usr --with-xml --with-system-tzdata --without-mysql --without-gd --without-odbc --disable-dom --disable-dba --without-unixODBC --disable-pdo --disable-xmlreader --disable-xmlwriter

    Read the article

  • How to refactor this database to allow sync. with smart clients?

    - by Anton Zvgny
    Howdy, I have inherited the following database: http://i46.tinypic.com/einvjr.png (EDIT: I cannot post images, so here goes the link). This currently runs 100% online through an ASP.NET web app, but some employees will need offline access to this database, either to get documents or to insert new documents for later synchronization when back at the office. What changes need to be made to this database schema to support such a scenario (syncing smart clients with a central web database)? Thanks!

    ps. I'm not a developer/DBA, I'm just a mechanical engineer tasked with changing this app, so take it easy on me :D
    ps1. We're using MS SQL Server for the central online database and MS SQL Compact for the smart client.
    ps2. The ImageGuid field of the images table is used to save the images to the file system. The web app takes the GUID and creates folders according to the first 3 initial letters to persist the image to the file system.
    ps3. The users of the smart client don't always get server data first and then modify it for later synchronization. Most of the time they start working completely offline (with a blank local database) and later sync the data to the server.
    ps4. All the users of the smart client app have an account in the web app.

    Read the article

  • SQL GUID Vs Integer

    - by Dal
    Hi, I have recently started a new job and noticed that all the SQL tables use the GUID data type for the primary key. In my previous job we used integers (auto-increment) for the primary key, and it was a lot easier to work with, in my opinion. For example, say you had two related tables, Product and ProductType: I could easily cross-check the 'ProductTypeID' column of both tables for a particular row and quickly map the data in my head, because it's easy to hold a number (2, 4, 45, etc.) in your head, as opposed to (E75B92A3-3299-4407-A913-C5CA196B3CAB). The extra frustration comes from me wanting to understand how the tables are related, and sadly there is no database diagram :( A lot of people say that GUIDs are better because you can generate the unique identifier in your C# code, for example using NewID(), without requiring SQL Server to do it; this also allows you to know provisionally what the ID will be. But I've seen that it is possible to retrieve the 'next auto-incremented integer' too. A DBA contractor reported that our queries could be up to 30% faster if we used the integer type instead of GUIDs. Why does the GUID data type exist, and what advantages does it really provide? Even if it's a choice made by some professional, there must be some good reasons why it's implemented.
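    For contrast, here is a minimal sketch of the two key styles in T-SQL (table names are illustrative). Note that if GUID keys are chosen, NEWSEQUENTIALID() as a column default avoids much of the index fragmentation that random NEWID() values cause:

        -- Integer auto-increment key
        CREATE TABLE ProductType_Int (
            ProductTypeID int IDENTITY(1,1) PRIMARY KEY,
            Name nvarchar(50) NOT NULL
        );

        -- GUID key; NEWSEQUENTIALID() is only valid as a column default
        CREATE TABLE ProductType_Guid (
            ProductTypeID uniqueidentifier NOT NULL
                CONSTRAINT DF_ProductType_Guid DEFAULT NEWSEQUENTIALID()
                PRIMARY KEY,
            Name nvarchar(50) NOT NULL
        );

    The 16-byte GUID vs. 4-byte int difference also ripples into every nonclustered index, which is one plausible source of the performance gap the DBA contractor reported.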

    Read the article

  • How to trace the connection pool in a Java Web application - DBMS_APPLICATION_INFO

    - by Cleiton Garcia
    Hello, I need to improve traceability in a web application that usually runs as a fixed DB user. The DBA should have fast access to information about the heavy users that are degrading the database.

    Five years ago, I implemented a .NET ORM engine which logs the user and the server using the DBMS_APPLICATION_INFO package, via a wrapper above the connection manager with the following code:

        DBMS_APPLICATION_INFO.SET_MODULE('" + User + " - " + appServerMachine + "','');

    Each time a connection is taken from the pool, the package is executed to log the information in V$SESSION.

    Has anyone discovered or implemented a solution for this problem using TopLink or Hibernate? Is there a default implementation? I found a solution here like the one I implemented 5 years ago, but I'd like to know whether anyone has a better solution that is integrated with the ORM: http://stackoverflow.com/questions/53379/using-dbmsapplicationinfo-with-jboss

    My application is built on Spring, the DAOs are implemented with JPA (using Hibernate), and it currently runs directly on Tomcat, with plans to migrate to SAP NetWeaver Application Server next year. Thanks.

    Read the article

  • Newbie T-SQL dynamic stored procedure -- how can I improve it?

    - by Andy Jones
    I'm new to T-SQL; all my experience is in a completely different database environment (Openedge). I've learned enough to write the procedure below -- but also enough to know that I don't know enough! This routine will have to go into a live environment soon, and it works, but I'm quite certain there are a number of c**k-ups and gotchas in it that I know nothing about.

    The routine copies data from table A to table B, replacing the data in table B. The tables could be in any database. I plan to call this routine multiple times from another stored procedure. Permissions aren't a problem: the routine will be run by the dba as a timed job. Could I have your suggestions as to how to make it fit best-practice? To bullet-proof it?

        ALTER PROCEDURE [dbo].[copyTable2Table]
            @sdb        varchar(30),
            @stable     varchar(30),
            @tdb        varchar(30),
            @ttable     varchar(30),
            @raiseerror bit = 1,
            @debug      bit = 0
        as
        begin
            set nocount on

            declare @source      varchar(65)
            declare @target      varchar(65)
            declare @dropstmt    varchar(100)
            declare @insstmt     varchar(100)
            declare @ErrMsg      nvarchar(4000)
            declare @ErrSeverity int

            set @source   = '[' + @sdb + '].[dbo].[' + @stable + ']'
            set @target   = '[' + @tdb + '].[dbo].[' + @ttable + ']'
            set @dropStmt = 'drop table ' + @target
            set @insStmt  = 'select * into ' + @target + ' from ' + @source
            set @errMsg      = ''
            set @errSeverity = 0

            if @debug = 1
                print('Drop:' + @dropStmt + ' Insert:' + @insStmt)

            -- drop the target table, copy the source table to the target
            begin try
                begin transaction
                exec(@dropStmt)
                exec(@insStmt)
                commit
            end try
            begin catch
                if @@trancount > 0
                    rollback
                select @errMsg      = error_message(),
                       @errSeverity = error_severity()
            end catch

            -- update the log table
            insert into HHG_system.dbo.copyaudit
                (copytime, copyuser, source, target, errmsg, errseverity)
            values (getdate(), user_name(user_id()), @source, @target, @errMsg, @errSeverity)

            if @debug = 1
                print('Message:' + @errMsg + ' Severity:' + convert(Char, @errSeverity))

            -- handle errors, return value
            if @errMsg <> ''
            begin
                if @raiseError = 1
                    raiserror(@errMsg, @errSeverity, 1)
                return 1
            end

            return 0
        END

    Thanks!
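
    In case it helps, this is roughly how I call it from the wrapper procedure. The database and table names here are made up:

        -- Hypothetical call: replace Reporting.dbo.Customers with a copy
        -- of Staging.dbo.Customers, printing debug output.
        EXEC [dbo].[copyTable2Table]
            @sdb = 'Staging',   @stable = 'Customers',
            @tdb = 'Reporting', @ttable = 'Customers',
            @raiseerror = 1, @debug = 1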

    Read the article

  • What do these errors mean? ISO C++ forbids assignment of arrays...

    - by xunlinkx
    I'm trying to compile some code on one of our systems for our DBA... I've edited the makefiles to include the pertinent libraries listed in the documentation, but I keep getting these errors... Can you discern any obvious problems from my command lines in reference to the errors listed? Thank you!

        make -f /u01/app/banner/ban8/TEST3/links/Makefile_tm_linux64_redhat5_ban8.mk
        gcc -m64 -D_NOFIXARGPTR -fpic -shared -DTMCILIB_EXPORTS -D_TMUNICODE \
            -I/usr/local/ban_icu -I/usr/local/src/icu/source/i18n/ \
            -I/usr/local/src/icu/source/common/ -I/usr/local/src/icu/source/extra/ustdio/ \
            -I/usr/local/src/icu/source/io \
            -L/usr/lib64 -L/usr/lib -L/usr/local/src/icu/source/data/ \
            -L/usr/local/src/icu/source/data/out/ -L/usr/local/src/icu/source/tools/toolutil/ \
            -L/usr/lib/im/icuconv/ -L/usr/local/lib/ -L. \
            -licui18n -licudata -licuuc -licu-toolutil -licuio \
            msgfmttm.cpp umsgtm.cpp tmcilib.cpp \
            -o /u01/app/banner/ban8/TEST3/general/exe/libtmciuc.so

        umsgtm.cpp: In function ‘void fixArgPtr(const UChar*, __va_list_tag (*)[1])’:
        umsgtm.cpp:158: error: array must be initialized with a brace-enclosed initializer
        umsgtm.cpp:194: error: ISO C++ forbids assignment of arrays
        umsgtm.cpp: In function ‘int32_t tmumsg_vformat(void*, UChar*, int32_t, __va_list_tag*, UErrorCode*)’:
        umsgtm.cpp:305: error: cannot convert ‘__va_list_tag**’ to ‘__va_list_tag (*)[1]’ for argument ‘2’ to ‘void fixArgPtr(const UChar*, __va_list_tag (*)[1])’
        tmcilib.cpp: In function ‘int tmprintf(TMBundle*, const UChar*, ...)’:
        tmcilib.cpp:743: error: array must be initialized with a brace-enclosed initializer
        tmcilib.cpp: In function ‘int tmfprintf(TMBundle*, UFILE*, const UChar*, ...)’:
        tmcilib.cpp:757: error: array must be initialized with a brace-enclosed initializer
        tmcilib.cpp: In function ‘int tmsprintf(TMBundle*, UChar*, const UChar*, ...)’:
        tmcilib.cpp:808: error: array must be initialized with a brace-enclosed initializer

    Read the article

  • Oracle DBMS_PROFILER only shows Anonymous in the results tables

    - by Greg Reynolds
    I am new to DBMS_PROFILER. All the examples I have seen use a simple top-level procedure to demonstrate the use of the profiler, and from there get all the line numbers etc. I deploy all code in packages, and I am having great difficulty getting my profiling session to populate plsql_profiler_units with useful data. Most of my runs look like this:

        RUNID  RUN_COMMENT    UNIT_OWNER   UNIT_NAME    SECS   PERCEN
        -----  -------------  -----------  -----------  -----  ------
            5  Test Profiler  <anonymous>  <anonymous>    .00     2.1
            5  Test Profiler  <anonymous>  <anonymous>    .00     2.1
            5  Test Profiler  <anonymous>  <anonymous>    .00     2.1

    I have just embedded the calls to dbms_profiler.start_profiler, flush_data, and stop_profiler, as per all the examples. The main difference is that my code is in a package and calls into other packages. Do you have to profile every single stored procedure in your call stack? If so, that makes this tool a little useless! I have checked http://www.dba-oracle.com/t_plsql_dbms_profiler.htm for hints, among other similar sites.

    Read the article
