Search Results

Search found 79317 results on 3173 pages for 'sql error messages'.


  • SQL Server: "This filegroup cannot be used as a backup destination" error when attempting restore

    - by Ariel
    When running a command like the following: RESTORE FILELISTONLY FROM DISK='\\server\folder\DummyDB.bak' I'm getting this error: Backup destination "\\server" supports a FILESTREAM filegroup. This filegroup cannot be used as a backup destination. Rerun the BACKUP statement with a valid backup destination. RESTORE FILELIST is terminating abnormally. Unless someone comes up with a better idea, it seems the drive the restore is being attempted from must not contain any database file that belongs to a filegroup. Is that the case? Thanks in advance.
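    One workaround that is sometimes suggested, shown here only as a minimal sketch with hypothetical local paths and logical file names: copy the backup to a local drive first, then list its contents and restore with explicit MOVE clauses so the FILESTREAM filegroup gets its own destination.
      -- Hypothetical paths and logical file names; the backup has been copied locally first.
      RESTORE FILELISTONLY
      FROM DISK = 'D:\Backups\DummyDB.bak';

      RESTORE DATABASE DummyDB
      FROM DISK = 'D:\Backups\DummyDB.bak'
      WITH MOVE 'DummyDB_Data'       TO 'D:\Data\DummyDB.mdf',
           MOVE 'DummyDB_Log'        TO 'D:\Data\DummyDB_log.ldf',
           MOVE 'DummyDB_Filestream' TO 'D:\Data\DummyDB_fs',  -- a FILESTREAM filegroup moves to a directory, not a file
           RECOVERY;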

    Read the article

  • SQL cluster instance names for large project

    - by Sam
    We're setting up two clusters: one dev and one prod. The production cluster will host two SQL instances - an OLTP and a DW. The development cluster will host 4 non-production OLTP environments and at least one non-production DW. We're working on getting more non-production DWs and possibly more OLTP systems. I'm considering a naming scheme like this, where PROJ would be 3 initials for the project name.
      Dev cluster:
      MSSQLPROJD1\D1 (DEV)
      MSSQLPROJD2\D2 (TEST)
      MSSQLPROJD3\D3 (QA)
      MSSQLPROJD4\D4 (STAGE)
      MSSQLPROJD5\D5 (DW)
      Prd cluster:
      MSSQLPROJP1\P1 (PRD)
      MSSQLPROJP2\P2 (DW)
    To the left of the slash, each name must be unique network-wide. On each server, the instance name to the right of the slash must be unique. Any thoughts on this? I'm trying to avoid having instance names drift from reality as the project progresses - say we change what we call a certain environment or want to repurpose one. Then we can update a listing of the purposes for the instances and be done with it. How has a scheme like this worked out for you? Maybe you do things another way in your shop - tell me about it. Thanks.
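    If the goal is to stop the instance name itself from carrying the purpose, a small sketch (the lookup table is hypothetical) of keeping the purpose next to what each instance reports about itself:
      -- What each instance reports about itself:
      SELECT SERVERPROPERTY('MachineName')  AS MachineName,
             SERVERPROPERTY('InstanceName') AS InstanceName,
             @@SERVERNAME                   AS FullName;

      -- Hypothetical lookup table holding the current purpose of each instance:
      CREATE TABLE dbo.InstancePurpose
      (
          InstanceName sysname     NOT NULL PRIMARY KEY,
          Purpose      varchar(50) NOT NULL
      );
      INSERT dbo.InstancePurpose (InstanceName, Purpose) VALUES ('MSSQLPROJD1\D1', 'DEV');
      INSERT dbo.InstancePurpose (InstanceName, Purpose) VALUES ('MSSQLPROJD3\D3', 'QA');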

    Read the article

  • Queries passed to SQL Server are getting corrupted

    - by adrianbanks
    We are experiencing a bizarre error with our application at a customer site. We have managed to narrow it down to the point where we can replicate the behaviour using just Management Studio and SQL Server. We have two machines, A and B:
      +------------+                +--------------------+
      |    [A]     |                |        [B]         |
      | Management | -------------- | SQL Server 2008 R2 |
      |   Studio   |                |  Enterprise x64    |
      +------------+                +--------------------+
    We are running a SQL script in Management Studio on machine A against the SQL Server instance on machine B. We are not actually executing the script, just parsing it. Most of the time, the parse operation works fine. Occasionally (seemingly randomly), the parse operation fails with a syntax error. The error message shows the part of the script with the error, which appears as some SQL from the original script that has been truncated and has random characters appended to it. An example. The original SQL:
      SELECT DISTINCT ST.TABLE_NAME as TableName
      FROM INFORMATION_SCHEMA.TABLES AS ST
      INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = ST.TABLE_NAME
      WHERE ST.TABLE_TYPE = 'BASE TABLE'
        AND SC.COLUMN_NAME = 'Identity'
        AND ST.TABLE_NAME != 'dtproperties'
      ORDER BY ST.TABLE_NAME
    The SQL that is in error (as reported by SQL Server):
      SELECT DISTINCT ST.TABLE_NAME as TableName
      FROM INFORMATION_SCHEMA.TABLES AS ST
      INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = Sa?
    The above example shows how the query is being corrupted. It doesn't always happen, and it is not always the same bit of SQL that causes the error. Parsing this script against another SQL Server instance produces no errors, showing that the script is fine. It appears that something is corrupting the SQL that is being received by the server. This leads me to think that the problem lies either with the client end or in the transmission of the SQL from the client to the server. I have a SQL trace from the period where an error occurs, which shows the SQL has been corrupted when SQL Server receives it. We have been unable to track down any possible cause of this behaviour, and so cannot find a fix. Because the errors occur seemingly randomly, it is also very hard to generate reproduction steps to submit a bug report. Any ideas?
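    One hedged way to see exactly what text the server believes it received, independently of Profiler (the session_id below is hypothetical, and the query needs VIEW SERVER STATE):
      -- Run from a second connection after reproducing the failing parse;
      -- 52 stands in for the session_id of the Management Studio connection on machine A.
      SELECT c.session_id,
             t.text AS last_batch_text_as_received
      FROM   sys.dm_exec_connections AS c
             CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
      WHERE  c.session_id = 52;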

    Read the article

  • Script to mirror MS SQL Server databases between 2 servers

    - by David W
    Hi. I have about 200 sites, each of which has 2 servers running MSSQL (2k5 at some sites, 2k8 at others). One server is production and the other is primarily there as a backup. We're rebuilding all of these servers this year, and as part of that we will have to set up mirroring for ... a lot ... of databases. Some of these sites have 45 databases, so mirroring them manually is going to be a huge pain. I was going to write a batch script which uses SQLCMD to back up the database and log, copy them to the secondary server, restore the backup and log with NORECOVERY, create the endpoints and set the partner. This in itself isn't too complicated, but I'd love to see what other people have done, as I'm not very confident in catching errors using the process I've outlined above. I've seen "Tools to manage sql 2008 database mirroring?", which looks really good, but the formatting is jumbled and I can't get it to work. If anyone has any other scripts they've written and are willing to share, I'd be eternally grateful. Ideally I'd love to be able to use a script to ensure there are matching endpoints (same ports) on both servers, back up the database, back up the log, copy the backups to the second server, restore the database and log with NORECOVERY, set the partners on both servers, and somehow confirm that the databases are linked and synchronized. Well, thanks for reading :)
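    A rough sketch of the T-SQL such a script would issue per database; the database name, paths, server names and port here are hypothetical, and this is an outline rather than a hardened implementation:
      -- On the principal:
      BACKUP DATABASE MyDb TO DISK = '\\SECONDARY\Backups\MyDb.bak' WITH INIT;
      BACKUP LOG      MyDb TO DISK = '\\SECONDARY\Backups\MyDb.trn' WITH INIT;

      -- On the mirror:
      RESTORE DATABASE MyDb FROM DISK = '\\SECONDARY\Backups\MyDb.bak' WITH NORECOVERY;
      RESTORE LOG      MyDb FROM DISK = '\\SECONDARY\Backups\MyDb.trn' WITH NORECOVERY;

      -- Once per server (not per database), matching ports on both sides:
      CREATE ENDPOINT Mirroring
          STATE = STARTED
          AS TCP (LISTENER_PORT = 5022)
          FOR DATABASE_MIRRORING (ROLE = PARTNER);

      -- Mirror first, then principal:
      ALTER DATABASE MyDb SET PARTNER = 'TCP://principal.domain.local:5022';  -- run on the mirror
      ALTER DATABASE MyDb SET PARTNER = 'TCP://mirror.domain.local:5022';     -- run on the principal

      -- Crude confirmation that the pair is up and synchronised:
      SELECT DB_NAME(database_id) AS db, mirroring_state_desc
      FROM   sys.database_mirroring
      WHERE  mirroring_guid IS NOT NULL;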

    Read the article

  • SQL Server Replication Backup

    - by user18039
    Hi. We have a new system that runs on SQL Server 2008 R2 64-bit. There is a primary online transaction processing (OLTP) database that accepts a high volume of updates from several thousand point-of-sale systems at stores around the country. In order to protect this vital function, I have decided to introduce a dedicated reporting database server, from which multiple users will run some pretty complex reports. I realise that there were a number of choices, but I decided to use transactional replication as the mechanism for copying the data from the OLTP database to the new reporting database - one-way replication. The solution has worked well in test. I'm now being asked what changes need to be made to the backup policy to cover the architectural changes. I have read pages such as MSDN: Strategies for Backing Up and Restoring Snapshot and Transactional Replication, but I think these are overkill for my solution. In fact, my current thinking is that we simply need to continue making backups of the OLTP data and logs. If the reporting db or any of the system replication (e.g. distribution) databases fail then it's no big deal - we can clear it all down and re-create the replication. I realise that taking a complete snapshot of the OLTP would be time consuming (approx 5 hours), but I'd be more relaxed about this than trying to restore backups of the various data and log files in the correct sequence. My view is that the complex strategies set out in the MSDN article would only be the way to go for a more complex replication solution than I have, e.g. if there were multiple subscribers with 2-way replication. Would you agree? I'd be grateful for any advice. Many thanks, Rob.
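    A hedged sketch of what that policy amounts to (database name and paths are hypothetical): keep taking full and log backups of the publisher only, and treat the distribution and reporting databases as disposable.
      BACKUP DATABASE OLTP_DB TO DISK = 'E:\Backups\OLTP_DB_full.bak' WITH INIT, CHECKSUM;
      BACKUP LOG      OLTP_DB TO DISK = 'E:\Backups\OLTP_DB_log.trn'  WITH CHECKSUM;
      -- If the reporting or distribution databases are lost, drop the publication and
      -- subscription and re-initialise replication from a fresh snapshot rather than
      -- trying to restore them in sequence.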

    Read the article

  • SQL Profiler: Read/Write units

    - by Ian Boyd
    I've picked a query out of SQL Server Profiler that says it took 1,497 reads:
      EventClass: SQL:BatchCompleted
      TextData:   SELECT Transactions....
      CPU:        406
      Reads:      1497
      Writes:     0
      Duration:   406
    So I've taken this query into Query Analyzer, so I may try to reduce the number of reads. But when I turn on SET STATISTICS IO ON to see the IO activity for the query, I get nowhere close to one thousand reads:
      Table                 Scan Count   Logical Reads
      ===================   ==========   =============
      FintracTransactions   4            20
      LCDs                  2            4
      LCTs                  2            4
      FintracTransacti...   0            0
      Users                 1            2
      MALs                  0            0
      Patrons               0            0
      Shifts                1            2
      Cages                 1            1
      Windows               1            3
      Logins                1            3
      Sessions              1            6
      Transactions          1            7
    Which, if I do my math right, is a total of 51 reads; not 1,497. So I assume Reads in SQL Profiler is an arbitrary metric. Does anyone know the conversion of SQL Server Profiler Reads to IO Reads? See also: SQL Profiler CPU / duration unit; Query Analyzer VS. Query Profiler Reads, Writes, and Duration Discrepencies
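    One hedged cross-check outside of Profiler (the LIKE filter is hypothetical, and the DMVs need VIEW SERVER STATE): the engine's own per-plan counters, which also count page-level logical reads, can be compared against the Profiler figure for the same batch.
      SELECT TOP (10)
             qs.last_logical_reads,
             qs.total_logical_reads,
             qs.execution_count,
             SUBSTRING(st.text, 1, 200) AS statement_start
      FROM   sys.dm_exec_query_stats AS qs
             CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      WHERE  st.text LIKE '%Transactions%'
      ORDER BY qs.last_execution_time DESC;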

    Read the article

  • How can I show SQL Server LOGS (2005)

    - by Marcin Rybacki
    Hello, I'm trying to track down an error thrown by SQL Server 2005. The problem is that SQL Server reports it in my native language, so it's hard for me to google it. I think the core issue would be available in English in the SQL Server logs. I'm running SQL Server Management Studio Express, going to the "Management" node, and then SQL Server Logs. I can see the list of logs but I cannot open them; the only available option in the context menu is Refresh. Could you help me to show the contents of those logs?
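    If the Logs node will not open, the same entries can usually be read with T-SQL via sp_readerrorlog (undocumented but widely used, and it needs sufficient rights); as a sketch, the parameters are the log number (0 = current), the log type (1 = SQL Server, 2 = Agent) and an optional search string.
      EXEC sp_readerrorlog 0, 1;            -- current SQL Server error log
      EXEC sp_readerrorlog 0, 1, 'error';   -- only lines containing 'error'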

    Read the article

  • Seeking a GUI auto-format feature for T-SQL

    - by dvanaria
    Is there a freely available GUI tool that will allow interaction with Microsoft SQL Server (via T-SQL) that provides an auto-format feature? I constantly find myself writing queries in SQL Query Analyzer (Microsoft’s standard GUI tool for T-SQL) and cutting/pasting the whole thing into SQLyog (a GUI tool for MySQL), where I can press F12 and have it reformatted into an easily readable, industry standard format. I then cut/paste this back into Query Analyzer to execute. I do this all the time at work and haven’t been able to find an alternative. I realize that SQLyog is no longer free software, but what I’m looking for is a specific alternative to a MS SQL Server interface (with auto-formatting). Thanks in advance for your help.

    Read the article

  • Migrating shape sql to something equally powerful

    - by daRoBBie
    Hi, we are currently investigating a migration of an application that doesn't meet company standards. The application is built using VB6 and Shape SQL/Access. It has about 120 reports, built by storing Shape SQL strings in a database, which the user can modify using a wizard. Shape SQL is not allowed at this company. We have investigated plain SQL, LINQ and Entity Framework as alternatives... but all result in more complex solutions. Does anyone have another suggestion? Update: Shape SQL is an ADO command to get hierarchical datasets; for further info see http://support.microsoft.com/kb/189657

    Read the article

  • Cannot connect to sql server

    - by Tony
    Hi. I cannot connect to SQL Server remotely from Management Studio. The user name and password are correct, but how do I enable remote connections to a SQL Server? What else could be the cause?
      Cannot connect to xxxx.xxxx.xxxx.xxxx
      A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (.Net SqlClient Data Provider)
    Thanks

    Read the article

  • How do I remove duplicate SQL Server 2008 instances after upgrading from SQL Server 2005?

    - by andypike
    I've just upgraded an existing SQL Server 2005 to 2008 by running the installer (not the platform installer). It all seems to have worked - there were no errors reported and my code that connects to these databases still works fine. The problem is, when I try installing SQL Server Management Studio Express 2008 I am shown the following error message when I select to add new features to an existing instance of SQL Server 2008: The SQL Server instance 'SQL1MINUS102' already has an Instance ID '2' that is different than the specified Instance ID 'SQL1MINUS102'. Specifying more than one instance ID for the same SQL Server instance is not supported. Here is a screenshot of the installation dialog and the setup discovery report: Screenshot. Notice that there are two instances with the same name. So, any ideas how I should rectify this so that I can install Management Studio? Thanks in advance

    Read the article

  • Required Parameters [SSIS Denali]

    - by jamiet
    SQL Server Integration Services (SSIS) in its 2005 and 2008 incarnations expects you to set property values within your package at runtime using Configurations. SSIS developers tend to have rather a lot of issues with SSIS configurations; in this blog post I am going to highlight one of those problems and how it has been alleviated in SQL Server code-named Denali. A configuration is a property path/value pair that exists outside of a package, typically within SQL Server or in a collection of one or more configurations in a file called a .dtsConfig file. Within the package one defines a pointer to a configuration that says to the package "When you execute, go and get a configuration value from this location", and if all goes well the package will fetch that configuration value as it starts to execute and you will see something like the following in your output log: Information: 0x40016041 at Package: The package is attempting to configure from the XML file "C:\Configs\MyConfig.dtsConfig". Unfortunately things DON'T always go well; perhaps the .dtsConfig file is unreachable, or the name of the SQL Server holding the configuration value has been defined incorrectly - any one of a number of things can go wrong. In this circumstance you might see something like the following in your log output instead: Warning: 0x80012014 at Package: The configuration file "C:\Configs\MyConfig.dtsConfig" cannot be found. Check the directory and file name. The problem that I want to draw attention to here, though, is that your package will ignore the fact it can't find the configuration and execute anyway. This is really, really bad because the package will not be doing what it is supposed to do and, worse, if you have not isolated your environments you might not even know about it. Can you imagine a package executing for months and all the while inserting data into the wrong server? Sounds ridiculous, but I have absolutely seen this happen, and the root cause was that no-one picked up on configuration warnings like the one above. Happily, in SSIS code-named Denali this problem has gone away, as configurations have been replaced with parameters. Each parameter has a property called 'Required'. Any parameter with Required=True must have a value passed to it when the package executes; any attempt to execute the package without one will result in an error. We see that error when attempting to execute using the SSMS UI, and similarly when executing using T-SQL. The error is: Msg 27184, Level 16, State 1, Procedure prepare_execution, Line 112 In order to execute this package, you need to specify values for the required parameters. As you can see, SSIS code-named Denali has mechanisms built in to prevent the problem I described at the top of this blog post. Specifying a parameter as Required means that any packages in that project cannot execute until a value for the parameter has been supplied. This is a very good thing. I am loathe to make recommendations so early in the development cycle, but right now I'm thinking that all Project Parameters should have Required=True - certainly any that are used to define external locations should, anyway. @Jamiet
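    For reference, a hedged sketch of supplying a required parameter when starting a package through the SSIS catalog's stored procedures; the folder, project, package and parameter names below are made up.
      DECLARE @execution_id BIGINT;

      EXEC SSISDB.catalog.create_execution
           @folder_name  = N'MyFolder',
           @project_name = N'MyProject',
           @package_name = N'MyPackage.dtsx',
           @execution_id = @execution_id OUTPUT;

      -- A Required project parameter must be given a value before the package may start:
      EXEC SSISDB.catalog.set_execution_parameter_value
           @execution_id    = @execution_id,
           @object_type     = 20,                 -- 20 = project parameter, 30 = package parameter
           @parameter_name  = N'TargetServerName',
           @parameter_value = N'SQLPROD01';

      EXEC SSISDB.catalog.start_execution @execution_id;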

    Read the article

  • DATEFROMPARTS

    - by jamiet
    I recently overheard a remark by Greg Low in which he said something akin to "the most interesting parts of a new SQL Server release are the myriad of small things that are in there that make a developer's life easier" (I'm paraphrasing because I can't remember the actual quote, but it was something like that). The new DATEFROMPARTS function is a classic example of that. It simply takes three integer parameters and builds a date out of them (if you have used DateSerial in Reporting Services then you'll understand). Take the following code, which generates the first and last day of some given years:
      SELECT 2008 AS Yr INTO #Years
      UNION ALL SELECT 2009
      UNION ALL SELECT 2010
      UNION ALL SELECT 2011
      UNION ALL SELECT 2012
      SELECT [FirstDayOfYear] = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 101))),
             [LastDayOfYear]  = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 1231)))
      FROM   #Years y
    and here are the results. That code is pretty gnarly though with those CONVERTs in there and, worse, if the character string is constructed in a certain way then it could fail due to localisation. Check this out:
      SET LANGUAGE french;
      SELECT dt, Month_Name = DATENAME(mm,dt)
      FROM   (
             SELECT dt = CONVERT(DATETIME,CONVERT(CHAR(4),y.[Yr]) + N'-01-02')
             FROM   #Years y
             ) d;
      SET LANGUAGE us_english;
      SELECT dt, Month_Name = DATENAME(mm,dt)
      FROM   (
             SELECT dt = CONVERT(DATETIME,CONVERT(CHAR(4),y.[Yr]) + N'-01-02')
             FROM   #Years y
             ) d;
    Notice how the datetime has been converted differently based on the language setting. When French, the string "2012-01-02" gets interpreted as 1st February, whereas when us_english the same string is interpreted as 2nd January. Instead of all this CONVERTing nastiness we have DATEFROMPARTS:
      SELECT [FirstDayOfYear] = DATEFROMPARTS(y.[Yr],1,1),
             [LastDayOfYear]  = DATEFROMPARTS(y.[Yr],12,31)
      FROM   #Years y
    How much nicer is that? The bad news of course is that you have to upgrade to SQL Server 2012 or migrate to SQL Azure if you want to use it, as is the way of the world! Don't forget that if you want to try this code out on SQL Azure right this second, for free, you can do so by connecting up to AdventureWorks On Azure. You don't even need to have SSMS handy - a browser that runs Silverlight will do just fine. Simply head to https://mhknbn2kdz.database.windows.net/ and use the following credentials:
      Database: AdventureWorks2012
      User:     sqlfamily
      Password: sqlf@m1ly
    One caveat: SELECT INTO doesn't work on SQL Azure, so you'll have to use this instead:
      DECLARE @y TABLE ( [Yr] INT);
      INSERT @y([Yr])
      SELECT 2008 AS Yr UNION ALL SELECT 2009 UNION ALL SELECT 2010 UNION ALL SELECT 2011 UNION ALL SELECT 2012;
      SELECT [FirstDayOfYear] = DATEFROMPARTS(y.[Yr],1,1),
             [LastDayOfYear]  = DATEFROMPARTS(y.[Yr],12,31)
      FROM @y y;
      SELECT [FirstDayOfYear] = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 101))),
             [LastDayOfYear]  = CONVERT(DATE,CONVERT(CHAR(8),((y.[Yr] * 10000) + 1231)))
      FROM @y y;
    @Jamiet

    Read the article

  • Error logging in C#

    - by rschuler
    I am making my switch from coding in C++ to C#. I need to replace my C++ error logging/reporting macro system with something similar in C#. In my C++ source I can write LOGERR("Some error"); or LOGERR("Error with inputs %s and %d", stringvar, intvar); The macro & supporting library code then passes the (possibly varargs) formatted message into a database along with the source file, source line, user name, and time. The same data is also stuffed into a data structure for later reporting to the user. Does anybody have C# code snippets or pointers to examples that do this basic error reporting/logging? Edit: At the time I asked this question I was really new to .NET and was unaware of System.Diagnostics.Trace. System.Diagnostics.Trace was what I needed at that time. Since then I have used log4net on projects where the logging requirements were larger and more complex. Just edit that 500 line XML configuration file and log4net will do everything you will ever need :)
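    Whichever C# side is chosen (System.Diagnostics.Trace, log4net, ...), the database half the question describes - message, source file, source line, user and time - can be sketched as a hypothetical table and procedure like this; the names are made up for illustration.
      CREATE TABLE dbo.ErrorLog
      (
          ErrorLogId INT IDENTITY(1,1) PRIMARY KEY,
          LoggedAt   DATETIME      NOT NULL DEFAULT GETDATE(),
          UserName   SYSNAME       NOT NULL DEFAULT SUSER_SNAME(),
          SourceFile NVARCHAR(260) NULL,
          SourceLine INT           NULL,
          Message    NVARCHAR(MAX) NOT NULL
      );
      GO
      CREATE PROCEDURE dbo.LogError
          @Message    NVARCHAR(MAX),
          @SourceFile NVARCHAR(260) = NULL,
          @SourceLine INT           = NULL
      AS
      BEGIN
          -- The client formats the message (printf-style or otherwise) and just calls this.
          INSERT dbo.ErrorLog (SourceFile, SourceLine, Message)
          VALUES (@SourceFile, @SourceLine, @Message);
      END;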

    Read the article

  • php error reporting - having trouble matching local & web server settings

    - by Andrew Heath
    I'm trying to add a custom error handler to my site, but in doing so have discovered that my webhost's PHP error reporting settings and those of my localhost (default XAMPP) vary considerably. While I thought I was programming to E_STRICT like a good little boy, adding the error handler to my webhost revealed craploads of Runtime Notices. Example: Runtime notice strtotime() [function.strtotime]: It is not safe to rely on the system's timezone settings. Please use the date.timezone setting, the TZ environment variable or the date_default_timezone_set() function. In case you used any of those methods and you are still getting this warning, you most likely misspelled the timezone identifier. We selected 'America/Chicago' for 'CST/-6.0/no DST' instead In /home/... Clearly this isn't a red-alert, showstopping error. But what bothers me is that it doesn't show up on my localhost. I'd certainly like to improve my code by addressing these sorts of issues if I could see them! I've looked through both php.ini files, and my webhost's setting is error_reporting = E_ALL & ~E_NOTICE whereas mine was error_reporting = E_STRICT, which I had thought was better. However, changing mine to match and rebooting the server doesn't seem to have accomplished anything. Could someone please point me in the right direction?

    Read the article

  • Error handling approach on PHP

    - by Industrial
    Hi everybody. We have a web server that we're about to launch a number of applications onto. They will all share database and memcached servers, but each application has its own MySQL database, and all memcached keys are prefixed per application. Possible scenario: if a memcached server in our cluster goes boom, we want someone (operative system admin) to be automatically contacted by email/iPhone push notification or in any other appropriate way. If we were to install 150 identical applications for our customers on our servers and a memcached server dies, all 150 applications will individually find this out and contact our system admin, who is most certainly going to think about getting a new job where he or she isn't about to be woken up by 150 messages sent at 4:15 in the morning. Possible solution: one idea is to set up an external server for error handling that gets a $_POST or cURL request sent to it and handles storage of the error message depending on the seriousness of the actual error message. Upon receiving the error call it would of course check whether the same memcached server has already been reported as offline, so there would be no need to spam the system admin with additional reminders... The questions: What's a good approach to handling errors? How do the big guys in the industry handle this? Thanks!

    Read the article

  • Error caused by Dropbox in update manager

    - by Olivier Lalonde
    I am getting the following error message when the update manager runs: Apt Authentication issue Problem during package list update. The package list update failed with a authentication failure. This usually happens behind a network proxy server. Please try to click on the "Run this action now" button to correct the problem or update the list manually by running Update Manager and clicking on "Check". W: A error occurred during the signature verification. The repository is not updated and the previous index files will be used.GPG error: http://linux.dropbox.com lucid Release: The following signatures were invalid: NODATA 1 NODATA 2 W: Failed to fetch http://linux.dropbox.com/ubuntu/dists/lucid/Release W: Some index files failed to download, they have been ignored, or old ones used instead. This error started to appear recently and for no obvious reason (maybe because I created myself a private PGP key?). I'm running Dropbox v0.7.11 on Ubuntu Lucid 10.04.

    Read the article

  • Wammu, Samsung J700 error GetNextMemory code: 56

    - by Tamas
    I have got an (old) Samsung J700i. When connecting it to Wammu with a USB cable, access was denied at first. Now it is OK... However, when I try to get info out of the phone I get an error message: Error while communicating with phone. Description: Internal phone error. Function: GetNextMemory. Error code: 56. I am using Ubuntu 12.04 and Wammu 0.36, running on Python 2.7.3, using wxPython 2.8.12.1, and using python-gammu 1.31.0 and Gammu 1.31.0. How may I access the data on the phone? Thanks, Tamas

    Read the article

  • Catching multiple exceptions on the client is robust and easy

    - by Alexander Kuznetsov
    Maria Zakourdaev has just demonstrated that if our T-SQL throws multiple exceptions, ERROR_MESSAGE() in a TRY..CATCH block will only expose one. When we handle errors in C#, we have very easy access to all errors. The following procedure throws two exceptions:
      CREATE PROCEDURE dbo.ThrowsTwoExceptions
      AS
      BEGIN ;
          RAISERROR ( 'Error 1' , 16 , 1 ) ;
          RAISERROR ( 'Error 2' , 16 , 1 ) ;
      END ;
      GO
      EXEC dbo.ThrowsTwoExceptions ;
    Both exceptions are shown by SSMS: Msg 50000 , LEVEL 16 , State 1 , PROCEDURE...(read more)
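    The behaviour is easy to see by wrapping the same call in TRY..CATCH; a small sketch using the procedure above, where only one of the two messages surfaces:
      BEGIN TRY
          EXEC dbo.ThrowsTwoExceptions;
      END TRY
      BEGIN CATCH
          -- Only one of the two RAISERROR messages is visible here:
          SELECT ERROR_NUMBER()  AS ErrorNumber,
                 ERROR_MESSAGE() AS ErrorMessage;
      END CATCH;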

    Read the article

  • Is goto to improve DRY-ness OK?

    - by Marco Scannadinari
    My code has many checks to detect errors in various cases (many conditions would result in the same error), inside a function returning an error struct. Instead of looking like this:
      err_struct myfunc(...) {
          err_struct error = { .error = false };
          ...
          if(something) {
              error.error = true;
              error.description = "invalid input";
              return error;
          }
          ...
          case 1024:
              error.error = true;
              error.description = "invalid input"; // same error, but different detection scenario
              return error;
              break; // don't comment on this break please (EDIT: pun unintended)
          ...
    Is use of goto in the following context considered better than the previous example?
      err_struct myfunc(...) {
          err_struct error = { .error = false };
          ...
          if(something) goto invalid_input;
          ...
          case 1024:
              goto invalid_input;
              break;
          ...
          return error;
      invalid_input:
          error.error = true;
          error.description = "invalid input";
          return error;

    Read the article

  • Recovering PL/SQL source overwritten by create or replace

    - by Liu Maclean
    A question on T.Askmaclean.com: on a 10gR2 database a procedure was accidentally overwritten with CREATE OR REPLACE and there is no backup of the original source. Can Oracle recover the original procedure? Maclean demonstrates two ways of getting the overwritten PL/SQL source back on 10gR2.
    Method 1: Flashback Query. Replacing (or dropping) a stored procedure removes its old text from SOURCE$, and that change generates undo, so as long as the relevant undo data has not been overwritten the old source can be read back with a flashback query against DBA_SOURCE:
      SQL> create or replace procedure maclean_proc as
        2  begin
        3  execute immediate 'select 1 from dual';
        4  end;
        5  /
      SQL> select current_scn from v$database;     -- 2660057
      SQL> create or replace procedure maclean_proc as
        2  begin
        3  -- I am new procedure
        4  execute immediate 'select 2 from dual';
        5  end;
        6  /
      SQL> select current_scn from v$database;     -- 2660113
      SQL> create table old_source as
           select * from dba_source as of scn 2660057 where name='MACLEAN_PROC';
      SQL> select * from old_source where name='MACLEAN_PROC';
    The OLD_SOURCE table now holds the original four lines of the procedure (the version with 'select 1 from dual'). If the exact SCN is not known, AS OF TIMESTAMP can be used instead. Because this relies on undo data it only works while that undo is still available, but it applies equally to PL/SQL objects that have been replaced or dropped.
    Method 2: LogMiner. CREATE OR REPLACE and DROP of a PL/SQL object are carried out as recursive DML against the data dictionary; in particular the old source is removed with DELETE statements against SOURCE$ (a 10046 trace of the DDL shows delete from source$ where obj#=:1 followed by insert into source$(obj#,line,source) values (:1,:2,:3), plus similar recursive SQL against procedureinfo$, argument$, idl_ub1$, settings$, dependency$ and related dictionary tables). If the archived redo logs covering the DDL still exist and the database has at least minimal supplemental logging enabled (otherwise the redo shows up as unsupported), the UNDO SQL that LogMiner reports for those deletes reconstructs the lost source line by line:
      SQL> alter database add supplemental log data;
      SQL> -- the procedure is then replaced and dropped as above
      SQL> alter system switch logfile;
      SQL> exec dbms_logmnr.add_logfile('/s01/flash_recovery_area/G10R25/archivelog/2012_05_21/o1_mf_1_242_7vnm13k6_.arc', options => dbms_logmnr.new);
      SQL> exec dbms_logmnr.start_logmnr(options => dbms_logmnr.dict_from_online_catalog);
      SQL> select sql_redo, sql_undo from v$logmnr_contents
           where seg_name = 'SOURCE$' and operation = 'DELETE';
    The SQL_UNDO column contains statements such as:
      insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','1','procedure maclean_proc as ');
      insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','2','begin ');
      insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','3','execute immediate ''select 1 from dual''; ');
      insert into "SYS"."SOURCE$"("OBJ#","LINE","SOURCE") values ('56059','4','end;');
    So the UNDO SQL that LogMiner produces for the deletes against SOURCE$ lets us recover the source of a PL/SQL object that was lost to a REPLACE or DROP DDL.

    Read the article

  • Restore DB - Error RESTORE HEADERONLY is terminating abnormally

    - by Jordon Willis
    I have taken a backup of a SQL Server 2008 DB on the server and downloaded it to my local environment. I am trying to restore that database and it keeps giving me the following error: An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) ADDITIONAL INFORMATION: The media family on device 'C:\go4sharepoint_1384_8481.bak' is incorrectly formed. SQL Server cannot process this media family. RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3241) For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=09.00.4053&EvtSrc=MSSQLServer&EvtID=3241&LinkId=20476 I have tried to create a temp DB on the server and restore the same backup file there, and that works. I have also tried downloading the file from the server to my local PC a number of times, using different transfer options in FileZilla (Auto, Binary), but it's not working. After that I tried to execute the following command on the server: BACKUP DATABASE go4sharepoint_1384_8481 TO DISK=' C:\HostingSpaces\dbname_jun14_2010_new.bak' with FORMAT It is giving me the following error: Msg 3201, Level 16, State 1, Line 1 Cannot open backup device 'c:\Program Files\Microsoft SQL Server\MSSQL10.SQLEXPRESS\MSSQL\Backup\ C:\HostingSpaces\dbname_jun14_2010_new.bak'. Operating system error 123 (The filename, directory name, or volume label syntax is incorrect.). Msg 3013, Level 16, State 1, Line 1 BACKUP DATABASE is terminating abnormally. After researching I found the following two useful links: http://support.microsoft.com/kb/290787 and http://social.msdn.microsoft.com/Forums/en-US/sqlsetupandupgrade/thread/4d5836f6-be65-47a1-ad5d-c81caaf1044f but I am still not able to restore the database correctly. Any help would be much appreciated. Thanks.
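    Two hedged checks that follow from the question itself (paths copied from it): verify locally that the downloaded file is a backup SQL Server can read at all, and when backing up on the server make sure there is no leading space after DISK=, since the quoted error suggests SQL Server treated ' C:\HostingSpaces\...' as a path relative to its default backup directory.
      -- On the local machine: can SQL Server read the downloaded file at all?
      RESTORE HEADERONLY FROM DISK = 'C:\go4sharepoint_1384_8481.bak';
      RESTORE VERIFYONLY FROM DISK = 'C:\go4sharepoint_1384_8481.bak';

      -- On the server: note there is no space after DISK=
      BACKUP DATABASE go4sharepoint_1384_8481
      TO DISK = 'C:\HostingSpaces\dbname_jun14_2010_new.bak'
      WITH FORMAT, CHECKSUM;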

    Read the article

  • sql-server: Can I update two tables with a single query?

    - by RedsDevils
    How can I write a single UPDATE query to change the value of COL1 to 'X' if COL2 < 10, otherwise change it to 'Y', where the following two tables are linked by ID?
      CREATE TABLE TEMP(ID TINYINT, COL1 CHAR(1))
      INSERT INTO TEMP(ID,COL1) VALUES (1,'A')
      INSERT INTO TEMP(ID,COL1) VALUES (2,'B')
      INSERT INTO TEMP(ID,COL1) VALUES (11,'A')
      INSERT INTO TEMP(ID,COL1) VALUES (17,'B')
      CREATE TABLE TEMP2(ID TINYINT, COL2 TINYINT)
      INSERT INTO TEMP2(ID,COL2) VALUES (1,1)
      INSERT INTO TEMP2(ID,COL2) VALUES (2,5)
      INSERT INTO TEMP2(ID,COL2) VALUES (11,10)
      INSERT INTO TEMP2(ID,COL2) VALUES (17,15)
    Thanks in advance!
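    A minimal sketch of one way to do it in a single statement, using T-SQL's UPDATE ... FROM join syntax with a CASE expression:
      UPDATE t
      SET    t.COL1 = CASE WHEN t2.COL2 < 10 THEN 'X' ELSE 'Y' END
      FROM   TEMP AS t
             INNER JOIN TEMP2 AS t2
                 ON t2.ID = t.ID;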

    Read the article

  • SSIS DTS Package flat file error - "The file name specified in the connection was not valid"

    - by MisterZimbu
    I have a pretty basic SSIS package that is attempting to read a file hosted on a share and import its contents to a database table. The package runs fine when I run it manually within SSIS. However, when I set up a SQL Agent job and attempt to execute it, I get the following error: Executed as user: DOMAIN\UserName. Microsoft (R) SQL Server Execute Package Utility Version 9.00.3042.00 for 64-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 10:14:17 AM Error: 2010-05-03 10:14:17.75 Code: 0xC001401E Source: DataImport Connection manager "Data File Local" Description: The file name "\\10.1.1.159\llpf\datafile.dat" specified in the connection was not valid. End Error Error: 2010-05-03 10:14:17.75 Code: 0xC001401D Source: DataAnimalImport Description: Connection "Data File Local" failed validation. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 10:14:17 AM Finished: 10:14:17 AM Elapsed: 0.594 seconds. The package execution failed. The step failed. This leads me to believe it's a permissions issue, but every attempt I've made to fix it has failed. What I've tried so far:
      - Run as the SQL Agent account (DOMAIN\SqlAgent) - yields the same error. DOMAIN\SqlAgent has "Full Control" permissions on both the share and the uploaded file.
      - Set up a proxy account with a different account's credentials (DOMAIN\Account) - yields the same error. Like above, "Full Control" permissions were given over the share to that account.
      - Gave "Everyone" full control permissions over the share (temporarily!). Yielded the same error.
      - Manually copied the file to a local path and tested with the SQL Agent account. Worked properly.
      - Added an ActiveX script task that would first copy the remotely hosted file to a local path, then had the DTS package reference the local file. Gave a completely nondescriptive (even by SSIS standards) error when trying to run the script.
      - Set up a proxy account using my own personal account's credentials - worked correctly. However, this is not an acceptable solution, as there are password policies in place on my account, and it is bad practice to set things up this way in general.
    Any ideas? I'm still convinced it's a permissions issue. However, what I've read from various searches more or less says giving the executing account permissions on the share should work, and that is not the case here (unless I'm missing something obscure when I'm setting up permissions on the share).
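    For reference, a hedged sketch of the credential/proxy plumbing an Agent job step normally relies on to reach a share as a specific account; the account name and password are placeholders, and the SSIS subsystem id should be verified against msdb.dbo.syssubsystems on the build in question.
      -- Hypothetical names; <password> is a placeholder.
      USE master;
      CREATE CREDENTIAL SsisShareAccess
          WITH IDENTITY = N'DOMAIN\ShareReader', SECRET = N'<password>';

      USE msdb;
      EXEC dbo.sp_add_proxy
           @proxy_name      = N'SSIS_Share_Proxy',
           @credential_name = N'SsisShareAccess',
           @enabled         = 1;

      -- Confirm the SSIS subsystem id on this build, then grant the proxy to it:
      SELECT subsystem_id, subsystem FROM dbo.syssubsystems;
      EXEC dbo.sp_grant_proxy_to_subsystem
           @proxy_name   = N'SSIS_Share_Proxy',
           @subsystem_id = 11;   -- SSIS package execution here; verify with the query above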

    Read the article
