Search Results

Search found 28540 results on 1142 pages for 'sql triggers'.


  • Oracle Triggers Query..

    - by AGeek
    Let's consider a table STUD with a row-level trigger defined on INSERT. My scenario is this: whenever a row is inserted, the trigger fires and should run a script file stored on the hard disk, and ultimately print the result. Is this possible? If yes, it should work dynamically, i.e. if we change the contents of the script file, Oracle should pick up those changes as well. I have tried doing this with Java using external procedures, but I wasn't satisfied with the result. Please give your point of view on this kind of scenario and the ways it could be implemented.
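
    One common approach is to keep the trigger tiny and hand the work to the scheduler, so the script runs outside the inserting transaction and its current on-disk contents are used on every run. A minimal sketch follows; it assumes an external job named run_stud_script has already been created with DBMS_SCHEDULER (job_type 'EXECUTABLE', job_action pointing at the script) and that the owner has the CREATE EXTERNAL JOB privilege. All names here are hypothetical.

      -- The wrapper is autonomous because DBMS_SCHEDULER may commit, which a trigger cannot do directly.
      CREATE OR REPLACE PROCEDURE run_insert_script IS
        PRAGMA AUTONOMOUS_TRANSACTION;
      BEGIN
        DBMS_SCHEDULER.RUN_JOB(job_name => 'run_stud_script', use_current_session => FALSE);
        COMMIT;
      END;
      /

      CREATE OR REPLACE TRIGGER stud_after_insert
      AFTER INSERT ON stud
      FOR EACH ROW
      BEGIN
        run_insert_script;  -- the script file is read on each run, so edits take effect immediately
      END;
      /

    Because the job runs asynchronously, it will still execute even if the insert is later rolled back; that trade-off is inherent to calling out to the operating system from a trigger.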

    Read the article

  • PostgreSQL, triggers, and concurrency to enforce a temporal key

    - by Hobbes
    I want to define a trigger in PostgreSQL to check that the inserted row, on a generic table, has the property "no other row exists with the same key in the same valid time" (the keys are sequenced keys). In fact, I have already implemented it. But since the trigger has to scan the entire table, I'm now wondering: is there a need for a table-level lock, or is this managed somehow by PostgreSQL itself?
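
    The check is not safe on its own: under the default READ COMMITTED isolation, two concurrent transactions can each run the trigger, see no conflicting row in their own snapshot, and both commit overlapping rows, so some explicit serialisation is needed. Below is a sketch using a per-key advisory lock; table and column names are hypothetical, and pg_advisory_xact_lock needs PostgreSQL 9.1 or later (on older versions a LOCK TABLE ... IN SHARE ROW EXCLUSIVE MODE works, at the cost of concurrency).

      CREATE OR REPLACE FUNCTION check_temporal_key() RETURNS trigger AS $$
      BEGIN
        -- Serialise inserts for the same key so two transactions cannot both pass the check.
        PERFORM pg_advisory_xact_lock(NEW.key_id);

        IF EXISTS (SELECT 1
                     FROM versioned_table t
                    WHERE t.key_id = NEW.key_id
                      -- open-ended rows (valid_to IS NULL) would need COALESCE(valid_to, 'infinity')
                      AND (t.valid_from, t.valid_to) OVERLAPS (NEW.valid_from, NEW.valid_to)) THEN
          RAISE EXCEPTION 'key % already has a row in that valid time', NEW.key_id;
        END IF;
        RETURN NEW;
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER versioned_table_temporal_check
      BEFORE INSERT ON versioned_table
      FOR EACH ROW EXECUTE PROCEDURE check_temporal_key();

    Note this assumes READ COMMITTED: the waiting transaction re-reads after the lock holder commits and so sees the new row. Under REPEATABLE READ or SERIALIZABLE its snapshot may still miss it.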

    Read the article

  • SQL Trigger doesn't work...

    - by Gabotron
    Is there any way in which a trigger is not fired? We have this situation: rows are being deleted from a table, and we need to know who deleted them and/or when. We created this trigger:

      ALTER TRIGGER [dbo].[AUDITdel_nit] ON [dbo].[Client]
      FOR DELETE
      AS
      INSERT INTO AUDIT
      SELECT 'Delete', GETDATE(), 'Row Deleted', SYSTEM_USER, HOST_NAME(),
             (SELECT 'ID Client: ' + CONVERT(varchar(12), Id) FROM deleted), 'Client', APP_NAME()

    We made some tests, deleting rows via stored procedures, and the deleted rows appear in our AUDIT table. But suddenly today we found a deleted row that does not appear in the AUDIT table... Any idea how that could have been done?
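
    Two things worth checking: a DELETE trigger does not fire for TRUNCATE TABLE, and the trigger may simply have been disabled at the time. There is also a latent problem in the audit insert itself: with a multi-row DELETE the scalar subquery against the deleted table returns more than one value and the whole statement fails, which is the kind of thing that leads people to disable a trigger. A set-based sketch that writes one audit row per deleted client is below; the AUDIT column order is assumed from the original insert.

      ALTER TRIGGER [dbo].[AUDITdel_nit] ON [dbo].[Client]
      FOR DELETE
      AS
      INSERT INTO AUDIT
      SELECT 'Delete', GETDATE(), 'Row Deleted', SYSTEM_USER, HOST_NAME(),
             'ID Client: ' + CONVERT(varchar(12), d.Id), 'Client', APP_NAME()
      FROM deleted AS d;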

    Read the article

  • Stop invalid data in an attribute with a foreign key constraint using triggers?

    - by Eternal Learner
    How do I specify a trigger that checks whether the data inserted into a table's foreign key attribute actually exists in the referenced table? If it exists, no action should be performed; otherwise the trigger should delete the inserted tuple. E.g.: consider two tables, R(A int Primary Key) and S(B int Primary Key, A int Foreign Key References R(A)). I have written a trigger like this:

      Create Trigger DelS BEFORE INSERT ON S
      FOR EACH ROW
      BEGIN
        Delete FROM S where New.A <> ( Select * from R );
      End;

    I am sure I am making a mistake in the inner subquery between the Begin and End blocks of the trigger. My question is: how do I write such a trigger?
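
    A BEFORE INSERT trigger cannot delete the row it is about to insert, but it can refuse it, which has the same effect: the tuple never reaches S. A sketch in MySQL 5.5+ syntax (assumed from the NEW.A style; the error-raising statement differs on other engines):

      DELIMITER //
      CREATE TRIGGER DelS BEFORE INSERT ON S
      FOR EACH ROW
      BEGIN
        IF NOT EXISTS (SELECT 1 FROM R WHERE R.A = NEW.A) THEN
          SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'S.A has no matching row in R';
        END IF;
      END//
      DELIMITER ;

    If the storage engine supports foreign keys (e.g. InnoDB), the declared constraint already enforces exactly this and the trigger is unnecessary.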

    Read the article

  • Stored procedure and trigger

    - by noober
    Hello all, I had a task: to create an update trigger that fires only on a real change to the table data (not just an update with the same values). For that purpose I created a copy table and compared the updated rows with the old copied ones. When the trigger completes, the copy has to be brought up to date:

      UPDATE CopyTable
      SET id = s.id, -- many, many fields
      FROM MainTable s
      WHERE s.id IN (SELECT [id] FROM INSERTED)
        AND CopyTable.id = s.id;

    I don't want to have this ugly code in the trigger any more, so I extracted it into a stored procedure:

      CREATE PROCEDURE UpdateCopy AS
      BEGIN
        UPDATE CopyTable
        SET id = s.id, -- many, many fields
        FROM MainTable s
        WHERE s.id IN (SELECT [id] FROM INSERTED)
          AND CopyTable.id = s.id;
      END

    The result is: Invalid object name 'INSERTED'. How can I work around this? Regards,
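
    The INSERTED pseudo-table exists only inside the trigger body, so the procedure has to receive the affected ids explicitly. A sketch using a table-valued parameter (SQL Server 2008+; on 2005 a #temp table created in the trigger and read by the procedure works the same way). The type name dbo.IdList is made up here.

      CREATE TYPE dbo.IdList AS TABLE (id int PRIMARY KEY);
      GO
      CREATE PROCEDURE dbo.UpdateCopy @ids dbo.IdList READONLY
      AS
      BEGIN
        UPDATE c
        SET    c.id = s.id -- many, many fields
        FROM   CopyTable AS c
        JOIN   MainTable AS s ON c.id = s.id
        WHERE  s.id IN (SELECT id FROM @ids);
      END
      GO
      -- Inside the trigger:
      -- DECLARE @ids dbo.IdList;
      -- INSERT INTO @ids (id) SELECT id FROM INSERTED;
      -- EXEC dbo.UpdateCopy @ids;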

    Read the article

  • using NEWSEQUENTIALID() with UPDATE Trigger

    - by Ram
    I am adding a new GUID/uniqueidentifier column to my table:

      ALTER TABLE table_name
      ADD VersionNumber UNIQUEIDENTIFIER UNIQUE NOT NULL DEFAULT NEWSEQUENTIALID()
      GO

    Whenever a record in the table is updated, I want this "VersionNumber" column to be updated as well, so I created a new trigger:

      CREATE TRIGGER [DBO].[TR_TABLE_NAMWE] ON [DBO].[TABLE_NAME]
      AFTER UPDATE
      AS
      BEGIN
        UPDATE TABLE_NAME
        SET VERSIONNUMBER = NEWSEQUENTIALID()
        FROM TABLE_NAME D
        JOIN INSERTED I ON D.ID = I.ID /* some ID which is used to join */
      END
      GO

    But I just realized that NEWSEQUENTIALID() can only be used with CREATE TABLE or ALTER TABLE. I got this error: "The newsequentialid() built-in function can only be used in a DEFAULT expression for a column of type 'uniqueidentifier' in a CREATE TABLE or ALTER TABLE statement. It cannot be combined with other operators to form a complex scalar expression." Is there a workaround for this? Edit1: Changing NEWSEQUENTIALID() to NEWID() in the trigger solves this, but I am indexing this column and using NEWID() would be sub-optimal.
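
    If the real requirement is "an index-friendly value that changes on every update", it may be worth noting that a rowversion column gives exactly that with no trigger and no GUID at all. This is an alternative rather than a way to call NEWSEQUENTIALID() in the trigger; the column name below is made up.

      -- rowversion is ever-increasing within the database, changes automatically on
      -- every UPDATE of the row, and is only 8 bytes, so it indexes well.
      ALTER TABLE table_name ADD RowVer rowversion;

    If the column has to stay a uniqueidentifier, the usual workaround is to generate the value outside the trigger body, for example by inserting into a helper table whose column has the NEWSEQUENTIALID() default and reading the generated value back, at the cost of extra plumbing.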

    Read the article

  • SQL Server 2005: Insert a row in a table and update the same row

    - by vikas
    E.g. a table with these columns:

      pkey                   -- guid
      annualpay
      datefrom
      dateto                 -- if null, means current record
      percentannualincrease

    The percent annual increase is calculated only if the newly inserted pay differs from the last previously existing (differing) value: percentannualincrease = ([new annual pay - just previous pay (if different from current)] / new annual pay) * 100. Example rows:

      newid(), 5000, today,   null,  0      -- very first row
      newid(), 5000, today+1, null*, 0
      newid(), 5500, today+2, null*, ???    -- needs to be calculated before insert

    (* the insert closes the previous record by updating its dateto from null to today's date.) How can I do this stuff in a trigger?
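
    One way is an INSTEAD OF INSERT trigger that first closes the open record and then inserts the new row with the computed percentage. A sketch, assuming a table dbo.Pay with the columns described above and single-row inserts; names and types are guesses.

      CREATE TRIGGER dbo.Pay_InsteadOfInsert ON dbo.Pay
      INSTEAD OF INSERT
      AS
      BEGIN
        -- Close the currently open record
        UPDATE p
        SET    p.dateto = i.datefrom
        FROM   dbo.Pay AS p
        CROSS JOIN inserted AS i
        WHERE  p.dateto IS NULL;

        -- Insert the new row, comparing against the most recent differing pay
        INSERT INTO dbo.Pay (pkey, annualpay, datefrom, dateto, percentannualincrease)
        SELECT i.pkey, i.annualpay, i.datefrom, NULL,
               CASE WHEN prev.annualpay IS NULL THEN 0
                    ELSE (i.annualpay - prev.annualpay) * 100.0 / i.annualpay END
        FROM   inserted AS i
        OUTER APPLY (SELECT TOP (1) annualpay
                     FROM   dbo.Pay
                     WHERE  annualpay <> i.annualpay
                     ORDER BY datefrom DESC) AS prev;
      END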

    Read the article

  • Check constraint on table lookup

    - by bzamfir
    Hi, I have a table, Department, with several bit fields that indicate department types. One is Warehouse (when true, the department is a warehouse). And I have another table, ManagersForWarehouses, with the following structure:

      ID          autoinc
      WarehouseID int  (foreign key referencing DepartmentID in Departments)
      ManagerID   int  (foreign key referencing EmployeeID in Employees)
      StartDate
      EndDate

    To set a new manager for a warehouse, I insert into this table with EndDate null, and I have a trigger that sets EndDate of the previous record for that warehouse to the StartDate of the new manager, so a single manager appears for a warehouse at a certain time. I want to add two check constraints as follows, but I'm not sure how to do this:

      1. Do not allow an insert into ManagersForWarehouses if WarehouseID is not marked as a warehouse.
      2. Do not allow unchecking Warehouse if there are records in ManagersForWarehouses.

    Thanks
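
    A plain CHECK constraint cannot look at another table, so these rules usually end up either in a CHECK that calls a user-defined function or in a pair of triggers. A trigger sketch follows (SQL Server syntax assumed; the Department table and DepartmentID column names are guesses):

      -- 1) Reject manager rows whose WarehouseID is not flagged as a warehouse
      CREATE TRIGGER dbo.MFW_RequireWarehouse ON dbo.ManagersForWarehouses
      AFTER INSERT, UPDATE
      AS
      IF EXISTS (SELECT 1
                 FROM inserted AS i
                 JOIN dbo.Department AS d ON d.DepartmentID = i.WarehouseID
                 WHERE d.Warehouse = 0)
      BEGIN
        RAISERROR('WarehouseID does not reference a warehouse department.', 16, 1);
        ROLLBACK TRANSACTION;
      END;
      GO
      -- 2) Block clearing the Warehouse flag while manager rows still reference it
      CREATE TRIGGER dbo.Department_KeepWarehouseFlag ON dbo.Department
      AFTER UPDATE
      AS
      IF EXISTS (SELECT 1
                 FROM inserted AS i
                 JOIN deleted  AS d ON d.DepartmentID = i.DepartmentID
                 WHERE d.Warehouse = 1 AND i.Warehouse = 0
                   AND EXISTS (SELECT 1 FROM dbo.ManagersForWarehouses AS m
                               WHERE m.WarehouseID = i.DepartmentID))
      BEGIN
        RAISERROR('Department still has warehouse managers assigned.', 16, 1);
        ROLLBACK TRANSACTION;
      END;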

    Read the article

  • HTTP triggers for Postgres

    - by HeineyBehinds
    I'm trying to write a Postgres trigger such that when a configuration table is updated, a backend component is notified and can handle the change. I know that Oracle has the concept of a web/HTTP trigger, where you can execute an HTTP GET from the Oracle instance itself to a URL that can then handle the request at the application layer. I'm wondering if Postgres (v. 9.0.5) has the same feature or anything similar, and if so, how to set it up and configure it.
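
    Stock PostgreSQL 9.0 has no built-in HTTP call for triggers. The idiomatic equivalent is LISTEN/NOTIFY: the trigger publishes a notification, and the backend component, which holds a connection and has issued LISTEN, makes the HTTP request or handles the change itself. A sketch (channel and table names are made up):

      CREATE OR REPLACE FUNCTION notify_config_change() RETURNS trigger AS $$
      BEGIN
        -- The backend runs LISTEN config_changed and reacts to the payload.
        PERFORM pg_notify('config_changed', TG_TABLE_NAME || ':' || TG_OP);
        RETURN NULL;  -- return value is ignored for AFTER ... FOR EACH STATEMENT
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER configuration_changed
      AFTER INSERT OR UPDATE OR DELETE ON configuration
      FOR EACH STATEMENT EXECUTE PROCEDURE notify_config_change();

    An untrusted procedural language (plperlu, plpythonu) could make the HTTP request from inside the trigger instead, but that ties every write to the remote endpoint's availability, so the notification approach is usually preferred.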

    Read the article

  • Are Triggers Based On Queries Atomic?

    - by David
    I have a table that has a sequence number. This sequence number will change, so referencing the auto-increment number will not work. I fear that the values set by the trigger will collide if two transactions read at the same time. I have run simulated tests on 3 connections at ~1 million records each and saw no collisions.

      CREATE TABLE `aut` (
        `au_id` int(10) NOT NULL AUTO_INCREMENT,
        `au_control` int(10) DEFAULT NULL,
        `au_name` varchar(50) DEFAULT NULL,
        `did` int(10) DEFAULT NULL,
        PRIMARY KEY (`au_id`),
        KEY `Did` (`did`)
      ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1

      TRIGGER `binc_control` BEFORE INSERT ON `aut`
      FOR EACH ROW
      BEGIN
        SET NEW.AU_CONTROL = (SELECT COUNT(*)+1 FROM aut WHERE did = NEW.did);
      END;
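
    The COUNT(*)+1 is an ordinary consistent (snapshot) read, so two concurrent transactions inserting for the same did can both see the same count and write the same au_control; the simulated test just never hit that window. One common fix is a per-did counter row that concurrent inserts serialise on. A sketch, with a made-up counter table name:

      CREATE TABLE did_counter (
        did          int(10) NOT NULL PRIMARY KEY,
        next_control int(10) NOT NULL
      ) ENGINE=InnoDB;

      DELIMITER //
      CREATE TRIGGER binc_control BEFORE INSERT ON aut
      FOR EACH ROW
      BEGIN
        -- The row lock taken here is held until commit, so a concurrent insert
        -- for the same did waits and then gets the next value.
        INSERT INTO did_counter (did, next_control) VALUES (NEW.did, 1)
          ON DUPLICATE KEY UPDATE next_control = next_control + 1;
        SET NEW.au_control = (SELECT next_control FROM did_counter WHERE did = NEW.did);
      END//
      DELIMITER ;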

    Read the article

  • SQL Trigger Need to set x from a value

    - by Eric
    I'm stuck on the type of trigger needed for this constraint. I will have a price and a commission; the price determines the commission, e.g. < 100 gives 4%, < 200 gives 5%, etc. My idea: the database contains a separate table, PC, that holds four price values (101, 201, 401, 601) with their matching commission percentages. When I create a property listing I want to calculate the commission earned depending on the price entered: on insert, I need to compare :NEW.price to the prices in PC, and once :NEW.price is less than a price row, set the commission to that row's value.

      create or replace TRIGGER findCommission
      BEFORE INSERT OR UPDATE ON HASLISTING
      FOR each ROW
      BEGIN
        IF (:NEW.ASKING_PRICE < 100001)  THEN :NEW.COMMISSION = 6.0; END IF;
        IF (:NEW.ASKING_PRICE < 250001)  THEN :NEW.COMMISSION = 5.5; END IF;
        IF (:NEW.ASKING_PRICE < 1000001) THEN :NEW.COMMISSION = 5.0; END IF;
        IF (:NEW.ASKING_PRICE > 1000000) THEN :NEW.COMMISSION = 4.0; END IF;
      END;
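
    To drive the values from the PC table instead of hard-coding them, the trigger can select the smallest price limit that is still above the asking price. A sketch, assuming PC has columns price_limit and commission_pct (both names are guesses), and noting that PL/SQL uses := (not =) for assignment:

      CREATE OR REPLACE TRIGGER findCommission
      BEFORE INSERT OR UPDATE ON haslisting
      FOR EACH ROW
      DECLARE
        v_commission pc.commission_pct%TYPE;
      BEGIN
        SELECT commission_pct
          INTO v_commission
          FROM (SELECT commission_pct
                  FROM pc
                 WHERE price_limit > :NEW.asking_price
                 ORDER BY price_limit)
         WHERE ROWNUM = 1;
        :NEW.commission := v_commission;
      EXCEPTION
        WHEN NO_DATA_FOUND THEN
          :NEW.commission := 4.0;  -- price is above the largest limit in PC
      END;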

    Read the article

  • Merge replication stopping without errors in SQL 2008 R2

    - by Rob Farley
    A non-SQL MVP friend of mine, who also happens to be a client, asked me for some help again last week. I was planning on writing this up even before Rob Volk (@sql_r) listed his T-SQL Tuesday topic for this month. Earlier in the year, I (well, LobsterPot Solutions, although I’d been the person mostly involved) had helped out with a merge replication problem. The Merge Agent on the subscriber was just stopping every time, shortly after it started. With no errors anywhere – not in the Windows Event Log, the SQL Agent logs, not anywhere. We’d managed to get the system working again, but didn’t have a good reason about what had happened, and last week, the problem occurred again. I asked him about writing up the experience in a blog post, largely because of the red herrings that we encountered. It was an interesting experience for me, also because I didn’t end up touching my computer the whole time – just tapping on my phone via Twitter and Live Msgr. You see, the thing with replication is that a useful troubleshooting option is to reinitialise the thing. We’d done that last time, and it had started to work again – eventually. I say eventually, because the link being used between the sites is relatively slow, and it took a long while for the initialisation to finish. Meanwhile, we’d been doing some investigation into what the problem could be, and were suitably pleased when the problem disappeared. So I got a message saying that a replication problem had occurred again. Reinitialising wasn’t going to be an option this time either. In this scenario, the subscriber having the problem happened to be in a different domain to the publisher. The other subscribers (within the domain) were fine, just this one in a different domain had the problem. Part of the problem seemed to be a log file that wasn’t being backed up properly. They’d been trying to back up to a backup device that had a corruption, and the log file was growing. Turned out, this wasn’t related to the problem, but of course, any time you’re troubleshooting and you see something untoward, you wonder. Having got past that problem, my next thought was that perhaps there was a problem with the account being used. But the other subscribers were using the same account, without any problems. The client pointed out that that it was almost exactly six months since the last failure (later shown to be a complete red herring). It sounded like something might’ve expired. Checking through certificates and trusts showed no sign of anything, and besides, there wasn’t a problem running a command-prompt window using the account in question, from the subscriber box. ...except that when he ran the sqlcmd –E –S servername command I recommended, it failed with a Named Pipes error. I’ve seen problems with firewalls rejecting connections via Named Pipes but letting TCP/IP through, so I got him to look into SQL Configuration Manager to see what kind of connection was being preferred... Everything seemed fine. And strangely, he could connect via Management Studio. Turned out, he had a typo in the servername of the sqlcmd command. That particular red herring must’ve been reflected in his cheeks as he told me. During the time, I also pinged a friend of mine to find out who I should ask, and Ted Kruger (@onpnt) ‘s name came up. Ted (and thanks again, Ted – really) reconfirmed some of my thoughts around the idea of an account expiring, and also suggesting bumping up the logging to level 4 (2 is Verbose, 4 is undocumented ridiculousness). 
I’d just told the client to push the logging up to level 2, but the log file wasn’t appearing. Checking permissions showed that the user did have permission on the folder, but still no file was appearing. Then it was noticed that the user had been switched earlier as part of the troubleshooting, and switching it back to the real user caused the log file to appear. Still no errors. A lot more information being pushed out, but still no errors. Ted suggested making sure the FQDNs were okay from both ends, in case the servers were unable to talk to each other. DNS problems can lead to hassles which can stop replication from working. No luck there either – it was all working fine. Another server started to report a problem as well. These two boxes were both SQL 2008 R2 (SP1), while the others, still working, were SQL 2005. Around this time, the client tried an idea that I’d shown him a few years ago – using a Profiler trace to see what was being called on the servers. It turned out that the last call being made on the publisher was sp_MSenumschemachange. A quick interwebs search on that showed a problem that exists in SQL Server 2008 R2, when stored procedures have more than 4000 characters. Running that stored procedure (with the same parameters) manually on SQL 2005 listed three stored procedures, the first of which did indeed have more than 4000 characters. Still no error though, and the problem as listed at http://support.microsoft.com/kb/2539378 describes an error that should occur in the Event log. However, this problem is the type of thing that is fixed by a reinitialisation (because it doesn’t need to send the procedure change across as a transaction). And a look in the change history of the long stored procs (you all keep them, right?), showed that the problem from six months earlier could well have been down to this too. Applying SP2 (with sufficient paranoia about backups and how to get back out again if necessary) fixed the problem. The stored proc changes went through immediately after the service pack was applied, and it’s been running happily since. The funny thing is that I didn’t solve the problem. He had put the Profiler trace on the server, and had done the search that found a forum post pointing at this particular problem. I’d asked Ted too, and although he’d given some useful information, nothing that he’d come up with had actually been the solution either. Sometimes, asking for help is the most useful thing you can do. Often though, you don’t end up getting the help from the person you asked – the sounding board is actually what you need. @rob_farley

    Read the article

  • Unexplained CPU and Disk activity spikes in SQL Server 2005

    - by Philip Goh
    Before I pose my question, please allow me to describe the situation. I have a database server with a number of tables. Two of the biggest tables contain over 800k rows each. The majority of rows are less than 10k in size, though roughly 1 in 100 rows will be at least 1 MB but less than 4 MB. So out of the 1.6 million rows, about 16000 of them will be these large rows. The reason they are this big is that we're storing zip files as binary blobs in the database, but I'm digressing. We have a service that runs constantly in the background, trimming 10 rows from each of these 2 tables. In the performance monitor graph above, these are the little bumps (red for CPU, green for disk queue). Once every minute we get a large spike of CPU activity together with a jump in disk activity, indicated by the red arrow in the screenshot. I've run the SQL Server Profiler, and there is nothing that jumps out as a candidate that would explain this spike. My suspicion is that this spike occurs when one of the large rows gets deleted. I've fed the results of the profiler into the tuning wizard, and I get no optimisation recommendations (i.e. I assume this means my database is indexed correctly for my current workload). I'm not overly worried, as the server is coping fine in all circumstances, even under peak load. However, I would like to know if there is anything else I can do to find out what is causing this spike.

    Update: After investigating this some more, the CPU and disk usage spike was down to SQL Server's automatic checkpoint. The database uses the simple recovery model, and this truncates the log file at each checkpoint. We can see this demonstrated in the following graph. As described on MSDN, the checkpoints will occur when the transaction log becomes 70% full and we are using the simple recovery model. This has been enlightening and I've definitely learned something!
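
    For anyone chasing the same pattern, the checkpoint explanation is easy to confirm: check the recovery model and log-reuse state, then issue a manual CHECKPOINT in the database and watch whether the same CPU/disk bump appears. The database name below is a placeholder.

      SELECT name, recovery_model_desc, log_reuse_wait_desc
      FROM sys.databases
      WHERE name = 'YourDatabase';

      -- Run in the target database; it should reproduce the spike if checkpoints are the cause.
      CHECKPOINT;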

    Read the article

  • MS SQL Server 2005 Express rebuild master DB problem

    - by PaN1C_Showt1Me
    Hi! There has been a power loss on our server, and I cannot start the SQL service because the master DB is corrupted (as the log states). I found many articles recommending running setup.exe with optional parameters. This is what I did: I downloaded SQLEXPR32.EXE from the Microsoft page and ran it. The first problem was that it extracted all the setup files and started the default installation procedure (which was not useful for me, as I need those parameters). If I cancelled it, all the extracted files disappeared. That's why I decided to copy the extracted files somewhere else and then cancel the default installation. Now I'm trying to run setup.exe from that extraction:

      setup.exe /qb INSTANCENAME=MSSQLSERVER REINSTALL=SQL_Engine REBUILDDATABASE=1 SAPWD=xxxxx

    It asks me if I want to rewrite the system DB, which is what I need, but then while installing I get this error: "An installation package for the product Microsoft SQL Server 2005 Express Edition cannot be found. Try the installation again using a valid copy of the installation package 'SqlRun_SQL.msi'." Then it tries to install something and states: cannot install because the same instance name already exists. But I don't want to install a new instance. Any idea how to solve this, please? Thank you in advance!

    Read the article

  • SQL Server 2005 standard filegroups / files for performance on SAN

    - by Blootac
    I submitted this to Stack Overflow (here) but realised it should really be on Server Fault, so apologies for the incorrect and duplicate posting. OK, so I've just been on a SQL Server course where we discussed the usage scenarios of multiple filegroups and files when used over local RAID and local disks, but we didn't touch SAN scenarios, so my question is as follows. I currently have a 250 GB database running on SQL Server 2005 where some tables have a huge number of writes and others are fairly static. The database and all objects reside in a single filegroup with a single data file. The log file is also on the same volume. My interpretation is that separate data files should be used across different disks to lessen disk contention, and that filegroups should be used for partitioning of data. However, with a SAN you obviously don't really have the same issue of disk contention that you do with a small RAID setup (or at least we don't at the moment), and Standard Edition doesn't support partitioning. So in order to improve parallelism, what should I do? My understanding of various Microsoft publications is that if I increase the number of data files, separate threads can act across each file separately. Which leads me to the question: how many files should I have? One per core? Should I be putting tables and indexes with high levels of activity in separate filegroups, each with the same number of data files as we have cores? Thank you
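
    If the answer turns out to be "add files", the mechanics are straightforward; a sketch of adding a filegroup with two files and moving a busy index onto it (paths, sizes, and object names are placeholders):

      ALTER DATABASE MyDb ADD FILEGROUP ActiveData;
      ALTER DATABASE MyDb ADD FILE
        (NAME = 'MyDb_Active1', FILENAME = 'E:\Data\MyDb_Active1.ndf', SIZE = 10GB),
        (NAME = 'MyDb_Active2', FILENAME = 'F:\Data\MyDb_Active2.ndf', SIZE = 10GB)
      TO FILEGROUP ActiveData;

      -- Rebuild a hot index onto the new filegroup
      CREATE INDEX IX_BusyTable_Col ON dbo.BusyTable (Col)
        WITH (DROP_EXISTING = ON) ON ActiveData;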

    Read the article

  • CPU / Affinity mask problem in SQL 2005

    - by Robert Moir
    Hi folks, I'm having a problem with a SQL Server which was virtualised. The CPU mask was set on the physical host for some reason, and now advanced options are not available. So I need to reconfigure the CPU affinity mask settings, which are advanced options, so this is blocked because of the affinity mask issue. I've tried doing this from the SQL Server in single-user command-line mode, and I've googled and found lots of people with similar problems but no real solution. I'm stumped. Any ideas? Sample commands and output from Query Analyzer below.

      sp_configure 'show advanced options', 1
      GO
      RECONFIGURE WITH OVERRIDE
      GO
      sp_configure 'affinity mask', 0x00000000
      GO
      RECONFIGURE
      GO
      -----------------------------------------
      Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
      Msg 5832, Level 16, State 1, Line 1
      The affinity mask specified does not match the CPU mask on this system.
      Msg 15123, Level 16, State 1, Procedure sp_configure, Line 51
      The configuration option 'affinity mask' does not exist, or it may be an advanced option.
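
    Two small things may be worth ruling out before anything more drastic: sp_configure's value parameter is an int while 0x00000000 is a binary literal, and the second RECONFIGURE is not forced. Neither is guaranteed to be the cause, but the retry costs nothing. A sketch:

      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE WITH OVERRIDE;
      EXEC sp_configure 'affinity mask', 0;
      RECONFIGURE WITH OVERRIDE;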

    Read the article

  • SQL Server Reporting Services - website blank, builder works

    - by Keith
    We have a few reports in SQL Server Reporting Services. For some reason, when we run a report from the website it doesn't return any data, but when I run the same report from Report Builder it returns data. I looked in the logs and the only errors I could find are:

      ReportingServicesService!library!8!6/15/2012-08:12:33:: i INFO: Current DB Version Unknown, Instance Version C.0.8.54.
      ReportingServicesService!library!8!6/15/2012-08:12:33:: e ERROR: Throwing Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights., ;Info: Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights.
      ReportingServicesService!library!8!6/15/2012-08:12:33:: e ERROR: Exception caught while starting service. Error: Microsoft.ReportingServices.Diagnostics.Utilities.InvalidReportServerDatabaseException: The version of the report server database is either in a format that is not valid, or it cannot be read. The found version is 'Unknown'. The expected version is 'C.0.8.54'. To continue, update the version of the report server database and verify access rights.

    I'm not really sure why it would be a different version. It's all SQL Server 2008 R2 and I haven't made any changes to it since it's been running.

    Read the article

  • What does SQL Server's BACKUPIO wait type mean?

    - by solublefish
    I'm using Sql Server 2008 ("R1"), with some maintenance plans that back up my databases to a network share. Some of my backup jobs show long waits of type "BACKUPIO". Of course it seems like this is an I/O subsystem limitation, but I'm skeptical. Perfmon stats for I/O on the production (source) server are well within normal trends for that server. The destination server shows a sustained 7MB/s write rate, which seems incredibly low, even for a slow disk. The network link is gigabit ethernet and nowhere near saturated. The few docs I've turned up about BACKUPIO indicate that it's not specifically a wait on I/O, surprisingly enough. This MSFT doc says it's abnormal unless you're using a tape drive, which I'm not. But it doesn't say (or I don't understand) exactly what resource is missing. http://www.docstoc.com/docs/24580659/Performance-Tuning-in-SQL-Server-2005 And this piece says it's not related to I/O performance at all. http://www.informit.com/articles/article.aspx?p=686168&seqNum=5 "Note that BACKUPIO and IO_AUDIT_MUTEX are not related to IO performance." Anyway, does anyone know what BACKUPIO actually means and/or what I can do to diagnose or eliminate it?
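
    One cheap addition to the diagnosis is watching the backup-related waits accumulate while a job runs; comparing BACKUPIO against BACKUPBUFFER and ASYNC_IO_COMPLETION usually indicates whether the backup is waiting to read data, to fill buffers, or to write to the destination. A sketch:

      SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
      FROM sys.dm_os_wait_stats
      WHERE wait_type IN ('BACKUPIO', 'BACKUPBUFFER', 'BACKUPTHREAD', 'ASYNC_IO_COMPLETION')
      ORDER BY wait_time_ms DESC;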

    Read the article

  • sql server uninstallation issue

    - by angel
    I'm unable to remove SQL Server 2008 SP1 completely from my system. I'm using Windows 7 Ultimate. Every time I try uninstalling it I get the following error. How can I remove it? Here is the log:

      Overall summary:
        Final result: Failed: see details below
        Exit code (Decimal): -2068643839
        Exit facility code: 1203
        Exit error code: 1
        Exit message: Failed: see details below
        Start time: 2013-06-24 21:10:38
        End time: 2013-06-24 21:21:17
        Requested action: Uninstall
        Log with failure: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\sql_rs_Cpu64_1.log
        Exception help link: http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=10.0.1600.22

      Machine Properties:
        Machine name: ABHI-PC
        Machine processor count: 4
        OS version: Windows Vista
        OS service pack: Service Pack 1
        OS region: United States
        OS language: English (United States)
        OS architecture: x64
        Process architecture: 64 Bit
        OS clustered: No

      Product features discovered:
        Product          Instance     Instance ID         Feature                   Language  Edition             Version       Clustered
        Sql Server 2008  MSSQLSERVER  MSRS10.MSSQLSERVER  Reporting Services        1033      Enterprise Edition  10.0.1600.22  No
        Sql Server 2008                                   Management Tools - Basic                                10.0.1600.22  No

      Package properties:
        Description: SQL Server Database Services 2008
        SQLProductFamilyCode: {628F8F38-600E-493D-9946-F4178F20A8A9}
        ProductName: SQL2008
        Type: RTM
        Version: 10
        SPLevel: 0
        Installation edition: ENTERPRISE

      User Input Settings:
        ACTION: Uninstall
        CONFIGURATIONFILE: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\ConfigurationFile.ini
        FEATURES: RS,SSMS,SNAC_SDK,CE_RUNTIME,CE_TOOLS,SNAC
        HELP: False
        INDICATEPROGRESS: False
        INSTANCEID:
        INSTANCENAME: MSSQLSERVER
        MEDIASOURCE:
        QUIET: False
        QUIETSIMPLE: False
        X86: False

      Configuration file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\ConfigurationFile.ini

      Detailed results:
        Feature: SQL Client Connectivity
        Status: Skipped
        MSI status: Passed
        Configuration status: Passed

        Feature: SQL Client Connectivity SDK
        Status: Skipped
        MSI status: Passed
        Configuration status: Passed

        Feature: Reporting Services
        Status: Failed: see logs for details
        MSI status: Passed
        Configuration status: Failed: see details below
        Configuration error code: 0xFFD65603
        Configuration error description: Input string was not in a correct format.
        Configuration log: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\Detail.txt

        Feature: SQL Compact Edition Tools
        Status: Passed
        MSI status: Passed
        Configuration status: Passed

        Feature: SQL Compact Edition Runtime
        Status: Skipped
        MSI status: Passed
        Configuration status: Passed

        Feature: Management Tools - Basic
        Status: Failed: see logs for details
        MSI status: Passed
        Configuration status: Passed

      Rules with failures:
        Global rules: There are no scenario-specific rules.

      Rules report file: C:\Program Files\Microsoft SQL Server\100\Setup Bootstrap\Log\20130624_210908\SystemConfigurationCheck_Report.htm

    Read the article

  • Floating Panels and Describe Windows in Oracle SQL Developer

    - by thatjeffsmith
    One of the challenges I face as I try to share tips about our software is that I tend to assume there are features that you just 'know about.' Either they're so intuitive that you MUST know about them, or it's a feature that I've been using for so long I forget that others may have never even seen it before. I want to cover two of those today:

      1. Describe (DESC) – SHIFT+F4
      2. Floating Panels

    [Screenshot: My super-exciting desktop]

    SQL Developer and Describe

    DESC or Describe is an Oracle SQL*Plus command. It shows what a table or view is composed of in terms of its column definitions. Here's an example:

      SQL*Plus: Release 11.2.0.3.0 Production on Fri Sep 21 14:25:37 2012
      Copyright (c) 1982, 2011, Oracle. All rights reserved.
      Connected to:
      Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
      With the Partitioning, OLAP, Data Mining and Real Application Testing options

      SQL> desc beer;
      Name     Null?     Type
      -------- --------- -------------
      BREWERY  NOT NULL  VARCHAR2(100)
      CITY               VARCHAR2(100)
      STATE              VARCHAR2(100)
      COUNTRY            VARCHAR2(100)
      ID                 NUMBER
      SQL>

    You can get the same information – and a good bit more – in SQL Developer using the SQL Developer DESC command. You invoke it with SHIFT+F4. It will open a floating (non-modal!) window with the information you want. Here's an example:

    [Screenshot: I can see my column definitions, constraints, stats, privs, etc.]

    A few 'cool' things you should be aware of:

      - I can open as many as I want, and still work in my worksheet, browser, etc.
      - I can also DESC an index, user, or most any other database object
      - I can of course move them off my primary desktop display
      - The DESC panels are read-only. I can't drop a constraint from within the DESC window of a given table. But for dragging columns into my worksheet, and checking out the stats for my objects as I query them, it's very, very handy.

    Try This Right Now

    Type 'scott.emp' (or some other table you have), place your cursor on the text, and hit SHIFT+F4. You'll see the EMP object open. Now click into a column name in the columns page. Drag it into your worksheet. It will paste that column name into your query. This is an alternative for those that don't like our code insight feature or dragging columns off the connection tree (new for v3.2!) Got it?

    SQL Developer's Floating Panels

    Ok, let's talk about a similar feature. Did you know that any dockable panel from the View menu can also be 'floated'? One of my favorite features is the SQL History. Every query I run is recorded, and I can recall them later without having to remember what I ran and when. And I USUALLY use the keyboard shortcuts for this.

    [Screenshot: Let your trouble float away… if only it were so easy as a right-click in the real world.]

    But sometimes I still want to see my recall list without having to give up my screen real estate. So I just mouse-right-click on the panel tab and select 'Float.' Then I move it over to my secondary display – see the poorly lit picture in the beginning of this post. And that's it. Simple, I know. But I thought you should know about these two things!

    Read the article

  • How can I update multiple columns with a Replace in SQL server?

    - by Kettenbach
    How do I update different columns and rows across a table? I want to do something similar to replacing a string in SQL Server, but the value exists in multiple columns of the same type. The values are foreign-key varchars referencing an employee table. Each column represents a task, so the same employee may be assigned to several tasks in a record, and those tasks will vary between records. How can I do this effectively? Basically something of a replace-all across varying columns throughout a table. Thanks for any help or advice. Cheers, ~ck in San Diego
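
    Since the value to swap can appear in any of the task columns, one statement can handle them all by applying the same expression to each column. A sketch with made-up table, column, and key names:

      DECLARE @oldEmp varchar(20), @newEmp varchar(20);
      SET @oldEmp = 'E123';
      SET @newEmp = 'E456';

      UPDATE dbo.ProjectTasks
      SET Task1Employee = CASE WHEN Task1Employee = @oldEmp THEN @newEmp ELSE Task1Employee END,
          Task2Employee = CASE WHEN Task2Employee = @oldEmp THEN @newEmp ELSE Task2Employee END,
          Task3Employee = CASE WHEN Task3Employee = @oldEmp THEN @newEmp ELSE Task3Employee END
      WHERE @oldEmp IN (Task1Employee, Task2Employee, Task3Employee);

    CASE on the whole key is safer than REPLACE here, since REPLACE would also rewrite partial matches inside longer employee codes.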

    Read the article

  • Best way to store large dataset in SQL Server?

    - by gary
    I have a dataset which contains a string key field and up to 50 keywords associated with that information. Once the data has been inserted into the database there will be very few writes (INSERTs) but mostly queries for one or more keywords. I have read "Tagsystems: performance tests", which is MySQL based, and 2NF appears to be a good method for implementing this; however, I was wondering if anyone had experience doing this with SQL Server 2008 and very large datasets. I am likely to initially have 1 million key fields which could have up to 50 keywords each. Would a structure of

      keyfield, keyword1, keyword2, ..., keyword50

    be the best solution, or would two tables,

      keyfield (keyid, keyfield)
      keyword  (keyid, keyword)    -- one keyfield row to many keyword rows

    be a better idea if my queries are mostly going to be looking for results that have one or more keywords?
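
    For keyword-driven queries the two-table (normalised) form usually wins, because the keyword table can carry an index that leads with the keyword. A sketch of the schema and a typical "has all of these keywords" query; the names are made up.

      CREATE TABLE dbo.KeyField (
          KeyId    int IDENTITY(1,1) PRIMARY KEY,
          KeyField nvarchar(200) NOT NULL
      );
      CREATE TABLE dbo.KeyFieldKeyword (
          KeyId   int NOT NULL REFERENCES dbo.KeyField (KeyId),
          Keyword nvarchar(100) NOT NULL,
          PRIMARY KEY (Keyword, KeyId)   -- keyword-first index serves keyword lookups
      );

      -- Key fields that have ALL of the requested keywords
      SELECT kf.KeyId, kf.KeyField
      FROM dbo.KeyField AS kf
      JOIN dbo.KeyFieldKeyword AS kw ON kw.KeyId = kf.KeyId
      WHERE kw.Keyword IN (N'alpha', N'beta')
      GROUP BY kf.KeyId, kf.KeyField
      HAVING COUNT(DISTINCT kw.Keyword) = 2;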

    Read the article

  • Is it possible to create a new T-SQL Operator using CLR Code in MSSQL?

    - by Eoin Campbell
    I have a very simple CLR function for doing regex matching:

      public static SqlBoolean RegExMatch(SqlString input, SqlString pattern)
      {
          if (input.IsNull || pattern.IsNull)
              return SqlBoolean.False;
          return Regex.IsMatch(input.Value, pattern.Value, RegexOptions.IgnoreCase);
      }

    It allows me to write a SQL statement like:

      SELECT * FROM dbo.table1 WHERE dbo.RegexMatch(column1, '[0-9][A-Z]') = 1  -- match entries in col1 like 1A, 2B etc...

    I'm just thinking it would be nice to reformulate that query so it could be called like:

      SELECT * FROM dbo.table1 WHERE column1 REGEXLIKE '[0-9][A-Z]'

    Is it possible to create new comparison operators using CLR code? (I'm guessing from my brief glance around the web that the answer is NO, but no harm asking.) Thanks, Eoin C

    Read the article
