Search Results

Search found 6492 results on 260 pages for 'trigger io'.

Page 8/260 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • Trigger function when mouse clicks inside textarea AND user stops typing...

    - by marc
    Welcome. In short, my website uses a BBCode system, and I want to allow users to preview a message without posting it. I'm using the jQuery library. I need 3 actions. 1) When the user clicks in the textarea, I want to display a DIV that will contain the preview, with an animated opening. 2) While the user is typing, I want to dynamically load the PHP-parsed code into the DIV. (I'm still deciding what the best option is: refresh every 2 seconds, or maybe detect and refresh after 1 second of inactivity [stop typing].) 3) When the user clicks outside the textarea, I want to close the preview DIV with an animation. For example, the PHP parser will have the path /api/parser.php and a POST variable called $_POST['message']. Any ideas, my digital friends?
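
    For reference, a minimal jQuery sketch of the three actions (assuming a <textarea id="message"> and a <div id="preview">; the /api/parser.php endpoint and its 'message' POST field come from the question itself):

        var typingTimer;

        $('#message').focus(function () {
            $('#preview').slideDown(300);            // 1) animate the preview open
        });

        $('#message').keyup(function () {
            clearTimeout(typingTimer);               // reset the inactivity timer
            var text = $(this).val();
            typingTimer = setTimeout(function () {   // 2) refresh after 1s of no typing
                $.post('/api/parser.php', { message: text }, function (html) {
                    $('#preview').html(html);
                });
            }, 1000);
        });

        $('#message').blur(function () {
            $('#preview').slideUp(300);              // 3) animate the preview closed
        });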

    Read the article

  • Help Auditing in Oracle

    - by enrique
    Hello everybody, I need some help with auditing in Oracle. We have a database with many tables and we want to be able to audit every change made to any table, in any field. The things we want in this audit are: the user who made the modification, the time the change occurred, and the old and new values. We started creating the trigger that was supposed to perform the audit for any table, but then ran into issues... As I mentioned, we have so many tables that we cannot go about creating a trigger for each one. So the idea is to create a master trigger that behaves dynamically for any table that fires it. I tried, but had no luck at all... it seems that Oracle restricts a trigger to a single table declared in code, not determined dynamically like we want. Do you have any idea how to do this, or any other advice for solving this issue? Thanks in advance.
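
    For reference: Oracle has no single DML trigger that covers every table, but the per-table triggers can be generated rather than hand-written. A hedged sketch (the audit_log table and the aud_ naming prefix are illustrative assumptions; capturing old/new values would additionally require generating column-level comparisons per table, and Oracle's built-in AUDIT or Fine-Grained Auditing may fit better):

        BEGIN
          FOR t IN (SELECT table_name FROM user_tables) LOOP
            -- NB: 'aud_' || table_name must stay within Oracle's identifier length limit
            EXECUTE IMMEDIATE
              'CREATE OR REPLACE TRIGGER aud_' || t.table_name ||
              ' AFTER INSERT OR UPDATE OR DELETE ON ' || t.table_name ||
              ' FOR EACH ROW
                BEGIN
                  INSERT INTO audit_log (table_name, changed_by, changed_at)
                  VALUES (''' || t.table_name || ''', USER, SYSTIMESTAMP);
                END;';
          END LOOP;
        END;
        /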

    Read the article

  • When does ref($variable) return 'IO'?

    - by Zaid
    Here's the relevant excerpt from the documentation of the ref function: "The value returned depends on the type of thing the reference is a reference to. Builtin types include: SCALAR ARRAY HASH CODE REF GLOB LVALUE FORMAT IO VSTRING Regexp." Based on this, I imagined that calling ref on a filehandle would return 'IO'. Surprisingly, it doesn't:

        use strict;
        use warnings;

        open my $fileHandle, '<', 'aValidFile';
        close $fileHandle;

        print ref $fileHandle;   # prints 'GLOB', not 'IO'

    perlref tries to explain why: "It isn't possible to create a true reference to an IO handle (filehandle or dirhandle) using the backslash operator. The most you can get is a reference to a typeglob, which is actually a complete symbol table entry [...] However, you can still use type globs and globrefs as though they were IO handles." In what circumstances would ref return 'IO' then?
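
    A hedged exploration of the distinction (behaviour observed on recent perls; details vary by version): the IO slot of a glob is the handle itself, and it normally arrives blessed, so ref() reports the class while Scalar::Util's reftype() exposes the underlying builtin 'IO' type that ref() would report for an unblessed reference:

        use strict;
        use warnings;
        use Scalar::Util qw(reftype);

        my $io = *STDIN{IO};        # reference to the IO thingy, not the glob
        print ref($io), "\n";       # typically 'IO::Handle' -- the blessing
        print reftype($io), "\n";   # 'IO' -- the builtin type underneath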

    Read the article

  • Capturing index operations using a DDL trigger

    - by AaronBertrand
    Today on Twitter the following question came up on the #sqlhelp hash tag, from DaveH0ward: "Is there a DMV that can tell me the last time an index was rebuilt? SQL 2008" My initial response: "I don't believe so, you'd have to be monitoring for that... perhaps a DDL trigger capturing ALTER_INDEX?" Then I remembered that the default trace in SQL Server (as long as it is enabled) will capture these events. My follow-up response: "You can get it from the default trace, blog post forthcoming." So here is... (read more)
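
    For reference, a hedged sketch of the DDL-trigger approach mentioned in the tweet (the dbo.IndexDDLLog audit table is an illustrative assumption):

        CREATE TABLE dbo.IndexDDLLog
        (
            EventTime datetime NOT NULL DEFAULT GETDATE(),
            LoginName sysname  NOT NULL,
            EventData xml      NOT NULL
        );
        GO

        CREATE TRIGGER trgCaptureAlterIndex
        ON DATABASE
        FOR ALTER_INDEX
        AS
        BEGIN
            INSERT dbo.IndexDDLLog (LoginName, EventData)
            VALUES (ORIGINAL_LOGIN(), EVENTDATA());
        END;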

    Read the article

  • How to change I/O priority of a process or thread in Win7?

    - by romkyns
    Process Explorer is able to show the effective IO priority of a given thread, but not change it. Seeing as IO priority support is a comparatively new feature, most programs don't set their own IO priorities. It appears that by default the IO priority is derived from the thread priority (rather than process priority), which Process Explorer can't modify either. Are there any other tools out there that can help me change the IO priority of a given thread / all threads of a given process?
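
    For code you control, Windows (Vista and later) exposes "background mode", which drops a thread's IO priority to Very Low. That doesn't change someone else's running process, but it illustrates the API underneath the tools being asked about; a hedged C sketch:

        #define _WIN32_WINNT 0x0600
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            /* Enter background mode: the scheduler lowers this thread's
               IO (and memory) priority until background mode ends. */
            if (!SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN)) {
                printf("SetThreadPriority failed: %lu\n", GetLastError());
                return 1;
            }

            /* ... perform low-priority IO work here ... */

            SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
            return 0;
        }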

    Read the article

  • Where to set permissions for an ON ALL SERVER logon trigger in SQL Server 2005

    - by Jay
    I need to keep track of the last login time for each user in our SQL Server 2005 database. I created a trigger like this:

        CREATE TRIGGER LogonTimeStamp ON ALL SERVER FOR LOGON
        AS
        BEGIN
            IF EXISTS (SELECT * FROM miscdb..user_last_login WHERE user_id = SYSTEM_USER)
                UPDATE miscdb..user_last_login SET last_login = GETDATE() WHERE user_id = SYSTEM_USER
            ELSE
                INSERT INTO miscdb..user_last_login (user_id, last_login) VALUES (SYSTEM_USER, GETDATE())
        END;
        go

    This trigger works for logins that are system admins, but it won't allow regular users to log in. I have granted public SELECT, INSERT, and UPDATE on the table, but that doesn't seem to be the issue. Is there a way to set permissions on the trigger? Is there something else I am missing? Thanks
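
    A likely culprit is that a logon trigger runs in the security context of the connecting login, so every login would need rights on miscdb. A hedged variant that sidesteps the per-user grants by impersonating a privileged login (the login name is an assumption; also beware that a failing logon trigger can lock everyone out, so keep the Dedicated Admin Connection available while testing):

        CREATE TRIGGER LogonTimeStamp ON ALL SERVER
        WITH EXECUTE AS 'sa'   -- assumption: any login with rights on miscdb..user_last_login
        FOR LOGON
        AS
        BEGIN
            -- ORIGINAL_LOGIN() still reports the real connecting login under impersonation
            IF EXISTS (SELECT * FROM miscdb..user_last_login WHERE user_id = ORIGINAL_LOGIN())
                UPDATE miscdb..user_last_login
                   SET last_login = GETDATE()
                 WHERE user_id = ORIGINAL_LOGIN()
            ELSE
                INSERT INTO miscdb..user_last_login (user_id, last_login)
                VALUES (ORIGINAL_LOGIN(), GETDATE())
        END;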

    Read the article

  • Firing trigger for bulk insert

    - by Deepa
        ALTER TRIGGER [dbo].[TR_O_SALESMAN_INS] ON [dbo].[O_SALESMAN]
        AFTER INSERT
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            -- Insert statements for trigger here
            DECLARE @SLSMAN_CD NVARCHAR(20)
            DECLARE @SLSMAN_NAME NVARCHAR(20)

            SELECT @SLSMAN_CD = SLSMAN_CD, @SLSMAN_NAME = SLSMAN_NAME FROM INSERTED

            IF NOT EXISTS (SELECT * FROM O_SALESMAN_USER WHERE SLSMAN_CD = @SLSMAN_CD)
            BEGIN
                INSERT INTO O_SALESMAN_USER (SLSMAN_CD, PASSWORD, USER_CD)
                VALUES (@SLSMAN_CD, @SLSMAN_CD, @SLSMAN_NAME)
            END
        END

    This trigger is written on a table (O_SALESMAN) to fetch a few columns from it and insert them into another table (O_SALESMAN_USER). Presently, bulk data is inserted into O_SALESMAN through a stored procedure, whereas the trigger fires only once, so O_SALESMAN_USER gets only one record inserted each time the stored procedure is executed. I want the trigger to handle each and every record that gets inserted into O_SALESMAN, so that both tables end up with the same count, which is not happening. Please let me know what can be modified in this trigger to achieve that.
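
    The cause is that SQL Server fires an AFTER trigger once per statement, not once per row, so selecting INSERTED into scalar variables sees only one arbitrary row. A hedged set-based rewrite that handles any number of inserted rows:

        ALTER TRIGGER [dbo].[TR_O_SALESMAN_INS] ON [dbo].[O_SALESMAN]
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;

            -- join against the whole INSERTED pseudo-table instead of
            -- copying a single row into variables
            INSERT INTO O_SALESMAN_USER (SLSMAN_CD, PASSWORD, USER_CD)
            SELECT i.SLSMAN_CD, i.SLSMAN_CD, i.SLSMAN_NAME
            FROM inserted AS i
            WHERE NOT EXISTS (SELECT 1
                              FROM O_SALESMAN_USER AS u
                              WHERE u.SLSMAN_CD = i.SLSMAN_CD);
        END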

    Read the article

  • Perl IO modules possibly causing issues in Net::DNS module

    - by Rich
    Hi! I'm porting some software that I wrote for a White Russian OpenWRT system to a new Kamikaze 8.09.1 OpenWRT system, but I am having some serious issues that I'm hoping you can help me with.

    Old system: Linux kernel 2.4.34, MIPSEL arch, Perl 5.8.7, Net::DNS 0.48, IO 1.21, IO::Socket 1.28, IO::Socket::INET 1.28
    New system: Linux kernel 2.6.26.8, MIPS arch, Perl 5.10.0, Net::DNS 0.66, IO 1.23_01, IO::Socket 1.30_01, IO::Socket::INET 1.31

    First, let me provide some background information... I am trying to resolve my server (clearprobe.winbeam.com) from within my Perl program and see the following if I enable debugging in Net::DNS:

        resolve: Server 'clearprobe-ddns.winbeam.com'
        ;; query(clearprobe-ddns.winbeam.com)
        ;; setting up an AF_INET() family type UDP socket
        ;; send_udp(192.168.88.1:53)
        ;; send_udp(4.2.2.2:53)
        ;; send_udp(192.168.88.1:53)
        ;; send_udp(4.2.2.2:53)
        resolve: res->errorstring: query timed out

    Both of these servers resolve clearprobe.winbeam.com fine from the command line:

        root@cwb-2-11:~# echo "nameserver 192.168.88.1" > /etc/resolv.conf
        root@cwb-2-11:~# nslookup clearprobe-ddns.winbeam.com
        Server: 192.168.88.1
        Address 1: 192.168.88.1 router
        Name: clearprobe-ddns.winbeam.com
        Address 1: 64.13.48.40 64-13-48-40.war.clearwire-dns.net

        root@cwb-2-11:~# echo "nameserver 4.2.2.2" > /etc/resolv.conf
        root@cwb-2-11:~# nslookup clearprobe-ddns.winbeam.com
        Server: 4.2.2.2
        Address 1: 4.2.2.2 vnsc-bak.sys.gtei.net
        Name: clearprobe-ddns.winbeam.com
        Address 1: 64.13.48.40 64-13-48-40.war.clearwire-dns.net

    Using Perl's call to the C gethostbyaddr() function works fine, but I need to do another lookup later in the software which requires that I specify the nameserver (clearprobe-ddns.winbeam.com is the authority for my internal DNS zone), hence my Net::DNS requirement.

    Now, here is the IO module-specific information. What I am seeing is that the reply is coming back from the nameserver (confirmed via tcpdump -- I can send the captures if you'd like), but the UDP packets are sitting in the process's UDP receive queue pending reception by Net::DNS (the approx. 1752 bytes per response stay queued waiting for $sel->can_read()):

        root@cwb-2-11:~# netstat -una
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address Foreign Address State
        udp     1752      0 0.0.0.0:52680 0.0.0.0:*
        root@cwb-2-11:~# netstat -una
        Active Internet connections (servers and established)
        Proto Recv-Q Send-Q Local Address Foreign Address State
        udp     5256      0 0.0.0.0:52680 0.0.0.0:*

    If I force $sock[AF_INET]->recv($buf, $self->_packetsz) around line 803 of /usr/lib/perl5/5.10/Net/DNS/Resolver/Base.pm, instead of waiting for IO::Select's can_read() function (@ready = $sel->can_read($timeout)) to populate @ready, the response is received and processed.

    Any idea what could be causing this issue? In a possibly related matter, I noticed in another script that the following code fails in the same manner (network responses stay in the process's TCP receive queue) on the new system:

        $sock = new IO::Socket::INET(
            PeerAddr => "$server",
            PeerPort => 37,
            Proto    => 'tcp',
            Timeout  => 5
        );

    Whereas the following code works:

        $sock = new IO::Socket::INET(
            PeerAddr => "$server",
            PeerPort => 37,
            Proto    => 'tcp'
        );

    I have looked through the Net::DNS code and don't see a timeout passed for the UDP sockets, so I am not sure whether this is related or not. Please let me know if I can provide any further information to help diagnose this issue. Thanks! -Rich
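
    A minimal, hedged repro script that takes Net::DNS out of the picture: if can_read() times out here even though the datagram is queued, the fault lies with the platform/IO-module combination rather than with Net::DNS:

        use strict;
        use warnings;
        use IO::Socket::INET;
        use IO::Select;

        # loopback UDP pair: send ourselves one datagram
        my $recv = IO::Socket::INET->new(LocalAddr => '127.0.0.1', Proto => 'udp')
            or die "recv socket: $!";
        my $send = IO::Socket::INET->new(
            PeerAddr => '127.0.0.1',
            PeerPort => $recv->sockport,
            Proto    => 'udp',
        ) or die "send socket: $!";

        $send->send("ping") or die "send: $!";

        my $sel = IO::Select->new($recv);
        if ($sel->can_read(5)) {
            $recv->recv(my $buf, 512);
            print "select saw the datagram: '$buf'\n";
        } else {
            print "select timed out with data queued -- same symptom as Net::DNS\n";
        }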

    Read the article

  • Adding a trigger command to autocomplete function in zsh

    - by mkaito
    When you define an alias like alias g=git, the shell will pick it up and run the corresponding autocomplete function. Now, there's a program out there called hub, which is basically a superset of git, with some added, GitHub-specific functionality. The recommended way to use hub is to alias git=hub. Of course, this won't trigger the autocomplete function for git, which makes sense. Now, if I wanted to have git's autocomplete trigger for hub, the only way I know of is editing /usr/share/zsh/functions/Completion/Unix/_git and adding hub in the first line as a trigger. While this works, it isn't practical, since this file will get overwritten with the next zsh release. Assuming hub won't provide a zsh completion function any time soon, is there another way of adding hub to the trigger commands for git's autocomplete function?
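
    For reference, zsh's completion system can bind an existing completion function to a new command from your own ~/.zshrc, with no edits to the distributed _git file:

        # treat hub exactly like git for completion purposes
        compdef hub=git

        # equivalently, bind the _git completion function directly
        compdef _git hub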

    Read the article

  • java.util.ConcurrentModificationException when serializing non thread-safe maps

    - by [email protected]
    We have got some questions related to exceptions thrown during map serialization, like the following one (in this example, for an LRUMap):

        java.util.ConcurrentModificationException
            at org.apache.commons.collections.SequencedHashMap$OrderedIterator.next(Unknown Source)
            at org.apache.commons.collections.LRUMap.writeExternal(Unknown Source)
            at java.io.ObjectOutputStream.writeExternalData(ObjectOutputStream.java(Compiled Code))
            at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java(Compiled Code))
            at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java(Compiled Code))
            ... (repeated defaultWriteFields/writeSerialData/writeOrdinaryObject/writeObject0 frames elided) ...
            at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java(Compiled Code))
            at com.tangosol.util.ExternalizableHelper.writeSerializable(ExternalizableHelper.java(Inlined Compiled Code))
            at com.tangosol.util.ExternalizableHelper.writeObjectInternal(ExternalizableHelper.java(Compiled Code))
            at com.tangosol.util.ExternalizableHelper.serializeInternal(ExternalizableHelper.java(Compiled Code))
            at com.tangosol.util.ExternalizableHelper.toBinary(ExternalizableHelper.java(Inlined Compiled Code))
            at com.tangosol.coherence.servlet.TraditionalHttpSessionModel$OptimizedHolder.serializeValue(TraditionalHttpSessionModel.java(Inlined Compiled Code))
            at com.tangosol.coherence.servlet.TraditionalHttpSessionModel$OptimizedHolder.getBinary(TraditionalHttpSessionModel.java(Compiled Code))

    This is caused because LRUMap is not thread-safe: if another thread modifies the content of that same map while serialization is in progress, the ConcurrentModificationException is thrown. The map must also be synchronized. Other structures like java.util.HashMap are not thread-safe either. To avoid this kind of problem, it is recommended to use a thread-safe, synchronized map such as java.util.Hashtable or com.tangosol.util.SafeHashMap, or to wrap the map using the synchronizedMap(Map) method from java.util.Collections.
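
    A hedged sketch of that recommendation: create the shared map as a synchronized map from the start and hold its lock for the duration of the write, so a concurrent writer cannot mutate it mid-serialization:

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.ObjectOutputStream;
        import java.util.Collections;
        import java.util.HashMap;
        import java.util.Map;

        public class SafeMapSerialization {

            // the map all threads share; synchronized from creation
            static final Map<String, String> SHARED =
                    Collections.synchronizedMap(new HashMap<String, String>());

            static byte[] serialize() throws IOException {
                ByteArrayOutputStream bos = new ByteArrayOutputStream();
                ObjectOutputStream oos = new ObjectOutputStream(bos);
                synchronized (SHARED) {   // writers block until serialization finishes
                    oos.writeObject(SHARED);
                }
                oos.close();
                return bos.toByteArray();
            }
        }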

    Read the article

  • High data on recv-q buffer and thread lock on java.io.BufferedInputStream in linux

    - by Sagar Patel
    We have a Java application running on Linux (Ubuntu Server). We have been facing a high recv-q problem for quite some time. The application hangs and stops reading data from the socket every few hours. In the thread dump, we found the stack trace below:

        "Receiver-146" daemon prio=10 tid=0x00007fb3fc010000 nid=0x7642 runnable [0x00007fb5906c5000]
           java.lang.Thread.State: RUNNABLE
            at java.net.SocketInputStream.socketRead0(Native Method)
            at java.net.SocketInputStream.read(SocketInputStream.java:150)
            at java.net.SocketInputStream.read(SocketInputStream.java:121)
            at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
            at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
            at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
            - locked <0x00000007688f1ff0> (a java.io.BufferedInputStream)
            at org.smpp.TCPIPConnection.receive(TCPIPConnection.java:413)
            at org.smpp.ReceiverBase.receivePDUFromConnection(ReceiverBase.java:197)
            at org.smpp.Receiver.receiveAsync(Receiver.java:351)
            at org.smpp.ReceiverBase.process(ReceiverBase.java:96)
            at org.smpp.util.ProcessingThread.run(ProcessingThread.java:199)
            at java.lang.Thread.run(Thread.java:722)

    We are not able to trace the exact reason behind this. Kindly help. We are using a 16-core machine and the load on the system is around 30-40 at the time of the issue. We use the command ss dst <ip> to find the recv-q. Recently the recv-q gets stuck: the receive buffer fills at some point, its size does not decrease, and as a result we lose a lot of hits from the other side; our application stops accepting any data.
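
    One hedged mitigation while the root cause is being chased: give the socket a read timeout so a stalled peer (or a wedged receive path) surfaces as a SocketTimeoutException the application can handle, instead of a thread blocked forever inside BufferedInputStream.fill(). The host and port below are placeholders:

        import java.io.BufferedInputStream;
        import java.io.IOException;
        import java.net.Socket;
        import java.net.SocketTimeoutException;

        public class TimedRead {
            public static void main(String[] args) throws IOException {
                Socket socket = new Socket("smsc.example.com", 2775); // placeholder endpoint
                socket.setSoTimeout(30000); // reads now fail after 30s instead of hanging
                BufferedInputStream in = new BufferedInputStream(socket.getInputStream());
                byte[] buf = new byte[4096];
                try {
                    int n = in.read(buf);
                    System.out.println("read " + n + " bytes");
                } catch (SocketTimeoutException e) {
                    System.out.println("read timed out -- peer stalled or recv path wedged");
                } finally {
                    socket.close();
                }
            }
        }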

    Read the article

  • Tip #13 java.io.File Surprises

    - by ByronNevins
    There is an assumption that I've seen in code many times that is totally wrong. And this assumption can easily bite you. The assumption is: File.getAbsolutePath and getAbsoluteFile return paths that are not relative. Not true! Sort of. At least not in the way many people would assume. All they do is make sure that the beginning of the path is absolute. The rest of the path can be loaded with relative path elements. What do you think the following code will print?

        public class Main {
            public static void main(String[] args) {
                try {
                    File f = new File("/temp/../temp/../temp/../");
                    File abs = f.getAbsoluteFile();
                    File parent = abs.getParentFile();
                    System.out.println("Exists: " + f.exists());
                    System.out.println("Absolute Path: " + abs);
                    System.out.println("FileName: " + abs.getName());
                    System.out.printf("The Parent Directory of %s is %s\n", abs, parent);
                    System.out.printf("The CANONICAL Parent Directory of CANONICAL %s is %s\n",
                            abs, abs.getCanonicalFile().getParent());
                    System.out.printf("The CANONICAL Parent Directory of ABSOLUTE %s is %s\n",
                            abs, parent.getCanonicalFile());
                    System.out.println("Canonical Path: " + f.getCanonicalPath());
                }
                catch (IOException ex) {
                    System.out.println("Got an exception: " + ex);
                }
            }
        }

    Output:

        Exists: true
        Absolute Path: D:\temp\..\temp\..\temp\..
        FileName: ..
        The Parent Directory of D:\temp\..\temp\..\temp\.. is D:\temp\..\temp\..\temp
        The CANONICAL Parent Directory of CANONICAL D:\temp\..\temp\..\temp\.. is null
        The CANONICAL Parent Directory of ABSOLUTE D:\temp\..\temp\..\temp\.. is D:\temp
        Canonical Path: D:\

    Notice how it says that the parent of D:\ is D:\temp!!! The file, f, is really the root directory. The parent is supposed to be null. I learned about this the hard way! getParentXXX simply hacks off the final item in the path. You can get totally unexpected results like the above. Easily. I filed a bug on this behavior a few years ago [1]. Recommendations: (1) Use getCanonical instead of getAbsolute. There is a 1:1 mapping of files and canonical filenames, i.e. each file has one and only one canonical filename, and it will definitely not have relative path elements in it. There are an infinite number of absolute paths for each file. (2) To get the parent file for File f, do the following instead of getParentFile: File parent = new File(f, ".."); [1] http://bt2ws.central.sun.com/CrPrint?id=6687287

    Read the article

  • chromium-browser uses 99.99% disk IO

    - by lars
    My favorite browser, Chromium, is testing my patience. For some reason it sometimes uses 99.99% of I/O (reading 2-3 MB/s). Other processes (updatedb.mlocate, [kswapd0], clementine, compiz) show the same behavior. However, this problem always starts and ends with Chromium. To illustrate the impact on my system: when my disk starts to spin like crazy and the LED burns continuously, the system is so slow that it takes about two to five minutes to switch to tty6, log in, and execute "killall chromium-browser && killall chromium". This is way faster than starting a new terminal in X; just starting a terminal seems too heavy for compiz under these circumstances. Waiting until it's over takes more than 30 minutes, if it ends at all. The exact circumstances are difficult to replicate. Several tabs have to be open, usually 8 or more. It seems that the chance increases when more complex sites like Gmail or plugins like Flash are running. Opening several new tabs at omgubuntu.co.uk has the best chance of replicating this issue. I have no idea where to start looking for a solution. Any help would be greatly appreciated. Ubuntu 12.10 | 2GB | 2x 1.66GHz Intel | 32-bit | IBM ThinkPad R60e
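
    A hedged diagnostic aside: iotop (if installed) shows which process is generating the IO, and ionice can demote Chromium's IO class without killing it; the process-name match is an assumption that may need adjusting for the chromium-browser binary name:

        sudo iotop -o                          # list only processes currently doing IO
        ionice -c3 -p "$(pgrep -o chromium)"   # move chromium into the idle IO class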

    Read the article

  • A programming language that does not allow IO. Haskell is not a pure language

    - by TheIronKnuckle
    (I asked this on Stack Overflow and it got closed as off-topic. I was a bit confused until I read the FAQ, which discouraged subjective theoretical debate-style questions. The FAQ here doesn't seem to have a problem with it, and it sounds like this is a more appropriate place to post. If this gets closed again, forgive me; I'm not trying to troll.) Are there any 100% pure languages (as I describe in the Stack Overflow post) out there already, and if so, could they feasibly be used to actually do stuff? I.e. do they have an implementation? I'm not looking for raw maths on paper / pure lambda calculus. However, pure lambda calculus with a compiler or a runtime system attached is something I'd be interested in hearing about.

    Read the article

  • How to monitor IO svctm at a 5-minute frequency using Nagios?

    - by sabya
    I want to collect samples of iostat's svctm and await every 5 minutes from all of my servers and store them in Nagios. I want the values for what happened in each 5-minute window, not since boot time (iostat's first output gives values since boot time). How can I do this in Nagios? EDIT: The tps should NOT be calculated as the number of transactions since reboot divided by uptime. What I want is the number of transfers that happened in the last X minutes, divided by X*60.
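
    A hedged sketch of a plugin-style script (run it asynchronously, e.g. from cron feeding a passive check, since NRPE would time out waiting five minutes; the device, threshold, and especially the column positions are assumptions -- the svctm/await columns move between sysstat versions, so verify against iostat -dx output on your hosts):

        #!/bin/bash
        # iostat's first report covers "since boot"; discard it and keep the
        # second, which covers only the sampling window.
        DEVICE="${1:-sda}"
        CRIT="${2:-50}"   # critical svctm threshold, ms

        line=$(iostat -dx "$DEVICE" 290 2 | awk -v d="$DEVICE" '$1 == d' | tail -n 1)
        svctm=$(awk '{print $(NF-1)}' <<< "$line")

        if awk -v s="$svctm" -v c="$CRIT" 'BEGIN { exit !(s > c) }'; then
            echo "CRITICAL - $DEVICE svctm=${svctm}ms | svctm=$svctm"
            exit 2
        fi
        echo "OK - $DEVICE svctm=${svctm}ms | svctm=$svctm"
        exit 0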

    Read the article

  • Using all Ten IO slots on a 7420

    - by user12620172
    So I had the opportunity recently to actually use up all ten slots in a clustered 7420 system. This actually uses 20 slots, or 22 if you count the Clustron card. I thought it was interesting enough to share here. This is at one of my clients here in southern California. You can see the picture below.

    We have four SAS HBAs instead of the usual two. This is because we wanted to split up the back-end traffic for different workloads. We have a set of disk trays coming from two SAS cards for nothing but Exadata backups. Then, we have a different set of disk trays coming off of the other two SAS cards for non-Exadata workloads, such as regular user file storage. We have 2 InfiniBand cards which allow us to do a full mesh directly into the back of the nearby, production Exadata, specifically for fast backups and restores over IB. You can see a 3rd IB card here, which is going to be connected to a non-production Exadata for slower backups and restores from it. The 10Gig card is for client connectivity, allowing other, non-Exadata Oracle databases to make use of the many snapshots and clones that can now be created using the RMAN copies from the original production database coming off the Exadata. This allows a good number of test and development Oracle databases to use these clones without affecting performance of the Exadata at all. We also have a couple FC HBAs, both for NDMP backups to an Oracle/StorageTek tape library and also for FC clients to come in and use some storage on the 7420.

    Now, if you are adding more cards to your 7420, be aware of which cards you can place in which slots. See the bottom graphic just below the photo. Note that the slots are numbered 0-4 for the first 5 cards, then the "C" slot, which is the dedicated cluster card (called the Clustron), and then another 5 slots numbered 5-9. Some rules for the slots:

    - Slots 1 & 8 are automatically populated with the two default SAS cards. The only other slots you can add SAS cards to are 2 & 7.
    - Slots 0 and 9 can only hold FC cards. Nothing else. So if you have four SAS cards, you are down to only four more slots for your 10Gig and IB cards. Be sure not to waste one of these slots on an FC card, which can go into 0 or 9 instead.
    - If at all possible, slots should be populated in this order: 9, 0, 7, 2, 6, 3, 5, 4.

    Read the article

  • Delete trigger does not catch table truncation

    - by Tomaz.tsql
    A sample showing that table truncation will not fire a delete trigger:

        USE AdventureWorks;
        GO
        -- STAGING
        IF EXISTS (SELECT * FROM sys.objects WHERE name = 'test_del_trigger_log' AND type = 'U')
            DROP TABLE test_del_trigger_log;
        GO
        IF EXISTS (SELECT * FROM sys.objects WHERE name = 'test_del_trigger' AND type = 'U')
            DROP TABLE test_del_trigger;
        GO
        CREATE TABLE test_del_trigger
        (
            id  INT IDENTITY(1,1),
            tkt VARCHAR(10)
            CONSTRAINT pk_test_del_trigger PRIMARY KEY (id)
        );
        GO
        INSERT INTO... (read more)
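
    A hedged completion of the demonstration (the log table and trigger below are stand-ins for the truncated sample): DELETE fires the trigger, while TRUNCATE TABLE is logged as page deallocations and fires nothing:

        CREATE TABLE test_del_trigger_log
        (
            logged_at datetime DEFAULT GETDATE(),
            tkt       varchar(10)
        );
        GO
        CREATE TRIGGER trg_test_del ON test_del_trigger
        AFTER DELETE
        AS
            INSERT INTO test_del_trigger_log (tkt)
            SELECT tkt FROM deleted;
        GO

        INSERT INTO test_del_trigger (tkt) VALUES ('A100');
        DELETE FROM test_del_trigger;       -- trigger fires; one row logged

        INSERT INTO test_del_trigger (tkt) VALUES ('A101');
        TRUNCATE TABLE test_del_trigger;    -- no trigger fires; nothing logged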

    Read the article

  • SQL trigger to delete rows from database

    - by wpearse
    I have an industrial system that logs alarms to a remotely hosted MySQL database. The industrial system inserts a new row whenever a property of the alarm changes (such as the time the alarm was activated, acknowledged or switched off) into a table named 'alarms'. I don't want multiple records for each alarm, so I have set up two database triggers. The first trigger mirrors each new record to a second table, creating/updating rows as required:

        CREATE TRIGGER `mirror_alarms` BEFORE INSERT ON `alarms`
        FOR EACH ROW
            INSERT INTO `alarm_display` (Tag, ..., OffTime)
            VALUES (new.Tag, ..., new.OffTime)
            ON DUPLICATE KEY UPDATE OnDate = new.OnDate, ..., OffTime = new.OffTime

    The second trigger should execute after the first and (ideally) delete all rows from the alarms table. (I used the Tag property of the alarm because the Tag property never changes, although I suspect I could just use a 'DELETE FROM alarms WHERE 1' statement to the same effect.)

        CREATE TRIGGER `remove_alarms` AFTER INSERT ON `alarms`
        FOR EACH ROW
            DELETE FROM alarms WHERE Tag = new.Tag

    My problem is that the second trigger doesn't appear to run, or if it does, it doesn't delete any rows from the database. So here's the question: why does my second trigger not do what I expect it to do?
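
    For what it's worth, MySQL forbids a trigger from modifying the table it is defined on; a statement attempting it normally fails with error 1442 ("Can't update table ... because it is already used by statement which invoked this stored function/trigger"), which would explain the second trigger having no visible effect. A hedged alternative is to move the cleanup into a scheduled event:

        -- requires the event scheduler: SET GLOBAL event_scheduler = ON;
        CREATE EVENT purge_alarms
        ON SCHEDULE EVERY 1 MINUTE
        DO
            DELETE FROM alarms;   -- alarm_display already holds the mirrored state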

    Read the article

  • Pass a variable into a trigger

    - by Codesleuth
    I have a trigger which deals with some data for logging purposes, like so:

        CREATE TRIGGER trgDataUpdated ON tblData
        FOR UPDATE
        AS
        BEGIN
            INSERT INTO tblLog (ParentID, OldValue, NewValue, UserID)
            SELECT deleted.ParentID,
                   deleted.Value,
                   inserted.Value,
                   @intUserID -- how can I pass this in?
            FROM inserted
            INNER JOIN deleted ON inserted.ID = deleted.ID
        END

    How can I pass the variable @intUserID into the above trigger, as in the following code?

        DECLARE @intUserID int
        SET @intUserID = 10

        UPDATE tblData SET Value = @x

    PS: I know I can't literally pass @intUserID to the trigger; it was just used for illustration purposes.
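
    One common channel for this in SQL Server 2005/2008 is CONTEXT_INFO, a session-scoped binary value the trigger can read back; a hedged sketch:

        -- caller: stash the user id in the session before the UPDATE
        DECLARE @intUserID int
        DECLARE @ctx varbinary(128)
        SET @intUserID = 10
        SET @ctx = CAST(@intUserID AS varbinary(128))
        SET CONTEXT_INFO @ctx

        UPDATE tblData SET Value = @x

        -- inside the trigger: read it back (the first 4 bytes hold the int)
        DECLARE @UserID int
        SELECT @UserID = CAST(SUBSTRING(CONTEXT_INFO(), 1, 4) AS int)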

    Read the article

  • SQL Server - Get Inserted Record Identity Value when Using a View's Instead Of Trigger

    - by CuppM
    For several tables that have identity fields, we are implementing a row-level security (RLS) scheme using views and INSTEAD OF triggers on those views. Here is a simplified example structure:

        -- Table
        CREATE TABLE tblItem (
            ItemId int identity(1,1) primary key,
            Name varchar(20)
        )
        go

        -- View
        CREATE VIEW vwItem
        AS
            SELECT * FROM tblItem
            -- RLS Filtering Condition
        go

        -- Instead Of Insert Trigger
        CREATE TRIGGER IO_vwItem_Insert ON vwItem
        INSTEAD OF INSERT
        AS
        BEGIN
            -- RLS Security Checks on inserted Table
            -- Insert Records Into Table
            INSERT INTO tblItem (Name)
            SELECT Name FROM inserted;
        END
        go

    If I want to insert a record and get its identity, before implementing the RLS INSTEAD OF trigger, I used:

        DECLARE @ItemId int;
        INSERT INTO tblItem (Name) VALUES ('MyName');
        SELECT @ItemId = SCOPE_IDENTITY();

    With the trigger, SCOPE_IDENTITY() no longer works -- it returns NULL. I've seen suggestions for using the OUTPUT clause to get the identity back, but I can't seem to get it to work the way I need it to. If I put the OUTPUT clause on the view insert, nothing is ever entered into it:

        -- Nothing is added to @ItemIds
        DECLARE @ItemIds TABLE (ItemId int);
        INSERT INTO vwItem (Name)
        OUTPUT INSERTED.ItemId INTO @ItemIds
        VALUES ('MyName');

    If I put the OUTPUT clause in the trigger on the INSERT statement, the trigger returns the table (I can view it from SQL Management Studio). I can't seem to capture it in the calling code, either by using an OUTPUT clause on that call or using SELECT * FROM ():

        -- Modified Instead Of Insert Trigger w/ Output
        CREATE TRIGGER IO_vwItem_Insert ON vwItem
        INSTEAD OF INSERT
        AS
        BEGIN
            -- RLS Security Checks on inserted Table
            -- Insert Records Into Table
            INSERT INTO tblItem (Name)
            OUTPUT INSERTED.ItemId
            SELECT Name FROM inserted;
        END
        go

        -- Calling Code
        INSERT INTO vwItem (Name) VALUES ('MyName');

    The only thing I can think of is to use the IDENT_CURRENT() function. Since that doesn't operate in the current scope, there's the issue of concurrent users inserting at the same time and messing it up. If the entire operation is wrapped in a transaction, would that prevent the concurrency issue?

        BEGIN TRANSACTION
            DECLARE @ItemId int;
            INSERT INTO tblItem (Name) VALUES ('MyName');
            SELECT @ItemId = IDENT_CURRENT('tblItem');
        COMMIT TRANSACTION

    Does anyone have any suggestions on how to do this better? I know people out there who will read this and say "Triggers are EVIL, don't use them!" While I appreciate your convictions, please don't offer that "suggestion".
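
    One hedged workaround: unlike SCOPE_IDENTITY(), @@IDENTITY crosses scope boundaries, so it still sees the insert performed inside the INSTEAD OF trigger -- with the classic caveat that if the trigger ever inserts into another identity table (an audit table, say), that value would be returned instead:

        DECLARE @ItemId int;
        INSERT INTO vwItem (Name) VALUES ('MyName');
        SELECT @ItemId = @@IDENTITY;   -- last identity produced anywhere in the session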

    Read the article

  • Conditional UPDATE stored proc always fires a trigger

    - by schar
    I have a stored procedure that says:

        UPDATE table1
        SET value1 = 1
        WHERE value1 = 0 AND date < GETDATE()

    I have a trigger that goes like:

        CREATE TRIGGER NAME ON TABLENAME
        FOR UPDATE
        ...
        IF UPDATE(value1)
        BEGIN
            -- Some code to figure out that this trigger has been called
            -- the value is always null
        END

    Any idea why this trigger is called even when the stored procedure does not update any values?
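
    This is expected: an AFTER UPDATE trigger fires once per statement even when zero rows match, and UPDATE(value1) is true whenever the column appears in the SET clause, matched rows or not. A hedged guard:

        CREATE TRIGGER trgName ON TableName FOR UPDATE
        AS
        BEGIN
            -- statement matched no rows; nothing to do
            IF NOT EXISTS (SELECT 1 FROM inserted)
                RETURN;

            IF UPDATE(value1)
            BEGIN
                -- real work against the inserted/deleted pseudo-tables here
                PRINT 'value1 touched by at least one row';
            END
        END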

    Read the article

  • Oracle database on trigger fail rollback

    - by GigaPr
    Hi, I want to create a trigger that executes on update of a table. In particular, on update of a table I want to update another table via a trigger, but if the trigger fails (referential integrity, entity integrity) I do not want the original update to go through. Any suggestion on how to perform this? Is it better to use a trigger, or to do it programmatically via a stored procedure? Thanks
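
    For reference: in Oracle, if a trigger raises an unhandled exception, the triggering statement itself is rolled back automatically, so the asked-for behaviour is the default. A hedged sketch (table and column names are illustrative assumptions):

        CREATE OR REPLACE TRIGGER trg_sync_table_two
        AFTER UPDATE ON table_one
        FOR EACH ROW
        BEGIN
            UPDATE table_two
               SET some_col = :NEW.some_col
             WHERE fk_id = :NEW.id;

            -- raising here aborts the UPDATE on table_one as well
            IF SQL%ROWCOUNT = 0 THEN
                RAISE_APPLICATION_ERROR(-20001,
                    'No matching row in table_two; update aborted');
            END IF;
        END;
        /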

    Read the article
