Search Results

Search found 10481 results on 420 pages for 'identity insert'.


  • Can't connect to EC2 instance: Permission denied (publickey)

    - by Assad Ullah
    I got this when I tried to connect to my new instance (Ubuntu 12.01 EC2) with my newly generated key:

        sh-3.2# ssh ec2-user@**** -v ****.pem
        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: Applying options for *
        debug1: Connecting to **** [****] port 22.
        debug1: Connection established.
        debug1: permanently_set_uid: 0/0
        debug1: identity file /var/root/.ssh/id_rsa type -1
        debug1: identity file /var/root/.ssh/id_rsa-cert type -1
        debug1: identity file /var/root/.ssh/id_dsa type -1
        debug1: identity file /var/root/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '****' is known and matches the RSA host key.
        debug1: Found key in /var/root/.ssh/known_hosts:4
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /var/root/.ssh/id_rsa
        debug1: Trying private key: /var/root/.ssh/id_dsa
        debug1: No more authentication methods to try.
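    A hedged sketch of the usual fix: the debug output shows ssh never tried the .pem file at all, because it was passed as a positional argument instead of with -i. Note also that stock Ubuntu AMIs create a user named ubuntu rather than ec2-user (the hostname below is hypothetical):

        chmod 400 mykey.pem     # ssh refuses private keys with loose permissions
        ssh -v -i mykey.pem ubuntu@ec2-XX-XX-XX-XX.compute-1.amazonaws.com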


  • ssh_exchange_identification: Connection closed by remote host

    - by rick
    Firstly, I know that this question has been asked a million times, and I have read everything I can find and still cannot fix the problem. I am encountering this issue when SSHing in from my Mac to my Ubuntu server on a fresh install of Ubuntu (I reinstalled because of this issue). I have SSH mapped to port 7070 because my ISP is blocking 22. On the client:

        bash: ssh -p 7070 -v [email protected]
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to address.org port 7070.
        debug1: Connection established.
        debug1: identity file /home/me/.ssh/identity type -1
        debug1: identity file /home/me/.ssh/id_rsa type 1
        debug1: identity file /home/me/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host

    Here's what I have done to try to resolve the issue:

    - Made sure my MaxStartups is OK:

        bash: grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10:30:60

    - Made sure hosts.deny is clear of denials.
    - Made sure hosts.allow has my client IP.
    - Cleared out known_hosts on the client.
    - Changed ownership of /var/run to root.
    - Made sure etc/run/ssh is there.
    - Made sure /var/empty exists.
    - Reinstalled openssh-server.
    - Reinstalled Ubuntu.

    When I run telnet localhost, I get this:

        telnet localhost
        Trying ::1...
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection refused

    When I run /usr/sbin/sshd -t:

        Could not load host key: /etc/ssh/ssh_host_rsa_key
        Could not load host key: /etc/ssh/ssh_host_dsa_key

    When I regenerate the keys with

        ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key
        ssh-keygen -t dsa -f /etc/ssh/ssh_host_dsa_key

    I get the same error. I am pretty sure this is the issue. Can anyone help?
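    A minimal sketch of the usual repair on Debian/Ubuntu, assuming the host keys really are missing or unreadable (stock paths; run as root and restart the daemon afterwards):

        sudo dpkg-reconfigure openssh-server   # regenerates missing host keys on Debian/Ubuntu
        # on OpenSSH 5.8 and later this also works:
        sudo ssh-keygen -A
        sudo service ssh restart
        sudo /usr/sbin/sshd -t                 # should now print nothing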


  • Password-less login into localhost

    - by Brad
    I am trying to set up password-less login into my localhost because it's required for a tutorial. I went through the normal steps of generating an RSA key and appending the public key to authorized_keys, but I am still prompted for a password. I've also enabled RSAAuthentication and PubKeyAuthentication in /etc/ssh_config. Following other suggestions I've seen, I tried:

        chmod go-w ~/
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

    But the problem persists. Here is the output from ssh -v localhost:

        (tutorial)bnels21-2:tutorial bnels21$ ssh -v localhost
        OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: Connecting to localhost [::1] port 22.
        debug1: Connection established.
        debug1: identity file /Users/bnels21/.ssh/id_rsa type 1
        debug1: identity file /Users/bnels21/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/bnels21/.ssh/id_dsa type -1
        debug1: identity file /Users/bnels21/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9
        debug1: match: OpenSSH_5.9 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA 1c:31:0e:56:93:45:dc:f0:77:6c:bd:90:27:3b:c6:43
        debug1: Host 'localhost' is known and matches the RSA host key.
        debug1: Found key in /Users/bnels21/.ssh/known_hosts:11
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/bnels21/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Offering RSA public key: id_rsa3
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Trying private key: /Users/bnels21/.ssh/id_dsa
        debug1: Next authentication method: keyboard-interactive
        Password:

    Any suggestions? I'm running OS X 10.8.
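    A short checklist sketch, assuming stock OS X 10.8: the server reads /etc/sshd_config (not /etc/ssh_config, which configures the client), and it silently ignores authorized_keys when the home directory, ~/.ssh, or the file itself is group- or world-writable. The log shows id_rsa being offered and rejected, which usually points at one of those server-side checks:

        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        grep -Ei 'pubkeyauthentication|authorizedkeysfile' /etc/sshd_config
        # confirm the public half of id_rsa really is the line in authorized_keys:
        ssh-keygen -y -f ~/.ssh/id_rsa
        cat ~/.ssh/authorized_keys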


  • Error Handling in T-SQL Scalar Function

    - by hydroparadise
    OK... this question could easily take multiple paths, so I will hit the more specific path first. While working with SQL Server 2005, I'm trying to create a scalar function that acts as a 'TryCast' from varchar to int. Where I encounter a problem is when I add a TRY block in the function:

        CREATE FUNCTION u_TryCastInt
        (
            @Value as VARCHAR(MAX)
        )
        RETURNS Int
        AS
        BEGIN
            DECLARE @Output AS Int

            BEGIN TRY
                SET @Output = CONVERT(Int, @Value)
            END TRY
            BEGIN CATCH
                SET @Output = 0
            END CATCH

            RETURN @Output
        END

    Turns out there's all sorts of things wrong with this statement, including "Invalid use of side-effecting or time-dependent operator in 'BEGIN TRY' within a function" and "Invalid use of side-effecting or time-dependent operator in 'END TRY' within a function". I can't seem to find any examples of using TRY statements within a scalar function, which got me thinking: is error handling in a function possible at all? The goal here is to make a robust version of the Convert or Cast functions, to allow a SELECT statement to carry through despite conversion errors. For example, take the following:

        CREATE TABLE tblTest
        (
            f1 VARCHAR(50)
        )
        GO
        INSERT INTO tblTest(f1) VALUES('1')
        INSERT INTO tblTest(f1) VALUES('2')
        INSERT INTO tblTest(f1) VALUES('3')
        INSERT INTO tblTest(f1) VALUES('f')
        INSERT INTO tblTest(f1) VALUES('5')
        INSERT INTO tblTest(f1) VALUES('1.1')

        SELECT CONVERT(int, f1) AS f1_num FROM tblTest

        DROP TABLE tblTest

    It never reaches the point of dropping the table, because execution gets hung on trying to convert 'f' to an integer. I want to be able to do something like this:

        SELECT u_TryCastInt(f1) AS f1_num FROM tblTest

        f1_num
        __________
        1
        2
        3
        0
        5
        0

    Any thoughts on this? Is there anything that exists that handles this? Also, I would like to try and expand the conversation to support SQL Server 2000, since TRY blocks are not an option in that scenario. Thanks in advance.
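    A hedged sketch of the usual workaround: TRY/CATCH is not allowed inside functions, but you can validate the string before converting instead of catching the failure. The version below accepts only plain digit strings short enough never to overflow INT, so it also answers the SQL Server 2000 case (there, swap VARCHAR(MAX) for VARCHAR(8000)); extend the pattern if negatives or other formats must pass through:

        CREATE FUNCTION u_TryCastInt (@Value VARCHAR(MAX))
        RETURNS INT
        AS
        BEGIN
            DECLARE @v VARCHAR(MAX)
            SET @v = LTRIM(RTRIM(@Value))
            -- up to 9 digits keeps the value inside the INT range
            IF LEN(@v) BETWEEN 1 AND 9 AND @v NOT LIKE '%[^0-9]%'
                RETURN CONVERT(INT, @v)
            RETURN 0
        END

    With this definition, the sample table above yields exactly the 1, 2, 3, 0, 5, 0 output in the question ('f' and '1.1' both fail the digit test).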


  • SQL select descendants of a row

    - by Joey Adams
    Suppose a tree structure is implemented in SQL like this:

        CREATE TABLE nodes (
            id     INTEGER PRIMARY KEY,
            parent INTEGER -- references nodes(id)
        );

    Although cycles can be created in this representation, let's assume we never let that happen. The table will only store a collection of roots (records where parent is null) and their descendants. The goal is, given an id of a node in the table, to find all nodes that are descendants of it. A is a descendant of B if either A's parent is B or A's parent is a descendant of B. Note the recursive definition. Here is some sample data:

        INSERT INTO nodes VALUES (1, NULL);
        INSERT INTO nodes VALUES (2, 1);
        INSERT INTO nodes VALUES (3, 2);
        INSERT INTO nodes VALUES (4, 3);
        INSERT INTO nodes VALUES (5, 3);
        INSERT INTO nodes VALUES (6, 2);

    which represents:

        1
        `-- 2
            |-- 3
            |   |-- 4
            |   `-- 5
            `-- 6

    We can select the (immediate) children of 1 by doing this:

        SELECT a.* FROM nodes AS a WHERE parent=1;

    We can select the children and grandchildren of 1 by doing this:

        SELECT a.* FROM nodes AS a WHERE parent=1
        UNION ALL
        SELECT b.* FROM nodes AS a, nodes AS b
         WHERE a.parent=1 AND b.parent=a.id;

    We can select the children, grandchildren, and great-grandchildren of 1 by doing this:

        SELECT a.* FROM nodes AS a WHERE parent=1
        UNION ALL
        SELECT b.* FROM nodes AS a, nodes AS b
         WHERE a.parent=1 AND b.parent=a.id
        UNION ALL
        SELECT c.* FROM nodes AS a, nodes AS b, nodes AS c
         WHERE a.parent=1 AND b.parent=a.id AND c.parent=b.id;

    How can a query be constructed that gets all descendants of node 1, rather than those down to some finite depth? It seems like I would need to create a recursive query or something. I'd like to know if such a query is possible using SQLite. However, if this type of query requires features not available in SQLite, I'm curious to know if it can be done in other SQL databases.
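    A sketch of the standard answer: a recursive common table expression. Recent SQLite (3.8.3 and later) supports WITH RECURSIVE, as do PostgreSQL and MySQL 8; SQL Server accepts the same shape without the RECURSIVE keyword:

        WITH RECURSIVE descendants(id, parent) AS (
            SELECT id, parent FROM nodes WHERE parent = 1   -- children of node 1
            UNION ALL
            SELECT n.id, n.parent
              FROM nodes AS n
              JOIN descendants AS d ON n.parent = d.id      -- their children, recursively
        )
        SELECT * FROM descendants;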


  • Query returns too few rows

    - by Tareq
    Setup:

        mysql> create table product_stock( product_id integer, qty integer, branch_id integer);
        Query OK, 0 rows affected (0.17 sec)

        mysql> create table product( product_id integer, product_name varchar(255));
        Query OK, 0 rows affected (0.11 sec)

        mysql> insert into product(product_id, product_name) values(1, 'Apsana White DX Pencil');
        Query OK, 1 row affected (0.05 sec)

        mysql> insert into product(product_id, product_name) values(2, 'Diamond Glass Marking Pencil');
        Query OK, 1 row affected (0.03 sec)

        mysql> insert into product(product_id, product_name) values(3, 'Apsana Black Pencil');
        Query OK, 1 row affected (0.03 sec)

        mysql> insert into product_stock(product_id, qty, branch_id) values(1, 100, 1);
        Query OK, 1 row affected (0.03 sec)

        mysql> insert into product_stock(product_id, qty, branch_id) values(1, 50, 2);
        Query OK, 1 row affected (0.03 sec)

        mysql> insert into product_stock(product_id, qty, branch_id) values(2, 80, 1);
        Query OK, 1 row affected (0.03 sec)

    My query:

        mysql> SELECT IFNULL(SUM(s.qty),0) AS stock, product_name
            -> FROM product_stock s RIGHT JOIN product p ON s.product_id=p.product_id
            -> WHERE branch_id=1
            -> GROUP BY product_name ORDER BY product_name;

    returns:

        +-------+-------------------------------+
        | stock | product_name                  |
        +-------+-------------------------------+
        |   100 | Apsana White DX Pencil        |
        |    80 | Diamond Glass Marking Pencil  |
        +-------+-------------------------------+
        2 rows in set (0.00 sec)

    But I want to have the following result:

        +-------+------------------------------+
        | stock | product_name                 |
        +-------+------------------------------+
        |     0 | Apsana Black Pencil          |
        |   100 | Apsana White DX Pencil       |
        |    80 | Diamond Glass Marking Pencil |
        +-------+------------------------------+

    To get this result, what MySQL query should I run?
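    A likely fix, sketched: the WHERE branch_id=1 filter runs after the outer join and throws away the NULL-extended row for 'Apsana Black Pencil'. Moving the filter into the join condition keeps unmatched products:

        SELECT IFNULL(SUM(s.qty), 0) AS stock, p.product_name
          FROM product p
          LEFT JOIN product_stock s
            ON s.product_id = p.product_id AND s.branch_id = 1
         GROUP BY p.product_name
         ORDER BY p.product_name;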


  • Google Maps API: internal server error when inserting a feature

    - by user142764
    I am trying to insert features on a custom Google map. I use the sample code from the docs, but I get a ServiceException (Internal server error) when I call the service's insert method. Here is what I do:

    I create a map and get the resulting MapEntry object:

        myMapEntry = (MapEntry) service.insert(mapUrl, myEntry);

    This works fine: I can see the map I created in "My Maps" on Google. I use the feed URL from the map to insert a feature:

        final URL featureEditUrl = myMapEntry.getFeatureFeedUrl();

    I create a KML string using the sample from the docs:

        String kmlStr = "<Placemark xmlns=\"http://www.opengis.net/kml/2.2\">"
            + "<name>Aunt Joanas Ice Cream Shop</name>"
            + "<Point>"
            + "<coordinates>-87.74613826475604,41.90504663195118,0</coordinates>"
            + "</Point></Placemark>";

    And when I call the insert method, I get an internal server error. I must be doing something wrong, but I can't see what; can anybody help? Here is the complete code I use:

        public void doCreateFeaturesFormap(MapEntry myMap) throws ServiceException, IOException {
            final URL featureEditUrl = myMap.getFeatureFeedUrl();
            FeatureEntry featureEntry = new FeatureEntry();
            try {
                String kmlStr = "<Placemark xmlns=\"http://www.opengis.net/kml/2.2\">"
                    + "<name>Aunt Joanas Ice Cream Shop</name>"
                    + "<Point>"
                    + "<coordinates>-87.74613826475604,41.90504663195118,0</coordinates>"
                    + "</Point></Placemark>";
                XmlBlob kml = new XmlBlob();
                kml.setFullText(kmlStr);
                featureEntry.setKml(kml);
                featureEntry.setTitle(new PlainTextConstruct("Feature Title"));
            } catch (NullPointerException e) {
                System.out.println("Error: " + e.getClass().getName());
            }
            FeatureEntry myFeature = (FeatureEntry) service.insert(featureEditUrl, featureEntry);
        }

    Thanks in advance, Vincent.


  • What do these .NET auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server. The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batch SELECT statements. The auto-generated table adapter command looks like this...

        this._adapter.InsertCommand.CommandText =
            @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2);
              SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)";

    What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading that record back from the database into the datatable? What's the point of that? Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate these table adapters, as the people who knew how to have long since left.
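    For context, a hedged sketch: the trailing SELECT is the designer's "refresh row" step. It re-reads the inserted row so that values the database itself generates (identity keys, defaults, timestamps) flow back into the DataTable. If the table has no such server-generated columns, stripping the SELECT is generally safe, and for SQL CE the command must become a single statement either way:

        // before (batched; SQL Server only):
        //   INSERT ...; SELECT ...
        // after (SQL CE-friendly):
        this._adapter.InsertCommand.CommandText =
            @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2)";

    If a refresh really is needed, one option is to run the SELECT separately from the adapter's RowUpdated event instead of batching it into the insert.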


  • Moving inserted container element if possible

    - by doublep
    I'm trying to achieve the following optimization in my container library: when inserting an lvalue-referenced element, copy it to internal storage; but when inserting an rvalue-referenced element, move it if supported. The optimization is supposed to be useful e.g. if the contained element type is something like std::vector, where moving if possible would give a substantial speedup. However, so far I was unable to devise any working scheme for this. My container is quite complicated, so I can't just duplicate insert() code several times: it is large. I want to keep all "real" code in some inner helper, say do_insert() (may be templated), and various insert()-like functions would just call that with different arguments. My best bet code for this (a prototype, of course, without doing anything real):

        #include <iostream>
        #include <utility>

        struct element
        {
            element () { };
            element (element&&) { std::cerr << "moving\n"; }
        };

        struct container
        {
            void insert (const element& value)
            { do_insert (value); }

            void insert (element&& value)
            { do_insert (std::move (value)); }

        private:
            template <typename Arg>
            void do_insert (Arg arg)
            { element x (arg); }
        };

        int main ()
        {
            {
                // Shouldn't move.
                container c;
                element x;
                c.insert (x);
            }
            {
                // Should move.
                container c;
                c.insert (element ());
            }
        }

    However, this doesn't work at least with GCC 4.4 and 4.5: it never prints "moving" on stderr. Or is what I want impossible to achieve, and that's why emplace()-like functions exist in the first place?
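    A sketch of the usual fix: do_insert() takes Arg by value, and then passes a named variable (always an lvalue) to element's constructor, so the move constructor is never selected. Perfect forwarding preserves the value category end to end:

        private:
            template <typename Arg>
            void do_insert (Arg&& arg)                  // deduces lvalue/rvalue reference
            { element x (std::forward<Arg> (arg)); }    // moves for rvalues, copies for lvalues

    One caveat: under the final C++11 rules, declaring a move constructor suppresses the implicit copy constructor, so element as written in the prototype also needs a copy constructor for the lvalue path to compile.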


  • MySQL: return value as 0 in the fetch result

    - by Karthik
    I have these two tables:

        --
        -- Table structure for table `t1`
        --
        CREATE TABLE `t1` (
          `pid` varchar(20) collate latin1_general_ci NOT NULL,
          `pname` varchar(20) collate latin1_general_ci NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;

        --
        -- Dumping data for table `t1`
        --
        INSERT INTO `t1` VALUES ('p1', 'pro1');
        INSERT INTO `t1` VALUES ('p2', 'pro2');

        -- --------------------------------------------------------

        --
        -- Table structure for table `t2`
        --
        CREATE TABLE `t2` (
          `pid` varchar(20) collate latin1_general_ci NOT NULL,
          `year` int(6) NOT NULL,
          `price` int(3) NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;

        --
        -- Dumping data for table `t2`
        --
        INSERT INTO `t2` VALUES ('p1', 2009, 50);
        INSERT INTO `t2` VALUES ('p1', 2010, 60);
        INSERT INTO `t2` VALUES ('p3', 2007, 200);
        INSERT INTO `t2` VALUES ('p4', 2008, 501);

    My query is:

        SELECT * FROM `t1` LEFT JOIN `t2` ON t1.pid = t2.pid

    Getting the result:

        pid   pname   pid    year   price
        p1    pro1    p1     2009   50
        p1    pro1    p1     2010   60
        p2    pro2    NULL   NULL   NULL

    My question is: I want the price value to be 0 instead of NULL. How can I write the query so that it returns 0 for price? Thanks in advance for the help.
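    A sketch of the standard answer: wrap the nullable column in IFNULL() (or the portable COALESCE()) so the NULL produced by the outer join comes back as 0:

        SELECT t1.pid,
               t1.pname,
               t2.pid  AS t2_pid,
               t2.year,
               IFNULL(t2.price, 0) AS price
          FROM `t1`
          LEFT JOIN `t2` ON t1.pid = t2.pid;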


  • ORACLE: can we create global temp tables or any tables in stored proc?

    - by mrp
    Below is the stored proc I wrote:

        create or replace procedure test005 as
        begin
          CREATE GLOBAL TEMPORARY TABLE TEMP_TRAN
          (
            COL1 NUMBER(9),
            COL2 VARCHAR2(30),
            COL3 DATE
          ) ON COMMIT PRESERVE ROWS
          /
          INSERT INTO TEMP_TRAN VALUES(1,'D',sysdate);
          INSERT INTO TEMP_TRAN VALUES(2,'I',sysdate);
          INSERT INTO TEMP_TRAN VALUES(3,'s',sysdate);
          COMMIT;
        end;

    When I executed it, I got an error message (after the echo of the procedure text):

        Error at line 1
        ORA-00955: name is already used by an existing object
        Script Terminated on line 1.

    I tried to drop TEMP_TRAN and it says the table doesn't exist, so there is no TEMP_TRAN table in the system. Why am I getting this error? I am using TOAD to create this stored proc. Any help would be highly appreciated.
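    A hedged sketch of the usual approach: in Oracle, a global temporary table is a permanent object whose data is per-session, so it is created once as ordinary DDL rather than inside the procedure (plain DDL is not legal inside PL/SQL anyway; it would need EXECUTE IMMEDIATE). Create the table once, then keep only DML in the proc:

        -- run once, outside the procedure:
        CREATE GLOBAL TEMPORARY TABLE temp_tran
        (
          col1 NUMBER(9),
          col2 VARCHAR2(30),
          col3 DATE
        ) ON COMMIT PRESERVE ROWS;

        CREATE OR REPLACE PROCEDURE test005 AS
        BEGIN
          INSERT INTO temp_tran VALUES (1, 'D', SYSDATE);
          INSERT INTO temp_tran VALUES (2, 'I', SYSDATE);
          INSERT INTO temp_tran VALUES (3, 's', SYSDATE);
          COMMIT;
        END;
        /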


  • Hibernate not saving foreign key, but with JUnit it's OK

    - by Leonardo
    Hi all, I have a strange problem. In a J2EE webapp with Spring, SmartGWT and Hibernate, I have a class A which has a set of class B, both of them mapped to table A and table B. I wrote a simple test case for testing the service manager which is supposed to do insert, update and delete, and everything works as expected, especially during insert. In the end I have one record in A and records in B with a foreign key to A. But when I try to call the service from the web app, the entities in B are saved without a foreign key reference. I am sure that the service is the same. One thing I noticed by enabling Hibernate logging: when the service is called from the application, one more update is made:

        insert A
        insert B
        update A
        update B
        update B (foreign key only)
        update A   <--- ???
        update B   <--- ???

    Instead, when the JUnit test case is run, the statements are as follows:

        insert A
        insert B
        update A
        update B
        update B (foreign key only)

    I suppose the last pair of updates is what is causing the error; maybe it is overwriting values. Considering that the app is using Spring, with the well-known mechanism of DAO + Manager, where can I investigate to solve this issue? Someone told me that the session is not closed, so Hibernate would do one more update before releasing the objects by itself. I am pretty sure that all the configuration (hbm, xml, and the rest) is fine... but I may be wrong. Thanks


  • Inorder tree traversal in binary tree in C

    - by srk
    In the code below, I'm creating a binary tree using the insert function and trying to display the inserted elements using the inorder function, which follows the logic of in-order traversal. When I run it, numbers are getting inserted, but when I try the inorder function (input 3), the program continues to the next input without displaying anything. I guess there might be a logical error. Please help me clear it. Thanks in advance...

        #include<stdio.h>
        #include<stdlib.h>

        int i;

        typedef struct ll {
            int data;
            struct ll *left;
            struct ll *right;
        } node;

        node *root1 = NULL; // the root node

        void insert(node *root, int n)
        {
            if (root == NULL) // for the first (root) node
            {
                root = (node *)malloc(sizeof(node));
                root->data = n;
                root->right = NULL;
                root->left = NULL;
            }
            else
            {
                if (n < (root->data)) {
                    root->left = (node *)malloc(sizeof(node));
                    insert(root->left, n);
                }
                else if (n > (root->data)) {
                    root->right = (node *)malloc(sizeof(node));
                    insert(root->right, n);
                }
                else {
                    root->data = n;
                }
            }
        }

        void inorder(node *root)
        {
            if (root != NULL) {
                inorder(root->left);
                printf("%d ", root->data);
                inorder(root->right);
            }
        }

        main()
        {
            int n, choice = 1;
            while (choice != 0) {
                printf("Enter choice--- 1 for insert, 3 for inorder and 0 for exit\n");
                scanf("%d", &choice);
                switch (choice) {
                    case 1:
                        printf("Enter number to be inserted\n");
                        scanf("%d", &n);
                        insert(root1, n);
                        break;
                    case 3:
                        inorder(root1);
                        break;
                    default:
                        break;
                }
            }
        }
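    A sketch of the core bug: insert() receives root by value, so the malloc in the first call never reaches root1 (it stays NULL, and inorder prints nothing); the other branches also malloc a child and then recurse into uninitialized memory. Returning the (possibly new) subtree root fixes both, assuming the rest of the program stays as-is:

        node *insert(node *root, int n)
        {
            if (root == NULL) {                  /* grow a new leaf here */
                root = malloc(sizeof(node));
                root->data = n;
                root->left = root->right = NULL;
            } else if (n < root->data) {
                root->left = insert(root->left, n);
            } else if (n > root->data) {
                root->right = insert(root->right, n);
            }
            return root;                         /* propagate the link upward */
        }

        /* call site in main(): */
        root1 = insert(root1, n);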


  • Advanced SELECT in a stored procedure

    - by Auro
    Hey, I have this table:

        CREATE TABLE Test_Table
        (
          old_val  VARCHAR2(3),
          new_val  VARCHAR2(3),
          Updflag  NUMBER,
          WorkNo   NUMBER
        );

    and this data in my table:

        INSERT INTO Test_Table (old_val, new_val, Updflag, WorkNo) VALUES('1',' 20',0,0);
        INSERT INTO Test_Table (old_val, new_val, Updflag, WorkNo) VALUES('2',' 20',0,0);
        INSERT INTO Test_Table (old_val, new_val, Updflag, WorkNo) VALUES('2',' 30',0,0);
        INSERT INTO Test_Table (old_val, new_val, Updflag, WorkNo) VALUES('3',' 30',0,0);
        INSERT INTO Test_Table (old_val, new_val, Updflag, WorkNo) VALUES('4',' 40',0,0);
        INSERT INTO Test_Table (old_val, new_val, Updflag, WorkNo) VALUES('4',' 40',0,0);

    Now my table looks like this:

        Row   Old_val   New_val   Updflag   WorkNo
        1     '1'       ' 20'     0         0
        2     '2'       ' 20'     0         0
        3     '2'       ' 30'     0         0
        4     '3'       ' 30'     0         0
        5     '4'       ' 40'     0         0
        6     '5'       ' 40'     0         0

    (If the values in the new_val column are the same, then those rows belong together, and the same goes for old_val.) So in the example above, rows 1-4 belong together, and so do rows 5-6. At the moment I have this cursor in my stored procedure:

        SELECT t1.Old_val, t1.New_val, t1.updflag, t1.WorkNo
          FROM Test_Table t1
         WHERE t1.New_val =
               (
                 SELECT t2.New_val
                   FROM Test_Table t2
                  WHERE t2.Updflag = 0
                    AND t2.Worknr = 0
                    AND ROWNUM = 1
               )

    The output is this:

        Row   Old_val   New_val   Updflag   WorkNo
        1     1          20       0         0
        2     2          20       0         0

    My problem is, I don't know how to get rows 1 to 4 with one select. (I had an idea with 4 sub-queries, but this won't work if more data matches together.) Does anyone of you have an idea?
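    A hedged sketch of one approach: this is really a connected-components walk (rows are linked whenever they share an old_val or a new_val), which Oracle can express as a hierarchical query; CONNECT BY NOCYCLE stops it from looping back over links it has already used. Assuming each row is distinct, something along these lines:

        SELECT DISTINCT old_val, new_val, Updflag, WorkNo
          FROM Test_Table
         START WITH old_val = '1'
        CONNECT BY NOCYCLE (PRIOR new_val = new_val OR PRIOR old_val = old_val);

    This follows every shared-value link from the starting row, so for the sample data it should return rows 1-4. A recursive WITH clause (11gR2 and later) would be the more scalable way to write the same walk.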


  • How to remove empty tables from a MySQL backup file.

    - by user280708
    I have multiple large MySQL backup files, all from different DBs and having different schemas. I want to load the backups into our EDW, but I don't want to load the empty tables. Right now I'm cutting out the empty tables using AWK on the backup files, but I'm wondering if there's a better way to do this. If anyone is interested, this is my AWK script:

    EDIT: I noticed today that this script has some problems; please beware if you want to actually try to use it. Your output may be WRONG... I will post my changes as I make them.

        # File: remove_empty_tables.awk
        # Copyright (c) Northwestern University, 2010
        # http://edw.northwestern.edu

        /^--$/ {
            i = 0;
            line[++i] = $0; getline
            if ($0 ~ /-- Definition/) {
                inserts = 0;
                while ($0 !~ / ALTER TABLE .* ENABLE KEYS /) {
                    # If we already have an insert:
                    if (inserts > 0)
                        print
                    else {
                        # If we found an INSERT statement, the table is NOT empty:
                        if ($0 ~ /^INSERT /) {
                            ++inserts
                            # Dump the lines before the INSERT and then the INSERT:
                            for (j = 1; j <= i; ++j)
                                print line[j]
                            i = 0
                            print $0
                        }
                        # Otherwise we may yet find an insert, so save the line:
                        else
                            line[++i] = $0
                    }
                    getline # go to the next line
                }
                line[++i] = $0; getline
                line[++i] = $0; getline
                if (inserts > 0) {
                    for (j = 1; j <= i; ++j)
                        print line[j]
                    print $0
                }
                next
            }
            else {
                print "--"
            }
        }

        { print }
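    For anyone trying it, the usual invocation would be a sketch like this (file names are hypothetical):

        awk -f remove_empty_tables.awk full_backup.sql > backup_no_empty_tables.sql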


  • Sql Server 2005 Check Constraint not being applied in execution when using variables

    - by DarylS
    Here is some SQL sample code:

        -- Create 2 Sales tables with constraints based on the sale date
        create table Sales1(SaleDate datetime, Amount money)
        ALTER TABLE dbo.Sales1 ADD CONSTRAINT CK_Sales1 CHECK (([SaleDate]>='01 May 2010'))
        GO
        create table Sales2(SaleDate datetime, Amount money)
        ALTER TABLE dbo.Sales2 ADD CONSTRAINT CK_Sales2 CHECK (([SaleDate]<'01 May 2010'))
        GO

        -- Insert some data into Sales1
        insert into Sales1 (SaleDate, Amount) values ('02 May 2010', 50)
        insert into Sales1 (SaleDate, Amount) values ('03 May 2010', 60)
        GO

        -- Insert some data into Sales2
        insert into Sales2 (SaleDate, Amount) values ('30 Mar 2010', 10)
        insert into Sales2 (SaleDate, Amount) values ('31 Mar 2010', 20)
        GO

        -- Create a view that combines these 2 tables
        create VIEW [dbo].[Sales]
        AS
        SELECT SaleDate, Amount FROM Sales1
        UNION ALL
        SELECT SaleDate, Amount FROM Sales2
        GO

        -- Get the results

        -- Query 1
        select * from Sales where SaleDate < '31 Mar 2010'
        -- if you look at the execution plan this query only looks at Sales2 (which is good)

        -- Query 2
        DECLARE @SaleDate datetime
        SET @SaleDate = '31 Mar 2010'
        select * from Sales where SaleDate < @SaleDate
        -- if you look at the execution plan this query looks at Sales1 and Sales2 (which is NOT good)

    Looking at the execution plan, you will see that the two queries are different. For Query 1, the only table that is accessed is Sales2 (which is good). For Query 2, both tables are accessed (which is bad). Why are these execution plans different, and how do I get Query 2 to only access the relevant table when variables are used? I have tried adding indexes on the SaleDate column, and that does not seem to help.
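    A hedged sketch of the common answer: with a literal, the optimizer can prove at compile time that the check constraint on Sales1 excludes every row; with a variable, the plan must work for any value, so both tables appear (guarded by startup filters that skip the excluded table at run time). Asking for a recompile once the variable's value is known lets the optimizer do the elimination again:

        DECLARE @SaleDate datetime
        SET @SaleDate = '31 Mar 2010'
        select * from Sales
         where SaleDate < @SaleDate
        OPTION (RECOMPILE)  -- plan is built using the sniffed value of @SaleDate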


  • In sync query calls, one query causes the other to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanation for it:

    I was involved in optimization of an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there should be some value mappings, performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run.

    I used a profiler (JProfiler; the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls, and almost 20% on SELECT calls (the rest distributed across other parts).

    After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else), with each SELECT taking longer this time!

    Everything was running synchronously; there were no async calls, and there was only a single thread of execution. The SELECT and INSERT queries are very simple, have nothing special about them, and are on different tables, but in the same DB. I tested with the DB both on the application machine and on a remote network machine.

    I can't think of any explanation for this, as the profiler (an application profiler, not SQL Profiler) reported the changes in the method call times, and yet by removing INSERT statements, the SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (There can't be cache / query optimization effects, because the queries were run synchronously, in a single thread, and were far from affecting the cache this much.) I should note that the speed bottleneck was in SQL Server, which was using most of the CPU time.


  • Insertion into a BST without a header node (Java)

    - by Petiatil
    I am working on a recursive insertion method for a BST. This method is supposed to be a recursive helper and is in a private class called Node. The Node class is in a class called BinarySearchTree, which contains an instance variable for the root. When I try to insert an element, I get a NullPointerException at:

        this.left = insert(((Node)left).item);

    I am unsure why this occurs. If I understand correctly, in a BST I am supposed to insert the item at the last spot on the path traversed. Any help is appreciated!

        private class Node implements BinaryNode<E>
        {
            E item;
            BinaryNode<E> left, right;

            public BinaryNode<E> insert(E item)
            {
                int compare = item.compareTo(((Node)root).item);

                if (root == null) {
                    root = new Node();
                    ((Node)root).item = item;
                }
                else if (compare < 0) {
                    this.left = insert(((Node)left).item);
                }
                else if (compare > 0) {
                    this.right = insert(((Node)right).item);
                }
                return root;
            }
        }
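    A sketch of one way to restructure it: the NPE comes from dereferencing left (and root) before the null checks, and compareTo runs against root even when root is null. A conventional recursive helper keys on the node it is inserting under and creates the node when it falls off the tree, as in this hedged rewrite (names follow the post; the public wrapper is assumed to live in BinarySearchTree):

        // in BinarySearchTree:
        public void insert(E item) {
            root = insert((Node) root, item);
        }

        private Node insert(Node node, E item) {
            if (node == null) {               // fell off the tree: this is the spot
                node = new Node();
                node.item = item;
                return node;
            }
            int compare = item.compareTo(node.item);
            if (compare < 0)
                node.left = insert((Node) node.left, item);
            else if (compare > 0)
                node.right = insert((Node) node.right, item);
            return node;                      // unchanged when item is a duplicate
        }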


  • Framework 4 Features: Summary of Security enhancements

    - by Anthony Shorten
    In the last log entry I mentioned one of the new security features in Oracle Utilities Application Framework 4.0.1. Security is one of the major "tent poles" (to borrow a phrase from Steve Jobs) in this release of the framework. There are a number of security related enhancements, requested by customers and arising from internal reviews, that we have introduced. Here is a summary of some of the security enhancements added in this release:

    - Security Cache Changes - Security authorization information is automatically cached on the server for performance reasons (security is checked for every single call the product makes, for all modes of access). Prior to this release the cache auto-refreshed every 30 minutes (or so). This has been made more nimble by supporting a cache refresh every minute (or so). This means authorization changes are reflected more quickly than before.

    - Business Level Security - Business Services are configurable services that are based upon Application Services. Typically, the business service inherited its security profile from its parent service. Whilst this is sufficient for most needs, it is now possible to further specify security on the Business Service definition itself. This allows granular security, so the same application service can be exposed as different Business Services, each with its own security. This is particularly useful when you base a Business Service on a query zone.

    - User Propagation - As with other client/server applications, the database connections are pooled and shared as needed. This means that a common database user is used to access the database from the pool to allow sharing. Unfortunately, this makes traceability at the database level that much harder. In Oracle Utilities Application Framework V4 the end userid is now propagated to the database using the CLIENT_IDENTIFIER as part of the Oracle JDBC connection API. This means that the common database userid is still used, but the end user is identifiable for the duration of the database call. This can be used for monitoring or to hook into Oracle's database security products. This enhancement is only available to Oracle Database customers.

    - Enhanced Security Definitions - Security administrators use the product browser front end to control access rights of defined users. While this is sufficient for most sites, a new security portal has been introduced to speed up the maintenance of security information.

    - Oracle Identity Manager Integration - With the popularity of Oracle's Identity Management Suite, the Framework now provides an integration adapter and Identity Manager Generic Transport Connector (GTC) to allow users and group membership to be provisioned to any Oracle Utilities Application Framework based product from Oracle's Identity Manager. This is also available for Oracle Utilities Application Framework V2.2 customers. Refer to My Oracle Support KB Id 970785.1 - Oracle Identity Manager Integration Overview.

    - Audit on Inquiry - Typically the configurable audit facility in the Oracle Utilities Application Framework is used to audit changes to records. In Oracle Utilities Application Framework, Business Services and Service Scripts could already be configured to audit inquiries as well. Now it is possible to attach auditing capabilities to zones in the product (including base package ones).

    - Time Zone Support - In some of the Oracle Utilities Application Framework based products, the time zone of the end user is a factor in the processing. The user object has been extended to allow the recording of time zone information for use in product functionality.

    - JAAS Support - Internally the Oracle Utilities Application Framework uses a number of techniques to validate and transmit security information across the architecture. These various methods have been reconciled into using Java Authentication and Authorization Services for standardized security. This is strictly an internal change, with no direct impact on how security operates externally.

    - JMX Based Cache Management - In an earlier bullet point, I mentioned the improved cache management from the browser. Alternatively, a JMX based interface is now provided to allow IT operations to control the cache without the browser interface. This JMX capability can be initiated from a JSR160 compliant JMX console or JMX browser. I will be writing another, more detailed blog entry on the JMX enhancements, as it is quite a change and an exciting direction for the product line.

    - Data Patch Permissions - The database installer provided with the product required lower levels of security for some operations. Some sites wanted the ability for non-DBAs to execute the utilities in a controlled fashion. The framework now allows feature configuration to delegate patch execution.

    - User Enable Support - At some sites, the use of temporary staff such as contractors is commonplace. In this scenario, temporary security setups were required and used. A potential issue arises when the contractor leaves the company. Typically the IT group would remove the contractor from the security repository to prevent login using that contractor's userid, but the userid could NOT be removed from the authorization model because of audit requirements (if any user in the product updates financials or key data, their userid is recorded for audit purposes). It is now possible to effectively disable the user in the security model, preventing any use of the userid whilst retaining audit information.

    These are a subset of the security changes in Oracle Utilities Application Framework. More details about the security capabilities of the product are contained in My Oracle Support KB Id 773473.1 - Oracle Utilities Application Framework Security Overview.
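    To illustrate the user propagation mechanism described above, here is a hedged sketch of how an end user can be tagged on a pooled Oracle connection via standard JDBC client info (the framework does this internally; the property name is Oracle's documented client-info key, and the user name here is hypothetical):

        import java.sql.Connection;
        import java.sql.SQLException;

        public final class ClientIdentifierDemo {
            // Tags the session so V$SESSION.CLIENT_IDENTIFIER shows the end user
            // for the duration of the borrow, without changing the pooled DB user.
            static void tagConnection(Connection conn, String endUser) throws SQLException {
                conn.setClientInfo("OCSID.CLIENTID", endUser);
            }
        }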


  • Vim disables ibus IME -- is this a bug?

    - by misha
    I'm using the ibus IME to input Japanese text into GVim. I have the following Vim script that I source when GVim starts up:

        autocmd InsertLeave * :call bug#onInsertLeave()

        function! bug#onInsertLeave()
        python << EOT
        import vim
        import ibus
        bus = ibus.Bus()
        ic = ibus.InputContext(bus, bus.current_input_contxt())
        ic.disable()
        print "bug#onInsertLeave(): exiting"
        EOT
        endfunction

    The line that constructs the InputContext raises the exception:

        dbus.exception.DBusException: org.freedesktop.DBus.Error.Failed: no focused input context

    This happens under the following conditions:

    1. I enter insert mode
    2. I insert some Japanese text
    3. I exit insert mode

    If I don't enter any Japanese text through the IME, then the exception is not raised. I've also noticed that if I exit insert mode after entering some Japanese text while the IME is still enabled, then IME input is disabled (I can see the icon change in the taskbar). If I exit insert mode without entering any Japanese text, but while the IME is still enabled, then the IME stays enabled (the icon does not change). It seems like GVim is disabling the IME (or the IME is switching off) in some conditions. Could it be related to the exception?

    My questions are:

    1. Is this a bug?
    2. If it is, then whose bug is it? Vim, ibus, or something else?
    3. Are there any ways to work around the exception?

    EDIT: My system info:

        misha@misha-lmd:~/git/iwait2013/lagos$ apt-cache policy ibus
        ibus:
          Installed: 1.4.1-3ubuntu1
          Candidate: 1.4.1-3ubuntu1
          Version table:
         *** 1.4.1-3ubuntu1 0
                500 http://jp.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages
                100 /var/lib/dpkg/status

        misha@misha-lmd:~/git/iwait2013/lagos$ apt-cache policy vim
        vim:
          Installed: 2:7.3.429-2ubuntu2.1
          Candidate: 2:7.3.429-2ubuntu2.1
          Version table:
         *** 2:7.3.429-2ubuntu2.1 0
                500 http://jp.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages
                100 /var/lib/dpkg/status
             2:7.3.429-2ubuntu2 0
                500 http://jp.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages

    Ubuntu 12.04, Fluxbox
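    For the workaround question, a hedged sketch: since the error literally says there is no focused input context at that moment, the cheapest fix is to treat the condition as "nothing to disable" and swallow the DBus error (note that current_input_contxt in the post looks like a transcription typo for current_input_context):

        function! bug#onInsertLeave()
        python << EOT
        import vim
        try:
            import ibus
            bus = ibus.Bus()
            path = bus.current_input_context()   # raises DBusException when nothing has focus
            ibus.InputContext(bus, path).disable()
        except Exception:
            pass  # no focused input context: nothing to disable
        EOT
        endfunction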


  • 2012 Oracle Fusion Innovation Awards - Part 2

    - by Michelle Kimihira
    Author: Moazzam Chaudry

    Continuing from Friday's blog on the 2012 Oracle Fusion Innovation Awards, this blog (Part 2) will provide more details about the customers. It was a tremendous honor to be in a single room of winners; we only wish we could have had more time to share stories from all of them. We received great insight from all the innovative solutions that our customers deploy and would like to share them broadly, so that others can benefit from best practices.

    There was a customer panel session joined by Ingersoll Rand, Nike and Motability, and here is what was discussed:

    - Barry Bonar, Enterprise Architect from Ingersoll Rand, shared details of their solution, comprised of Oracle Exalogic, Oracle WebLogic Server and Oracle SOA Suite. This combined solution enabled their business transformation to increase decision-making speed and efficiency, resulting in 40% reduced IT spend, 41X faster response time and huge cost savings.
    - Ashok Balakrishnan, Architect from Nike, shared how they leveraged Oracle Coherence to analyze their digital "footprint" of activities. This helps them compete, collaborate and compare athletic data over time.
    - Lastly, Ashley Doodly, Head of IT from Motability, shared details of their solution comprised of Oracle SOA Suite, Service Bus, ADF, Coherence, BO and E-Business Suite. This solution helped Motability achieve 100% ROI within the first few months, performance in seconds vs. tens of minutes, and throughput improvements of up to 50%.

    This year's winners by category are:

    Oracle Exalogic:
    - Netshoes: ATG on Exalogic: 6X reduced hardware footprint, 6.2X increased throughput and 3 weeks time to market.
    - Claro: Part of America Movil, running mission-critical Java applications on Exalogic with 35X faster Java response time, 5X throughput.
    - Underwriters Laboratories: Exalogic as an apps consolidation platform to power tremendous growth.
    - Ingersoll Rand: EBS on Exalogic: up to 40% reduction in overall IT budget, 3X reduced footprint.

    Oracle Cloud Application Foundation:
    - Mazda Motor Corporation: Tuxedo ART batch runtime environment to migrate their batch apps to a new open environment and reduce mainframe cost.
    - HOTELBEDS Technology: Open source to WebLogic transformation.
    - Globalia Corporation: Introduced Oracle Coherence to fully re-engineer the DTH system and provide multiple business and technical benefits.
    - Nike: Nike+, the digital sports platform, has 8M users and is expecting a 5X increase in users, many of whom will carry multiple devices that frequently sync data with the Digital Sport platform.
    - Comcast Corporation: The solution is expected to increase availability, continuity and performance, and to simplify and make the code at the application layer more flexible.

    Oracle SOA and Oracle BPM:
    - NTT Docomo: Network traffic solution based on Oracle Event Processing and Coherence; massive in scale: 12M users (50M in future), 800,000 events/sec.
    - Schneider National, Inc.: SOA/B2B/ADF/Data Integration to orchestrate key order processes across Siebel, OTM and EBS. The platform runs 60M transactions/day and 50 million composite SOA instances per day across 10g and 11g.
    - Amadeus: Oracle BPM solution: business rules and processes vary across local (80), regional (~10) and corporate approval processes, with up to 10 levels of approval. Plans to deploy across 20+ markets.
    - Navitar: SOA solution integrates a fully non-Oracle legacy application/ERP environment using Oracle's SOA Suite and Oracle AIA Foundation Pack.
    - Motability: Uses SOA Suite to synchronize data across the systems and to manage the vehicle remarketing process.

    Oracle WebCenter:
    - News Limited: Single platform running websites for 50% of Australia's newspapers.
    - University of Louisville: "Facebook for Medicine": Oracle WebCenter platform and Oracle BIEE to analyze patient test data and uncover potential health issues. Expecting annualized ROI of 277%.
    - China Mobile Jiangsu: Company portal (25K users) to drive collaboration and productivity.
    - Life Technologies: Portal for remotely monitoring and repairing biotech instruments.
    - LA Dept. of Water & Power: Oracle WebCenter Portal to power ladwp.com on desktop and mobile for 1.6 million users.

    Oracle Identity Management:
    - Education Testing Service: Identity management platform for provisioning and SSO of 6 million GRE, GMAT and TOEFL customers.
    - Avea: Oracle Identity Manager allows call center personnel to quickly change identity profiles to handle varying call loads, based on a user self-service interface. Decreased admin cost by 30%.

    Oracle Data Integration:
    - Raymond James: Near real-time integration for improved systems (throughput and performance) and enhanced operational flexibility in a 24x7 environment.
    - Wm Morrison Supermarkets: Electronic point-of-sale integration handling over 80 million transactions a day in near real time (15-minute intervals).

    Oracle Application Development Framework and Oracle Fusion Development:
    - Qualcomm Incorporated: Solution providing immediate business value, enabling the self-service model necessary for growing the new customer base, an increase in customer satisfaction and reduced time-to-deliver.
    - Micros Systems, Inc.: ADF, SOA Suite and WebCenter enable services that include managing distribution of hotel room availability and rates to channels such as the hotel website, Expedia, etc.
    - Marfin Egnatia Bank: A new Web 2.0 UI provides a much richer experience through the ADF solution, with the end result of boosting end-user productivity.

    Business Analytics (Oracle BI, Oracle EPM, Oracle Exalytics):
    - INC Research: Self-service customer portal delivering 5-10% of overall revenue; expected to grow fast with the BI solution.
    - Experian: Reduction in time to complete the financial close process.
    - Hologic Inc: Solution saving months of decision-making uncertainty!

    We look forward to seeing many more innovative nominations. The nomination process for 2013 begins in April 2013.

    Additional Information:
    - Blog: Oracle WebCenter Award Winners
    - Blog: Oracle Identity Management Winners
    - Blog: Oracle Exalogic Winners
    - Blog: SOA, BPM and Data Integration will feature award winners in their respective areas this week
    - Subscribe to our regular Fusion Middleware Newsletter
    - Follow us on Twitter and Facebook


  • 5 minutes WIF: Make your ASP.NET application use test-STS

    - by DigiMortal
    Windows Identity Foundation (WIF) provides us with a simple dummy STS application we can use to develop our system with no actual STS in place. In this posting I will show you how to add STS support to your existing application and how to generate a dummy application that plays the role of your real STS.

    Word of caution! Although it is relatively easy to build your own STS using WIF tools, I don't recommend you build one. Identity providers must be highly secure and stable in every way, and this makes development of your own STS a very complex task. If possible, use some known STS solution.

    I suppose you have WIF and the WIF SDK installed on your development machine. If you don't, here are the links to the download pages:

    - Windows Identity Foundation
    - Windows Identity Foundation SDK

    Adding STS support to your web application

    Suppose you have a web application and you want to externalize authentication, so your application is able to detect users, send unauthenticated users to login, and otherwise work exactly like it worked before. WIF tools provide you with all you need.

    1. Click on your web application project and select "Add STS reference…" from the context menu to start adding or updating STS settings for the web application.

    2. Insert your application URI in the application settings window. Note that the web.config file is already selected for you. I inserted a URI that corresponds to my web application address under IIS Express. This URI must exist (later), because otherwise you cannot use the dummy STS service.

    3. Select "Create a new STS project in the current solution" and click the Next button.

    4. The summary screen gives you information about how your site will use the STS. You can run this wizard again whenever you have to modify the STS parameters. Click Finish.

    If everything goes as expected, a new web site is added to your solution, named YourWebAppName_STS.

    Dummy STS application

    Yes, the dummy STS is created as a web site project, not as a web application. But it still works nicely and you don't have to make any modifications there. It just works, but it is a dummy one.

    Why a dummy STS? Some points about the dummy STS web site:

    - The dummy STS is not a template for your own custom STS identity provider.
    - The dummy STS is a very good and simple replacement for a real STS, so you have a more flexible development environment and you don't have to authenticate yourself against a real service.
    - Of course, you can modify the dummy STS web site to mimic some behavior of your real STS.

    Pages in dummy STS

    The dummy STS has two pages: Login.aspx and Default.aspx. Default.aspx is the page that handles requests to the STS service. Login.aspx is the page where authentication takes place. The dummy STS authenticates users using FBA: you can insert whatever username you like and the dummy STS still works. You can take a look at the code behind these pages to get some idea of how this dummy service is built up. But again, this service is there to simplify your life as a developer.

    Authenticating users using dummy STS

    If you are using the development web server that ships with Visual Studio 2010, I suggest you switch over to IIS or IIS Express and make some more configuration changes as described in my previous posting, Making WIF local STS to work with your ASP.NET application. When you are done with these little modifications, you are ready to run your application and see how authentication works. If everything is okay, you are redirected to the dummy STS login page when running your web application. Adam Carter is provided as the username by default. If you click on the submit button, you are authenticated and redirected to the application page. In my case it looks like this.

    Conclusion

    As you saw, it is very easy to set up your own dummy STS web site for testing purposes. You coded nothing: you just ran a wizard, inserted some data, modified configuration a little bit, and you were done. Later, when your application goes to production, you can run this STS configuration utility again and it will generate the correct settings for your real STS service automatically.


  • Increasing deadlocks with NoLock

    - by Dave Ballantyne
    One of my personal pet issues is the inappropriate use of the NOLOCK hint (and read uncommitted). Don't get me wrong, I have used it in exceptional circumstances, but as a general statement it is a bad thing. Mostly, when NOLOCK is used, the discussion is around a single statement: "it runs faster with NOLOCK for XYZ reason". However, IMO, this is quite a short-sighted view. What about the transaction? What about other concurrent users? What is good for one statement in isolation is not necessarily good for the system as a whole. I have seen, on a number of occasions, deadlocks happen when tasks that would have (and should have) been blocked continue to execute, only for a deadlock to occur at a later data-writing (INSERT, UPDATE, DELETE) statement. Writers will block writers regardless of isolation level.

    By way of a (fairly contrived) example, let's generate some dummy tables and populate them with some data:

        drop table a
        go
        drop table b
        go
        Create Table a ( col1 integer )
        go
        insert into a values(1)
        insert into a values(2)
        go
        Create Table b ( col1 integer )
        go
        insert into b values(1)
        insert into b values(2)
        go

    Now make two connections. In connection one execute:

        set transaction isolation level read committed
        BEGIN TRAN
        Select * from a
        Select * from b
        delete from a

    In connection two execute:

        set transaction isolation level read committed
        BEGIN TRAN
        Select * from a
        Select * from b
        delete from b

    Right now the 'select from a' in connection two is being blocked by the 'delete from a' in connection one. This is, IMO, quite a healthy and natural thing to be happening; some see this as a 'slow down', a drop in performance. So, let's reach for our 'NOLOCK' magic pill. Cancel the blocked query and ROLLBACK both transactions, then in connection one execute:

        set transaction isolation level read uncommitted
        BEGIN TRAN
        Select * from a
        Select * from b
        delete from b

    and then in connection two execute:

        set transaction isolation level read uncommitted
        BEGIN TRAN
        Select * from a
        Select * from b
        delete from a

    We have now solved our performance problem: no more blocking. Let's finish the work required by the transaction. In connection one, execute:

        delete from a

    Oh, 'performance problem' again: it's now being blocked. Still, let's complete the work in connection two...

        delete from b

    DEADLOCK!! It is important to be clear about the role of the select statements. They do not participate in the deadlock, but they prevented code from executing that otherwise would have. Additionally, without the select readers to block, a deadlock would occur on the deletes with READ COMMITTED. Naturally, other isolation levels will exhibit different behaviour as to where and when they will and won't block, and I would encourage you to read BOL and satisfy yourself that you really do NEED to NOLOCK.


  • Data Source Security Part 1

    - by Steve Felts
    I’ve written a couple of articles on how to store data source security credentials using the Oracle wallet. I plan to write a few articles on the various types of security available to WebLogic Server (WLS) data sources. There are more options than you might think!

    There have been several enhancements in this area in WLS 10.3.6. There are a couple more enhancements planned for release WLS 12.1.2 that I will include here for completeness. This isn't intended as a teaser: if you call your Oracle support person, you can get them now as minor patches to WLS 10.3.6.

    The current security documentation is scattered in a few places, has a few incorrect statements, and is missing a few topics. It also seems that the knowledge of how to apply some of these features isn't written down. The goal of these articles is to talk about WLS data source security in a unified way and to introduce some approaches to using the available features.

    Introduction to WebLogic Data Source Security Options

    By default, you define a single database user and password for a data source. You can store it in the data source descriptor or make use of the Oracle wallet. This is a very simple and efficient approach to security. All of the connections in the connection pool are owned by this user and there is no special processing when a connection is given out. That is, it's a homogeneous connection pool and any request can get any connection from a security perspective (there are other aspects, like affinity). Regardless of the end user of the application, all connections in the pool use the same security credentials to access the DBMS. No additional information is needed when you get a connection, because it's all available from the data source descriptor (or wallet):

        java.sql.Connection conn = mydatasource.getConnection();

    Note: You can enter the password as a name-value pair in the Properties field (this is not permitted for production environments) or you can enter it in the Password field of the data source descriptor. The value in the Password field overrides any password value defined in the Properties passed to the JDBC driver when creating physical database connections. It is recommended that you use the Password attribute in place of the password property in the properties string, because the Password value is encrypted in the configuration file (stored as the password-encrypted attribute in the jdbc-driver-params tag in the module file) and is hidden in the administration console. The Properties and Password fields are located on the administration console Data Source creation wizard or Data Source Configuration tab.

    The JDBC API can also be used to programmatically specify a database user name and password, as in the following:

        java.sql.Connection conn = mydatasource.getConnection("user", "password");

    According to the JDBC specification, this is supposed to take a database user and associated password, but different vendors implement it differently. WLS, by default, treats this as an application server user and password. The pair is authenticated to see if it's a valid user, and that user is used for WLS security permission checks. By default, the user is then mapped to a database user and password using the data source credential mapper, so this API sort of follows the specification, but database credentials are one step removed from the application code. More details and the rationale are described later.

    While the default approach is simple, it does mean that only one database user is doing all of the work. You can't figure out who actually did the update, and you can't restrict SQL operations by who is running the operation, at least at the database level. Any type of per-user logic will need to be in the application code instead of having the database do it. There are various WLS data source features that can be configured to provide some per-user information about the operations to the database.

    WebLogic Data Source Security Options

    This table describes the features available for WebLogic data sources to configure database security credentials, with a brief description of each. It also captures information about the compatibility of these features with one another.

    - User authentication (default). Default getConnection(user, password) behavior: validate the input and use the user/password in the descriptor. Can be used with: Set client identifier, Proxy session, Identity pooling. Can't be used with: Use database credentials.
    - Use database credentials. Instead of using the credential mapper, use the supplied user and password directly. Can be used with: Set client identifier, Proxy session, Identity pooling. Can't be used with: User authentication, Multi Data Source.
    - Set Client Identifier. Set a client identifier property associated with the connection (Oracle and DB2 only). Can be used with: everything.
    - Proxy Session. Set a light-weight proxy user associated with the connection (Oracle-only). Can be used with: Set client identifier, Use database credentials. Can't be used with: Identity pooling, User authentication.
    - Identity pooling. Heterogeneous pool of connections owned by specified users. Can be used with: Set client identifier, Use database credentials. Can't be used with: Proxy session, User authentication, Labeling, Multi-datasource, Active GridLink.

    Note that all of these features are available with both XA and non-XA drivers.

    Currently, the Proxy Session and Use Database Credentials options are on the Oracle tab of the Data Source Configuration tab of the administration console (even though the Use Database Credentials feature is not just for Oracle databases - oops). The rest of the features are on the Identity tab of the Data Source Configuration tab in the administration console (plan on seeing them all in one place in the future). The subsequent articles will describe these features in more detail. Keep referring back to this table to see the big picture.

