Search Results

Search found 20931 results on 838 pages for 'mysql insert'.

Page 462/838 | < Previous Page | 458 459 460 461 462 463 464 465 466 467 468 469  | Next Page >

  • Getting an error while setting up the connection pool in JBoss

    - by Yashwant Chavan
    Hi as per following Connection pool configuration facing some issue. Place a copy of mysql-connector-java-[version]-bin.jar in $JBOSS_HOME/server/all/lib. Then, follow the example configuration file named mysql-ds.xml in the $JBOSS_HOME/docs/examples/jca directory that comes with a JBoss binary installation. To activate your DataSource, place an xml file that follows the format of mysql-ds.xml in the deploy subdirectory in either $JBOSS_HOME/server/all, $JBOSS_HOME/server/default, or $JBOSS_HOME/server/[yourconfig] as appropriate. I am getting following error resource-ref: jdbc/buinessCaliberDb has no valid JNDI binding. Check the jboss-web/resource-ref. This is my mysql-ds.xml <datasources> <local-tx-datasource> <jndi-name>jdbc/buinessCaliberDb</jndi-name> <connection-url>jdbc:mysql:///BUSINESS</connection-url> <driver-class>com.mysql.jdbc.Driver</driver-class> <user-name>root</user-name> <password>password</password> <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLExceptionSorter</exception-sorter-class-name> <!-- should only be used on drivers after 3.22.1 with "ping" support <valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLValidConnectionChecker</valid-connection-checker-class-name> --> <!-- sql to call when connection is created <new-connection-sql>some arbitrary sql</new-connection-sql> --> <!-- sql to call on an existing pooled connection when it is obtained from pool - MySQLValidConnectionChecker is preferred for newer drivers <check-valid-connection-sql>some arbitrary sql</check-valid-connection-sql> --> <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml (optional) --> <metadata> <type-mapping>mySQL</type-mapping> </metadata> </local-tx-datasource> </datasources> and this my web.xml entry <resource-ref> <description>DB Connection</description> <res-ref-name>jdbc/buinessCaliberDb</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> and this jboss-web.xml entry <jboss-web> <resource-ref> <description>DB Connection</description> <res-ref-name>jdbc/buinessCaliberDb</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref> </jboss-web> Please help
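
    A likely cause (not confirmed by the post): in JBoss 4.x/5.x a <local-tx-datasource> named jdbc/buinessCaliberDb is bound under the java: namespace, so jboss-web.xml has to map the web app's res-ref-name to that JNDI name with a <jndi-name> element, which is missing above. A minimal sketch of the mapping, assuming the default java:/ binding:

        <!-- jboss-web.xml (sketch) -->
        <jboss-web>
          <resource-ref>
            <res-ref-name>jdbc/buinessCaliberDb</res-ref-name>
            <jndi-name>java:/jdbc/buinessCaliberDb</jndi-name>
          </resource-ref>
        </jboss-web>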

    Read the article

  • How does Subsonic handle connections?

    - by Quintin Par
    In Nhibernate you start a session by creating it during a BeginRequest and close at EndRequest public class Global: System.Web.HttpApplication { public static ISessionFactory SessionFactory = CreateSessionFactory(); protected static ISessionFactory CreateSessionFactory() { return new Configuration() .Configure(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "hibernate.cfg.xml")) .BuildSessionFactory(); } public static ISession CurrentSession { get{ return (ISession)HttpContext.Current.Items["current.session"]; } set { HttpContext.Current.Items["current.session"] = value; } } protected void Global() { BeginRequest += delegate { CurrentSession = SessionFactory.OpenSession(); }; EndRequest += delegate { if(CurrentSession != null) CurrentSession.Dispose(); }; } } What’s the equivalent in Subsonic? The way I understand, Nhibernate will close all the connections at endrequest. Reason: While trouble shooting some legacy code in a Subsonic project I get a lot of MySQL timeouts,suggesting that the code is not closing the connections MySql.Data.MySqlClient.MySqlException: error connecting: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached. Generated: Tue, 11 Aug 2009 05:26:05 GMT System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. --- MySql.Data.MySqlClient.MySqlException: error connecting: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached. at MySql.Data.MySqlClient.MySqlPool.GetConnection() at MySql.Data.MySqlClient.MySqlConnection.Open() at SubSonic.MySqlDataProvider.CreateConnection(String newConnectionString) at SubSonic.MySqlDataProvider.CreateConnection() at SubSonic.AutomaticConnectionScope..ctor(DataProvider provider) at SubSonic.MySqlDataProvider.GetReader(QueryCommand qry) at SubSonic.DataService.GetReader(QueryCommand cmd) at SubSonic.ReadOnlyRecord`1.LoadByParam(String columnName, Object paramValue) My connection string is as follows <connectionStrings> <add name="xx" connectionString="Data Source=xx.net; Port=3306; Database=db; UID=dbuid; PWD=xx;Pooling=true;Max Pool Size=12;Min Pool Size=2;Connection Lifetime=60" /> </connectionStrings>
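
    For comparison, SubSonic 2.x normally opens and closes a connection per query rather than per request, so there is no Global.asax equivalent needed; pool exhaustion usually points at readers returned by GetReader that are never disposed. One hedged option, assuming SubSonic 2.1 or later, is to scope a request's data access to a single shared connection:

        // sketch: reuse one connection for everything inside the scope (SubSonic 2.1+)
        using (SharedDbConnectionScope scope = new SharedDbConnectionScope())
        {
            // all SubSonic calls in here run on the scope's connection,
            // which is closed when the scope is disposed
        }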

    Read the article

  • Linq to SQL with INSTEAD OF Trigger and an Identity Column

    - by Bob Horn
    I need to use the clock on my SQL Server to write a time to one of my tables, so I thought I'd just use GETDATE(). The problem is that I'm getting an error because of my INSTEAD OF trigger. Is there a way to set one column to GETDATE() when another column is an identity column? This is the Linq-to-SQL: internal void LogProcessPoint(WorkflowCreated workflowCreated, int processCode) { ProcessLoggingRecord processLoggingRecord = new ProcessLoggingRecord() { ProcessCode = processCode, SubId = workflowCreated.SubId, EventTime = DateTime.Now // I don't care what this is. SQL Server will use GETDATE() instead. }; this.Database.Add<ProcessLoggingRecord>(processLoggingRecord); } This is the table. EventTime is what I want to have as GETDATE(). I don't want the column to be null. And here is the trigger: ALTER TRIGGER [Master].[ProcessLoggingEventTimeTrigger] ON [Master].[ProcessLogging] INSTEAD OF INSERT AS BEGIN SET NOCOUNT ON; SET IDENTITY_INSERT [Master].[ProcessLogging] ON; INSERT INTO ProcessLogging (ProcessLoggingId, ProcessCode, SubId, EventTime, LastModifiedUser) SELECT ProcessLoggingId, ProcessCode, SubId, GETDATE(), LastModifiedUser FROM inserted SET IDENTITY_INSERT [Master].[ProcessLogging] OFF; END Without getting into all of the variations I've tried, this last attempt produces this error: InvalidOperationException Member AutoSync failure. For members to be AutoSynced after insert, the type must either have an auto-generated identity, or a key that is not modified by the database after insert. I could remove EventTime from my entity, but I don't want to do that. If it was gone though, then it would be NULL during the INSERT and GETDATE() would be used. Is there a way that I can simply use GETDATE() on the EventTime column for INSERTs? Note: I do not want to use C#'s DateTime.Now for two reasons: 1. One of these inserts is generated by SQL Server itself (from another stored procedure) 2. Times can be different on different machines, and I'd like to know exactly how fast my processes are happening.
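
    The AutoSync error suggests the designer has EventTime marked to sync back after insert, even though the INSTEAD OF trigger replaces the row LINQ to SQL expects to read back. One thing to try (a sketch, assuming the generated column mapping can be edited; the property name comes from the post, the attribute values are guesses to verify):

        // ProcessLoggingRecord.EventTime mapping - stop LINQ to SQL from trying
        // to read the value back after insert or include it in concurrency checks
        [Column(Name = "EventTime", DbType = "DateTime NOT NULL",
                AutoSync = AutoSync.Never, UpdateCheck = UpdateCheck.Never)]
        public DateTime EventTime { get; set; }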

    Read the article

  • Problem connecting to postgres with Kohana 3 database module on OS X Snow Leopard

    - by Bart Gottschalk
    Environment: Mac OS X 10.6 Snow Leopard PHP 5.3 Kohana 3.0.4 When I try to configure and use a connection to a postgresql database on localhost I get the following error: ErrorException [ Warning ]: mysql_connect(): [2002] No such file or directory (trying to connect via unix:///var/mysql/mysql.sock) Here is the configuration of the database in /modules/database/config/database.php (note the third instance named 'pgsqltest') return array ( 'default' => array ( 'type' => 'mysql', 'connection' => array( /** * The following options are available for MySQL: * * string hostname * string username * string password * boolean persistent * string database * * Ports and sockets may be appended to the hostname. */ 'hostname' => 'localhost', 'username' => FALSE, 'password' => FALSE, 'persistent' => FALSE, 'database' => 'kohana', ), 'table_prefix' => '', 'charset' => 'utf8', 'caching' => FALSE, 'profiling' => TRUE, ), 'alternate' => array( 'type' => 'pdo', 'connection' => array( /** * The following options are available for PDO: * * string dsn * string username * string password * boolean persistent * string identifier */ 'dsn' => 'mysql:host=localhost;dbname=kohana', 'username' => 'root', 'password' => 'r00tdb', 'persistent' => FALSE, ), 'table_prefix' => '', 'charset' => 'utf8', 'caching' => FALSE, 'profiling' => TRUE, ), 'pgsqltest' => array( 'type' => 'pdo', 'connection' => array( /** * The following options are available for PDO: * * string dsn * string username * string password * boolean persistent * string identifier */ 'dsn' => 'mysql:host=localhost;dbname=pgsqltest', 'username' => 'postgres', 'password' => 'dev1234', 'persistent' => FALSE, ), 'table_prefix' => '', 'charset' => 'utf8', 'caching' => FALSE, 'profiling' => TRUE, ), ); And here is the code to create the database instance, create a query and execute the query: $pgsqltest_db = Database::instance('pgsqltest'); $query = DB::query(Database::SELECT, 'SELECT * FROM test')->execute(); I'm continuing to research a solution for this error but thought I'd ask to see if someone else has already found a solution. Any ideas are welcome. One other note is that I know my build of PHP can access this postgresql db since I'm able to manage the db using phpPgAdmin. But I have yet to determine what phpPgAdmin is doing differently to connect to the db than what Kohana 3 is attempting. Bart
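
    One thing that stands out (an observation, not a confirmed fix): the error message shows the MySQL driver being used, and the 'pgsqltest' group also declares a MySQL DSN, so the PDO connection never targets Postgres; the mysql_connect() wording may additionally mean the 'default' (type mysql) group is being hit somewhere. A sketch of the corrected group, assuming PHP's pdo_pgsql extension is enabled:

        'pgsqltest' => array(
            'type'       => 'pdo',
            'connection' => array(
                // use the pgsql PDO driver, not mysql
                'dsn'        => 'pgsql:host=localhost;dbname=pgsqltest',
                'username'   => 'postgres',
                'password'   => 'dev1234',
                'persistent' => FALSE,
            ),
            'table_prefix' => '',
            'charset'      => 'utf8',
            'caching'      => FALSE,
            'profiling'    => TRUE,
        ),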

    Read the article

  • Unique constraint in NHibernate

    - by Will
    I have an object with an NHibernate mapping that has a surrogate ID and a natural ID. Since the natural ID is uniquely constrained, an insert will fail if an object with the same natural ID is already in the database. My solution so far has been to manually check whether the natural ID exists in the database before trying to insert. Is there a way to tell NHibernate to do a select before insert on natural IDs/unique constraints?
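
    As far as I know, NHibernate has no built-in select-before-insert for unique constraints, but the check can at least be kept to one round trip against the natural-id property before saving. A minimal sketch, assuming an entity Foo with a natural-id property Name (the names are hypothetical):

        // look the natural id up first; only save when nothing comes back
        Foo existing = session.CreateCriteria<Foo>()
                              .Add(Restrictions.Eq("Name", candidate.Name))
                              .UniqueResult<Foo>();
        if (existing == null)
        {
            session.Save(candidate);
        }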

    Read the article

  • How to use text columns in a trigger

    - by Jeremy
    I am trying to use an update trigger in SQL Server 2000 so that when an update occurs, I insert a row into a history table, so I retain all history on the table: CREATE Trigger trUpdate_MyTable ON MyTable FOR UPDATE AS INSERT INTO [MyTableHistory] ( [AuditType] ,[MyTable_ID] ,[Inserted] ,[LastUpdated] ,[LastUpdatedBy] ,[Vendor_ID] ,[FromLocation] ,[FromUnit] ,[FromAddress] ,[FromCity] ,[FromProvince] ,[FromContactNumber] ,[Comment]) SELECT [AuditType] = 'U', D.* FROM deleted D JOIN inserted I ON I.[ID] = D.[ID] GO Of course, I get an error "Cannot use text, ntext, or image columns in the 'inserted' and 'deleted' tables." I tried joining to MyTable instead of deleted, but because the AFTER trigger fires after the update, it ends up inserting the new values into the history table, when I want the original record. How can I do this and still use text columns?
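
    One workaround worth verifying: in SQL Server 2000 the text/ntext/image restriction applies to AFTER triggers, while INSTEAD OF triggers are allowed to read those columns from inserted and deleted. A sketch with an abbreviated column list (the full lists would mirror the ones above):

        CREATE TRIGGER trUpdate_MyTable ON MyTable
        INSTEAD OF UPDATE
        AS
        BEGIN
            -- deleted still holds the pre-update rows here, text columns included
            INSERT INTO MyTableHistory (AuditType, MyTable_ID, Comment /* , ... */)
            SELECT 'U', D.ID, D.Comment /* , ... */
            FROM deleted D

            -- re-apply the update the trigger intercepted
            UPDATE M
            SET Comment = I.Comment /* , ... other updatable columns ... */
            FROM MyTable M
            JOIN inserted I ON I.ID = M.ID
        END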

    Read the article

  • IPreInsertEventListener makes object dirty, causes invalid update

    - by Groxx
    In NHibernate 2.1.2: I'm attempting to set a created timestamp on insert, as demonstrated here. I have this: public bool OnPreInsert(PreInsertEvent @event) { if (@event.Entity is IHaveCreatedTimestamp) { DateTime dt = DateTime.Now; string Created = ((IHaveCreatedTimestamp)@event.Entity).CreatedPropertyName; SetState(@event.Persister, @event.State, Created, dt); @event.Entity.GetType().GetProperty(Created).SetValue(@event.Entity, dt, null); } // return true to veto the insert return false; } The problem is that doing this (or duplicating Ayende's example precisely, or reordering or removing lines) causes an update after the insert. The insert uses the correct "now" value, @p6 = 3/8/2011 5:41:22 PM, but the update tries to set the Created column to @p6 = 1/1/0001 12:00:00 AM, which is outside MSSQL's range: Test 'CanInsertAndDeleteInserted' failed: System.Data.SqlTypes.SqlTypeException : SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM. I've tried the same thing with the PerformSaveOrUpdate listener, described here, but that also causes an update and an insert, and elsewhere there have been mentions of avoiding it because it is called regardless of if the object is dirty or not :/ Searching around elsewhere, there are plenty of claims of success after setting both the state and the object to the same value, or by using the save listener, or of the cause coming from collections or other objects, but I'm not having any success.
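
    For reference, the SetState helper used above is not shown; the usual shape (following Ayende's example, so treat this as an assumed implementation) finds the property's slot in the persister's state array, which is what actually reaches the INSERT. If the state entry and the entity property end up out of sync, NHibernate can consider the entity dirty after the insert, which would explain a follow-up UPDATE carrying the default DateTime:

        private void SetState(IEntityPersister persister, object[] state,
                              string propertyName, object value)
        {
            // locate the slot for this property in the flattened state array
            int index = Array.IndexOf(persister.PropertyNames, propertyName);
            if (index == -1)
                return;
            state[index] = value;
        }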

    Read the article

  • how to get the table primary key after saving an item via AdoNetServiceProxy

    - by Jronny
    I have a js proxy class: function ProxyClass() { this.Properties = null; this.Insert = function () { try { var service = new Sys.Data.AdoNetServiceProxy("/WcfDataService1.svc"); service.insert(this, "TheTable"); } catch (ex) { alert("error: " + ex); } }; } The insert has been working, but I need to get the primary key of TheTable right afterwards. How can I retrieve it? Thanks a lot

    Read the article

  • Comma-Separated Values Insertion in SQL Server 2005

    - by Asim Sajjad
    How can I insert values from comma-separated input parameters passed to a stored procedure? Example: exec StoredProcedureName 17,'127,204,110,198',7,'162,170,163,170'. As you can see, I have two comma-separated values in the parameter list; both will always have the same number of values, so if the first has 5 comma-separated values then the second one also has 5. The values are related by position: 127 goes with 162, 204 goes with 170, and so on. Inserting from one comma-separated value works, but how do I insert from two?
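
    SQL Server 2005 has no built-in split function, so one approach is to walk both strings in step and insert a row per pair. A sketch, assuming equal counts as stated (the procedure, table and column names are made up for illustration):

        CREATE PROCEDURE dbo.InsertPairs
            @HeaderId INT,
            @Ids      VARCHAR(MAX),
            @Values   VARCHAR(MAX)
        AS
        BEGIN
            DECLARE @id VARCHAR(50), @val VARCHAR(50), @p1 INT, @p2 INT;

            WHILE LEN(@Ids) > 0
            BEGIN
                SET @p1 = CHARINDEX(',', @Ids);
                SET @p2 = CHARINDEX(',', @Values);

                -- take the next element from each list
                SET @id  = CASE WHEN @p1 > 0 THEN LEFT(@Ids, @p1 - 1)    ELSE @Ids    END;
                SET @val = CASE WHEN @p2 > 0 THEN LEFT(@Values, @p2 - 1) ELSE @Values END;

                INSERT INTO dbo.TargetTable (HeaderId, RelatedId, RelatedValue)
                VALUES (@HeaderId, @id, @val);

                -- drop the consumed element from each list
                SET @Ids    = CASE WHEN @p1 > 0 THEN SUBSTRING(@Ids, @p1 + 1, LEN(@Ids))       ELSE '' END;
                SET @Values = CASE WHEN @p2 > 0 THEN SUBSTRING(@Values, @p2 + 1, LEN(@Values)) ELSE '' END;
            END
        END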

    Read the article

  • UpdatePanel and ModalPopup Extender

    - by rs
    I have my form designed as <asp:Panel runat="server" Id="xyz"> <asp:UpdatePanel ID="UpdatePanel1" runat="server" UpdateMode="Conditional"> <ContentTemplate> 'Gridview with edit/delete - opens detailsview(edit template) with data for editing </ContentTemplate> </asp:UpdatePanel> <asp:UpdatePanel ID="UpdatePanel2" runat="server" UpdateMode="Conditional"> <ContentTemplate> 'Hyperlink to open detailsview(insert template) for inserting records </ContentTemplate> </asp:UpdatePanel> </asp:Panel> <asp:Panel runat="server" Id="xyz1"> 'Ajax modal popup extender control </asp:Panel> It works perfectly when i click update, insert alternately, but when i click insert hyperlink (which is outside gridview) and close/cancel popup without any insert and then again click insert it doesn't call insert_onclick event. It works if i click some other button and click this button. What could be causing this issue and how can i solve it?

    Read the article

  • when to use the abstract factory pattern?

    - by hguser
    Hi: I want to know when we need to use the abstract factory pattern. Here is an example; I want to know if it is necessary. The UML diagram (not reproduced here) shows the abstract factory pattern; it is recommended by my classmate. The following is my own implementation, and I do not think it is necessary to use the pattern. Here is some of the core code: package net; import java.io.IOException; import java.util.HashMap; import java.util.Map; import java.util.Properties; public class Test { public static void main(String[] args) throws IOException, InstantiationException, IllegalAccessException, ClassNotFoundException { DaoRepository dr=new DaoRepository(); AbstractDao dao=dr.findDao("sql"); dao.insert(); } } class DaoRepository { Map<String, AbstractDao> daoMap=new HashMap<String, AbstractDao>(); public DaoRepository () throws IOException, InstantiationException, IllegalAccessException, ClassNotFoundException { Properties p=new Properties(); p.load(DaoRepository.class.getResourceAsStream("Test.properties")); initDaos(p); } public void initDaos(Properties p) throws InstantiationException, IllegalAccessException, ClassNotFoundException { String[] daoarray=p.getProperty("dao").split(","); for(String dao:daoarray) { AbstractDao ad=(AbstractDao)Class.forName(dao).newInstance(); daoMap.put(ad.getID(),ad); } } public AbstractDao findDao(String id) {return daoMap.get(id);} } abstract class AbstractDao { public abstract String getID(); public abstract void insert(); public abstract void update(); } class SqlDao extends AbstractDao { public SqlDao() {} public String getID() {return "sql";} public void insert() {System.out.println("sql insert");} public void update() {System.out.println("sql update");} } class AccessDao extends AbstractDao { public AccessDao() {} public String getID() {return "access";} public void insert() {System.out.println("access insert");} public void update() {System.out.println("access update");} } And the content of the Test.properties is just one line: dao=net.SqlDao,net.SqlDao So can anyone tell me whether the pattern is necessary in this situation?

    Read the article

  • bi-directional o2m/m2o beats uni-directional o2m in SQL efficiency?

    - by Henry
    Use these 2 persistent CFCs for example: // Cat.cfc component persistent="true" { property name="id" fieldtype="id" generator="native"; property name="name"; } // Owner.cfc component persistent="true" { property name="id" fieldtype="id" generator="native"; property name="cats" type="array" fieldtype="one-to-many" cfc="cat" cascade="all"; } When one-to-many (unidirectional) Note: inverse=true on unidirectional will yield undesired result: insert into cat (name) values (?) insert into Owner default values update cat set Owner_id=? where id=? When one-to-many/many-to-one (bi-directional, inverse=true on Owner.cats): insert into Owner default values insert into cat (name, ownerId) values (?, ?) Does that mean setting up bi-directional o2m/m2o relationship is preferred 'cause the SQL for inserting the entities is more efficient?

    Read the article

  • How to SET ARITHABORT ON for connections in Linq To SQL

    - by Laurence
    By default, the SQL connection option ARITHABORT is OFF for OLEDB connections, which I assume Linq To SQL is using. However I need it to be ON. The reason is that my DB contains some indexed views, and any insert/update/delete operations against tables that are part of an indexed view fail if the connection does not have ARITHABORT ON. Even selects against the indexed view itself fail if the WITH(NOEXPAND) hint is used (which you have to use in SQL Standard Edition to get the performance benefit of the indexed view). Is there somewhere in the data context I can specify I want this option ON? Or somewhere in code I can do it?? I have managed a clumsy workaround, but I don't like it .... I have to create a stored procedure for every select/insert/update/delete operation, and in this proc first run SET ARITHABORT ON, then exec another proc which contains the actual select/insert/update/delete. In other words the first proc is just a wrapper for the second. It doesn't work to just put SET ARITHABORT ON above the select/insert/update/delete code.
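
    There is no connection-string keyword for ARITHABORT, but the DataContext will reuse a connection that you open yourself rather than opening and closing one per command, so one lighter-weight pattern than the wrapper procs is to open the connection and issue the SET once per context. A sketch (MyDataContext stands in for the generated context class):

        using (MyDataContext db = new MyDataContext(connectionString))
        {
            // open the connection ourselves so LINQ to SQL keeps it, and the SET, alive
            db.Connection.Open();
            db.ExecuteCommand("SET ARITHABORT ON;");

            // ... queries and SubmitChanges against the indexed-view tables ...
        }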

    Read the article

  • Very interesting problem in Compact Framework

    - by Alexander
    Hi, i have a performance problem while inserting data to sqlce.I'm reading string and making inserts to My tables.In LU_MAM table,i insert 1000 records withing 8 seconds.After Mam tables i make some inserts but my largest table is CR_MUS.When i want to insert record into CR_MUS,it takes too much time.CR_MUS has 2000 records and insert takes 35 seconds.What can be reason?I use same logic in my insert functions.Do u have any idea?I use VS 2008 sp1. Dim reader As StringReader reader = New StringReader(data) cn = New SqlCeConnection(General.ConnString) cn.Open() If myTransfer.ClearTables(cn, cmd) = True Then progress = 0 '------------------------------------------ cmd = New SqlServerCe.SqlCeCommand Dim rs As SqlCeResultSet cmd.Connection = cn cmd.CommandType = CommandType.TableDirect Dim rec As SqlCeUpdatableRecord ' name of table While reader.Peek > -1 If strerr_col = "" Then satir = reader.ReadLine() ayrac = Split(satir, "|") If ayrac(0).ToString() = "LC" Then prgsbar.Maximum = Convert.ToInt32(ayrac(1)) ElseIf ayrac(0).ToString = "PPAR" Then . If ayrac(2).ToString <> General.PMVer Then ShowWaitCursor(False) txtDurum.Text = "Wrong Version" Exit Sub End If If p_POCKET_PARAMETERS = True Then cmd.CommandText = "POCKET_PARAMETERS" txtDurum.Text = "POCKET_PARAMETERS" rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable) rec = rs.CreateRecord() p_POCKET_PARAMETERS = False End If strerr_col = myVERI_AL.POCKET_PARAMETERS_I(ayrac, cmd, rs, rec) prgsbar.Value += 1 ElseIf ayrac(0).ToString() = "MAM" Then If p_LU_MAM = True Then txtDurum.Text = "LU_MAM " cmd.CommandText = "LU_MAM" rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable) rec = rs.CreateRecord() p_LU_MAM = False End If strerr_col = myVERI_AL.LU_MAM_I(ayrac, cmd, rs, rec) prgsbar.Value += 1 ElseIf ayrac(0).ToString = "KMUS" Then If p_CR_MUS = True Then cmd.CommandText = "CR_MUS" txtDurum.Text = "CR_MUS" rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable) rec = rs.CreateRecord() p_TR_KAMPANYA_MALZEME = False End If strerr_col = myVERI_AL.CR_MUS_I(ayrac, cmd, rs, rec) prgsbar.Value += 1 end while Public Function CR_KAMPANYA_MUSTERI_I(ByVal f_Line() As String, ByRef myComm As SqlCeCommand, ByRef rs As SqlCeResultSet, ByRef rec As SqlCeUpdatableRecord) As String Try rec.SetValue(0, If(f_Line(1) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(1))) rec.SetValue(1, If(f_Line(2) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(2))) rec.SetValue(2, If(f_Line(3) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(3))) rec.SetValue(3, If(f_Line(5) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(5))) rec.SetValue(4, If(f_Line(6) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(6))) rs.Insert(rec) Catch ex As Exception strerr_col = ex.Message End Try Return strerr_col End Function
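
    One detail worth checking in the posted loop (an observation, not a confirmed diagnosis): in the "KMUS" branch the flag that gets cleared is p_TR_KAMPANYA_MALZEME rather than p_CR_MUS, so cmd.ExecuteResultSet(ResultSetOptions.Updatable) would run again for every CR_MUS row, which alone could account for the 35 seconds. The branch as presumably intended:

        ElseIf ayrac(0).ToString = "KMUS" Then
            If p_CR_MUS = True Then
                cmd.CommandText = "CR_MUS"
                txtDurum.Text = "CR_MUS"
                rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                rec = rs.CreateRecord()
                p_CR_MUS = False   ' clear the matching flag so the result set is built only once
            End If
            strerr_col = myVERI_AL.CR_MUS_I(ayrac, cmd, rs, rec)
            prgsbar.Value += 1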

    Read the article

  • Inserting null fields with dbi:Pg

    - by User1
    I have a Perl script inserting data into Postgres according to a pipe-delimited text file. Sometimes, a field is null (as expected). However, Perl makes this field into an empty string and the Postgres insert statement fails. Here's a snippet of code: use DBI; #Connect to the database. $dbh=DBI->connect('dbi:Pg:dbname=mydb','mydb','mydb',{AutoCommit=>1,RaiseError=>1,PrintError=>1}); #Prepare an insert. $sth=$dbh->prepare("INSERT INTO mytable (field0,field1) SELECT ?,?"); while (<>){ #Remove the whitespace chomp; #Parse the fields. @field=split(/\|/,$_); print "$_\n"; #Do the insert. $sth->execute($field[0],$field[1]); } And if the input is: a|1 b| c|3 EDIT: Use this input instead. a|1|x b||x c|3|x It will fail at b|. DBD::Pg::st execute failed: ERROR: invalid input syntax for integer: "" I just want it to insert a null on field1 instead. Any ideas? EDIT: I simplified the input at the last minute. The old input actually made it work for some reason. So now I changed the input to something that will make the program fail. Also note that field1 is a nullable integer datatype.
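
    DBI binds undef as SQL NULL, so mapping empty fields to undef before execute avoids the integer cast error. A minimal sketch of the loop body (split's -1 limit keeps trailing empty fields, which matters for lines like "b|"):

        while (<>) {
            chomp;
            # -1 keeps trailing empty fields
            my @field = split /\|/, $_, -1;

            # empty string -> undef, which DBI binds as NULL
            my @vals = map { defined($_) && $_ ne '' ? $_ : undef } @field[0, 1];

            $sth->execute(@vals);
        }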

    Read the article

  • Issue with XSL Criteria

    - by Rachel
    I am using the below piece of XSL to select the id of the text nodes whose content has a given index. This index value in input will be relative to a spcified node whose id value is known. The criteria to select the text node is, The text node content should contain a index say 'i' relative to node say 'n' whose id value i know. 'i' and 'id of n' is got as index and nodeName from the input param as seen in the xsl. Node 'd1e5' has the text content whose index ranges from 1 to 33. When i give an index value greater than 33 i want the below criteria to fail but it does not, [sum((preceding::text(), .)[normalize-space()][. >> //*[@id=$nodeName]]/string-length(.)) ge $index] Input xml: <?xml version="1.0" encoding="UTF-8"?> <html xmlns="http://www.w3.org/1999/xhtml" id="d1e1"> <head id="d1e3"> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <title id="d1e5">Every document must have a title</title> </head> <body id="d1e9"> <h1 id="d1e11" align="center">Very Important Heading</h1> <p id="d1e13">Since this is just a sample, I won't put much text here.</p> </body> </html> XSL code used: <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xsd="http://www.w3.org/2001/XMLSchema" exclude-result-prefixes="xsd" version="2.0"> <xsl:param name="insert-file" as="node()+"> <insert-data><data index="1" nodeName="d1e5"></data><data index="34" nodeName="d1e5"></data></insert-data> </xsl:param> <xsl:param name="nodeName" as="xsd:string" /> <xsl:variable name="main-root" as="document-node()" select="/"/> <xsl:variable name="insert-data" as="element(data)*"> <xsl:for-each select="$insert-file/insert-data/data"> <xsl:sort select="xsd:integer(@index)"/> <xsl:variable name="index" select="xsd:integer(@index)" /> <xsl:variable name="nodeName" select="@nodeName" /> <data text-id="{generate-id($main-root/descendant::text()[sum((preceding::text(), .)[normalize-space()][. >> //*[@id=$nodeName]]/string-length(.)) ge $index][1])}"> </data> </xsl:for-each> </xsl:variable> <xsl:template match="/"> <Output> <xsl:copy-of select="$insert-data" /> </Output> </xsl:template> </xsl:stylesheet> Actual output: <?xml version="1.0" encoding="UTF-8"?> <Output> <data text-id="d1t8"/> <data text-id="d1t14"/> </Output> Expected output: <?xml version="1.0" encoding="UTF-8"?> <Output> <data text-id="d1t8"/> </Output> This solution works fine if index lies between 1 and 33. Any index value greater that 33 causes incorrect text nodes to get selected. I could not understand why the text node 'd1t14' is getting selected. Please share your thoughts.

    Read the article

  • Maven not setting classpath for dependencies properly

    - by Matthew
    OS name: "linux" version: "2.6.32-27-generic" arch: "i386" Family: "unix" Apache Maven 2.2.1 (r801777; 2009-08-06 12:16:01-0700) Java version: 1.6.0_20 I am trying to use the mysql dependency in with maven in ubuntu. If I move the "mysql-connector-java-5.1.14.jar" file that maven downloaded into my $JAVA_HOME/jre/lib/ext/ folder, everything is fine when I run the jar. I think I should be able to just specify the dependency in the pom.xml file and maven should take care of setting the classpath for the dependency jars automatically. Is this incorrect? My pom.xml file looks like this: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.ion.common</groupId> <artifactId>TestPreparation</artifactId> <version>1.0-SNAPSHOT</version> <packaging>jar</packaging> <name>TestPrep</name> <url>http://maven.apache.org</url> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> </properties> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jar-plugin</artifactId> <configuration> <archive> <manifest> <addClasspath>true</addClasspath> <mainClass>com.ion.common.App</mainClass> </manifest> </archive> </configuration> </plugin> </plugins> </build> <dependencies> <!-- JUnit testing dependency --> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> <!-- MySQL database driver --> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.14</version> <scope>compile</scope> </dependency> </dependencies> </project> The command "mvn package" builds it without any problems, and I can run it, but when the application attempts to access the database, this error is presented: java.lang.ClassNotFoundException: com.mysql.jdbc.Driver at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:321) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) at java.lang.ClassLoader.loadClass(ClassLoader.java:266) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:186) at com.ion.common.Functions.databases(Functions.java:107) at com.ion.common.App.main(App.java:31) The line it is failing on is: Class.forName("com.mysql.jdbc.Driver"); Can anyone tell me what I'm doing wrong or how to fix it?

    Read the article

  • Using map/reduce for mapping the properties in a collection

    - by And
    Update: follow-up to MongoDB Get names of all keys in collection. As pointed out by Kristina, one can use Mongodb 's map/reduce to list the keys in a collection: db.things.insert( { type : ['dog', 'cat'] } ); db.things.insert( { egg : ['cat'] } ); db.things.insert( { type : [] }); db.things.insert( { hello : [] } ); mr = db.runCommand({"mapreduce" : "things", "map" : function() { for (var key in this) { emit(key, null); } }, "reduce" : function(key, stuff) { return null; }}) db[mr.result].distinct("_id") //output: [ "_id", "egg", "hello", "type" ] As long as we want to get only the keys located at the first level of depth, this works fine. However, it will fail retrieving those keys that are located at deeper levels. If we add a new record: db.things.insert({foo: {bar: {baaar: true}}}) And we run again the map-reduce +distinct snippet above, we will get: [ "_id", "egg", "foo", "hello", "type" ] But we will not get the bar and the baaar keys, which are nested down in the data structure. The question is: how do I retrieve all keys, no matter their level of depth? Ideally, I would actually like the script to walk down to all level of depth, producing an output such as: ["_id","egg","foo","foo.bar","foo.bar.baaar","hello","type"] Thank you in advance!
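
    One way to reach the nested keys is to let the map function recurse into sub-documents, emitting dotted paths as it goes. A sketch (the constructor check is there to avoid walking into arrays, dates and ObjectIds; adjust as needed):

        mr = db.runCommand({
            "mapreduce": "things",
            "map": function () {
                function emitKeys(prefix, obj) {
                    for (var key in obj) {
                        var path = prefix ? prefix + "." + key : key;
                        emit(path, null);
                        var val = obj[key];
                        // descend only into plain sub-documents
                        if (val && typeof val === "object" && val.constructor === Object) {
                            emitKeys(path, val);
                        }
                    }
                }
                emitKeys("", this);
            },
            "reduce": function (key, stuff) { return null; }
        });
        db[mr.result].distinct("_id");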

    Read the article

  • SCD2 + Merge Statement + SQL Server

    - by Nev_Rahd
    I am trying work out with MERGE statment to Insert / Update Dimension Table of Type SCD2 My source is a Table var to Merge with Dimension table. My MERGE statement is throwing an error as: The target table 'DM.DATA_ERROR.ERROR_DIMENSION' of the INSERT statement cannot be on either side of a (primary key, foreign key) relationship when the FROM clause contains a nested INSERT, UPDATE, DELETE, or MERGE statement. Found reference constraint 'FK_ERROR_DIMENSION_to_AUDIT_CreatedBy'. My MERGE Statement: DECLARE @DATAERROROBJECT AS [ERROR_DIMENSION] INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION SELECT ERROR_CODE, DATA_STREAM_ID, [ERROR_SEVERITY], DATA_QUALITY_RATING, ERROR_LONG_DESCRIPTION, ERROR_DESCRIPTION, VALIDATION_RULE, ERROR_TYPE, ERROR_CLASS, VALID_FROM, VALID_TO, CURR_FLAG, CREATED_BY_AUDIT_SK, UPDATED_BY_AUDIT_SK FROM (MERGE DM.DATA_ERROR.ERROR_DIMENSION ED USING @DATAERROROBJECT OBJ ON(ED.ERROR_CODE = OBJ.ERROR_CODE AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID) WHEN NOT MATCHED THEN INSERT VALUES( OBJ.ERROR_CODE ,OBJ.DATA_STREAM_ID ,OBJ.[ERROR_SEVERITY] ,OBJ.DATA_QUALITY_RATING ,OBJ.ERROR_LONG_DESCRIPTION ,OBJ.ERROR_DESCRIPTION ,OBJ.VALIDATION_RULE ,OBJ.ERROR_TYPE ,OBJ.ERROR_CLASS ,GETDATE() ,'9999-12-13' ,'Y' ,1 ,1 ) WHEN MATCHED AND ED.CURR_FLAG = 'Y' AND ( ED.[ERROR_SEVERITY] <> OBJ.[ERROR_SEVERITY] OR ED.[DATA_QUALITY_RATING] <> OBJ.[DATA_QUALITY_RATING] OR ED.[ERROR_LONG_DESCRIPTION] <> OBJ.[ERROR_LONG_DESCRIPTION] OR ED.[ERROR_DESCRIPTION] <> OBJ.[ERROR_DESCRIPTION] OR ED.[VALIDATION_RULE] <> OBJ.[VALIDATION_RULE] OR ED.[ERROR_TYPE] <> OBJ.[ERROR_TYPE] OR ED.[ERROR_CLASS] <> OBJ.[ERROR_CLASS] ) THEN UPDATE SET ED.CURR_FLAG = 'N', ED.VALID_TO = GETDATE() OUTPUT $ACTION ACTION_OUT, OBJ.ERROR_CODE ERROR_CODE, OBJ.DATA_STREAM_ID DATA_STREAM_ID, OBJ.[ERROR_SEVERITY] [ERROR_SEVERITY], OBJ.DATA_QUALITY_RATING DATA_QUALITY_RATING, OBJ.ERROR_LONG_DESCRIPTION ERROR_LONG_DESCRIPTION, OBJ.ERROR_DESCRIPTION ERROR_DESCRIPTION, OBJ.VALIDATION_RULE VALIDATION_RULE, OBJ.ERROR_TYPE ERROR_TYPE, OBJ.ERROR_CLASS ERROR_CLASS, GETDATE() VALID_FROM, '9999-12-31' VALID_TO, 'Y' CURR_FLAG, 555 CREATED_BY_AUDIT_SK, 555 UPDATED_BY_AUDIT_SK ) AS MERGE_OUT WHERE MERGE_OUT.ACTION_OUT = 'UPDATE'; What am I doing wrong ?
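
    The restriction is on the outer INSERT target taking part in a FK relationship while the FROM clause nests a MERGE, so one workaround to try (sketched with the column lists cut down; the real lists stay exactly as posted) is to land the MERGE output in a table variable first and insert into ERROR_DIMENSION from that in a separate statement:

        DECLARE @MERGE_OUT TABLE
        (
            ACTION_OUT     NVARCHAR(10),
            ERROR_CODE     INT,
            DATA_STREAM_ID INT
            -- ... the remaining ERROR_DIMENSION columns ...
        );

        -- step 1: the MERGE as before, but its OUTPUT lands in the table variable
        INSERT INTO @MERGE_OUT (ACTION_OUT, ERROR_CODE, DATA_STREAM_ID /* , ... */)
        SELECT ACTION_OUT, ERROR_CODE, DATA_STREAM_ID /* , ... */
        FROM ( MERGE DM.DATA_ERROR.ERROR_DIMENSION ED
               USING @DATAERROROBJECT OBJ
                  ON ED.ERROR_CODE = OBJ.ERROR_CODE
                 AND ED.DATA_STREAM_ID = OBJ.DATA_STREAM_ID
               WHEN NOT MATCHED THEN INSERT VALUES ( /* ... as posted ... */ )
               WHEN MATCHED /* ... conditions as posted ... */ THEN
                   UPDATE SET ED.CURR_FLAG = 'N', ED.VALID_TO = GETDATE()
               OUTPUT $ACTION ACTION_OUT, OBJ.ERROR_CODE ERROR_CODE,
                      OBJ.DATA_STREAM_ID DATA_STREAM_ID /* , ... */
             ) AS MERGE_OUT
        WHERE MERGE_OUT.ACTION_OUT = 'UPDATE';

        -- step 2: a plain insert into the FK-constrained dimension
        INSERT INTO DM.DATA_ERROR.ERROR_DIMENSION (ERROR_CODE, DATA_STREAM_ID /* , ... */)
        SELECT ERROR_CODE, DATA_STREAM_ID /* , ... */
        FROM @MERGE_OUT;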

    Read the article

  • Stored procedure for importing a txt file into a SQL Server db

    - by Iulian
    I have to insert new records into a database every day from a text file (tab-delimited). I'm trying to make this into a stored procedure with a parameter for the file to read data from. CREATE PROCEDURE dbo.UpdateTable @FilePath BULK INSERT TMP_UPTable FROM @FilePath WITH ( FIRSTROW = 2, MAXERRORS = 0, FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n' ) RETURN Then I would call this stored procedure from my code (C#), specifying the file to insert. This is obviously not working, so how can I do it?
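
    The missing piece is that BULK INSERT does not accept a variable for the file name, so the statement has to be built as dynamic SQL inside the procedure (the parameter also needs a type, and the usual caution applies since the path is concatenated into the statement). A sketch:

        CREATE PROCEDURE dbo.UpdateTable
            @FilePath NVARCHAR(260)
        AS
        BEGIN
            DECLARE @sql NVARCHAR(MAX);

            -- BULK INSERT cannot take @FilePath directly, so build the statement text
            SET @sql = N'BULK INSERT TMP_UPTable FROM ''' + @FilePath + N'''
                         WITH (FIRSTROW = 2, MAXERRORS = 0,
                               FIELDTERMINATOR = ''\t'', ROWTERMINATOR = ''\n'');';

            EXEC sp_executesql @sql;
        END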

    Read the article

  • How to use OUTPUT statement inside a SQL trigger?

    - by Jeff Meatball Yang
    From MSDN: If a statement that includes an OUTPUT clause is used inside the body of a trigger, table aliases must be used to reference the trigger inserted and deleted tables to avoid duplicating column references with the INSERTED and DELETED tables associated with OUTPUT. Can someone give me an example of this? I'm not sure if this is correct: create trigger MyInterestingTrigger on TransactionalTable123 AFTER insert, update, delete AS declare @batchId table(batchId int) insert SomeBatch (batchDate, employeeId) output SomeBatch.INSERTED.batchId into @batchId insert HistoryTable select b.batchId, i.col1, i.col2, i.col3 from TransactionalTable123.inserted i cross join @batchId b
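
    Not quite, as far as I can tell: the OUTPUT clause's own INSERTED/DELETED keep their normal form; it is the trigger's inserted and deleted tables that must be referenced through aliases, and they are not qualified with the table name. A corrected sketch (the values inserted into SomeBatch are placeholders, since the post omits them):

        CREATE TRIGGER MyInterestingTrigger
        ON TransactionalTable123
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            DECLARE @batchId TABLE (batchId INT);

            -- INSERTED here belongs to the OUTPUT clause of this INSERT statement
            INSERT INTO SomeBatch (batchDate, employeeId)
            OUTPUT INSERTED.batchId INTO @batchId
            VALUES (GETDATE(), 0);              -- placeholder values

            -- the trigger's inserted table is only referenced through the alias i
            INSERT INTO HistoryTable
            SELECT b.batchId, i.col1, i.col2, i.col3
            FROM inserted AS i
            CROSS JOIN @batchId AS b;
        END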

    Read the article

  • Stored Procedure in Entity Framework

    - by kamal
    Hi, I have added a stored procedure to my Entity Framework model and imported the functions in the .edmx. Is it required to map all three functions (insert, update, and delete) to a table? I have tried with insert alone and also with all three, but I still can't get the name of the stored procedure in the connection string. To be clear about what I have done: I added the stored procedure, imported the functions in the model browser, and mapped the insert, update and delete functions to the table, with a return type only for insert and update. Still I can't get the name of the SP in the connection string. Please let me know how I can resolve this issue. Thanks in advance, Kamal.

    Read the article

  • replace the line to add some text

    - by shantanuo
    The MySQL dump backup file has the following line... # head -40 backup20-Apr-2010-07-32.sql | grep 'CHANGE MASTER TO ' -- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000068', MASTER_LOG_POS=176357756; a) I need to complete the statement with parameters such as the master host, user and password. b) I also need to remove the leading comment "--". The line should look something like this... CHANGE MASTER TO MASTER_HOST='111.222.333.444', MASTER_USER='slave_user', MASTER_PASSWORD='slave_user', MASTER_LOG_FILE='mysql-bin.000068', MASTER_LOG_POS=176357756;
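
    Both steps can be done in one substitution on the dump, since the extra parameters just need to be spliced in where the leading comment is stripped. A sketch with sed (host and credentials are the example values from the question):

        sed -i "s|^-- CHANGE MASTER TO |CHANGE MASTER TO MASTER_HOST='111.222.333.444', MASTER_USER='slave_user', MASTER_PASSWORD='slave_user', |" backup20-Apr-2010-07-32.sql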

    Read the article

  • Oracle global lock across processes

    - by Jimm
    I would like to synchronize access to a particular insert. Hence, if multiple applications execute this one insert, the inserts should happen one at a time. The reason behind the synchronization is that there should only be ONE instance of this entity. If multiple applications try to insert the same entity, only one should succeed and the others should fail. One option considered was to create a composite unique key that would uniquely identify the entity and rely on the unique constraint. For some reason, the DBA department rejected this idea. The other option that came to my mind was to create a stored proc for the insert; if the stored proc can obtain a global lock, then multiple applications invoking the same stored proc, though in their separate database sessions, would have their inserts serialized. My question is: is it possible for a stored proc in Oracle 10/11 to obtain such a lock, and any pointers to documentation would be helpful.
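
    One way a procedure can take a database-wide, cross-session lock in Oracle 10g/11g is DBMS_LOCK (the owning schema needs an explicit EXECUTE grant on it). A sketch, with the table and column names invented for illustration:

        CREATE OR REPLACE PROCEDURE insert_entity_serialized(p_name IN VARCHAR2) AS
          l_lockhandle VARCHAR2(128);
          l_status     INTEGER;
        BEGIN
          -- turn an application-chosen lock name into a handle
          DBMS_LOCK.ALLOCATE_UNIQUE(lockname => 'INSERT_ENTITY_LOCK',
                                    lockhandle => l_lockhandle);

          -- exclusive lock, wait up to 10 seconds, released automatically on commit
          l_status := DBMS_LOCK.REQUEST(lockhandle        => l_lockhandle,
                                        lockmode          => DBMS_LOCK.X_MODE,
                                        timeout           => 10,
                                        release_on_commit => TRUE);
          IF l_status NOT IN (0, 4) THEN   -- 0 = granted, 4 = already held by this session
            RAISE_APPLICATION_ERROR(-20001, 'Could not obtain insert lock, status ' || l_status);
          END IF;

          -- only one session at a time reaches this point
          INSERT INTO my_entity (name)
          SELECT p_name FROM dual
          WHERE NOT EXISTS (SELECT 1 FROM my_entity WHERE name = p_name);

          COMMIT;  -- also releases the lock
        END;
        /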

    Read the article
