Search Results

Search found 20931 results on 838 pages for 'mysql insert'.

Page 457/838

  • MEB Support to NetBackup MMS

    - by Hema Sridharan
    In MySQL Enterprise Backup 3.6, a new option was introduced to support backup to tape via the SBT interface. SBT stands for System Backup to Tape, an Oracle API that helps to perform backup and restore jobs via media management software such as Oracle's Secure Backup (OSB). Other storage managers, like IBM's Tivoli Storage Manager (TSM) and Symantec's NetBackup (NB), are also supported by MEB, but we don't guarantee that they will function as expected for every release. MEB supports SBT API version 2.0. In this blog, I am primarily going to focus on the interface between MEB and Symantec's NB. If you are using tapes for backup, ensure that the tape library and tape drives are compatible. Test Setup: 1. Install NB 7.5 master and media servers on Linux. (NB 7.1 can also be used, but for testing purposes I used NB 7.5.) 2. Install MEB 3.8, also on Linux. 3. Install the NB admin console on your Windows desktop and configure the NB master server from there. Note: Ensure that you have root user permission to install NetBackup. Configuration Steps for MEB and NB: Once MEB and NB are installed, ensure that NB is linked to MEB by specifying the library /usr/openv/netbackup/bin/libobk.so64 on the mysqlbackup command line using --sbt-lib-path. Configure the NB master server from the Windows console; that is, configure the storage units by specifying the storage unit name, disk type, media server name, etc. Create NetBackup policies that are user selectable, but please make sure that the policy type is "Oracle". Define the clients where MEB will be executed; sometimes this will be a different host where MEB runs, and sometimes the same media server where NB and the tapes are attached. Once the installation and configuration steps are performed for MEB and NB, the next part is the actual execution. MEB should be run as a single-file backup using the --backup-image option with the prefix sbt: (a tag which tells MEB that it should stream the backup image through the SBT interface), which is sent to the NB client via the SBT interface. The resulting backup image is stored where NB stores the images that it backs up. The following diagram shows how MEB interacts with the MMS through the SBT interface. Backup: the following parameters should also be ready for the execution. --sbt-lib-path: path to the SBT library specific to the NetBackup MMS; the SBT lib for NetBackup is /usr/openv/netbackup/bin/libobk.so64. --sbt-environment: environment variables that must be defined specific to NetBackup.
    In our example below, we use NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB and ORACLE_HOME=/export/home2/tmp/hema/mysql-server/
      ./mysqlbackup --port=13000 --protocol=tcp --user=root --backup-image=sbt:bkpsbtNB --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --sbt-environment="NB_ORA_SERV=myserver.com, NB_ORA_CLIENT=myserver.com, NB_ORA_POLICY=NBU-MEB, ORACLE_HOME=/export/home2/tmp/hema/mysql-server/" --backup-dir=/export/home2/tmp/hema/MEB_bkdir/ backup-to-image
    Once the backup completes successfully, it should appear in the Activity Monitor in the NetBackup console. For restore, the image contents have to be extracted using the image-to-backup-dir command, and then the apply-log and copy-back steps are applied.
      ./mysqlbackup --sbt-lib-path=/usr/openv/netbackup/bin/libobk.so64 --backup-dir=/export/home2/tmp/hema/NBMEB/ --backup-image=sbt:bkpsbtNB image-to-backup-dir
    Now apply the logs as usual, shut down the server, perform the restore, then restart the server and check the data contents.
      ./mysqlbackup --backup-dir=/export/home2/tmp/hema/NBMEB/ apply-log
      ./mysqlbackup --datadir=/export/home2/tmp/hema/mysql-server/mysql-5.5-meb-repo/mysql-test/var/mysqld.1/data/ --backup-dir=/export/home2/tmp/hema/MEB_bkpdir/ --innodb_log_files_in_group=2 --innodb_log_file_size=5M --user=root --port=13000 --protocol=tcp copy-back
    The NB console should show the 'Restore' job as done; if you don't see that, there is something wrong with MEB or NetBackup. You can also refer to more detailed steps on MEB and NB integration in the whitepaper here.

    Read the article

  • Speaker at the German Visual FoxPro Developer Conference 2003

    The following is an excerpt from the UniversalThread conference coverage of the German Visual FoxPro Developer Conference 2003, written by Hans-Otto Lochmann and Armin Neudert. Track: Visual FoxPro and Linux. This track consists of 4 sessions presented on one day in one sequence. Originally the Linux portion of this track was to be presented by Whil Hentzen, the well-known publisher, book author and conference speaker. Unfortunately, illness prevented him from joining this DevCon. Rainer got the bad news only early on Friday morning. It was definitely too late to find a replacement among the already invited speakers on such short notice. So Rainer decided to take over these "three sessions in a row" by himself with "a little help from his friends". He hired a coach for the weekend and prepared slides and sessions by himself - the originally planned slides and session material were still in the USA. Rainer barely survived an endless disaster of C0000005's due to various wrong configuration settings... At the presentation, Jochen Kirstätter helped massively with technical details regarding Linux, whereas Rainer did the slides and the presentation. Gerold Lübben then presented the MySQL part - as originally planned. This track concentrated on how to run Visual FoxPro applications on Linux machines with the help of a Windows emulator like Wine. As more and more people use Linux machines in production (and not just for running servers), more and more invitations to bid for a development job include the requirement to run the application in a Linux environment. If you would like to participate in such submissions, then you should get familiar with the open source operating system Linux and the open source database system MySQL. [...] These sessions provided a broad, complete overview of where Linux fits into the current computing landscape from the perspective of a VFP developer, where VFP can be used with Linux, and a conceptual plan for how to approach the incorporation of Linux into your day-to-day work. In order for you to be able to work with a Linux back end, you're going to need to know something about how Linux works. The best way involves a two-step process: First, plunk down a Linux workstation on your desk next to your Windows machine and develop some experience with the new OS. Second, once you have a basic level of comfort with Linux, gained through your experience on a workstation, leverage that knowledge and learn to connect to a Linux server from your Windows machine. This track showed both of these processes: what you can expect when you set up your Linux workstation, how to set it up, how to connect to your Windows network, how to fit VFP into the mix, and even how you could use it to replace your Windows workstation in some cases. This track also demonstrated how to connect to an existing Linux server, running MySQL or another back end, and how to get your VFP apps talking to that back-end data. It also showed both of the positions you can take: Rainer disliked it wholeheartedly (the bad-guy position in these talks) and Jochen loved it (the good-guy and "typical Linux techie" position we all love). These opposite positions lasted for three sessions, and both sides were shown with their pros and cons in live and lively discussions between the speakers (club banging was forbidden). Gerold Luebben showed how Visual FoxPro and MySQL can work together. MySQL is one of the most well-known open source databases, available for nearly all platforms.
    Particularly in eBusiness, MySQL is well positioned and well known for its performance and stability. Still, we like Visual FoxPro more - for sure. [...]

    Read the article

  • Error Handling in T-SQL Scalar Function

    - by hydroparadise
    Ok.. this question could easily take multiple paths, so I will hit the more specific path first. While working with SQL Server 2005, I'm trying to create a scalar function that acts as a 'TryCast' from varchar to int. Where I encounter a problem is when I add a TRY block in the function; CREATE FUNCTION u_TryCastInt ( @Value as VARCHAR(MAX) ) RETURNS Int AS BEGIN DECLARE @Output AS Int BEGIN TRY SET @Output = CONVERT(Int, @Value) END TRY BEGIN CATCH SET @Output = 0 END CATCH RETURN @Output END Turns out there's all sorts of things wrong with this statement, including "Invalid use of side-effecting or time-dependent operator in 'BEGIN TRY' within a function" and "Invalid use of side-effecting or time-dependent operator in 'END TRY' within a function". I can't seem to find any examples of using try statements within a scalar function, which got me thinking: is error handling in a function possible at all? The goal here is to make a robust version of the Convert or Cast functions to allow a SELECT statement to carry through despite conversion errors. For example, take the following; CREATE TABLE tblTest ( f1 VARCHAR(50) ) GO INSERT INTO tblTest(f1) VALUES('1') INSERT INTO tblTest(f1) VALUES('2') INSERT INTO tblTest(f1) VALUES('3') INSERT INTO tblTest(f1) VALUES('f') INSERT INTO tblTest(f1) VALUES('5') INSERT INTO tblTest(f1) VALUES('1.1') SELECT CONVERT(int,f1) AS f1_num FROM tblTest DROP TABLE tblTest It never reaches the point of dropping the table, because execution gets hung up on trying to convert 'f' to an integer. I want to be able to do something like this; SELECT u_TryCastInt(f1) AS f1_num FROM tblTest f1_num __________ 1 2 3 0 5 0 Any thoughts on this? Is there anything that exists that handles this? Also, I would like to expand the conversation to support SQL Server 2000, since TRY blocks are not an option in that scenario. Thanks in advance.
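    A minimal, hedged sketch of a TRY-free variant that should also work on SQL Server 2000 is to validate the string before converting instead of catching the error. The function name, length limit and LIKE pattern below are illustrative choices rather than anything from the original post, and it only accepts unsigned whole numbers:

      -- Hypothetical alternative: validate first, convert only when it is safe to do so.
      CREATE FUNCTION dbo.u_TryCastInt2 (@Value VARCHAR(8000))
      RETURNS INT
      AS
      BEGIN
          DECLARE @Output INT
          SET @Output = 0
          -- Accept only 1-9 digit strings made purely of 0-9 (rejects 'f', '1.1', '' and avoids overflow).
          IF @Value NOT LIKE '%[^0-9]%' AND LEN(@Value) BETWEEN 1 AND 9
              SET @Output = CONVERT(INT, @Value)
          RETURN @Output
      END

    Scalar functions have to be called with their schema prefix, e.g. SELECT dbo.u_TryCastInt2(f1) AS f1_num FROM tblTest.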

    Read the article

  • Error creating Rails DB using rake db:create

    - by Simon
    Hi - I'm attempting to get my first "hello world" Rails example going using the Rails getting started guide on my OS X 10.6.3 box. When I go to execute the first rake db:create command (I'm using MySQL) I get: simon@/Users/simon/source/rails/blog/config: rake db:create (in /Users/simon/source/rails/blog) Couldn't create database for {"reconnect"=>false, "encoding"=>"utf8", "username"=>"root", "adapter"=>"mysql", "database"=>"blog_development", "pool"=>5, "password"=>nil, "socket"=>"/opt/local/var/run/mysql5/mysqld.sock"}, charset: utf8, collation: utf8_unicode_ci (if you set the charset manually, make sure you have a matching collation) I found plenty of Stack Overflow questions addressing this problem with the following advice: Verify that the user and password are correct (I'm running with no password for root on my dev box). Verify that the socket is correct - I can cat the socket, so I assume it's correct. Verify that the user can create a DB (as you can see, root can connect and create this DB no problem): simon@/Users/simon/source/rails/blog/config: mysql -uroot -hlocalhost Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 16 Server version: 5.1.45 Source distribution Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> create database blog_development; Query OK, 1 row affected (0.00 sec) Any idea on what might be going on here?
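    If the credentials and socket really are fine, the remaining suspect in that message is the charset/collation pair; a hedged thing to try is creating the database by hand with exactly what Rails is asking for and confirming the collation exists in this MySQL build (database name taken from the config above):

      -- Create the development database manually, then re-run the app or migrations.
      CREATE DATABASE blog_development CHARACTER SET utf8 COLLATE utf8_unicode_ci;

      -- Confirm the requested collation is actually available on this server.
      SHOW COLLATION LIKE 'utf8_unicode_ci';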

    Read the article

  • Two different assembly versions "The located assembly's manifest definition does not match the assem

    - by snicker
    I have a project that I am working on that requires the use of the Mysql Connector for NHibernate, (Mysql.Data.dll). I also want to reference another project (Migrator.NET) in the same project. The problem is even though Migrator.NET is built with the reference to MySql.Data with specific version = false, it still tries to reference the older version of MySql.Data that the library was built with instead of just using the version that is there.. and I get the exception listed in the title: ---- System.IO.FileLoadException : Could not load file or assembly 'MySql.Data, Version=1.0.10.1, Culture=neutral, PublicKeyToken=c5687fc88969c44d' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040) The version I am referencing in the main assembly is 6.1.3.0. How do I get the two assemblies to cooperate? Edit: For those of you specifying Assembly Binding Redirection, I have set this up: <?xml version="1.0" encoding="utf-8" ?> <configuration> <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <dependentAssembly> <assemblyIdentity name="MySql.Data" publicKeyToken="c5687fc88969c44d" culture="neutral"/> <bindingRedirect oldVersion="0.0.0.0-6.1.3.0" newVersion="6.1.3.0"/> </dependentAssembly> </assemblyBinding> </runtime> </configuration> I am referencing this the main assembly in another project and still getting the same errors. If my main assembly is copied local to be used in the other assembly, will it use the settings in app.config or does this information have to be included with every application or assembly that references my main assembly?

    Read the article

  • SQL select descendants of a row

    - by Joey Adams
    Suppose a tree structure is implemented in SQL like this: CREATE TABLE nodes ( id INTEGER PRIMARY KEY, parent INTEGER -- references nodes(id) ); Although cycles can be created in this representation, let's assume we never let that happen. The table will only store a collection of roots (records where parent is null) and their descendants. The goal is to, given an id of a node on the table, find all nodes that are descendants of it. A is a descendant of B if either A's parent is B or A's parent is a descendant of B. Note the recursive definition. Here is some sample data: INSERT INTO nodes VALUES (1, NULL); INSERT INTO nodes VALUES (2, 1); INSERT INTO nodes VALUES (3, 2); INSERT INTO nodes VALUES (4, 3); INSERT INTO nodes VALUES (5, 3); INSERT INTO nodes VALUES (6, 2); which represents: 1 `-- 2 |-- 3 | |-- 4 | `-- 5 | `-- 6 We can select the (immediate) children of 1 by doing this: SELECT a.* FROM nodes AS a WHERE parent=1; We can select the children and grandchildren of 1 by doing this: SELECT a.* FROM nodes AS a WHERE parent=1 UNION ALL SELECT b.* FROM nodes AS a, nodes AS b WHERE a.parent=1 AND b.parent=a.id; We can select the children, grandchildren, and great grandchildren of 1 by doing this: SELECT a.* FROM nodes AS a WHERE parent=1 UNION ALL SELECT b.* FROM nodes AS a, nodes AS b WHERE a.parent=1 AND b.parent=a.id UNION ALL SELECT c.* FROM nodes AS a, nodes AS b, nodes AS c WHERE a.parent=1 AND b.parent=a.id AND c.parent=b.id; How can a query be constructed that gets all descendants of node 1 rather than those at a finite depth? It seems like I would need to create a recursive query or something. I'd like to know if such a query would be possible using SQLite. However, if this type of query requires features not available in SQLite, I'm curious to know if it can be done in other SQL databases.
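    For what it's worth, this is exactly the case recursive common table expressions were designed for: SQLite gained WITH RECURSIVE in version 3.8.3, and SQL Server, PostgreSQL and others have equivalents. A sketch against the nodes table above (the CTE name is arbitrary):

      -- All descendants of node 1, at any depth.
      WITH RECURSIVE descendants(id, parent) AS (
          SELECT id, parent FROM nodes WHERE parent = 1   -- immediate children
          UNION ALL
          SELECT n.id, n.parent
          FROM nodes AS n
          JOIN descendants AS d ON n.parent = d.id        -- children of rows found so far
      )
      SELECT * FROM descendants;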

    Read the article

  • google maps api : internal server error when inserting a feature

    - by user142764
    Hi, I'm trying to insert features on a custom Google map: I use the sample code from the doc, but I get a ServiceException (Internal server error) when I call the service's insert method. Here is what I do: I create a map and get the resulting MapEntry object: myMapEntry = (MapEntry) service.insert(mapUrl, myEntry); This works fine: I can see the map I created in "My Maps" on Google. I use the feature feed URL from the map to insert a feature: final URL featureEditUrl = myMapEntry.getFeatureFeedUrl(); I create a KML string using the sample from the doc: String kmlStr = "<Placemark xmlns=\"http://www.opengis.net/kml/2.2\">" + "<name>Aunt Joanas Ice Cream Shop</name>" + "<Point>" + "<coordinates>-87.74613826475604,41.90504663195118,0</coordinates>" + "</Point></Placemark>"; And when I call the insert method I get an internal server error. I must be doing something wrong, but I can't see what; can anybody help? Here is the complete code I use: public void doCreateFeaturesFormap(MapEntry myMap) throws ServiceException, IOException { final URL featureEditUrl = myMap.getFeatureFeedUrl(); FeatureEntry featureEntry = new FeatureEntry(); try { String kmlStr = "<Placemark xmlns=\"http://www.opengis.net/kml/2.2\">" + "<name>Aunt Joanas Ice Cream Shop</name>" + "<Point>" + "<coordinates>-87.74613826475604,41.90504663195118,0</coordinates>" + "</Point></Placemark>"; XmlBlob kml = new XmlBlob(); kml.setFullText(kmlStr); featureEntry.setKml(kml); featureEntry.setTitle(new PlainTextConstruct("Feature Title")); } catch (NullPointerException e) { System.out.println("Error: " + e.getClass().getName()); } FeatureEntry myFeature = (FeatureEntry) service.insert( featureEditUrl, featureEntry); } Thanks in advance, Vincent.

    Read the article

  • What do these .NET auto-generated table adapter commands do? e.g. UPDATE/INSERT followed by a SELECT

    - by RickL
    I'm working with a legacy application which I'm trying to change so that it can work with SQL CE, whilst it was originally written against SQL Server. The problem I am getting now is that when I try to do dataAdapter.Update, SQL CE complains that it is not expecting the SELECT keyword in the command text. I believe this is because SQL CE does not support batch SELECT statements. The auto-generated table adapter command looks like this... this._adapter.InsertCommand.CommandText = @"INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2); SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1)"; What is it doing? It looks like it is inserting new records from the datatable into the database, and then reading those records back from the database into the datatable? What's the point of that? Can I just go through the code and remove all these SELECT statements? Or is there an easier way to solve my problem of wanting to use these data adapters with SQL CE? I cannot regenerate these table adapters, as the people who knew how to do so have long since left.
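    The trailing SELECT is there so the DataAdapter can refresh the just-inserted DataRow with anything the database assigned (identity values, defaults, trigger-set columns). A hedged adaptation for SQL CE, which cannot run multi-statement batches, is to keep only the INSERT in the command text and, where a refresh is genuinely needed, issue the SELECT as a second command:

      -- Command 1: the insert on its own.
      INSERT INTO [Table] ([Field1], [Field2]) VALUES (@Value1, @Value2);

      -- Command 2 (optional): only needed if server-assigned values must be read back.
      SELECT Field1, Field2 FROM Table WHERE (Field1 = @Value1);

    So removing the batched SELECTs should be safe as long as nothing in those tables is computed server-side.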

    Read the article

  • Java: library that does nice formatted log outputs

    - by WizardOfOdds
    I cannot track down a library I once used that allowed formatting log output statements in a much nicer way than what is usually seen. One of the features I remember is that it could 'offset' the log message depending on the 'nestedness' of where the log statement was occurring. That is, instead of this: DEBUG | DefaultBeanDefinitionDocumentReader.java| 86 | Loading bean definitions DEBUG | AbstractAutowireCapableBeanFactory.java| 411 | Finished creating instance of bean 'MS-SQL' DEBUG | DefaultSingletonBeanRegistry.java| 213 | Creating shared instance of singleton bean 'MySQL' DEBUG | AutowireCapableBeanFactory.java| 383 | Creating instance of bean 'MySQL' DEBUG | AutowireCapableBeanFactory.java| 459 | Eagerly caching bean 'MySQL' to allow for resolving potential circular references DEBUG | AutowireCapableBeanFactory.java| 789 | Another debug message It would show something like this: DEBUG | DefaultBeanDefinitionDocumentReader.java| 86 | Loading bean definitions DEBUG | AbstractAutowireCapableBeanFactory.java | 411 | Finished creating instance of bean 'MS-SQL' DEBUG | DefaultSingletonBeanRegistry.java | 213 | Creating shared instance of singleton bean 'MySQL' DEBUG | AutowireCapableBeanFactory.java | 383 | Creating instance of bean 'MySQL' DEBUG | AutowireCapableBeanFactory.java | 459 | |__ Eagerly caching bean 'MySQL' to allow for resolving potential circular references DEBUG | AutowireCapableBeanFactory.java | 789 | |__ Another debug message This is an example I just made up (VeryLongCamelCaseClassNamesNotMine). But I remember seeing such cleanly formatted log output, and it was really much nicer than anything I had seen before; in addition to being just plain nicer, it was also easier to read, for it reproduced some of the logical organization of the code. Yet I cannot find anymore what that library was. I'm pretty sure it was fully compatible with log4j or slf4j.

    Read the article

  • Moving inserted container element if possible

    - by doublep
    I'm trying to achieve the following optimization in my container library: when inserting an lvalue-referenced element, copy it to internal storage; but when inserting rvalue-referenced element, move it if supported. The optimization is supposed to be useful e.g. if contained element type is something like std::vector, where moving if possible would give substantial speedup. However, so far I was unable to devise any working scheme for this. My container is quite complicated, so I can't just duplicate insert() code several times: it is large. I want to keep all "real" code in some inner helper, say do_insert() (may be templated) and various insert()-like functions would just call that with different arguments. My best bet code for this (a prototype, of course, without doing anything real): #include <iostream> #include <utility> struct element { element () { }; element (element&&) { std::cerr << "moving\n"; } }; struct container { void insert (const element& value) { do_insert (value); } void insert (element&& value) { do_insert (std::move (value)); } private: template <typename Arg> void do_insert (Arg arg) { element x (arg); } }; int main () { { // Shouldn't move. container c; element x; c.insert (x); } { // Should move. container c; c.insert (element ()); } } However, this doesn't work at least with GCC 4.4 and 4.5: it never prints "moving" on stderr. Or is what I want impossible to achieve and that's why emplace()-like functions exist in the first place?

    Read the article

  • Why is F# member not found when used in subclass

    - by James Black
    I have a base type that I want to inherit from, for all my DAO objects, but this member gets the error further down about not being defined: type BaseDAO() = member v.ExecNonQuery2(conn)(sqlStr) = let comm = new MySqlCommand(sqlStr, conn, CommandTimeout = 10) comm.ExecuteNonQuery |> ignore comm.Dispose |> ignore I inherit in this type: type CreateDatabase() = inherit BaseDAO() member private self.createDatabase(conn) = self.ExecNonQuery2 conn "DROP DATABASE IF EXISTS restaurant" This is what I see when my script runs in the interactive shell: --> Referenced 'C:\Program Files\MySQL\MySQL Connector Net 6.2.3\Assemblies\MySql.Data.dll' [Loading C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\BaseDAO.fs] namespace FSI_0106.RestaurantServiceDAO type BaseDAO = class new : unit -> BaseDAO member ExecNonQuery2 : conn:MySql.Data.MySqlClient.MySqlConnection -> sqlStr:string -> unit member execNonQuery : sqlStr:string -> unit member execQuery : sqlStr:string * selectFunc:(MySql.Data.MySqlClient.MySqlDataReader -> 'a list) -> 'a list member f : x:obj -> string member Conn : MySql.Data.MySqlClient.MySqlConnection end [Loading C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\CreateDatabase.fs] C:\Users\jblack\Documents\Visual Studio 2010\Projects\RestaurantService\RestaurantDAO\CreateDatabase.fs(56,14): error FS0039: The field, constructor or member 'ExecNonQuery2' is not defined I am curious what I am doing wrong. I have tried not inheriting, and just instantiating the BaseDAO type in the function, but I get the same error. I started on this path because I had a property that had the same error, so it seems there may be a problem with how I am defining my BaseDAO type, but it compiles with no error, which further confuses me about this problem.

    Read the article

  • ORACLE: can we create global temp tables or any tables in stored proc?

    - by mrp
    Hi, below is the stored proc I wrote: create or replace procedure test005 as begin CREATE GLOBAL TEMPORARY TABLE TEMP_TRAN ( COL1 NUMBER(9), COL2 VARCHAR2(30), COL3 DATE ) ON COMMIT PRESERVE ROWS / INSERT INTO TEMP_TRAN VALUES(1,'D',sysdate); INSERT INTO TEMP_TRAN VALUES(2,'I',sysdate); INSERT INTO TEMP_TRAN VALUES(3,'s',sysdate); COMMIT; end; When I executed it, I got an error message mentioning: create or replace procedure test005 as begin CREATE GLOBAL TEMPORARY TABLE TEMP_TRAN ( COL1 NUMBER(9), COL2 VARCHAR2(30), COL3 DATE ) ON COMMIT PRESERVE ROWS / INSERT INTO TEMP_TRAN VALUES(1,'D',sysdate); INSERT INTO TEMP_TRAN VALUES(2,'I',sysdate); INSERT INTO TEMP_TRAN VALUES(3,'s',sysdate); COMMIT; end; Error at line 1 ORA-00955: name is already used by an existing object Script Terminated on line 1. I tried to drop TEMP_TRAN and it says the table doesn't exist, so there is no TEMP_TRAN table in the system. Why am I getting this error? I am using TOAD to create this stored proc. Any help would be highly appreciated.
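    For context, DDL cannot be issued directly inside PL/SQL (it needs EXECUTE IMMEDIATE), and a global temporary table is a permanent dictionary object anyway: only its rows are session- or transaction-scoped, so it is normally created once, outside any procedure. A hedged sketch of that split, reusing the names from the post:

      -- Run once, as an ordinary DDL statement:
      CREATE GLOBAL TEMPORARY TABLE temp_tran (
        col1 NUMBER(9),
        col2 VARCHAR2(30),
        col3 DATE
      ) ON COMMIT PRESERVE ROWS;

      -- The procedure then only manipulates data:
      CREATE OR REPLACE PROCEDURE test005 AS
      BEGIN
        INSERT INTO temp_tran VALUES (1, 'D', SYSDATE);
        INSERT INTO temp_tran VALUES (2, 'I', SYSDATE);
        INSERT INTO temp_tran VALUES (3, 's', SYSDATE);
        COMMIT;
      END;
      /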

    Read the article

  • Hibernate not saving foreign key, but with junit it's ok

    - by Leonardo
    Hi all, I have this strange problem. In a J2EE webapp with Spring, SmartGWT and Hibernate, it happens that I have a class A which has a set of class B, both of them mapped to table A and table B. I wrote a simple test case for testing the service manager, which is supposed to do insert, update and delete, and everything works as expected, especially during insert. In the end I have one record in A and records in B with a foreign key to A. But when I try to call the service from the web app, the entities in B are saved without a foreign key reference. I am sure that the service is the same. One thing I noticed after enabling Hibernate logging is that when the service is called from the application, one more update is made: insert A insert B update A update B update B (foreign key only) update A <--- ??? update B <--- ??? Instead, when the JUnit test case is run, the updates are as follows: insert A insert B update A update B update B (foreign key only) I suppose the last update is what is causing the error; maybe it is overwriting values. Considering that the app is using Spring, with the well-known mechanism of DAO + Manager, where can I investigate to solve this issue? Someone told me that the session is not closed, so Hibernate would do one more update before releasing the objects by itself. I am pretty sure that all the configuration (hbm, xml, and the rest) is fine... but I may be wrong. Thanks

    Read the article

  • Inorder tree traversal in binary tree in C

    - by srk
    In the code below, I'm creating a binary tree using the insert function and trying to display the inserted elements using the inorder function, which follows the logic of in-order traversal. When I run it, numbers are getting inserted, but when I try the inorder function (input 3), the program continues to the next input without displaying anything. I guess there might be a logical error. Please help me clear it. Thanks in advance... #include<stdio.h> #include<stdlib.h> int i; typedef struct ll { int data; struct ll *left; struct ll *right; } node; node *root1=NULL; // the root node void insert(node *root,int n) { if(root==NULL) //for the first(root) node { root=(node *)malloc(sizeof(node)); root->data=n; root->right=NULL; root->left=NULL; } else { if(n<(root->data)) { root->left=(node *)malloc(sizeof(node)); insert(root->left,n); } else if(n>(root->data)) { root->right=(node *)malloc(sizeof(node)); insert(root->right,n); } else { root->data=n; } } } void inorder(node *root) { if(root!=NULL) { inorder(root->left); printf("%d ",root->data); inorder(root->right); } } main() { int n,choice=1; while(choice!=0) { printf("Enter choice--- 1 for insert, 3 for inorder and 0 for exit\n"); scanf("%d",&choice); switch(choice) { case 1: printf("Enter number to be inserted\n"); scanf("%d",&n); insert(root1,n); break; case 3: inorder(root1); break; default: break; } } }

    Read the article

  • advanced select in Stored Procedure

    - by Auro
    Hey i got this Table: CREATE TABLE Test_Table ( old_val VARCHAR2(3), new_val VARCHAR2(3), Updflag NUMBER, WorkNo NUMBER ); and this is in my Table: INSERT INTO Test_Table (old_val, new_val, Updflag , WorkNo) VALUES('1',' 20',0,0); INSERT INTO Test_Table (old_val, new_val, Updflag , WorkNo) VALUES('2',' 20',0,0); INSERT INTO Test_Table (old_val, new_val, Updflag , WorkNo) VALUES('2',' 30',0,0); INSERT INTO Test_Table (old_val, new_val, Updflag , WorkNo) VALUES('3',' 30',0,0); INSERT INTO Test_Table (old_val, new_val, Updflag , WorkNo) VALUES('4',' 40',0,0); INSERT INTO Test_Table (old_val, new_val, Updflag , WorkNo) VALUES('4',' 40',0,0); now my Table Looks like this: Row Old_val New_val Updflag WorkNo 1 '1' ' 20' 0 0 2 '2' ' 20' 0 0 3 '2' ' 30' 0 0 4 '3' ' 30' 0 0 5 '4' ' 40' 0 0 6 '5' ' 40' 0 0 (if the value in the new_val column are same then they are together and the same goes to old_val) so in the example above row 1-4 are together and row 5-6 at the moment i have in my Stored Procedure a cursor: SELECT t1.Old_val, t1.New_val, t1.updflag, t1.WorkNo FROM Test_Table t1 WHERE t1.New_val = ( SELECT t2.New_val FROM Test_Table t2 WHERE t2.Updflag = 0 AND t2.Worknr = 0 AND ROWNUM = 1 ) the output is this: Row Old_val New_val Updflag WorkNo 1 1 20 0 0 2 2 20 0 0 my Problem is, i dont know how to get row 1 to 4 with one select. (i had an idea with 4 sub-querys but this wont work if its more data that matches together) does anyone of you have an idea?

    Read the article

  • Sql Server 2005 Check Constraint not being applied in execution when using variables

    - by DarylS
    Here is some SQL sample code: --Create 2 Sales tables with constraints based on the saledate create table Sales1(SaleDate datetime, Amount money) ALTER TABLE dbo.Sales1 ADD CONSTRAINT CK_Sales1 CHECK (([SaleDate]>='01 May 2010')) GO create table Sales2(SaleDate datetime, Amount money) ALTER TABLE dbo.Sales2 ADD CONSTRAINT CK_Sales2 CHECK (([SaleDate]<'01 May 2010')) GO --Insert some data into Sales1 insert into Sales1 (SaleDate, Amount) values ('02 May 2010', 50) insert into Sales1 (SaleDate, Amount) values ('03 May 2010', 60) GO --Insert some data into Sales2 insert into Sales2 (SaleDate, Amount) values ('30 Mar 2010', 10) insert into Sales2 (SaleDate, Amount) values ('31 Mar 2010', 20) GO --Create a view that combines these 2 tables create VIEW [dbo].[Sales] AS SELECT SaleDate, Amount FROM Sales1 UNION ALL SELECT SaleDate, Amount FROM Sales2 GO --Get the results --Query 1 select * from Sales where SaleDate < '31 Mar 2010' -- if you look at the execution plan this query only looks at Sales2 (Which is good) --Query 2 DECLARE @SaleDate datetime SET @SaleDate = '31 Mar 2010' select * from Sales where SaleDate < @SaleDate -- if you look at the execution plan this query looks at Sales1 and Sales2 (Which is NOT good) Looking at the execution plans you will see that the two queries are different. For Query 1 the only table that is accessed is Sales2 (which is good). For Query 2 both tables are accessed (which is bad). Why are these execution plans different, and how do I get Query 2 to access only the relevant table when variables are used? I have tried adding indexes on the SaleDate column and that does not seem to help.
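    One workaround often suggested for the variable case is to request a statement-level recompile so the optimizer can see the variable's runtime value when it does the partition elimination; whether the resulting plan actually prunes Sales1 depends on the version and service pack, so treat this as something to verify against the real execution plan:

      DECLARE @SaleDate datetime
      SET @SaleDate = '31 Mar 2010'

      SELECT *
      FROM Sales
      WHERE SaleDate < @SaleDate
      OPTION (RECOMPILE)   -- recompile the statement with the current value of @SaleDate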

    Read the article

  • In sync query calls, one query causing other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanations for it: I was involved in optimization of an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there should be some value mappings, which performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time does each part of the application take. It yields that 60% of the time was spent on INSERT method calls, and almost 20% on SELECT calls (the rest distributed in other parts). After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting for the total run time to be around 35 minutes, as I have removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else), but each SELECT took longer this time! Everything was running sync, there were no async calls. And there was only one single thread of execution. SELECT and INSERT queries are very simple, and don't have anything special, and they are on different tables, but on the same DB. I tested with both the DB on the application machine, and on a remote network machine. I can't think of any explanation for this, as the Profiler (Application profiler, not SQL Profiler) reported the changes in the method call times, and by removing INSERT statements SELECT statements took longer to run. Can anyone give me some kind of explanation of what could have happened? (there can't be cache / query optimization stuff, because the queries were run in sync, and in a single thread, and it was far from affecting the cache this much) I should note that the bottleneck of the speed was in SQL server, using most of the CPU time.

    Read the article

  • Insertion into BST without header Node JAVA

    - by Petiatil
    I am working on a recursive insertion method for a BST. This function is supposed to be a recursive helper method and is in a private class called Node. The Node class is in a class called BinarySearchTree which contains an instance variable for the root. When I am trying to insert an element, I get a NullPointerException at: this.left = insert(((Node)left).element); I am unsure about why this occurs. If I understand correctly, in a BST, I am supposed to insert the item at the last spot on the path traversed. Any help is appreciated! private class Node implements BinaryNode<E> { E item; BinaryNode<E> left, right; public BinaryNode<E> insert(E item) { int compare = item.compareTo(((Node)root).item); if(root == null) { root = new Node(); ((Node)root).item = item; } else if(compare < 0) { this.left = insert(((Node)left).item); } else if(compare > 0) { this.right = insert(((Node)right).item); } return root; } }

    Read the article

  • Vim disables ibus IME -- is this a bug?

    - by misha
    I'm using ibus IME to input Japanese text into GVim. I have the following Vim script that I source when GVim starts up: autocmd InsertLeave * :call bug#onInsertLeave() function! bug#onInsertLeave() python << EOT import vim import ibus bus = ibus.Bus() ic = ibus.InputContext(bus, bus.current_input_contxt()) ic.disable() print "bug#onInsertLeave(): exiting" EOT endfunction The line that constructs the InputContext raises the exception: dbus.exception.DBusException: org.freedesktop.DBus.Error.Failed: no focused input context This happens under the following conditions: I enter insert mode I insert some Japanese text I exit insert mode If I don't enter any Japanese text through the IME, then the exception is not raised. I've also noticed that if I exit insert mode after entering some Japanese text while the IME is still enabled, then IME input is disabled (I can see the icon change in the taskbar). If I exit insert mode without entering any Japanese text, but while the IME is still enabled, then the IME stays enabled (the icon does not change). It seems like GVim is disabling the IME (or the IME is switching off) in some conditions. Could it be related to the exception? My questions are: Is this a bug? If it is, then whose bug is it? Vim, Ibus, or something else? Are there any ways to work around the exception? EDIT My system info: > misha@misha-lmd:~/git/iwait2013/lagos$ apt-cache policy ibus ibus: > Installed: 1.4.1-3ubuntu1 Candidate: 1.4.1-3ubuntu1 Version table: > *** 1.4.1-3ubuntu1 0 > 500 http://jp.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages > 100 /var/lib/dpkg/status misha@misha-lmd:~/git/iwait2013/lagos$ apt-cache policy vim vim: > Installed: 2:7.3.429-2ubuntu2.1 Candidate: 2:7.3.429-2ubuntu2.1 > Version table: *** 2:7.3.429-2ubuntu2.1 0 > 500 http://jp.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages > 100 /var/lib/dpkg/status > 2:7.3.429-2ubuntu2 0 > 500 http://jp.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages Ubuntu 12.04, Fluxbox

    Read the article

  • Silverlight Cream for May 25, 2010 - 2 -- #870

    - by Dave Campbell
    In this Issue: Kirupa, Matthias Shapiro(-2-, -3-), Giorgetti Alessandro, Kunal Chowdhury, Mike Snow, and Jason Zander. Shoutout: This looks like a really nice WP7 app done by a team of folks for Imagine Cup 2010: Ahead ... I hope to see some blog posts and code on this! From SilverlightCream.com, and remember you can send me a link to your post or submit at SilverlightCream.com: Control Storyboards Easily using Behaviors Kirupa is following through on a promise to discuss the Behaviors that come on-board Blend. He's starting with two to help deal with Storyboards: ControlStoryboardAction and the StoryboardCompletedTrigger. PHP, MySQL and Silverlight: The Complete Tutorial (Part 1) Matthias Shapiro has a 3-parter up on PHP, MySQL, and Silverlight -- wondered how I missed this first one until I realized they all posted in 2 days... this first post sets up the MySQL database to be used. PHP, MySQL, and Silverlight: The Complete Tutorial (Part 2) In part 2, Matthias Shapiro writes a PHP web service that grabs the data from the database and sends it in JSON format to the Silverlight app (see part 3). PHP, MySQL, and Silverlight: The Complete Tutorial (Part 3) Matthias Shapiro's part 3 is the Silverlight part that reads the JSON produced by the PHP webservice from Part 2, to provide display and edits of the data... and this whole series includes source. Silverlight: adding an IsEditing property to the DataForm Giorgetti Alessandro laments the lack of an IsEditing property in the DataForm, then goes on to demonstrate his path to a suitable workaround. Step-by-Step Command Binding in Silverlight 4 Kunal Chowdhury has a nicely-detailed post on Command Binding in Silverlight 4 and builds up a demo MVVM app in the process... source project included. Silverlight Tip of the Day #24 – Resolving Unknown Objects in VS I'm not sure I would call Mike Snow's latest Silverlight Tip 'Silverlight' ... but if you don't know it, you need to. Sample: Windows Phone 7 Example Application with Landscape Layout Whoa... check out the WP7 app Jason Zander did with landscape mode defined... you're going to want to refer back to this one... Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Increasing deadlocks with NoLock

    - by Dave Ballantyne
    One of my personal pet issues is the inappropriate use of the NOLOCK hint (and read uncommitted). Don't get me wrong, I have used it in exceptional circumstances, but as a general statement it is a bad thing. Mostly, when NOLOCK is used, the discussion is around a single statement ("it runs faster with nolock for XYZ reason"); however, IMO, this is quite a short-sighted view. What about the transaction? What about other concurrent users? What is good for one statement in isolation is not necessarily good for the system as a whole. I have seen on a number of occasions deadlocks happen when tasks that would have (and should have) been blocked continue to execute, only for a deadlock to occur at a later data-writing (INSERT, UPDATE, DELETE) statement. Writers will block writers regardless of isolation level. By way of a (fairly contrived) example, let's generate some dummy tables and populate them with some data drop table a go drop table b go Create Table a ( col1 integer ) go insert into a values(1) insert into a values(2) go Create Table b ( col1 integer ) go insert into b values(1) insert into b values(2) go Now make two connections. In connection one execute set transaction isolation level read committed BEGIN TRAN Select * from a Select * from b delete from a In connection two execute set transaction isolation level read committed BEGIN TRAN Select * from a Select * from b delete from b Right now the 'select from a' in connection two is being blocked by the 'delete from a' in connection one. This is, IMO, quite a healthy and natural thing to be happening; some see this as a 'slow down', a drop in performance. So, let's reach for our 'NOLOCK' magic pill. Cancel the blocked query and ROLLBACK both transactions, then in connection one execute set transaction isolation level read uncommitted BEGIN TRAN Select * from a Select * from b delete from b and then in connection two execute set transaction isolation level read uncommitted BEGIN TRAN Select * from a Select * from b delete from a We have now solved our performance problem: no more blocking. Let's finish the work required by the transaction; in connection one, execute delete from a Oh, 'performance problem' again, it's now being blocked. Still, let's complete the work in connection two... delete from b DEADLOCK!! It is important to be clear about the role of the select statements. They do not participate within the deadlock, but are preventing code from executing that would have. Additionally, without the select readers to block, a deadlock would occur on the deletes with READ COMMITTED. Naturally, other isolation levels will exhibit different behaviour as to where and when they will and won't block, and I would encourage you to read BOL and satisfy yourself that you really do NEED to NOLOCK.

    Read the article

  • SQL SERVER – DELETE, TRUNCATE and RESEED Identity

    - by pinaldave
    Yesterday I had a headache answering questions from one of the DBAs on the subject of resetting identity values for all tables. After talking to the DBA, I realized that he had no clue about how the identity column behaves when DELETE, TRUNCATE or RESEED is used. Let us run a small T-SQL script. Create a temp table with an identity column beginning with value 11. The seed value is 11. USE [TempDB] GO -- Create Table CREATE TABLE [dbo].[TestTable]( [ID] [int] IDENTITY(11,1) NOT NULL, [var] [nchar](10) NULL ) ON [PRIMARY] GO -- Build sample data INSERT INTO [TestTable] VALUES ('val') GO When the seed value is 11, the next record inserted gets 11 as its identity column value. -- Select Data SELECT * FROM [TestTable] GO Effect of DELETE statement -- Delete Data DELETE FROM [TestTable] GO When the DELETE statement is executed without a WHERE clause it will delete all the rows. However, when a new record is inserted the identity value is increased from 11 to 12. It does not reset but keeps on increasing. -- Build sample data INSERT INTO [TestTable] VALUES ('val') GO -- Select Data SELECT * FROM [TestTable] Effect of TRUNCATE statement -- Truncate table TRUNCATE TABLE [TestTable] GO When the TRUNCATE statement is executed it will remove all the rows. However, when a new record is inserted the identity value starts again from 11 (which is the original seed value). TRUNCATE resets the identity value to the original seed value of the table. -- Build sample data INSERT INTO [TestTable] VALUES ('val') GO -- Select Data SELECT * FROM [TestTable] GO Effect of RESEED statement If you notice, I am using a reseed value of 1. The original seed value when I created the table was 11. However, I am reseeding it with the value 1. -- Reseed DBCC CHECKIDENT ('TestTable', RESEED, 1) GO When we insert one more value and check it, the new value generated is 2. The logic for this new value is Reseed Value + Increment Value; in this case it will be 1+1 = 2. -- Build sample data INSERT INTO [TestTable] VALUES ('val') GO -- Select Data SELECT * FROM [TestTable] GO Here is the clean-up act. -- Clean up DROP TABLE [TestTable] GO Question for you: if I reseed with some random number and then run the TRUNCATE command on the table, what will the seed value of the table be? (For example, if the original seed value is 11 and I reseed the value to 1, and then follow up with TRUNCATE TABLE, what will the seed value be now?) Here is the complete script together. You can modify it and find the answer to the above question. Please leave a comment with your answer. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
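    As a hedged way of answering that closing question by experiment rather than from memory, the same table can be reseeded and then truncated just before the clean-up; the expectation, given the TRUNCATE behaviour shown above, is that the seed goes back to the value defined in CREATE TABLE, i.e. 11:

      -- Reseed to an arbitrary value, then truncate.
      DBCC CHECKIDENT ('TestTable', RESEED, 1)
      GO
      TRUNCATE TABLE [TestTable]
      GO
      -- Build sample data
      INSERT INTO [TestTable] VALUES ('val')
      GO
      -- Select Data: ID should start again at the original seed, 11.
      SELECT * FROM [TestTable]
      GO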

    Read the article

  • DataTable identity column not set after DataAdapter.Update/Refresh on table with "instead of"-trigge

    - by Arno
    Within our unit tests we use plain ADO.NET (DataTable, DataAdapter) for preparing the database resp. checking the results, while the tested components themselves run under NHibernate 2.1. .NET version is 3.5, SqlServer version is 2005. The database tables have identity columns as primary keys. Some tables apply instead-of-insert/update triggers (this is due to backward compatibility, nothing I can change). The triggers generally work like this: create trigger dbo.emp_insert on dbo.emp instead of insert as begin set nocount on insert into emp ... select @@identity end The insert statement issued by the ADO.NET DataAdapter (generated on-the-fly by a thin ADO.NET wrapper) tries to retrieve the identity value back into the DataRow: exec sp_executesql N' insert into emp (...) values (...); select id, ... from emp where id = @@identity ' But the DataRow's id-Column is still 0. When I remove the trigger temporarily, it works fine - the id-Column then holds the identity value set by the database. NHibernate on the other hand uses this kind of insert statement: exec sp_executesql N' insert into emp (...) values (...); select scope_identity() ' This works, the NHibernate POCO has its id property correctly set right after flushing. Which seems a little bit counter-intuitive to me, as I expected the trigger to run in a different scope, hence @@identity should be a better fit than scope_identity(). So I thought no problem, I will apply scope_identity() instead of @@identity under ADO.NET as well. But this has no effect, the DataRow value is still not updated accordingly. And now for the best part: When I copy and paste those two statements from SqlServer profiler into a Management Studio query (that is including "exec sp_executesql"), and run them there, the results seem to be inverse! There the ADO.NET version works, and the NHibernate version doesn't (select scope_identity() returns null). I tried several times to verify, but to no avail. Of course this just shows the resultset coming from the database, whatever happens inside NHibernate and ADO.NET is another topic. Also, several session properties defined by T-SQL SET are different in the two scenarios (Management Studio query vs. application at runtime) This is a real puzzle to me. I would be happy about any insights on that. Thank you!
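    A quick way to see which identity function actually carries the value in a given environment is to compare all three right after a hand-written insert against the trigger-bearing table; the column list below is invented for illustration. @@IDENTITY is session-wide across scopes, SCOPE_IDENTITY() is limited to the current scope and so often comes back NULL when the real insert happens inside an INSTEAD OF trigger, and IDENT_CURRENT ignores both session and scope:

      -- Hypothetical probe: run it on the same connection/tool the application uses.
      INSERT INTO emp (emp_name) VALUES ('probe');
      SELECT @@IDENTITY           AS at_at_identity,
             SCOPE_IDENTITY()     AS scope_identity_value,
             IDENT_CURRENT('emp') AS ident_current_value;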

    Read the article

  • Function inserted not all records

    - by user1799459
    I wrote the following code by data transfer from Access to Firebird def FirebirdDatetime(dt): return '\'%s.%s.%s\'' % (str(dt.day).rjust(2,'0'), str(dt.month).rjust(2,'0'), str(dt.year).rjust(4,'0')) def SelectFromAccessTable(tablename): return 'select * from [' + tablename+']' def InsertToFirebirdTable(tablename, row): values='' i=0 for elem in row: i+=1 #print type(elem) if type(elem) == int: temp = str(elem) elif (type(elem) == str) or (type(elem)==unicode): temp = '\'%s\'' % (elem,) elif type(elem) == datetime.datetime: temp =FirebirdDatetime(elem) elif type(elem) == decimal.Decimal: temp = str(elem) elif elem==None: temp='null' if (i<len(row)): values+=temp+', ' else: values+=temp return 'insert into '+tablename+' values ('+values+')' def AccessToFirebird(accesstablename, firebirdtablename, accesscursor, firebirdcursor): SelectSql=SelectFromAccessTable(accesstablename) for row in accesscursor.execute(SelectSql): InsertSql=InsertToFirebirdTable(firebirdtablename, row) InsertSql=InsertSql print InsertSql firebirdcursor.execute(InsertSql) In the main module there is an AccessToFirebird function call conAcc = pyodbc.connect('DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};DBQ=D:\ThirdTask\Northwind.accdb') SqlAccess=conAcc.cursor(); conn.begin() cur=conn.cursor() sql.AccessToFirebird('Customers', 'CLIENTS', SqlAccess, cur) conn.commit() conn.begin() cur=conn.cursor() sql.AccessToFirebird('??????????', 'EMPLOYEES', SqlAccess, cur) sql.AccessToFirebird('????', 'ROLES', SqlAccess, cur) sql.AccessToFirebird('???? ???????????', 'EMPLOYEES_ROLES', SqlAccess, cur) sql.AccessToFirebird('????????', 'DELIVERY', SqlAccess, cur) sql.AccessToFirebird('??????????', 'SUPPLIERS', SqlAccess, cur) sql.AccessToFirebird('????????? ?????? ???????', 'TAX_STATUS_OF_ORDERS', SqlAccess, cur) sql.AccessToFirebird('????????? ???????? ? ??????', 'STATE_ORDER_DETAILS', SqlAccess, cur) sql.AccessToFirebird('????????? ???????', 'CONDITION_OF_ORDERS', SqlAccess, cur) sql.AccessToFirebird('??????', 'ORDERS', SqlAccess, cur) sql.AccessToFirebird('?????', 'BILLS', SqlAccess, cur) sql.AccessToFirebird('????????? ?????? ?? ????????????', 'STATUS_PURCHASE_ORDER', SqlAccess, cur) sql.AccessToFirebird('?????? ?? ????????????', 'ORDERS_FOR_ACQUISITION', SqlAccess, cur) sql.AccessToFirebird('???????? ? ?????? ?? ????????????', 'INFORMPURCHASEORDER', SqlAccess, cur) sql.AccessToFirebird('??????', 'PRODUCTS', SqlAccess, cur) conn.commit() conAcc.commit() conn.close() conAcc.close() But as a result, not all records have been inserted into the table Products (Table Goods - Northwind database), for example, does not work request insert into PRODUCTS values ('4', 1, 'NWTB-1', '?????????? ???', null, 13.5000, 18.0000, 10, 40, '10 ??????? ?? 20 ?????????', '10 ??????? ?? 20 ?????????', 10, '???????', '') In ibexpert to this request message issued can't format message 13:587 -- message file C:\Windows\firebird.msg not found. conversion error from string "10 ?????????±???? ???? 20 ???°???µ?‚????????". Worked only requests insert into PRODUCTS values ('1', 82, 'NWTC-82', '???????', null, 2.0000, 4.0000, 20, 100, null, null, null, '????', '') insert into PRODUCTS values ('9', 83, 'NWTCS-83', '???????????? ?????', null, 0.5000, 1.8000, 30, 200, null, null, null, '????? ? ???????', '') insert into PRODUCTS values ('1', 97, 'NWTC-82', '???????', null, 3.0000, 5.0000, 50, 200, null, null, null, '????', '') insert into PRODUCTS values ('6', 98, 'NWTSO-98', '??????? 
???', null, 1.0000, 1.8900, 100, 200, null, null, null, '????', '') insert into PRODUCTS values ('6', 99, 'NWTSO-99', '??????? ??????', null, 1.0000, 1.9500, 100, 200, null, null, null, '????', '') other records were not inserted.

    Read the article
