Search Results

Search found 10481 results on 420 pages for 'identity insert'.


  • MySql BulkCopy/Insert from DataReader

    - by Sky Sanders
I am loading a bunch of rows into MySql in C#. In MS SQL I can feed a DataReader to SqlBulkCopy, but MySqlBulkCopy only presents itself as a bootstrap for a load from file. So, my current solution is a prepared command in a transacted loop. Is there a faster way to accomplish bulk loading of MySql using a DataReader source?
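    One route that often beats a prepared-statement loop is to spool the DataReader to a delimited temp file and hand it to MySqlBulkLoader, the connector's wrapper around LOAD DATA INFILE. A minimal sketch, assuming the MySql.Data connector is referenced; "target_table" is a placeholder, and NULLs/escaping would need extra care in real use:

        using System;
        using System.Data;
        using System.IO;
        using MySql.Data.MySqlClient;

        static void BulkLoadFromReader(IDataReader reader, MySqlConnection conn)
        {
            // Spool the reader to a tab-delimited temp file...
            string tempFile = Path.GetTempFileName();
            using (var writer = new StreamWriter(tempFile))
            {
                while (reader.Read())
                {
                    var fields = new string[reader.FieldCount];
                    for (int i = 0; i < reader.FieldCount; i++)
                        fields[i] = Convert.ToString(reader.GetValue(i));
                    writer.WriteLine(string.Join("\t", fields));
                }
            }

            // ...then let the server ingest it in one shot.
            var loader = new MySqlBulkLoader(conn)
            {
                TableName = "target_table", // hypothetical table name
                FileName = tempFile,
                FieldTerminator = "\t",
                LineTerminator = "\n"
            };
            loader.Load();
            File.Delete(tempFile);
        }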

    Read the article

  • Autoincrementing hierarchical IDs on SQL Server

    - by Ville Koskinen
Consider this table on SQL Server:

        wordID  aliasID  value
        ======  =======  =======
        0       0        'cat'
        1       0        'dog'
        2       0        'argh'
        2       1        'ugh'

    wordID is the id of a word which possibly has aliases; aliasID identifies a specific alias of that word. So above, 'argh' and 'ugh' are aliases of each other. I'd like to be able to insert new words which do not have any aliases without having to query the table for a free wordID value first. Inserting the value 'hey' without specifying a wordID or an aliasID, a row looking like this would be created:

        wordID  aliasID  value
        ======  =======  =======
        3       0        'hey'

    Is this possible, and how?
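    One way to express this is to let the INSERT compute the next free wordID itself. A hedged sketch in T-SQL; the table name "words" is an assumption, and under concurrent writers the SELECT would also need a locking hint such as WITH (TABLOCKX, HOLDLOCK):

        -- Pick the next free wordID inside the statement itself;
        -- ISNULL covers the empty-table case.
        INSERT INTO words (wordID, aliasID, value)
        SELECT ISNULL(MAX(wordID), -1) + 1, 0, 'hey'
        FROM words;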

    Read the article

  • How to insert a substring to each line in Perl

    - by Nano HE
Hi, my code is below. How do I remove the blank after adding "Hello." to each line?

        #!C:\Perl\bin\perl.exe
        use strict;
        use warnings;
        use Data::Dumper;

        my $fh = \*DATA;
        #my($line) = $_;
        while (my $line = <$fh>) {
            print "Hello." . $line;
            chomp($line);
        }

        __DATA__
        Member Information
         id = 0
         name = "tom"
         age = "20"

    Output:

        D:\learning\perl>test.pl
        Hello.Member Information
        Hello. id = 0        # I want to remove the blank between Hello. and id
        Hello. name = "tom"  # same as above
        Hello. age = "20"    # same

        D:\learning\perl>
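    The blank is leading whitespace carried in the DATA lines themselves, so stripping it before printing solves this. A minimal sketch of the revised loop:

        while (my $line = <$fh>) {
            chomp($line);
            $line =~ s/^\s+//;              # drop the leading blanks
            print "Hello." . $line . "\n";  # re-add the newline chomp removed
        }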

    Read the article

  • Insert new relationship data in core data

    - by michael
Hoping someone can shed some light on what I might be doing wrong here. Trying to add an "event" to a list of events, represented by a one-to-many (inverse) relationship: MyEvents <--- Event. This is the code:

        MyEvents *myEvents = (MyEvents *)[NSEntityDescription insertNewObjectForEntityForName:@"MyEvents" inManagedObjectContext:context];
        NSLog(@"MYEVENTS: %@", myEvents);
        NSLog(@"EVENT: %@", event);
        [myEvents addEventObject:event];

    My context is fine, and both myEvents and event print perfectly valid information. When I try to add the event that I have (which is passed into this view controller, having been retrieved from Core Data previously) with this code:

        [myEvents addEventObject:event];

    it falls over with:

        *** -[NSComparisonPredicate evaluateWithObject:]: message sent to deallocated instance

    MyEvents and Event are just the default generated code. MyEvents contains only the relationship to Event. Thanks.

    Read the article

  • HSQLDB how to manually insert records

    - by l245c4l
Hey, my question is: how can I manually add records to an HSQLDB database, using the command line or some client? I know I can use the HSQLDB manager, but I cannot execute any query with it. It says that there is no table of the specified name. What might be the problem?
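    For the command-line route, HSQLDB ships SqlTool, which can run INSERTs directly; a hedged sketch of a session, where the JDBC URL, table, and columns are placeholders. The "no table" symptom is often a sign of connecting to a different database instance (e.g. a fresh in-memory one) than the one that holds the tables, so the URL is worth double-checking:

        java -jar sqltool.jar --inlineRc=url=jdbc:hsqldb:file:/path/to/mydb,user=SA,password=
        sql> INSERT INTO mytable (id, name) VALUES (1, 'test');
        sql> COMMIT;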

    Read the article

  • Duplicate entries on mysql on insert using doctrine

    - by Nikos Galis
Hi all! I am facing a very weird problem with MySQL and Doctrine [with help of CodeIgniter]. I am trying to make a simple migration script taking all records from one table and, after a little processing, saving them to another. However, on my laptop [running Windows and WAMP] I get double the number of the original table's records copied to the destination table. On my colleagues' laptops, everything works fine! We are all using MySQL 5.0.86 [plus Windows plus WAMP]. Here is the code:

        function buggy_function(){
            $this->db(); // get db connection
            $q = Doctrine_Query::create()->from('Oldtable r');
            $oldrecords = $q->fetchArray();
            $count = 0;
            foreach ($oldrecords as $oldrecord){
                $newrecord = new NewTableClass();
                $newrecord->password = md5($oldrecord['password']);
                $newrecord->save();
                echo $newrecord->id . ' Id -> saved.';
            }
        }

    Simple as that! I have 39 records in the old table and I am getting 78 records in the new table, which are exactly the same records except for the unique primary key. It seems as if the script runs twice. But the output of the script is the following:

        1 Id -> saved.
        2 Id -> saved.
        ...
        39 Id -> saved.

    Do you have any idea why this is happening? Any known bug in MySQL? Thank you in advance!

    Read the article

  • Where to insert 'orderby' expression in this linq-to-sql query

    - by ile
var result = db.PhotoAlbums.Select(albums => new PhotoAlbumDisplay
        {
            AlbumID = albums.AlbumID,
            Title = albums.Title,
            Date = albums.Date,
            PhotoID = albums.Photos.Select(photo => photo.PhotoID).FirstOrDefault().ToString()
        });

    Wherever I try to put "orderby albums.AlbumID descending" I get an error. Does anyone know the solution? Thanks!
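    A note on why this trips people up: orderby is query-expression syntax, so it cannot be dropped into a lambda chain. With method syntax the equivalent is OrderByDescending, placed on the chain itself. A sketch under the same (assumed) model types:

        var result = db.PhotoAlbums
            .OrderByDescending(album => album.AlbumID)  // sort first...
            .Select(album => new PhotoAlbumDisplay      // ...then project
            {
                AlbumID = album.AlbumID,
                Title = album.Title,
                Date = album.Date,
                PhotoID = album.Photos.Select(photo => photo.PhotoID)
                                      .FirstOrDefault().ToString()
            });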

    Read the article

  • Using perl to parse a file and insert specific values into a database

    - by Sean
Disclaimer: I'm a newbie at scripting in Perl; this is partially a learning exercise (but still a project for work). Also, I have a much stronger grasp of shell scripting, so my examples will likely be formatted with that mindset (but I would like to write them in Perl). Sorry in advance for my verbosity; I want to make sure I am at least marginally clear in getting my point across.

    I have a text file (a reference guide) that is a Word document converted to text, then switched from Windows to UNIX format in Notepad++. The file is uniform in that each section has the same fields/formatting/tables. What I plan to do, in a basic way, is grab each section, keyed by unique batch job names, and place all of the values into a database (or maybe just an Excel file) so all the fields can be searched/edited for each job much more easily than in the Word file, and possibly create a web interface later on.

    So what I want to do is grab each section by doing something like:

        sed -n '/job_name_1_regex/,/job_name_2_regex/' file.txt

    How would this be formatted within a Perl script? (Grab the section in total, then break it down further from there; a sketch of the section grab follows this entry.) To read the file in the script I have:

        open FORMAT_FILE, 'test_format.txt';

    and then use

        foreach $line (<FORMAT_FILE>)

    to parse the file line by line. Is there a better way?

    My next problem is that I converted from a Word doc with tables, which look like:

        Table Heading 1      Table Heading 2
        Heading 1/Value 1    Heading 2/Value 1
        Heading 1/Value 2    Heading 2/Value 2

    but in the text file they look like:

        Table Heading 1 Table Heading 2Heading 1/Value 1Heading 1/Value 2Heading 2/Value 1Heading 2/Value 2

    So I want to have "Heading 1" and "Heading 2" as column names and then put the respective values there. I just am not sure how to get the values in relation to the headings from the text file. The values of Heading 1 will always be at the line number of Heading 1 plus 2 (Heading 1, Heading 2, values for Heading 1). I know this can be done in awk/sed pretty easily; I'm just not sure how to address it inside a Perl script. After I have all the right values and such, linking it up to a database may be an issue as well; I haven't started looking at the way Perl interacts with DBs yet.

    Sorry if this is a bit scatterbrained... it's still not fully formed in my head.
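    For the sed-style address range, Perl's range (flip-flop) operator between two regex matches is the direct analogue. A minimal sketch, with the job-name patterns as placeholders:

        use strict;
        use warnings;

        open my $fh, '<', 'test_format.txt' or die "Cannot open: $!";
        my @section;
        while (my $line = <$fh>) {
            # True from the line matching the first pattern through the
            # line matching the second, just like sed's /re1/,/re2/.
            if ($line =~ /job_name_1_regex/ .. $line =~ /job_name_2_regex/) {
                push @section, $line;
            }
        }
        close $fh;
        # @section now holds one job's block, ready for field extraction.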

    Read the article

  • c++ stl priority queue insert bad_alloc exception

    - by bsg
Hi, I am working on a query processor that reads long lists of document IDs from memory and looks for matching IDs. When it finds one, it creates a DOC struct containing the docid (an int) and the document's rank (a double) and pushes it onto a priority queue. My problem is that when the word(s) searched for have a long list, I get the following exception when I try to push the DOC onto the queue: Unhandled exception at 0x7c812afb in QueryProcessor.exe: Microsoft C++ exception: std::bad_alloc at memory location 0x0012ee88. When the word has a short list, it works fine. I tried pushing DOCs onto the queue in several places in my code, and they all work until a certain line; after that, I get the above error. I am completely at a loss as to what is wrong, because the longest list read in is less than 1 MB and I free all memory that I allocate. Why should there suddenly be a bad_alloc exception when I try to push a DOC onto a queue that has the capacity to hold it (I used a vector with enough space reserved as the underlying data structure for the priority queue)? I know that questions like this are almost impossible to answer without seeing all the code, but it's too long to post here. I'm posting as much as I can and anxiously hoping that someone can give me an answer, because I am at my wits' end. The NextGEQ function is too long to put here, but it reads a list of compressed blocks of docids block by block. That is, if it sees that the last docid in the block (in a separate list) is larger than the docid passed in, it decompresses the block and searches until it finds the right one. If it sees that the block was already decompressed, it just searches. Below, when I call the function the first time, it decompresses a block and finds the docid; the push onto the queue after that works. The second time, it doesn't even need to decompress; that is, no new memory is allocated, but after that, pushing onto the queue gives a bad_alloc error.
struct DOC {
            long int docid;
            long double rank;
        public:
            DOC() { docid = 0; rank = 0.0; }
            DOC(int num, double ranking) { docid = num; rank = ranking; }
            bool operator>(const DOC &d) const { return rank > d.rank; }
            bool operator<(const DOC &d) const { return rank < d.rank; }
        };

        struct listnode {
            int *metapointer;
            int *blockpointer;
            int docposition;
            int frequency;
            int numberdocs;
            int *iquery;
            listnode *nextnode;
        };

        void QUERYMANAGER::SubmitQuery(char *query) {
            vector<DOC> docvec;
            docvec.reserve(20);
            DOC doct;
            //create a priority queue to use as a min-heap to store the documents
            //and rankings; although the priority queue uses the heap as its
            //underlying data structure, I found it easier to use the STL
            //priority queue implementation
            priority_queue<DOC, vector<DOC>, std::greater<DOC>> q(docvec.begin(), docvec.end());
            q.push(doct);
            //do some processing here; startlist is a pointer to a listnode
            //struct that starts the linked list
            cout << "Opening lists:" << endl;
            //point the linked list start pointer to the node returned by the
            //OpenList method
            startlist = &OpenList(value);
            listnode *minpointer;
            q.push(doct);
            //more processing here;
            else {
                //start by finding the first docid in the shortest list
                int i = 0;
                q.push(doct);
                num = NextGEQ(0, *startlist);
                q.push(doct);
                while (num != -1)
                    cout << "finding nextGEQ from shortest list" << endl;
                q.push(doct);
                //this is where the problem starts - every previous q.push(doct)
                //works; the one after NextGEQ(num + 1, *startlist) gives the
                //bad_alloc error
                num = NextGEQ(num + 1, *startlist);
                q.push(doct);
                //if you didn't break out of the loop, i.e. all lists contain a
                //matching docid, calculate the document's rank; if it's one of
                //the top 20, create a struct containing the docid and the rank
                //and add it to the priority queue
                if (!loop) {
                    cout << "found match" << endl;
                    if (num < 0) {
                        cout << "reached end of list" << endl;
                        //reached the end of the shortest list; close the list
                        CloseList(startlist);
                        break;
                    }
                    rank = calculateRanking(table, num);
                    try {
                        //if the heap is not full, create a DOC struct with the
                        //docid and rank and add it to the heap
                        if (q.size() < 20) {
                            doc.docid = num;
                            doc.rank = rank;
                            q.push(doct);
                            q.push(doc);
                        }
                    } catch (exception &e) {
                        cout << e.what() << endl;
                    }
                }
            }

    Thank you very much, bsg.

    Read the article

  • Getting last insert id with SQLAlchemy ORM

    - by gummmibear
Hi, I use SQLAlchemy and I need some help.

        import hashlib
        import sqlalchemy as sa
        from sqlalchemy import orm
        from allsun.model import meta

        t_user = sa.Table("users", meta.metadata, autoload=True)

        class Duplicat(Exception):
            pass

        class LoginExistsException(Exception):
            pass

        class EmailExistsException(Exception):
            pass

        class User(object):
            """
            def __setattr__(self, key, value):
                if key == 'password':
                    value = unicode(hashlib.sha512(value).hexdigest())
                object.__setattr__(self, key, value)
            """
            def loginExists(self):
                try:
                    meta.Session.query(User).filter(User.login == self.login).one()
                except orm.exc.NoResultFound:
                    pass
                else:
                    raise LoginExistsException()

            def emailExists(self):
                try:
                    meta.Session.query(User).filter(User.email == self.email).one()
                except orm.exc.NoResultFound:
                    pass
                else:
                    raise EmailExistsException()

            def save(self):
                meta.Session.begin()
                meta.Session.save(self)
                try:
                    meta.Session.commit()
                except sa.exc.IntegrityError:
                    raise Duplicat()

    How can I get the inserted id when I call:

        user = User()
        user.login = request.params['login']
        user.password = hashlib.sha512(request.params['password']).hexdigest()
        user.email = request.params['email']
        user.save()
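    After a successful flush/commit, the ORM refreshes the instance, so the generated primary key is readable straight off the object. A hedged sketch, assuming the users table's primary-key column is mapped as "id":

        user = User()
        user.login = request.params['login']
        user.password = hashlib.sha512(request.params['password']).hexdigest()
        user.email = request.params['email']
        user.save()
        new_id = user.id  # populated by SQLAlchemy once the INSERT has flushed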

    Read the article

  • How to insert JSP functionality in Servlets?

    - by chustar
How can I get the HTML-friendliness of JSP from my servlets without having all my client-facing pages named *.jsp? I would rather do this than use all the response.write() stuff, because I think it is easier to read and maintain when it is all clean "HTML". Is this a fair assessment?
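    One conventional way to get both clean URLs and JSP-rendered HTML is to map the servlet to the public path and have it forward to a JSP kept under /WEB-INF/, where it cannot be requested directly. A sketch with placeholder class, attribute, and path names:

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class ProfileServlet extends HttpServlet {
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                req.setAttribute("user", "chustar"); // data the JSP will render
                // Delegate the HTML to a JSP that is not directly reachable.
                req.getRequestDispatcher("/WEB-INF/views/profile.jsp")
                   .forward(req, resp);
            }
        }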

    Read the article

  • Qt creator, insert custom menu at specified place into menu bar

    - by user363778
Hi, I have created a menu bar and some menus with Qt Creator. One of the menus had to be coded by hand to use QActionGroup features. Now it is easy to add my custom menu to the menu bar with:

        printMenu = menuBar()->addMenu(tr("&Print"));

    but my menu will be in the last position of the menu bar. How do I add my menu at a specified place? (e.g. the second place, right after the File menu) Greetings
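    QMenuBar::insertMenu() places a menu before a given QAction, and every QMenu exposes its own anchor via menuAction(). A hedged sketch, where editMenu stands for whatever menu currently sits in the target slot:

        // Insert printMenu in front of editMenu, i.e. right after File.
        QMenu *printMenu = new QMenu(tr("&Print"), this);
        menuBar()->insertMenu(editMenu->menuAction(), printMenu);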

    Read the article

  • Eclipse code template to insert a bookmark?

    - by Mike
Eclipse has a nifty feature which allows you to define "templates" for code. I have created one to automatically put in a println and add a "TODO" comment. I'd like for this to also add a bookmark so I can easily find it again. (The codebase I am working with makes it unfeasible to use just the Task List to find what I need to do, since there are a lot of TODOs lying around.) My current template is simply:

        System.out.println("don't commit me!"); //TODO: fix this ${cursor}

    Read the article

  • Insert an ajaxified Webpart into an existing MOSS site

    - by mamoo
Hi everybody, I need to code a webpart whose purpose is to asynchronously fetch some documents and display them in an existing page. Unfortunately I have to face a lot of restrictions, and my struggle to find a solution seems useless so far.

    1) I cannot use Microsoft ASP.NET AJAX.
    2) I must use JSONP, because the called service (page, whatever...) is outside the site's domain. That's not a big problem.
    3) I have no possibility to alter the existing page code, so I cannot reference an external library such as jQuery.
    4) For the same reason I have no possibility to call my methods on the window.onload event, so here the question is: how can I be sure that everything is correctly loaded before triggering my Ajax call? (see the sketch below)
    5) Since several instances of the same webpart can be placed on the same page, can there be conflicts among the various JS functions?
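    For point 4, a script can chain onto any onload handler the page already registered instead of replacing it, and JSONP itself is just an injected script tag. A hedged sketch; the service URL and callback names are placeholders:

        (function () {
            var previous = window.onload;  // whatever the page already set
            window.onload = function () {
                if (previous) previous();  // keep the existing behaviour
                // JSONP: inject a script tag pointing at the remote service.
                var s = document.createElement('script');
                s.src = 'http://other-domain.example/docs?callback=handleDocs';
                document.getElementsByTagName('head')[0].appendChild(s);
            };
        })();

        function handleDocs(data) {
            // Render the fetched documents into this webpart's container.
            // Point 5: prefix function names per webpart instance to avoid clashes.
        }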

    Read the article

  • Unit test insert/update/delete

    - by Kurresmack
Hey, I have googled this a little and didn't really find the answer I needed. I am working on a webpage in C# with MS SQL and LINQ for a customer. I want the users to be able to send messages to each other, so I unit test this with data that actually goes into the database. The problem is that I now depend on having at least two users whose IDs I know. Furthermore, I have to clean up after myself. This leads to rather large unit tests that test a lot in one test. Let's say I would like to update a user: that would mean that I would have to create the user, update it, and then delete it. That is a lot of assertions in one unit test, and if the update fails I have to delete the user manually. If I did it any other way, without saving the data to the DB, I would not be able to know for sure that the data was present in the database after updating, etc. What is the proper way to do this without having a test that tests a lot of functionality in one test?
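    One common pattern is to open a TransactionScope per test and never complete it, so every insert/update rolls back automatically and no cleanup code is needed. A hedged sketch; the repository calls are hypothetical and the test framework attributes (NUnit here) are an assumption:

        using System.Transactions;
        using NUnit.Framework; // assumed framework; the pattern is the same elsewhere

        [TestFixture]
        public class MessageTests
        {
            private TransactionScope _scope;

            [SetUp]
            public void SetUp()
            {
                // Every DB connection opened below enlists in this transaction.
                _scope = new TransactionScope();
            }

            [TearDown]
            public void TearDown()
            {
                // Complete() is never called, so disposing rolls everything back.
                _scope.Dispose();
            }

            [Test]
            public void UpdateUser_ChangesName()
            {
                int id = UserRepository.Create("temp-user"); // hypothetical API
                UserRepository.UpdateName(id, "new name");
                Assert.AreEqual("new name", UserRepository.Get(id).Name);
            }
        }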

    Read the article

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database and I'm looking for ways to speed up the process. I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            )
        )

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key. The %Disk Time performance counter spends a lot of time at 100%, so it is clear that disk I/O is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    - Drop the primary key while I am doing the inserting and recreate it later?
    - Do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small?
    - Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access the data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for import?

    Chopeen: The data is being generated remotely on many other machines (my SQL Server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps.

    ~ Andrew
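    For reference, the SqlBulkCopy knobs that usually matter for this shape of load are batch size, table locking, and transaction handling. A hedged sketch; the reader and connection string are assumed to exist in the caller:

        using (var bulk = new SqlBulkCopy(connectionString,
                SqlBulkCopyOptions.TableLock |            // one bulk-update lock
                SqlBulkCopyOptions.UseInternalTransaction))
        {
            bulk.DestinationTableName = "BulkData";
            bulk.BatchSize = 5000;      // fewer, larger round-trips than 300-row chunks
            bulk.WriteToServer(reader); // any IDataReader over the source rows
        }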

    Read the article

  • Jquery Insert HTML

    - by danit
I'm using http://jquery.malsup.com/form/#getting-started to submit my form, Ajax-style. The form submits correctly and the data is sent; however, the fields on the form still contain data afterwards. How can I clear them on submit? Hiding the form and displaying a thank-you message would also be acceptable.

        $(document).ready(function() {
            $('#pollform').ajaxForm(function() {
                $('#pollform').hide();
            });
        });
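    The plugin also accepts an options object rather than a bare callback, including a clearForm flag and a success handler, which covers both variants. A hedged sketch, assuming a hidden #thanks element exists for the message:

        $(document).ready(function() {
            $('#pollform').ajaxForm({
                clearForm: true,          // empty the fields after a good submit
                success: function() {
                    $('#pollform').hide();
                    $('#thanks').show();  // hypothetical thank-you element
                }
            });
        });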

    Read the article
