Search Results

Search found 47 results on 2 pages for 'bulkinsert'.

Page 1/2 | 1 2  | Next Page >

  • Bulkinsert from CSV into db (C#) -> max number of rows in a web application?

    - by Swoosh
    Web application - C#, .Net, SQL 2k5. I recently used bulkinsert on another application and I thought I would give it a try here. I am going to receive a CSV file with 1000 rows, which will most likely add 500 000 (that is five hundred thousand) records to the database. I have no idea yet whether this huge amount is going to work out well. I am afraid that it will time out. I haven't done any testing yet, but I am pretty sure it would time out eventually. Is there a way to make it not time out (I don't know ... split the bulkinsert into 1000 pieces :D), or should I try something like BCP with a SQL job ...

    Read the article
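
    For the question above: if the rows can be parsed into a DataTable (or streamed through an IDataReader), SqlBulkCopy is usually the in-process alternative to BCP or a SQL job, and its timeout is configurable independently of the ASP.NET request timeout, so long imports are often moved off the request path anyway. A minimal sketch, assuming the CSV has already been parsed; the table name and connection string are placeholders:

      using System.Data;
      using System.Data.SqlClient;

      static void BulkLoadCsvRows(DataTable csvRows, string connectionString)
      {
          // csvRows: a DataTable whose columns match the destination table.
          using (SqlConnection conn = new SqlConnection(connectionString))
          {
              conn.Open();
              using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
              {
                  bulk.DestinationTableName = "dbo.ImportedRows"; // placeholder name
                  bulk.BatchSize = 5000;      // commit in chunks rather than one 500k-row batch
                  bulk.BulkCopyTimeout = 0;   // 0 disables the bulk-copy timeout
                  bulk.WriteToServer(csvRows);
              }
          }
      }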

  • How to compare two TXT files before send it to SQL

    - by adopilot
    I have to handle TXT .dat files coming from an embedded device. My problem is that the device always sends all captured data, but I only want to take the differences between two sends and do calculations on them. After the calculation I send the data to SQL using a bulk insert. I want to extract the data that is different relative to the first file I got from the device. Let's say the first time the device sends data like this in a some.dat (ASCII) file: 0000199991 0000199321 0000132913 0000232318 0000312898 On the second call to get data, the device returns everything again (previous and newly captured records), something like this: 0000199991 0000199321 0000132913 0000232318 0000312898 9992129990 8782999022 2323423456 But this time I only want to calculate and pass through the data added after the first insert. I am trying to make a WinForms app using C# and Visual Studio 2008.

    Read the article
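
    A minimal sketch for the diffing step described above, assuming each record sits on its own line and a copy of the previously received file is kept around; it returns only the lines that were not in the previous file, ready for the calculation and the bulk insert:

      using System.Collections.Generic;
      using System.IO;
      using System.Linq;

      // Returns the records present in the current capture but not in the previous one.
      static string[] GetNewRecords(string previousFile, string currentFile)
      {
          HashSet<string> previous = new HashSet<string>(File.ReadAllLines(previousFile));
          return File.ReadAllLines(currentFile)
                     .Where(line => line.Trim().Length > 0 && !previous.Contains(line))
                     .ToArray();
      }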

  • bulk insert and update with ADO.NET Entity Framework

    - by Keith Barrows
    I am writing a small application that does a lot of feed processing. I want to use LINQ EF for this as speed is not an issue, it is a single user app and, in the end, will only be used once a month. My question revolves around the best way to do bulk inserts using LINQ EF. After parsing the incoming data stream I end up with a List of values. Since the end user may end up trying to import some duplicate data I would like to "clean" the data during insert rather than reading all the records, doing a for loop, rejecting records, then finally importing the remainder. This is what I am currently doing: DateTime minDate = dataTransferObject.Min(c => c.DoorOpen); DateTime maxDate = dataTransferObject.Max(c => c.DoorOpen); using (LabUseEntities myEntities = new LabUseEntities()) { var recCheck = myEntities.ImportDoorAccess.Where(a => a.DoorOpen >= minDate && a.DoorOpen <= maxDate).ToList(); if (recCheck.Count > 0) { foreach (ImportDoorAccess ida in recCheck) { DoorAudit da = dataTransferObject.Where(a => a.DoorOpen == ida.DoorOpen && a.CardNumber == ida.CardNumber).First(); if (da != null) da.DoInsert = false; } } ImportDoorAccess newIDA; foreach (DoorAudit newDoorAudit in dataTransferObject) { if (newDoorAudit.DoInsert) { newIDA = new ImportDoorAccess { CardNumber = newDoorAudit.CardNumber, Door = newDoorAudit.Door, DoorOpen = newDoorAudit.DoorOpen, Imported = newDoorAudit.Imported, RawData = newDoorAudit.RawData, UserName = newDoorAudit.UserName }; myEntities.AddToImportDoorAccess(newIDA); } } myEntities.SaveChanges(); } I am also getting this error: System.Data.UpdateException was unhandled Message="Unable to update the EntitySet 'ImportDoorAccess' because it has a DefiningQuery and no <InsertFunction> element exists in the <ModificationFunctionMapping> element to support the current operation." Source="System.Data.SqlServerCe.Entity" What am I doing wrong? Any pointers are welcome.

    Read the article
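
    Two notes on the question above, hedged since the model itself isn't shown: the "DefiningQuery" error is commonly reported when the target table has no primary key, so Entity Framework maps it as a read-only view; and the nested Where(...).First() makes the duplicate check roughly O(n x m) and would throw when there is no match (FirstOrDefault() is what the null check expects). A sketch of the same check against a HashSet of composite keys, reusing the names from the question and assuming DoorOpen is a DateTime:

      // using System.Collections.Generic; using System.Linq;
      // Build a lookup of (DoorOpen, CardNumber) pairs that already exist,
      // then flag each incoming row with a single hash probe.
      HashSet<string> existingKeys = new HashSet<string>(
          recCheck.Select(r => r.DoorOpen.Ticks + "|" + r.CardNumber));

      foreach (DoorAudit newDoorAudit in dataTransferObject)
      {
          newDoorAudit.DoInsert =
              !existingKeys.Contains(newDoorAudit.DoorOpen.Ticks + "|" + newDoorAudit.CardNumber);
      }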

  • SQL Server 2005: Improving performance for thousands of insert requests. logout-login time = 120ms.

    - by Rad
    Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is the shortened output of SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout events for each iteration of the for loop below. I can see that there is constant switching between the tempdb and master databases, which leads me to conclude that we lose the context when the next connection is created by fetching it from the pool based on the ConnectionString argument. I can see that every 15ms I get 100-200 logins/logouts per second (reported at the same time by Profiler). Then after 15ms I again have a series of 100-200 logins/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment. I use Enterprise Library 2006, the code is compiled with VS 2005 and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and runs two stored procedures on a remote SQL Server 2005: it inserts a parent record, retrieves the Identity value and, using it, calls the second stored procedure 1, 2 or more times (sometimes several thousand) inserting child records. The child table has close to 10 million records with 5-10 indexes, some of them covering non-clustered. There is a pretty complex Insert trigger that copies the inserted detail record to an archive table. All in all I only have 7 inserts per second, which means it can take 2-4 hours for 50 thousand records. When I run Profiler on the test server (which is almost equivalent to the production server) I can see that there is about 120ms between Audit Logout and Audit Login trace entries, which leaves me a chance to insert only about 8 records per second. So my question is whether there is some way to improve the inserting of records, since the company loads 100 thousand records, does daily planning and has an SLA to fulfill client requests coming in as flat file orders, and some big files of 10 thousand rows have to be processed (imported) quickly. 4 hours to import 60 thousand records should be reduced to 30 minutes. I was thinking of using the DataAdapter's BatchSize to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and statistics population, and maybe that has to take some time to finish. What is worse is that the company uses the biggest table for reporting and other online processing, and the indexes cannot be dropped. I manage transactions manually by setting a field to a value and doing a transactional update, changing that value to a new value that other applications use to get committed rows. Please advise how to approach this problem. For now I am trying to use staging tables with minimal logging in a separate database and no indexes, and I will try to do batched (massive) parent-child inserts. I believe the production DB has the simple recovery model, but it could be full recovery. If the DB user used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered index and many non-clustered indexes, inserts are still logged for each row. Connection pooling is working, but with many logins/logouts. Why?
    for (int i = 1; i <= 10000; i++) {
        using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;")) {
            conn.Open();
            using (SqlCommand cmd = conn.CreateCommand()) {
                cmd.CommandText = "use tempdb";
                cmd.ExecuteNonQuery();
            }
        }
    }

    SQL Server Profiler trace:
    Audit Login                      master  2010-01-13 23:18:45.337  1 - Nonpooled
    SQL:BatchStarting  use tempdb    master  2010-01-13 23:18:45.337
    RPC:Starting  exec sp_reset_conn tempdb  2010-01-13 23:18:45.337
    Audit Logout                     tempdb  2010-01-13 23:18:45.337  2 - Pooled
    Audit Login -- network protocol  master  2010-01-13 23:18:45.383  2 - Pooled
    SQL:BatchStarting  use tempdb    master  2010-01-13 23:18:45.383
    RPC:Starting  exec sp_reset_conn tempdb  2010-01-13 23:18:45.383
    Audit Logout                     tempdb  2010-01-13 23:18:45.383  2 - Pooled
    Audit Login -- network protocol  master  2010-01-13 23:18:45.383  2 - Pooled
    SQL:BatchStarting  use tempdb    master  2010-01-13 23:18:45.383
    RPC:Starting  exec sp_reset_conn tempdb  2010-01-13 23:18:45.383
    Audit Logout                     tempdb  2010-01-13 23:18:45.383  2 - Pooled

    Read the article
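
    One thing worth trying for the pattern described above (one parent insert followed by thousands of child inserts) is to hold a single connection and transaction open for the whole parent/child batch instead of letting each call go back to the pool, which removes the per-call Audit Login/Logout and sp_reset_connection traffic. A rough sketch; the procedure and parameter names are invented:

      using System.Collections.Generic;
      using System.Data;
      using System.Data.SqlClient;

      static void InsertParentWithChildren(string connectionString,
                                           string parentName, IEnumerable<string> childValues)
      {
          using (SqlConnection conn = new SqlConnection(connectionString))
          {
              conn.Open();                                   // one login for the whole batch
              using (SqlTransaction tx = conn.BeginTransaction())
              {
                  SqlCommand parentCmd = new SqlCommand("dbo.InsertParent", conn, tx); // hypothetical proc
                  parentCmd.CommandType = CommandType.StoredProcedure;
                  parentCmd.Parameters.AddWithValue("@Name", parentName);
                  SqlParameter idParam = parentCmd.Parameters.Add("@ParentId", SqlDbType.Int);
                  idParam.Direction = ParameterDirection.Output;
                  parentCmd.ExecuteNonQuery();

                  SqlCommand childCmd = new SqlCommand("dbo.InsertChild", conn, tx);   // hypothetical proc
                  childCmd.CommandType = CommandType.StoredProcedure;
                  childCmd.Parameters.Add("@ParentId", SqlDbType.Int).Value = (int)idParam.Value;
                  SqlParameter valueParam = childCmd.Parameters.Add("@Value", SqlDbType.NVarChar, 100);

                  foreach (string value in childValues)
                  {
                      valueParam.Value = value;              // reuse the prepared parameter set
                      childCmd.ExecuteNonQuery();
                  }
                  tx.Commit();
              }
          }
      }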

  • How to bulk insert a CSV file into SQLite C#

    - by Lirik
    I have seen similar questions (1, 2), but none of them discuss how to insert CSV files into SQLite. About the only thing I could think of doing is to use a CSVDataAdapter and fill the SQLiteDataSet, then use the SQLiteDataSet to update the tables in the database: The only DataAdapter for CSV files I found is not actually available: CSVDataAdapter CSVda = new CSVDataAdapter(@"c:\MyFile.csv"); CSVda.HasHeaderRow = true; DataSet ds = new DataSet(); // <-- Use an SQLiteDataSet instead CSVda.Fill(ds); To write to a CSV file: CSVDataAdapter CSVda = new CSVDataAdapter(@"c:\MyFile.csv"); bool InclHeader = true; CSVda.Update(MyDataSet,"MyTable",InclHeader); I found the above code @ http://devintelligence.com/2005/02/dataadapter-for-csv-files/ The CSVDataAdapter was supposed to come with OpenNetCF's SDF, but it doesn't seem to be available anymore. Does anybody know where I can get a CSVDataAdapter? Perhaps somebody knows the much simpler thing: how to do bulk inserts of CSV files into SQLite... your help would be greatly appreciated!

    Read the article
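
    Short of a CSVDataAdapter, a workable sketch is to parse the file yourself and run parameterized inserts inside a single transaction, which is where most of the speed in SQLite comes from. This assumes the System.Data.SQLite provider, a two-column table, and a CSV with no quoted fields:

      using System.Data;
      using System.Data.SQLite;
      using System.IO;

      static void ImportCsv(string csvPath, string connectionString)
      {
          using (SQLiteConnection conn = new SQLiteConnection(connectionString))
          {
              conn.Open();
              using (SQLiteTransaction tx = conn.BeginTransaction())
              using (SQLiteCommand cmd = new SQLiteCommand(
                  "INSERT INTO MyTable (Col1, Col2) VALUES (@c1, @c2)", conn, tx))
              {
                  SQLiteParameter p1 = cmd.Parameters.Add("@c1", DbType.String);
                  SQLiteParameter p2 = cmd.Parameters.Add("@c2", DbType.String);
                  foreach (string line in File.ReadAllLines(csvPath))
                  {
                      string[] fields = line.Split(',');   // naive split: no quoted fields
                      p1.Value = fields[0];
                      p2.Value = fields[1];
                      cmd.ExecuteNonQuery();
                  }
                  tx.Commit();
              }
          }
      }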

  • mongodb: insert if not exists

    - by LeMiz
    Hello. Every day I receive a batch of documents (an update). What I want to do is insert each of them if it does not exist. I also want to keep track of the first time I inserted them and the last time I saw them in an update. I don't want to have duplicate documents. I don't want to remove a document which has previously been saved but is not in my update. 95% (estimated) of the records are unmodified from day to day. I am using the Python driver (pymongo), for that matter. What I currently do is (pseudo-code): for each document in update: existing_document = collection.find_one(document) if not existing_document: document['insertion_date'] = now else: document = existing_document document['last_update_date'] = now my_collection.save(document) My problem is that it is very slow (40 mins for less than 100 000 records, and I have millions of them in the update). I am pretty sure there is something built in for doing this, but the documentation for update() is, mmmhhh.... a bit terse.... ( http://www.mongodb.org/display/DOCS/Updating ) Can someone give advice on doing this faster?

    Read the article

  • insert multiple rows via a php array into mysql

    - by toofarsideways
    I'm passing a large dataset into a MySQL table via PHP using insert commands, and I'm wondering if it's possible to insert approximately 1000 rows at a time via a query other than appending each value to the end of a mile-long string and then executing it. I am using the CodeIgniter framework, so its functions are also available to me.

    Read the article

  • bulk insert image from relative path

    - by Markus
    Hi, I wonder if someone can help me out with this little problem. I have the following insert statement: insert into symbol (sy_id, sy_fg_color, sy_bg_color, sy_icon) select 302, 0, 16245177, sy_icon = (select * from openrowset(bulk 'K:\mypath\icons\myicon.png', single_blob) as image) Is it possible to make the path relative in any way? I'm using TFS to deploy the database, so if it's not possible to make it relative with T-SQL, maybe it can be done with a little help from the TFS/Visual Studio deployment?

    Read the article

  • Can Sql Server BULK INSERT read from a named pipe/fifo?

    - by Peter
    Is it possible for BULK INSERT/bcp to read from a named pipe, fifo-style? That is, rather than reading from a real text file, can BULK INSERT/bcp be made to read from a named pipe that another process is writing to? For example: create a named pipe; unzip a file to the named pipe; read from the named pipe with bcp or BULK INSERT. Or: create 4 named pipes; split 1 file into 4 streams, writing each stream to a separate named pipe; read from the 4 named pipes into 4 tables w/ bcp or BULK INSERT. The closest I've found was this fellow (site now unreachable), who managed to write to a named pipe w/ bcp, with his own utility and usage like so: start /MIN ZipPipe authors_pipe authors.txt.gz 9 bcp pubs..authors out \\.\pipe\authors_pipe -T -n But he couldn't get the reverse to work. So before I head off on a fool's errand, I'm wondering whether it's fundamentally possible to read from a named pipe w/ BULK INSERT or bcp. And if it is possible, how would one set it up? Would NamedPipeServerStream or something else in the .NET System.IO.Pipes namespace be adequate? E.g., an example using PowerShell: [reflection.Assembly]::LoadWithPartialName("system.core") $pipe = New-Object system.IO.Pipes.NamedPipeServerStream("Bob") And then....what?

    Read the article
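
    Whether bcp or BULK INSERT will actually accept \\.\pipe\... as a source is exactly the open question here, and this sketch does not settle it; for what it's worth, the producer half in .NET (the part after "And then....what?") would look roughly like the following, which creates the pipe, waits for a reader to connect, and streams a file into it:

      using System.IO;
      using System.IO.Pipes;

      static void ServeFileOverPipe(string pipeName, string sourceFile)
      {
          using (NamedPipeServerStream pipe = new NamedPipeServerStream(pipeName, PipeDirection.Out))
          {
              pipe.WaitForConnection();              // blocks until a reader opens the pipe
              using (FileStream source = File.OpenRead(sourceFile))
              {
                  byte[] buffer = new byte[64 * 1024];
                  int read;
                  while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                      pipe.Write(buffer, 0, read);   // the data could equally come from a GZipStream
              }
          }
      }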

  • Bulk Copy from one server to another

    - by Joseph
    Hi all, I have a situation where I need to copy part of the data from one server to another. The table schemas are exactly the same. I need to move partial data from the source, which may or may not already be available in the destination table. The solution I'm thinking of is to use bcp to export the data to a text (or .dat) file, take that file to the destination (as both servers are not accessible at the same time - different networks), and then import the data at the destination. There are some conditions I need to satisfy. 1. I need to export only a list of data from the table, not the whole table. My client is going to give me the IDs which need to be moved from source to destination. I have around 3000 records in the master table, and the same in the child tables too. What I expect is that only 300 records will be moved. 2. If a record exists in the destination, the client is going to instruct, case by case, whether to ignore it or overwrite it. 90% of the time we need to ignore the records without overwriting, but log the records in a log file. Please help me with the best approach. I thought of using BCP with the query option to filter the data, but while importing, how do I bypass inserting the existing records? If I need to overwrite, how do I do it? Thanks a lot in advance. ~Joseph

    Read the article
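
    For requirement 2, a common pattern is to bulk-load the exported file into a staging table on the destination and then insert only the rows whose IDs are not already present, logging the ones that were skipped. A rough sketch of that second step; table, column and procedure-free SQL here are invented, and the staging load itself is assumed to have been done with bcp or SqlBulkCopy:

      using System.Data.SqlClient;
      using System.IO;

      static void MergeStagedRows(string connectionString, TextWriter log)
      {
          const string insertNewRowsSql = @"
              INSERT INTO dbo.Master (Id, Name, CreatedOn)
              SELECT s.Id, s.Name, s.CreatedOn
              FROM dbo.StagingMaster AS s
              WHERE NOT EXISTS (SELECT 1 FROM dbo.Master AS m WHERE m.Id = s.Id);";

          const string selectSkippedRowsSql = @"
              SELECT s.Id FROM dbo.StagingMaster AS s
              WHERE EXISTS (SELECT 1 FROM dbo.Master AS m WHERE m.Id = s.Id);";

          using (SqlConnection conn = new SqlConnection(connectionString))
          {
              conn.Open();
              using (SqlCommand insertCmd = new SqlCommand(insertNewRowsSql, conn))
                  insertCmd.ExecuteNonQuery();

              using (SqlCommand selectCmd = new SqlCommand(selectSkippedRowsSql, conn))
              using (SqlDataReader reader = selectCmd.ExecuteReader())
              {
                  while (reader.Read())
                      log.WriteLine("Skipped existing Id {0}", reader.GetInt32(0));
              }
          }
      }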

  • MySql BulkCopy/Insert from DataReader

    - by Sky Sanders
    I am loading a bunch of rows into MySql in C#. In MS Sql I can feed a DataReader to SqlBulkCopy, but the MySqlBulkCopy only presents itself as a bootstrap for a load from file. So, my current solution is using a prepared command in a transacted loop. Is there a faster way to accomplish bulk loading of MySql using a DataReader source?

    Read the article
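
    Assuming the class meant above is Connector/NET's MySqlBulkLoader (which wraps LOAD DATA INFILE and only reads from a file), one workaround is to spool the DataReader into a temporary tab-delimited file and hand that to the loader. A rough sketch, with escaping, NULLs and character-set details deliberately left out:

      using System;
      using System.Data;
      using System.IO;
      using MySql.Data.MySqlClient;

      static void BulkLoadFromReader(IDataReader reader, MySqlConnection conn, string tableName)
      {
          string tempFile = Path.GetTempFileName();
          using (StreamWriter writer = new StreamWriter(tempFile))
          {
              object[] values = new object[reader.FieldCount];
              while (reader.Read())
              {
                  reader.GetValues(values);
                  writer.WriteLine(string.Join("\t",
                      Array.ConvertAll<object, string>(values, v => Convert.ToString(v))));
              }
          }

          MySqlBulkLoader loader = new MySqlBulkLoader(conn);
          loader.TableName = tableName;
          loader.FileName = tempFile;
          loader.FieldTerminator = "\t";
          loader.LineTerminator = "\n";
          loader.Load();
          File.Delete(tempFile);
      }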

  • In TSQL (SQL Server), How do I insert multiple rows WITHOUT repeating the "INSERT INTO dbo.Blah" par

    - by Timothy Khouri
    I know I've done this before years ago, but I can't remember the syntax, and I can't find it anywhere due to pulling up tons of help docs and articles about "bulk imports". Here's what I want to do, but the syntax is not exactly right... please, someone who has done this before, help me out :) INSERT INTO dbo.MyTable (ID, Name) VALUES (123, 'Timmy'), (124, 'Jonny'), (125, 'Sally') I know that this is close to the right syntax. I might need the word "BULK" in there, or something, I can't remember. Any idea?

    Read the article
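
    For reference: no BULK keyword is needed. The row-constructor form shown in the question is valid T-SQL on SQL Server 2008 and later (on 2005 the usual fallbacks are INSERT ... SELECT ... UNION ALL or one INSERT per row). A tiny sketch issuing it from C#, with the table name taken from the question:

      using System.Data.SqlClient;

      static void InsertThreeRows(string connectionString)
      {
          // Row-constructor INSERT (SQL Server 2008+); no BULK keyword involved.
          const string sql = @"
              INSERT INTO dbo.MyTable (ID, Name)
              VALUES (123, 'Timmy'),
                     (124, 'Jonny'),
                     (125, 'Sally');";
          using (SqlConnection conn = new SqlConnection(connectionString))
          using (SqlCommand cmd = new SqlCommand(sql, conn))
          {
              conn.Open();
              cmd.ExecuteNonQuery();
          }
      }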

  • How to bulk insert from CSV when some fields have new line character?

    - by z-boss
    I have a CSV dump from another DB that looks like this (id, name, notes): 1001,John Smith,15 Main Street 1002,Jane Smith,"2010 Rockliffe Dr. Pleasantville, IL USA" 1003,Bill Karr,2820 West Ave. The last field may contain carriage returns and commas, in which case it is surrounded by double quotes. I use this code to import the CSV into my table: BULK INSERT CSVTest FROM 'c:\csvfile.csv' WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' ) SQL Server 2005 bulk insert cannot figure out that carriage returns inside quotes are not row terminators. How do I overcome this?

    Read the article
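
    BULK INSERT on SQL Server 2005 has no notion of quoted fields, so one common workaround is to parse the file client-side with a quote-aware reader and push the rows in with SqlBulkCopy. TextFieldParser (from Microsoft.VisualBasic.FileIO, usable from C#) handles both the embedded commas and the embedded line breaks inside quoted fields. A sketch, assuming the three columns from the question and string-typed staging columns:

      using System.Data;
      using System.Data.SqlClient;
      using Microsoft.VisualBasic.FileIO;   // add a reference to Microsoft.VisualBasic.dll

      static void ImportQuotedCsv(string csvPath, string connectionString)
      {
          DataTable table = new DataTable();
          table.Columns.Add("id");
          table.Columns.Add("name");
          table.Columns.Add("notes");

          using (TextFieldParser parser = new TextFieldParser(csvPath))
          {
              parser.TextFieldType = FieldType.Delimited;
              parser.SetDelimiters(",");
              parser.HasFieldsEnclosedInQuotes = true;   // handles "..." containing commas and newlines
              while (!parser.EndOfData)
                  table.Rows.Add(parser.ReadFields());
          }

          using (SqlBulkCopy bulk = new SqlBulkCopy(connectionString))
          {
              bulk.DestinationTableName = "dbo.CSVTest";  // table name from the question
              bulk.WriteToServer(table);
          }
      }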

  • SQL Server Bulk insert of CSV file with inconsistent quotes

    - by mattstuehler
    Is it possible to BULK INSERT (SQL Server) a CSV file in which the fields are only OCCASIONALLY surrounded by quotes? Specifically, quotes only surround those fields that contain a ",". In other words, I have data that looks like this (the first row contains headers): id, company, rep, employees 729216,INGRAM MICRO INC.,"Stuart, Becky",523 729235,"GREAT PLAINS ENERGY, INC.","Nelson, Beena",114 721177,GEORGE WESTON BAKERIES INC,"Hogan, Meg",253 Because the quotes aren't consistent, I can't use '","' as a delimiter, and I don't know how to create a format file that accounts for this. I tried using ',' as a delimiter and loading it into a temporary table where every column is a varchar, then using some kludgy processing to strip out the quotes, but that doesn't work either, because the fields that contain ',' are split into multiple columns. Unfortunately, I don't have the ability to manipulate the CSV file beforehand. Is this hopeless? Many thanks in advance for any advice. By the way, I saw this post SQL bulk import from csv, but in that case, EVERY field was consistently wrapped in quotes. So, in that case, he could use ',' as a delimiter, then strip out the quotes afterwards.

    Read the article

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end. I have a 100,000 line csv file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this is a fairly straight-forward load, this can be trivially loaded by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous model. For example: Model C --> Foreign Key --> Model B --> Foreign Key --> Model A So, the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach: - instantiate a multiprocessing.Process which contains a threadpool of 50 persister threads that have a threadlocal connection to a database - read a line from the file using the csv DictReader - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and each thread persists the models in the appropriate order This was faster than a non-threaded read/persist but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it in SQL statements; it took 5 minutes. Writing the SQL statements took me a couple of hours, though. So my question is, could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just use a bulk-load into the database? I know about str(model_object) but it does not show the interpolated values. I would appreciate any guidance for how to do this faster. Thanks!

    Read the article

  • BULK INSERT problem in mysql

    - by kartiku
    Hi, I get an error with the following sql command for bulk insert....any help would be appreciated. BULK INSERT libra.faculty FROM 'd\:faculty.csv' WITH ( FIELDTERMINATOR = ',', ROWTERMINATOR = '\n' ); Here's the error message ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'BULK INSERT libra.faculty FROM 'd:\faculty.csv' WITH ( FIELDTERMINATOR = ',', RO' at line 1

    Read the article

  • Bulk inserts into sqlite db on the iphone...

    - by akaii
    I'm inserting a batch of 100 records, each containing a dictonary containing arbitrarily long HTML strings, and by god, it's slow. On the iphone, the runloop is blocking for several seconds during this transaction. Is my only recourse to use another thread? I'm already using several for acquiring data from HTTP servers, and the sqlite documentation explicitly discourages threading with the database, even though it's supposed to be thread-safe... Is there something I'm doing extremely wrong that if fixed, would drastically reduce the time it takes to complete the whole operation? NSString* statement; statement = @"BEGIN EXCLUSIVE TRANSACTION"; sqlite3_stmt *beginStatement; if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &beginStatement, NULL) != SQLITE_OK) { printf("db error: %s\n", sqlite3_errmsg(database)); return; } if (sqlite3_step(beginStatement) != SQLITE_DONE) { sqlite3_finalize(beginStatement); printf("db error: %s\n", sqlite3_errmsg(database)); return; } NSTimeInterval timestampB = [[NSDate date] timeIntervalSince1970]; statement = @"INSERT OR REPLACE INTO item (hash, tag, owner, timestamp, dictionary) VALUES (?, ?, ?, ?, ?)"; sqlite3_stmt *compiledStatement; if(sqlite3_prepare_v2(database, [statement UTF8String], -1, &compiledStatement, NULL) == SQLITE_OK) { for(int i = 0; i < [items count]; i++){ NSMutableDictionary* item = [items objectAtIndex:i]; NSString* tag = [item objectForKey:@"id"]; NSInteger hash = [[NSString stringWithFormat:@"%@%@", tag, ownerID] hash]; NSInteger timestamp = [[item objectForKey:@"updated"] intValue]; NSData *dictionary = [NSKeyedArchiver archivedDataWithRootObject:item]; sqlite3_bind_int( compiledStatement, 1, hash); sqlite3_bind_text( compiledStatement, 2, [tag UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_text( compiledStatement, 3, [ownerID UTF8String], -1, SQLITE_TRANSIENT); sqlite3_bind_int( compiledStatement, 4, timestamp); sqlite3_bind_blob( compiledStatement, 5, [dictionary bytes], [dictionary length], SQLITE_TRANSIENT); while(YES){ NSInteger result = sqlite3_step(compiledStatement); if(result == SQLITE_DONE){ break; } else if(result != SQLITE_BUSY){ printf("db error: %s\n", sqlite3_errmsg(database)); break; } } sqlite3_reset(compiledStatement); } timestampB = [[NSDate date] timeIntervalSince1970] - timestampB; NSLog(@"Insert Time Taken: %f",timestampB); // COMMIT statement = @"COMMIT TRANSACTION"; sqlite3_stmt *commitStatement; if (sqlite3_prepare_v2(database, [statement UTF8String], -1, &commitStatement, NULL) != SQLITE_OK) { printf("db error: %s\n", sqlite3_errmsg(database)); } if (sqlite3_step(commitStatement) != SQLITE_DONE) { printf("db error: %s\n", sqlite3_errmsg(database)); } sqlite3_finalize(beginStatement); sqlite3_finalize(compiledStatement); sqlite3_finalize(commitStatement);

    Read the article

  • Add data in bulk.

    - by Ashish Rajan
    Hi all, I need your suggestions for this. I need to add data to a MySQL database through the admin interface. Initially I need to add data in bulk, so I thought of using a CSV upload, but how do I add images along with the CSV? I.e., when doing a single add I insert a name, a description and an image via a form, but how do I do the same in bulk? Thanks in advance.

    Read the article

  • Bulk Insert takes 4x as long on first operation of the day

    - by patrick
    I do bulk inserts into a table with about 14 million rows at five minute increments during a 7-hour period during the day. These inserts take somewhere between 9-14 secs. However, the first insert always takes about 40 secs. Does anyone know what SQL Server 2005 would be doing differently on the first insert into a table for that day? From what I've read I should probably use the SqlBulkCopy class instead of just using a bulk insert in a stored procedure. Is that the general consensus?

    Read the article

  • How do you increase the number of processes in parallel with Powershell 3?

    - by Mark Shay
    I am trying to run 20 processes in parallel. I changed the session as below, but having no luck. I am getting only up to 5 parallel processes per session. $wo=New-PSWorkflowExecutionOption -MaxSessionsPerWorkflow 50 -MaxDisconnectedSessions 200 -MaxSessionsPerRemoteNode 50 -MaxActivityProcesses 50 Register-PSSessionConfiguration -Name ITWorkflows -SessionTypeOption $wo -Force Get-PSSessionConfiguration ITWorkflows | Format-List -Property * Is there a switch parameter to increase the number of processes? This is what I am running: Workflow MyWorkflow1 { Parallel { InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 2 and 2975416"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 2975417 and 5950831"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 5950832 and 8926246"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 8926247 and 11901661"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 11901662 and 14877076"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns"where OrderId between 14877077 and 17852491"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 17852492 and 20827906"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 20827907 and 23803321"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 23803322 and 26778736"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 26778737 and 29754151"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 29754152 and 32729566"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 32729567 and 35704981"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 35704982 and 38680396"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 38680397 and 432472144"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 432472145 and 435447559"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 435447560 and 438422974"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 864944289 and 867919703"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 867919704 and 870895118"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 870895119 and 1291465602"} InlineScript { import-module \\PS_Scripts\bulkins.ps1; BulkIns "where OrderId between 1291465603 and 1717986945"} }

    Read the article

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this INSERT INTO InvoiceDetail (LegacyId,InvoiceId,DetailTypeId,Fee,FeeTax,Investigatorid,SalespersonId,CreateDate,CreatedById,IsChargeBack,Expense,RepoAgentId,PayeeName,ExpensePaymentId,AdjustDetailId) VALUES(1,1,2,1500.0000,0.0000,163,1002,'11/30/2001 12:00:00 AM',1116,0,550.0000,850,NULL,@ExpensePay1,NULL); DECLARE @InvDetail1 INT; SET @InvDetail1 = (SELECT @@IDENTITY); This query is generated for only 110K rows. It takes 30 minutes for all of these queries to execute. I checked the query plan and the two largest nodes are a Clustered Index Insert at 57% query cost, which has a long XML that I don't want to post, and a Table Spool at 38% query cost: <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="1" LogicalOp="Eager Spool" NodeId="80" Parallel="false" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109"> <OutputList> <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" /> <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" /> <ColumnReference Column="Expr1054" /> <ColumnReference Column="Expr1055" /> </OutputList> <Spool PrimaryNodeId="3" /> </RelOp> So my question is: what can I do to improve the speed of this thing? I already run ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL before the queries and then ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL after the queries. And that hardly shaved anything off the time. Now, I am running these queries in a .NET application that uses a SqlCommand object to send the query. I then tried to output the sql commands to a file and then execute them using sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas or hints or help?

    Read the article
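
    Much of that 30 minutes is typically per-statement parsing and round-trip overhead rather than the insert itself, so before reaching for SqlBulkCopy it is worth measuring one parameterized command reused inside a single transaction. A sketch showing only a few of the InvoiceDetail columns, with SCOPE_IDENTITY() in place of @@IDENTITY:

      using System.Data;
      using System.Data.SqlClient;

      static void InsertDetails(DataTable details, string connectionString)
      {
          using (SqlConnection conn = new SqlConnection(connectionString))
          {
              conn.Open();
              using (SqlTransaction tx = conn.BeginTransaction())
              using (SqlCommand cmd = new SqlCommand(
                  @"INSERT INTO InvoiceDetail (LegacyId, InvoiceId, Fee)
                    VALUES (@LegacyId, @InvoiceId, @Fee);
                    SELECT CAST(SCOPE_IDENTITY() AS int);", conn, tx))
              {
                  cmd.Parameters.Add("@LegacyId", SqlDbType.Int);
                  cmd.Parameters.Add("@InvoiceId", SqlDbType.Int);
                  cmd.Parameters.Add("@Fee", SqlDbType.Money);

                  foreach (DataRow row in details.Rows)
                  {
                      cmd.Parameters["@LegacyId"].Value = row["LegacyId"];
                      cmd.Parameters["@InvoiceId"].Value = row["InvoiceId"];
                      cmd.Parameters["@Fee"].Value = row["Fee"];
                      int newDetailId = (int)cmd.ExecuteScalar();   // keep for child/adjustment rows
                  }
                  tx.Commit();
              }
          }
      }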

  • Bulk Insert of hundreds of millions of records

    - by Dave Jarvis
    What is the fastest way to insert 237 million records into a table that has rules (for distributing the data across 84 child tables)? First I tried inserts. No go. Then I tried inserts with BEGIN/COMMIT. Not nearly fast enough. Next, I tried COPY FROM, but then noticed the documentation states that the rules are ignored. (And it was having difficulties with the column order and date format -- it said that '1984-07-1' was not a valid integer; true, but a bit unexpected.) Some example data: station_id,taken,amount,category_id,flag 1,'1984-07-1',0,4, 1,'1984-07-2',0,4, 1,'1984-07-3',0,4, 1,'1984-07-4',0,4,T Here is the table structure (with one rule included): CREATE TABLE climate.measurement ( id bigserial NOT NULL, station_id integer NOT NULL, taken date NOT NULL, amount numeric(8,2) NOT NULL, category_id smallint NOT NULL, flag character varying(1) NOT NULL DEFAULT ' '::character varying ) WITH ( OIDS=FALSE ); ALTER TABLE climate.measurement OWNER TO postgres; CREATE OR REPLACE RULE i_measurement_01_001 AS ON INSERT TO climate.measurement WHERE date_part('month'::text, new.taken)::integer = 1 AND new.category_id = 1 DO INSTEAD INSERT INTO climate.measurement_01_001 (id, station_id, taken, amount, category_id, flag) VALUES (new.id, new.station_id, new.taken, new.amount, new.category_id, new.flag); I can generate the data into any format. Am looking for something that won't take four days. I originally had the data in MySQL (still do), but am hoping to get a performance increase by switching to PostgreSQL and am eager to use its PL/R extensions for stats. I was also thinking about using: http://pgbulkload.projects.postgresql.org/ Any help, tips, or guidance would be greatly appreciated. Thank you!

    Read the article

  • Passing Binary Data to a Stored Procedure in SQL Server 2008

    - by Joe Majewski
    I'm trying to figure out a way to store files in a database. I know it's recommended to store files on the file system rather than in the database, but the job I'm working on would strongly prefer using the database to store these images (files). There are also some constraints. I'm not an admin user, and I have to make stored procedures to execute all the commands. This hasn't been too difficult so far, but I cannot for the life of me find a way to store a file (image) in the database. When I try to use the BULK command, I get an error saying "You do not have permission to use the bulk load statement." The bulk utility seemed like the easy way to upload files to the database, but without permissions I have to figure out a workaround. I decided to use an HTML form with a file upload input type and handle it with PHP. The PHP calls the stored procedure and passes in the contents of the file. The problem is that now it's saying that the max length of a parameter can only be 128 characters. Now I'm completely stuck. I don't have permissions to use the bulk command and it appears that the max length of a parameter that I can pass to the SP is 128 characters. I expected to run into problems because binary characters and ASCII characters don't mix well together, but I'm at a dead end... Thanks

    Read the article
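
    A varbinary(max) parameter on a stored procedure can hold up to 2 GB, so the 128-character ceiling is unlikely to be a hard SQL Server limit on the parameter itself. For comparison, this is roughly what passing the raw bytes looks like from ADO.NET (the question uses PHP, so this only illustrates the parameter typing, and the procedure name is invented):

      using System.Data;
      using System.Data.SqlClient;
      using System.IO;

      static void SaveImage(string connectionString, string filePath)
      {
          byte[] contents = File.ReadAllBytes(filePath);
          using (SqlConnection conn = new SqlConnection(connectionString))
          using (SqlCommand cmd = new SqlCommand("dbo.SaveImage", conn))   // hypothetical procedure
          {
              cmd.CommandType = CommandType.StoredProcedure;
              cmd.Parameters.Add("@FileName", SqlDbType.NVarChar, 260).Value = Path.GetFileName(filePath);
              cmd.Parameters.Add("@Contents", SqlDbType.VarBinary, -1).Value = contents;   // -1 = varbinary(max)
              conn.Open();
              cmd.ExecuteNonQuery();
          }
      }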

1 2  | Next Page >