Search Results

Search found 811 results on 33 pages for 'bulk'.

Page 7 of 33

  • Need help with Drupal bulk mail low open rate for legitimate mailing list

    - by Ron Williams
    I've moved from Constant Contact to Drupal Simplenews/Mimemail/SMTP. Previously the open rate was around 50% with Constant Contact, but now it's 4-5% for the same list via the setup mentioned above. Mail is getting out from the server, but something is wrong anyway. Here's the setup:
    - The e-mail list consists of approximately 80,000 addresses, queued at 10,000 e-mails per cron run (which runs hourly).
    - The server is a Dual Core2Quad machine with 2 GB of RAM.
    - When mail is being sent, the mail queue will usually go up to ~1000 at the beginning of the hour before reducing to ~250 by the time the next cron run occurs.
    - The newsletter is themed to display a custom style on send.
    - The newsletter is received by some, but appears to be bounced by many (based on the low open rate).
    - I've added SPF, DomainKeys, and a PTR record to the DNS.
    - The server hostname (listed in the PTR record) is different from the hosted domain.
    - Very low spam score via SpamAssassin.
    - IP and domain are not blacklisted.
    - Mail goes out via the SMTP module on delivery.
    Any ideas?

    Read the article

  • Bulk update + SQL + self join

    - by Nev_Rahd
    Hello all, I would like to update a column in a table with reference to other column(s) in the same table. For example, as in the figure below, I would like to update the effective end date with the minimum date that is greater than the effective start date of the current record. How can this be achieved in T-SQL? Can it be done with a single UPDATE statement? Thanks.
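
    Since the figure isn't included, here is a minimal sketch of one way this is sometimes done in T-SQL, assuming a hypothetical table dbo.MyTable with columns EffectiveStartDate and EffectiveEndDate; OUTER APPLY picks, for each row, the smallest start date that is later than that row's own start date:

        UPDATE t
        SET t.EffectiveEndDate = x.NextStartDate
        FROM dbo.MyTable AS t
        OUTER APPLY (
            -- smallest start date later than the current row's start date
            SELECT MIN(t2.EffectiveStartDate) AS NextStartDate
            FROM dbo.MyTable AS t2
            WHERE t2.EffectiveStartDate > t.EffectiveStartDate
            -- AND t2.SomeKey = t.SomeKey  -- add a key filter here if the date ranges are per entity
        ) AS x;

    This is a single UPDATE, but whether it matches your data depends on the grouping column(s) hidden in the missing figure.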

    Read the article

  • Bulk Compare, Report, Update

    - by Tim Donaldon
    I need to import either a CSV or Excel file into a database. The column headers will match, but I want to compare the file against the database using an ItemID field, list the rows to be affected and the differences, then allow an update to all rows with a matching ID.
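
    One common pattern for this (sketched here with hypothetical table and column names, since the real schema isn't shown) is to bulk load the file into a staging table, report the differences, and then update the matching rows:

        -- 1) load the CSV into a staging table that mirrors the target columns
        BULK INSERT dbo.StagingItems
        FROM 'c:\import\items.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

        -- 2) report the rows that would change, old and new values side by side
        SELECT s.ItemID, t.Price AS OldPrice, s.Price AS NewPrice
        FROM dbo.StagingItems AS s
        JOIN dbo.Items AS t ON t.ItemID = s.ItemID
        WHERE t.Price <> s.Price;

        -- 3) apply the update to all rows with a matching ItemID
        UPDATE t
        SET t.Price = s.Price
        FROM dbo.Items AS t
        JOIN dbo.StagingItems AS s ON s.ItemID = t.ItemID;

    An Excel source would need OPENROWSET or an SSIS/import-wizard step instead of BULK INSERT, but the compare-and-update part stays the same.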

    Read the article

  • How to do bulk update of views?

    - by Shaul
    My database has about 30 views, most of which have a reference to another database on this server (call it DB1). Now, without going into the reasons why, I need to update all those views to DB2, also on the local server. I would hate to have to do this manually on each view. Is there some SQL query I can run that will replace all occurrences of the string 'DB1' with 'DB2' in all my views?
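
    There's no single built-in command for this, but a script along these lines (a sketch, not tested against your schema) can generate the ALTER VIEW statements from the catalog views by swapping the database name inside each view's definition:

        -- generate one ALTER VIEW statement per view that mentions DB1
        SELECT REPLACE(
                   REPLACE(m.definition, 'DB1', 'DB2'),   -- swap the database reference
                   'CREATE VIEW', 'ALTER VIEW') AS AlterStatement
        FROM sys.sql_modules AS m
        JOIN sys.views AS v ON v.object_id = m.object_id
        WHERE m.definition LIKE '%DB1%';
        -- Review the output first (REPLACE follows the database collation and will also hit
        -- 'DB1' appearing in comments or string literals), then run the generated statements.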

    Read the article

  • sql server bulk copy out/postgres copy from infile

    - by Chris Curvey
    I'm starting a conversion of a system from MS SQL Server to Postgres. I have the table structures converted, and I use "bcp" to get the data out of SQL Server. When I try to load the resulting file into Postgres with COPY, I get:
    ERROR: invalid byte sequence for encoding "UTF8": 0x80
    HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
    CONTEXT: COPY cm_outgoing, line 200: "200 c:\temp\200.xml 2009-10-10 01:50:44.000 1900-01-01 00:00:00.000"
    I've already used "sed" to get rid of the NUL (0x00) entries in the file, and I can't find any instances of 0x80 in the file that I'm trying to import. Any thoughts? Is there an easier way?
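
    One thing worth checking, purely as a guess: 0x80 is the euro sign in Windows-1252, so the bcp output may not be UTF-8 at all. If so, telling Postgres the client encoding before the COPY lets the server do the conversion (table name taken from the error message, file path hypothetical):

        -- assuming the bcp output is really Windows-1252 (cp1252) rather than UTF-8
        SET client_encoding TO 'WIN1252';
        COPY cm_outgoing FROM '/path/to/cm_outgoing.dat';  -- or \copy from psql if you lack file access on the server
        RESET client_encoding;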

    Read the article

  • Emacs bulk indent for Python

    - by Vernon
    Working with Python in Emacs, if I want to wrap a block of code in a try/except, I often find that I am having to indent the whole block line by line. In Emacs, how do you indent a whole block at once? I am not an experienced Emacs user; I just find it is the best tool for working over SSH. I am using Emacs on the command line (Ubuntu), not as a GUI, if that makes any difference.

    Read the article

  • Bulk update & occasional insert (coredata) - Too slow

    - by Andrew
    Update: currently looking into NSSet's minusSet - see http://stackoverflow.com/questions/1475636/comparing-two-arrays
    Hi guys, I could benefit from your wisdom here. I'm using Core Data in my app. On first launch I download a data file and insert over 500 objects (each with 60 attributes) - fast, no problem. On each subsequent launch I download an updated version of the file, from which I need to update all existing objects' attributes (except maybe 5 attributes) and create new ones for items which have been added to the downloaded file. So, on first launch I get 500 objects; say a week later my file now contains 507 items. I create two arrays, one for existing and one for downloaded:

        NSArray *peopleArrayDownloaded = [CoreDataHelper getObjectsFromContext:@"person" :@"person_id" :YES :managedObjectContextPeopleTemp];
        NSArray *peopleArrayExisting = [CoreDataHelper getObjectsFromContext:@"person" :@"person_id" :YES :managedObjectContextPeople];

    If the count of each array is equal then I just do this:

        NSUInteger index = 0;
        if ([peopleArrayExisting count] == [peopleArrayDownloaded count]) {
            NSLog(@"Number of people downloaded is same as the number of people existing");
            for (person *existingPerson in peopleArrayExisting) {
                person *tempPerson = [peopleArrayDownloaded objectAtIndex:index];
                // NSLog(@"Updating id: %@ with id: %@",existingPerson.person_id,tempPerson.person_id);
                // I have 60 attributes which I need to update on each object, is there a quicker way other than overwriting existing?
                index++;
            }
        } else {
            NSLog(@"Number of people downloaded is different to number of players existing");

    So now comes the slow part. I end up using this (which is too slow):

            NSLog(@"Need people added to the league");
            for (person *tempPerson in peopleArrayDownloaded) {
                NSPredicate *predicate = [NSPredicate predicateWithFormat:@"person_id = %@",tempPerson.person_id];
                // NSLog(@"Searching for existing person, person_id: %@",existingPerson.person_id);
                NSArray *filteredArray = [peopleArrayExisting filteredArrayUsingPredicate:predicate];
                if ([filteredArray count] == 0) {
                    NSLog(@"Couldn't find an existing person in the downloaded file. Adding..");
                    person *newPerson = [NSEntityDescription insertNewObjectForEntityForName:@"person" inManagedObjectContext:managedObjectContextPeople];

    Is there a way to generate a new array of index items referring to the additional items in my downloaded file? Incidentally, in my table views I'm using NSFetchedResultsController, so updating attributes will call [cell setNeedsDisplay]; about 60 times per cell - not a good thing, and it can crash the app. Thanks for reading :)

    Read the article

  • Do partitions allow multiple bulk loads?

    - by ck
    I have a database that contains data for many "clients". Currently, we insert tens of thousands of rows into multiple tables every so often using .Net SqlBulkCopy which causes the entire tables to be locked and inaccessible for the duration of the transaction. As most of our business processes rely upon accessing data for only one client at a time, we would like to be able to load data for one client, while updating data for another client. To make things more fun, all PKs, FKs and clustered indexes are on GUID columns (I am looking at changing this). I'm looking at adding the ClientID into all tables, then partitioning on this. Would this give me the functionality I require?
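
    For what it's worth, a minimal sketch of what partitioning on a ClientID column could look like (names, types and boundary values are hypothetical, and table partitioning needs Enterprise Edition on older SQL Server versions); whether concurrent bulk loads stay out of each other's way still depends on lock escalation and on each load touching only its own partition:

        -- one partition per range of client ids (boundaries are illustrative)
        CREATE PARTITION FUNCTION pfClient (int)
            AS RANGE RIGHT FOR VALUES (100, 200, 300);

        CREATE PARTITION SCHEME psClient
            AS PARTITION pfClient ALL TO ([PRIMARY]);

        CREATE TABLE dbo.Orders
        (
            ClientID int              NOT NULL,
            OrderID  uniqueidentifier NOT NULL,
            Amount   money            NULL,
            CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (ClientID, OrderID)
        )
        ON psClient (ClientID);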

    Read the article

  • bulk insert from Java into Oracle

    - by Will Glass
    I need to insert many small rows rapidly into Oracle. (5 fields). With MySQL, I break the inserts into groups of 100, then use one insert statement for every group of 100 inserts. But with Oracle, user feedback is that the mass inserts (anywhere from 1000-30000) are too slow. Is there a similar trick I can use to speed up the programmatic inserts from Java into Oracle?

    Read the article

  • how do I import a targa sequence as a composition in after effects cs4 ?

    - by George Profenza
    I am a complete n00b with After Effects, but I want to achieve something basic. I have a sequence of TARGA (.tga) files named alphanumerically (name_0000, where 0000 is the frame). I want to import them as a composition, so that each .tga file sits in the timeline in sequence. I tried File > Import > File, selected the first file, Shift-selected the last, and ticked Sequence, but I cannot select Composition from that dialog, only Footage, and I don't know why. Hints?

    Read the article

  • Can Sql Server BULK INSERT read from a named pipe/fifo?

    - by Peter
    Is it possible for BULK INSERT/bcp to read from a named pipe, FIFO-style? That is, rather than reading from a real text file, can BULK INSERT/bcp be made to read from a named pipe which is on the write end of another process? For example:
    - create a named pipe
    - unzip a file to the named pipe
    - read from the named pipe with bcp or BULK INSERT
    or:
    - create 4 named pipes
    - split 1 file into 4 streams, writing each stream to a separate named pipe
    - read from the 4 named pipes into 4 tables with bcp or BULK INSERT
    The closest I've found was this fellow (site now unreachable), who managed to write to a named pipe with bcp, using his own utility, like so:
        start /MIN ZipPipe authors_pipe authors.txt.gz 9
        bcp pubs..authors out \\.\pipe\authors_pipe -T -n
    But he couldn't get the reverse to work. So before I head off on a fool's errand, I'm wondering whether it's fundamentally possible to read from a named pipe with BULK INSERT or bcp. And if it is possible, how would one set it up? Would NamedPipeServerStream or something else in the .NET System.IO.Pipes namespace be adequate? E.g., an example using PowerShell:
        [reflection.Assembly]::LoadWithPartialName("system.core")
        $pipe = New-Object system.IO.Pipes.NamedPipeServerStream("Bob")
    And then... what?

    Read the article

  • Bulk loading schema-less entities on Google App Engine

    - by Rahul
    The new bulkloader added in SDK 1.3.4 works great for models that have a schema. For models inheriting db.Expando (or with loosely defined schemas) I would like to understand what I would have to do to bulk upload them. I defined a custom connector that implements the ConnectorInterface and converts my data to the required Python dict. How can I use this dict to define entities that get uploaded to the datastore? In the documentation there seems to be a post_import_function that can be used to return the entities that get uploaded. Is there an example of how this function is used?

    Read the article

  • Run-time insert using bulk update giving an internal error?

    - by Vineet
    Hi, I am trying to create a run-time table named dynamic and insert data into it from an index-by table using a bulk operation, but when I try to execute it I get this error:
    ERROR at line 1:
    ORA-06550: line 0, column 0:
    PLS-00801: internal error [74301]

        declare
            type index_tbl_type IS table of number index by binary_integer;
            num_tbl index_tbl_type;
            TYPE ref_cur IS REF CURSOR;
            cur_emp ref_cur;
        begin
            execute immediate 'create table dynamic (v_num number)'; -- creating a run-time table
            FOR i in 1..10000 LOOP
                execute immediate 'insert into dynamic values('||i||')'; -- run-time insert
            END LOOP;
            OPEN cur_emp FOR 'select * from dynamic'; -- opening the ref cursor
            FETCH cur_emp bulk collect into num_tbl; -- bulk collecting into the index-by table
            close cur_emp;
            FORALL i in num_tbl.FIRST..num_tbl.LAST -- bulk update
                execute immediate 'insert into dynamic values('||num_tbl(i)||')';
        end;
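
    For comparison, the documented form of bulk dynamic SQL binds the collection element through a USING clause instead of concatenating it into the statement string; a sketch of that shape (not verified against this exact Oracle version):

        FORALL i IN num_tbl.FIRST .. num_tbl.LAST
            EXECUTE IMMEDIATE 'insert into dynamic (v_num) values (:1)'  -- bind variable, no concatenation
                USING num_tbl(i);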

    Read the article

  • How can I "bulk paste" a clipboard string of multi-line text into a readable ordered list?

    - by gunshor
    How can I "bulk paste" a clipboard string of multi-line text into a readable ordered list? I'm trying to demonstrate how to turn any string of multi-line text into an ordered list. The script (preferably JS) needs to respect:
    - carriage returns at the end of a line, to mean "that line ends here"
    - indentations at the beginning of a line, to mean "this is part of the item above it"
    - dashes at the beginning of a line, to mean "this is a task, and the line above it is its project"

    Read the article

  • Bulk inserting: best way to go about it? + Helping me understand fully what I found so far

    - by chobo2
    Hi So I saw this post here and read it and it seems like bulk copy might be the way to go. http://stackoverflow.com/questions/682015/whats-the-best-way-to-bulk-database-inserts-from-c I still have some questions and want to know how things actually work. So I found 2 tutorials. http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx First way uses 2 ado.net 2.0 features. BulkInsert and BulkCopy. the second one uses linq to sql and OpenXML. This sort of appeals to me as I am using linq to sql already and prefer it over ado.net. However as one person pointed out in the posts what he just going around the issue at the cost of performance( nothing wrong with that in my opinion) First I will talk about the 2 ways in the first tutorial I am using VS2010 Express, .net 4.0, MVC 2.0, SQl Server 2005 Is ado.net 2.0 the most current version? Based on the technology I am using, is there some updates to what I am going to show that would improve it somehow? Is there any thing that these tutorial left out that I should know about? BulkInsert I am using this table for all the examples. CREATE TABLE [dbo].[TBL_TEST_TEST] ( ID INT IDENTITY(1,1) PRIMARY KEY, [NAME] [varchar](50) ) SP Code USE [Test] GO /****** Object: StoredProcedure [dbo].[sp_BatchInsert] Script Date: 05/19/2010 15:12:47 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[sp_BatchInsert] (@Name VARCHAR(50) ) AS BEGIN INSERT INTO TBL_TEST_TEST VALUES (@Name); END C# Code /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. adpt.UpdateBatchSize = 1000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } So first thing is the batch size. Why would you set a batch size to anything but the number of records you are sending? Like I am sending 500,000 records so I did a Batch size of 500,000. Next why does it crash when I do this? If I set it to 1000 for batch size it works just fine. System.Data.SqlClient.SqlException was unhandled Message="A transport-level error has occurred when sending the request to the server. 
(provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)" Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 Class=20 LineNumber=0 Number=233 Server="" State=0 StackTrace: at System.Data.Common.DbDataAdapter.UpdatedRowStatusErrors(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.UpdatedRowStatus(RowUpdatedEventArgs rowUpdatedEvent, BatchCommandInfo[] batchCommands, Int32 commandCount) at System.Data.Common.DbDataAdapter.Update(DataRow[] dataRows, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.UpdateFromDataTable(DataTable dataTable, DataTableMapping tableMapping) at System.Data.Common.DbDataAdapter.Update(DataTable dataTable) at TestIQueryable.Program.BatchInsert() in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 124 at TestIQueryable.Program.Main(String[] args) in C:\Users\a\Downloads\TestIQueryable\TestIQueryable\TestIQueryable\Program.cs:line 16 InnerException: Time it took to insert 500,000 records with insert batch size of 1000 took "2 mins and 54 seconds" Of course this is no official time I sat there with a stop watch( I am sure there are better ways but was too lazy to look what they where) So I find that kinda slow compared to all my other ones(expect the linq to sql insert one) and I am not really sure why. Next I looked at bulkcopy /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } This one seemed to go really fast and did not even need a SP( can you use SP with bulk copy? If you can would it be better?) BatchCopy had no problem with a 500,000 batch size.So again why make it smaller then the number of records you want to send? I found that with BatchCopy and 500,000 batch size it took only 5 seconds to complete. I then tried with a batch size of 1,000 and it only took 8 seconds. So much faster then the bulkinsert one above. Now I tried the other tutorial. USE [Test] GO /****** Object: StoredProcedure [dbo].[spTEST_InsertXMLTEST_TEST] Script Date: 05/19/2010 15:39:03 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[spTEST_InsertXMLTEST_TEST](@UpdatedProdData nText) AS DECLARE @hDoc int exec sp_xml_preparedocument @hDoc OUTPUT,@UpdatedProdData INSERT INTO TBL_TEST_TEST(NAME) SELECT XMLProdTable.NAME FROM OPENXML(@hDoc, 'ArrayOfTBL_TEST_TEST/TBL_TEST_TEST', 2) WITH ( ID Int, NAME varchar(100) ) XMLProdTable EXEC sp_xml_removedocument @hDoc C# code. /// <summary> /// This is using linq to sql to make the table objects. 
/// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } So I like this because I get to use objects even though it is kinda redundant. I don't get how the SP works. Like I don't get the whole thing. I don't know if OPENXML has some batch insert under the hood but I do not even know how to take this example SP and change it to fit my tables since like I said I don't know what is going on. I also don't know what would happen if the object you have more tables in it. Like say I have a ProductName table what has a relationship to a Product table or something like that. In linq to sql you could get the product name object and make changes to the Product table in that same object. So I am not sure how to take that into account. I am not sure if I would have to do separate inserts or what. The time was pretty good for 500,000 records it took 52 seconds The last way of course was just using linq to do it all and it was pretty bad. /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. /// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } I did only 50,000 records and that took over a minute to do. So I really narrowed it done to the linq to sql bulk insert way or bulk copy. I am just not sure how to do it when you have relationship for either way. I am not sure how they both stand up when doing updates instead of inserts as I have not gotten around to try it yet. I don't think I will ever need to insert/update more than 50,000 records at one type but at the same time I know I will have to do validation on records before inserting so that will slow it down and that sort of makes linq to sql nicer as your got objects especially if your first parsing data from a xml file before you insert into the database. Full C# code using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Xml.Serialization; using System.Data; using System.Data.SqlClient; namespace TestIQueryable { class Program { private static string connectionString = ""; static void Main(string[] args) { BatchInsert(); Console.WriteLine("done"); } /// <summary> /// This is using linq to sql to to insert lots of records. /// This way is slow as it uses no mass insert. 
/// Only tried to insert 50,000 records as I did not want to sit around till it did 500,000 records. /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertAll() { using (TestDataContext db = new TestDataContext()) { db.CommandTimeout = 600; for (int count = 0; count < 50000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; db.TBL_TEST_TESTs.InsertOnSubmit(testRecord); } db.SubmitChanges(); } } /// <summary> /// This is using linq to sql to make the table objects. /// It is then serailzed to to an xml document and sent to a stored proedure /// that then does a bulk insert(I think with OpenXML) /// http://www.codeproject.com/KB/linq/BulkOperations_LinqToSQL.aspx /// </summary> private static void LinqInsertXMLBatch() { using (TestDataContext db = new TestDataContext()) { TBL_TEST_TEST[] testRecords = new TBL_TEST_TEST[500000]; for (int count = 0; count < 500000; count++) { TBL_TEST_TEST testRecord = new TBL_TEST_TEST(); testRecord.NAME = "Name : " + count; testRecords[count] = testRecord; } StringBuilder sBuilder = new StringBuilder(); System.IO.StringWriter sWriter = new System.IO.StringWriter(sBuilder); XmlSerializer serializer = new XmlSerializer(typeof(TBL_TEST_TEST[])); serializer.Serialize(sWriter, testRecords); db.insertTestData(sBuilder.ToString()); } } /// <summary> /// An ado.net 2.0 way to mass insert records. This seems to be the fastest. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchBulkCopy() { // Get the DataTable DataTable dtInsertRows = GetDataTable(); using (SqlBulkCopy sbc = new SqlBulkCopy(connectionString, SqlBulkCopyOptions.KeepIdentity)) { sbc.DestinationTableName = "TBL_TEST_TEST"; // Number of records to be processed in one go sbc.BatchSize = 500000; // Map the Source Column from DataTabel to the Destination Columns in SQL Server 2005 Person Table // sbc.ColumnMappings.Add("ID", "ID"); sbc.ColumnMappings.Add("NAME", "NAME"); // Number of records after which client has to be notified about its status sbc.NotifyAfter = dtInsertRows.Rows.Count; // Event that gets fired when NotifyAfter number of records are processed. sbc.SqlRowsCopied += new SqlRowsCopiedEventHandler(sbc_SqlRowsCopied); // Finally write to server sbc.WriteToServer(dtInsertRows); sbc.Close(); } } /// <summary> /// Another ado.net 2.0 way that uses a stored procedure to do a bulk insert. /// Seems slower then "BatchBulkCopy" way and it crashes when you try to insert 500,000 records in one go. /// http://www.codeproject.com/KB/cs/MultipleInsertsIn1dbTrip.aspx#_Toc196622241 /// </summary> private static void BatchInsert() { // Get the DataTable with Rows State as RowState.Added DataTable dtInsertRows = GetDataTable(); SqlConnection connection = new SqlConnection(connectionString); SqlCommand command = new SqlCommand("sp_BatchInsert", connection); command.CommandType = CommandType.StoredProcedure; command.UpdatedRowSource = UpdateRowSource.None; // Set the Parameter with appropriate Source Column Name command.Parameters.Add("@Name", SqlDbType.VarChar, 50, dtInsertRows.Columns[0].ColumnName); SqlDataAdapter adpt = new SqlDataAdapter(); adpt.InsertCommand = command; // Specify the number of records to be Inserted/Updated in one go. Default is 1. 
adpt.UpdateBatchSize = 500000; connection.Open(); int recordsInserted = adpt.Update(dtInsertRows); connection.Close(); } private static DataTable GetDataTable() { // You First need a DataTable and have all the insert values in it DataTable dtInsertRows = new DataTable(); dtInsertRows.Columns.Add("NAME"); for (int i = 0; i < 500000; i++) { DataRow drInsertRow = dtInsertRows.NewRow(); string name = "Name : " + i; drInsertRow["NAME"] = name; dtInsertRows.Rows.Add(drInsertRow); } return dtInsertRows; } static void sbc_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e) { Console.WriteLine("Number of records affected : " + e.RowsCopied.ToString()); } } }

    Read the article

  • MySQL to AppEngine

    - by Daniel Naito
    Hi Nick! How are you? I'm from Brazil and study at FATEC (a college located in Brazil). I'm trying to learn about App Engine. Now, I'm trying to load a large database from MySQL into App Engine to perform some queries, but I don't know how I can do it. I did some testing with CSV files, but is there any way to perform a direct import from MySQL? This database is from Pentaho BI Server (www.pentaho.com). Thank you for your attention. Regards, Daniel Naito

    Read the article

  • How to finish a broken data upload to the production Google App Engine server?

    - by WooYek
    I was uploading data to App Engine (not the dev server) through the loader class and remote API, and I hit the quota in the middle of a CSV file. Based on the logs and the progress SQLite db, how can I select the remaining portion of data to be uploaded? Going through tens of records to determine which were and which were not transferred is not an appealing task, so I'm looking for some way to limit the number of records I need to check. Here's the relevant (IMO) log portion; how do I interpret the work item numbers?
    [DEBUG 2010-03-30 03:22:51,757 bulkloader.py] [Thread-2] [1041-1050] Transferred 10 entities in 3.9 seconds
    [DEBUG 2010-03-30 03:22:51,757 adaptive_thread_pool.py] [Thread-2] Got work item [1071-1080]
    <cut>
    [DEBUG 2010-03-30 03:23:09,194 bulkloader.py] [Thread-1] [1141-1150] Transferred 10 entities in 4.6 seconds
    [DEBUG 2010-03-30 03:23:09,194 adaptive_thread_pool.py] [Thread-1] Got work item [1161-1170]
    <cut>
    [DEBUG 2010-03-30 03:23:09,226 bulkloader.py] [Thread-3] [1151-1160] Transferred 10 entities in 4.2 seconds
    [DEBUG 2010-03-30 03:23:09,226 adaptive_thread_pool.py] [Thread-3] Got work item [1171-1180]
    [ERROR 2010-03-30 03:23:10,174 bulkloader.py] Retrying on non-fatal HTTP error: 503 Service Unavailable

    Read the article

  • Getting past Salesforce trigger governors

    - by Jake
    I'm trying to write an "after update" trigger that does a batch update on all child records of the record that has just been updated. This needs to be able to handle 15k+ child records at a time. Unfortunately, the limit appears to be 100, which is so far below my needs it's not even close to acceptable. I haven't tried splitting the records into batches of 100 each, since this will still put me at a cap of 10k updates per trigger execution. (Maybe I could just daisy-chain triggers together? ugh.) Does anyone know what series of hoops I can jump through to overcome yet another ridiculously short-sighted limitation by this awful development "platform"?

    Read the article

  • Exporting query results to a file on the fly

    - by ercan
    Hi all, I need to export the results of a query to a CSV file in an FTP folder.
    1. Is it possible to achieve this within a stored procedure?
    2. If yes, here comes another constraint: can I achieve this without sysadmin privileges, i.e. without using xp_cmdshell + the BCP utility?
    3. If no to 2, does the caller have to have sysadmin privileges, or would it suffice if the SP owner has sysadmin privileges?
    Here are some more details on the problem: the SP must export and transfer the file on the fly and raise an error if something goes wrong. The caller must get a response immediately, i.e. in case of no error, he can assume the results were successfully transferred to the folder. Therefore, a DTS/SSIS job that runs every N minutes is not an option. I know the problem smells like I will have to do this at the application level, but I would be more than happy if all that could be done from T-SQL.

    Read the article

  • Bulk get of child entities on Google app engine?

    - by dfrankow
    On Google App Engine in Python, I have a Visit entity that has a parent of Patient. A Patient may have multiple visits. I need to set the most_recent_visit (and some auxiliary visit data) somewhere for later querying, probably in another child entity that Brett Slatkin might call a "relationship index entity." I wish to do so in a bulk style as follows:
    1. get 100 Patient keys
    2. get all Visits that have any of the Patients from 1 as an ancestor
    3. go through the Visits and get the latest for each Patient
    Is there any way to perform step 2 in a bulk get? I know there is a way to bulk get entities from a list of keys, and there is a way to match a single entity by its ancestor.

    Read the article

  • How to create the automatic mass form submitter (javascript-ajax script) to be used on the 3rd part

    - by Daniel
    I need a script that can handle the following task: take user data from my database and fill in and submit/post that data to forms located on third-party websites. I want to know whether this is hard to create, or whether a script for mass form submissions already exists in PHP/JavaScript/Ajax. I run a dancers & hostess & model jobs website, and I would like to allow the girls to automatically submit their model application info to the forms of hundreds of other websites (third-party model agencies), using the details they have already entered on my site.
    1) First, the girls fill out my agency's very detailed portfolio form, which gives me all of their personal info.
    2) Second, I would like to allow them to submit to 100 or more other model agency forms. I will collect those sites beforehand, along with their form field names and values, and some script would connect those fields with each girl's data already stored on my website and submit it.
    I would like to implement this on my WordPress site, where the girls have their portfolios, instead of on separate pages. I want to offer this service especially to models. It should work like a directory submitter: the script knows the field names and values and fills them in itself, but it should run online, in the browser, where the girls only need to fill out a captcha if there is one and click the "Submit" button. After a successful submission, it should offer the next form. I would be very happy if you know the answer or can redirect me to some article.

    Read the article

  • How to bulk insert from CSV when some fields have new line character?

    - by z-boss
    I have a CSV dump from another DB that looks like this (id, name, notes):
        1001,John Smith,15 Main Street
        1002,Jane Smith,"2010 Rockliffe Dr.
        Pleasantville, IL USA"
        1003,Bill Karr,2820 West Ave.
    The last field may contain carriage returns and commas, in which case it is surrounded by double quotes. I use this code to import the CSV into my table:
        BULK INSERT CSVTest
        FROM 'c:\csvfile.csv'
        WITH (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR = '\n'
        )
    SQL Server 2005 BULK INSERT cannot figure out that carriage returns inside quotes are not row terminators. How can I overcome this?
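
    On SQL Server 2005, BULK INSERT has no notion of quoted fields, so the usual workarounds are re-exporting the data with an unambiguous delimiter or loading through SSIS, which does understand a text qualifier. For reference, newer versions (SQL Server 2017 and later) handle this case directly with the CSV options, roughly like this:

        -- requires SQL Server 2017 or later; not available on 2005
        BULK INSERT CSVTest
        FROM 'c:\csvfile.csv'
        WITH (
            FORMAT = 'CSV',      -- RFC 4180 parsing: quoted fields may contain commas and line breaks
            FIELDQUOTE = '"',
            FIELDTERMINATOR = ',',
            ROWTERMINATOR = '\n'
        );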

    Read the article
