Search Results

Search found 811 results on 33 pages for 'bulk'.


  • SqlBulkCopy causes Deadlock on SQL Server 2000.

    - by megatoast
    I have a custom data-import executable in .NET 3.5 which uses SqlBulkCopy to do faster inserts on large amounts of data. The app takes an input file, massages the data and bulk-uploads it into a SQL Server 2000 database. It was written by a consultant who was building it against a SQL Server 2008 database environment. Would that environment difference be causing this? SQL 2000 does have the bcp utility, which is what SqlBulkCopy is based on. When we ran this, it triggered a deadlock error:

        Transaction (Process ID 58) was deadlocked on lock resources with
        another process and has been chosen as the deadlock victim.
        Rerun the transaction.

    I've tried numerous ways to resolve it, like temporarily setting the connection-string variable MultipleActiveResultSets=true, which wasn't ideal, but it still gives a deadlock error. I also made sure it wasn't a connection-timeout problem. Here's the function. Any advice?

        /// <summary>
        /// Bulk-inserts the rows of a DataTable into the destination table.
        /// </summary>
        public void BulkInsert(string destinationTableName, DataTable dataTable)
        {
            SqlBulkCopy bulkCopy;
            if (this.Transaction != null)
            {
                bulkCopy = new SqlBulkCopy(this.Connection,
                                           SqlBulkCopyOptions.TableLock,
                                           this.Transaction);
            }
            else
            {
                bulkCopy = new SqlBulkCopy(this.Connection.ConnectionString,
                                           SqlBulkCopyOptions.TableLock |
                                           SqlBulkCopyOptions.UseInternalTransaction);
            }

            bulkCopy.ColumnMappings.Add("FeeScheduleID", "FeeScheduleID");
            bulkCopy.ColumnMappings.Add("ProcedureID", "ProcedureID");
            bulkCopy.ColumnMappings.Add("AltCode", "AltCode");
            bulkCopy.ColumnMappings.Add("AltDescription", "AltDescription");
            bulkCopy.ColumnMappings.Add("Fee", "Fee");
            bulkCopy.ColumnMappings.Add("Discount", "Discount");
            bulkCopy.ColumnMappings.Add("Comment", "Comment");
            bulkCopy.ColumnMappings.Add("Description", "Description");

            bulkCopy.BatchSize = dataTable.Rows.Count;
            bulkCopy.DestinationTableName = destinationTableName;
            bulkCopy.WriteToServer(dataTable);
            bulkCopy = null;
        }
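
    One avenue worth sketching, as an assumption rather than a confirmed fix: TableLock plus UseInternalTransaction with BatchSize set to the whole table means each batch competes for a full table lock with any concurrent readers or writers, which is a classic deadlock setup. A minimal variation with smaller batches, no table lock and deterministic disposal (the Connection member is assumed from the original):

        // Hypothetical variation for illustration only; whether it resolves the
        // deadlock on a given SQL Server 2000 instance would need to be verified.
        public void BulkInsertRowLocked(string destinationTableName, DataTable dataTable)
        {
            using (var bulkCopy = new SqlBulkCopy(this.Connection.ConnectionString,
                                                  SqlBulkCopyOptions.Default))
            {
                bulkCopy.BatchSize = 500;          // commit in small chunks
                bulkCopy.BulkCopyTimeout = 0;      // no timeout; tune as needed
                bulkCopy.DestinationTableName = destinationTableName;
                foreach (DataColumn col in dataTable.Columns)
                    bulkCopy.ColumnMappings.Add(col.ColumnName, col.ColumnName);
                bulkCopy.WriteToServer(dataTable);
            }
        }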


  • Using Groovy as a scripting language...

    - by Zombies
    I prefer to use scripting languages for short tasks: a really simple HTTP bot, bulk importing/exporting data to/from somewhere, etc. Basic throw-away scripts and simple stuff. The point being that a scripting language is just an efficient tool for writing quick programs. As for my understanding of Groovy at this point: if you were to program in Groovy and you want to write a quick script, wouldn't you be forced to go back to regular Java syntax (and we know how convoluted that can be compared to a scripting language) in order to do anything more complicated? For example, if I want to do some HTTP scripting, wouldn't I just be right back at using Java syntax to invoke Commons HttpClient? To me, the point of a scripting language is quickly typed and less forced constructs. And here is another thing: there doesn't seem to be any incentive for Groovy-based libraries to be developed when there are already so many good Java ones out there, which makes Groovy appear to be a Java-dependent language with minor scripting features. So right now I am wondering whether I could switch to Groovy as a scripting language, or whether to continue using a more common scripting language such as Perl, Python or Ruby.
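
    For what it's worth, Groovy's GDK extensions often hide the Java ceremony for exactly this kind of throw-away task. A small sketch (example.com is a placeholder, and error handling is ignored):

        // Fetch a page and print matching lines: no HttpClient setup, no stream
        // boilerplate. Groovy's GDK adds toURL() to String and getText() to URL.
        def html = "http://example.com".toURL().text
        html.eachLine { line ->
            if (line.contains("price")) println line
        }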


  • How to speed up marching cubes?

    - by Dan Vinton
    I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate. Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this. I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls? Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field-strength calculation for each grid-cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option... I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
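
    The coarse first pass might look something like the sketch below (all names are hypothetical; fieldStrength stands in for the expensive field evaluation, and the margin is a heuristic guard, since a feature thinner than a coarse block can still slip between samples):

        // Sample a coarse block at its 8 corners; only fine-march blocks that
        // could plausibly cross the isolevel.
        bool BlockMayIntersect(Func<double, double, double, double> fieldStrength,
                               double x, double y, double z, double blockSize,
                               double isoLevel, double margin)
        {
            double max = double.MinValue;
            for (int i = 0; i < 8; i++)
            {
                double sx = x + (i & 1) * blockSize;
                double sy = y + ((i >> 1) & 1) * blockSize;
                double sz = z + ((i >> 2) & 1) * blockSize;
                max = Math.Max(max, fieldStrength(sx, sy, sz));
            }
            return max >= isoLevel - margin;
        }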


  • Database design for credit based purchases

    - by FreshCode
    I need an elegant way to implement credit-based purchases for an online store with a small variety of products, which can be purchased using virtual credit or real currency. Alternatively, products could be priced only in credits.

    Previous work: I have implemented credit-based purchasing before using different product types (e.g. Credit, Voucher or Music) with post-order processing to assign purchased credit to users in the form of real currency, which could subsequently be used to discount future orders' charge totals. This worked fairly well as a makeshift solution, but did not succeed in disconnecting the virtual currency from the real currency, which is what I'd like to do, since spending credits is psychologically easier for customers than spending real currency.

    Design: I need guidance on designing the database correctly, with support for the simultaneous bulk purchase of credits at a discount along with real-currency products. Alternatively, should all products be priced in credits, with only credit having a real-currency value?

    Existing database design:

    Partial Products table:
        ProductId, Title, Type, UnitPrice, SalePrice

    Partial Orders table:
        OrderId, UserId (related to Users table, not shown), Status, Value, Total

    Partial OrderItems table (similar to CartItems table):
        OrderItemId, OrderId (related to Orders table), ProductId (related to Products table), Quantity, UnitPrice, SalePrice

    Prospective UserCredits table:
        CreditId, UserId (related to Users table, not shown), Value (+/- value, summed over time to determine the balance), Date

    I'm using ASP.NET MVC and LINQ-to-SQL on a SQL Server database.
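
    A minimal T-SQL sketch of the prospective ledger table, assuming the column list above (names and types are guesses, and the running balance is always derived, never stored):

        CREATE TABLE UserCredits (
            CreditId  INT IDENTITY(1,1) PRIMARY KEY,
            UserId    INT NOT NULL,            -- FK to Users, not shown
            Value     DECIMAL(12,2) NOT NULL,  -- positive = credit purchase, negative = spend
            Date      DATETIME NOT NULL DEFAULT GETDATE()
        );

        -- The current balance for a user is just the sum of the ledger:
        SELECT COALESCE(SUM(Value), 0) AS Balance
        FROM UserCredits
        WHERE UserId = @UserId;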


  • Temporary debug releases and final application releases

    - by baron
    I have a quick question regarding debug and release builds in VS 2008. I have an app I've been working on; it's not yet complete, but the bulk of the functionality is there. So basically I'm trying to give a copy of it now to the person helping with documentation, just so they can have a play and get a feel for what I've made. Now the question is how to provide it to them. I was told to just copy the .exe out of the debug/bin folder and put that onto USB. But when testing, if I run this .exe anywhere else (outside of this folder) it crashes. I've now worked out why:

        var path = ConfigurationManager.AppSettings["PathToUse"];
        var files = Directory.GetFiles(path);

    throws a null reference, because the App.config file is not being used. If I copy that file in with the .exe, it works again. So actually my question is about the best way to manage this situation. What is the best way to provide a working copy to people? And is there a reference on preparing apps for release, so everything is packaged together and installed in a clean, structured folder hierarchy?


  • Java Error beeping

    - by user1281385
    I'm working on a chat client that I didn't write the bulk of the code for. It works fine, except that when someone sends a message it beeps (the system error beep) when using Java 7. Java 6 and below don't have this beep. I can't seem to find what's causing the beep; is there any way to find it? I don't think it's calling beep, as I have:

        public class nobeep extends sun.awt.windows.WToolkit {
            @Override
            public void beep() {
                System.out.println("tried to beep");
                new Exception().printStackTrace();
            }
        }

    and then called System.setProperty("awt.toolkit", "nobeep"); in the main method. Using that method to send a beep doesn't make it beep; it only happens when a message is sent normally. Is there a quick way to track down the cause of the beep?

    Edit: After looking in the bugs database, it's confirmed: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7194469 - I know it says no workaround, but is there one (in Java, not C++), or should I just wait until update 8?


  • Top 10 collection completion - a monster in-query formula in MySQL?

    - by Andrew Heath
    I've got the following tables:

    User Basic Data (unique):
        [userid] [name] [etc]

    User Collection (one to one):
        [userid] [game]

    User Recorded Plays (many to many):
        [userid] [game] [scenario] [etc]

    Game Basic Data (unique):
        [game] [total_scenarios]

    I would like to output a table that shows the collection play-completion percentage for the top 10 users, in descending order of percentage:

    Output table:
        [userid] [collection_completion]
        3        95%
        1        81%
        24       68%
        etc      etc

    In my mind, the calculation sequence for one user is:

    1. Grab the user's total owned scenarios from User Collection joined with Game Basic Data, using COUNT(gbd.total_scenarios)
    2. Grab all recorded plays with COUNT(DISTINCT scenario) for that user
    3. Divide all recorded plays by total owned scenarios

    So that's two queries and a little PHP massage at the end. For a list of users sorted by completion percentage, things get a little more complicated. I figure I could grab all users' collection totals in one query, and all users' recorded plays in another, and then do the calcs and sort the final array in PHP, but it seems like overkill to potentially do all that for 1000+ users when I only ever want the top 10. Is there a wicked monster query in MySQL that could do all that and LIMIT 10? Or is sticking with PHP handling the bulk of the work the way to go in this case?
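
    A sketch of what such a query might look like, assuming table names user_collection, user_recorded_plays and game_basic_data (the names, and the choice to sum total_scenarios across a user's collection, are guesses from the description above):

        SELECT p.userid,
               ROUND(p.plays / c.owned * 100, 1) AS collection_completion
        FROM (  -- scenarios owned, per user
              SELECT uc.userid, SUM(g.total_scenarios) AS owned
              FROM user_collection uc
              JOIN game_basic_data g ON g.game = uc.game
              GROUP BY uc.userid
             ) AS c
        JOIN (  -- distinct scenarios actually played, per user
              SELECT userid, COUNT(DISTINCT game, scenario) AS plays
              FROM user_recorded_plays
              GROUP BY userid
             ) AS p ON p.userid = c.userid
        ORDER BY collection_completion DESC
        LIMIT 10;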


  • MVVM: Thin ViewModels and Rich Models

    - by Dan Bryant
    I'm continuing to struggle with the MVVM pattern and, in attempting to create a practical design for a small/medium project, have run into a number of challenges. One of these challenges is figuring out how to get the benefits of decoupling with this pattern without creating a lot of repetitive, hard-to-maintain code. My current strategy has been to create 'rich' Model classes. They are fully aware that they will be consumed by an MVVM pattern and implement INotifyPropertyChanged, allow their collections to be observed and remain cognizant that they may always be under observation. My ViewModel classes tend to be thin, only exposing properties when data actually needs to be transformed, with the bulk of their code being RelayCommand handlers. Views happily bind to either ViewModels or Models directly, depending on whether any data transformation is required. I use AOP (via Postsharp) to ease the pain of INotifyPropertyChanged, making it easy to make all of my Model classes 'rich' in this way. Are there significant disadvantages to using this approach? Can I assume that the ViewModel and View are so tightly coupled that if I need new data transformation for the View, I can simply add it to the ViewModel as needed?


  • JavaScript: Achieving precise animation end values?

    - by bobthabuilda
    I'm currently trying to write my own JavaScript library. I'm in the middle of writing an animation callback, but I'm having trouble getting precise end values, especially when animation durations are small. Right now, I'm only targeting positional animation (left, top, right, bottom). When my animations complete, they end up having an error margin of ~5px on faster animations, and ~0.5px on animations of 1000 ms or greater. Here's the bulk of the callback, with notes following:

        var current = parseFloat( this[0].style[prop] || 0 )
            // If our target value is greater than the current
          , gt = !!( value > current )
          , delta = ( Math.abs(current - value) / (duration / 13) ) * (gt ? 1 : -1)
          , elem = this[0]
          , anim = setInterval( function(){
                elem.style[prop] = ( current + delta ) + 'px';
                current = parseFloat( elem.style[prop] );
                if ( gt && current >= value || !gt && current <= value )
                    clearInterval( anim );
            }, 13 );

    this[0] and elem both reference the target DOM element. prop references the property to animate: left, top, bottom, right, etc. current is the current value of the DOM element's property. value is the desired value to animate to. duration is the specified duration (in ms) that the animation should last. 13 is the setInterval delay (which should roughly be the absolute minimum for all browsers). gt is a var that is true if value exceeds the initial current, else it is false. How can I resolve the error margin?
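
    One common remedy, sketched against the variables above rather than taken from the original code: never trust the accumulated deltas for the final frame; once the value crosses the target, snap it to the exact end value before clearing the interval.

        // Revised interval body (same variables as the original callback):
        var anim = setInterval( function(){
            elem.style[prop] = ( current + delta ) + 'px';
            current = parseFloat( elem.style[prop] );
            if ( gt && current >= value || !gt && current <= value ) {
                elem.style[prop] = value + 'px';   // snap to the precise end value
                clearInterval( anim );
            }
        }, 13 );

    A sturdier variant computes the position from elapsed wall-clock time (new Date().getTime() against a start timestamp) instead of accumulating deltas, which keeps both the end value and the duration honest when timers fire late.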


  • How important is it to use SSL?

    - by Mark
    Recently I installed a certificate on the website I'm working on. I've made as much of the site as possible work with HTTP, but after you log in, it has to remain in HTTPS to prevent session hijacking, doesn't it? Unfortunately, this causes some problems with Google Maps; I get warnings in IE saying "this page contains insecure content". I don't think we can afford Google Maps Premier right now to get their secure service. It's sort of an auction site, so it's fairly important that people don't get charged for things they didn't purchase because some hacker got into their account. All payments are done through PayPal, though, so I'm not saving any sort of credit-card info, but I am keeping personal contact information. Fraudulent charges could be reversed fairly easily if it ever came to that. What do you guys suggest I do? Should I take the bulk of the site off HTTPS and just secure certain pages, like wherever you enter your password, and that's it? That's what our competition seems to do.


  • Escaping colons in hibernate createSQLQuery

    - by Stratosgear
    I am confused about how to create an SQL statement containing colons. I am trying to create a view using (notice the double colons):

        create view MyView as (
            SELECT tableA.colA as colA,
                   tableB.colB as colB,
                   round((tableB.colD / 1024)::numeric, 2) as calcValue
            FROM tableA, tableB
            WHERE tableA.colC = 'someValue'
        );

    This is a Postgres query, and I am forced to use the double colons (::) in order to correctly run the statement. I then pass the above statement through:

        s.createSQLQuery(myQuery).executeUpdate();

    and I get:

        Exception in thread "main" org.hibernate.exception.DataException: \
            could not execute native bulk manipulation query
            at org.hibernate.exception.SQLStateConverter.convert(\
            SQLStateConverter.java:102)
            ... more stacktrace...

    with the output of my statement changed to (notice the question mark):

        create view MyView as (
            SELECT tableA.colA as colA,
                   tableB.colB as colB,
                   round((tableB.colD / 1024)?, 2) as calcValue
            FROM tableA, tableB
            WHERE tableA.colC = 'someValue'
        );

    Obviously, Hibernate confuses my colons with named parameters. Is there a way to escape the colons (a Google suggestion that a single colon is escaped as a double colon does NOT work), or another way of running this statement? Thanks.
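
    One way to sidestep the parser entirely, sketched on the same abbreviated view body: avoid the :: shorthand altogether, since Postgres also accepts the standard cast() syntax, which contains no colons for Hibernate to misread:

        create view MyView as (
            SELECT tableA.colA as colA,
                   tableB.colB as colB,
                   round(cast(tableB.colD / 1024 as numeric), 2) as calcValue
            FROM tableA, tableB
            WHERE tableA.colC = 'someValue'
        );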


  • Netty options for real-time distribution of small messages to a large number of clients?

    - by user439407
    I am designing a (near) real-time Netty server to distribute a large number of very small messages to a large number of clients across the internet. In internal, go-as-fast-as-you-can testing, I found that I could do 10k clients, no sweat, but now that we are trying to go across the internet, where the latency, bandwidth etc. vary pretty wildly, we are running into the dreaded OutOfMemory issues, even with 2 GB of RAM. I have tried various workarounds (setting the socket stack sizes smaller, setting high and low water marks, cancelling things that are too old), and they help a little, but only a little. What would be some good ways to optimize Netty for sending large numbers of small messages without significant delays? Also, the bulk of the traffic consists of one kind of message that I don't particularly care about if it doesn't arrive. I would use UDP, but because we don't control the clients, that's not really a possibility. Is it possible to set a separate timeout solely for this kind of message without affecting the other messages? Any insight you could offer would be greatly appreciated.
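
    For the droppable message class, one pattern worth sketching (assuming a Netty 3-era API and that drop-on-backpressure is acceptable): check Channel.isWritable() before writing and silently drop optional messages when a channel's write buffer is saturated, with the water marks tuned so isWritable() flips well before the heap fills.

        // Sketch only: drop low-priority ticks when the peer can't keep up.
        // TickMessage is a hypothetical stand-in for the droppable message type.
        void sendTick(org.jboss.netty.channel.Channel channel, TickMessage tick) {
            if (channel.isWritable()) {
                channel.write(tick);   // normal fast path
            }
            // else: drop it; the next tick supersedes it anyway
        }

        // At bootstrap time, bound the per-channel write buffer (sizes are
        // illustrative and would need tuning):
        bootstrap.setOption("writeBufferHighWaterMark", 32 * 1024);
        bootstrap.setOption("writeBufferLowWaterMark", 8 * 1024);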


  • MS Access to SQL Server searching

    - by malou17
    How do I adapt this code if we are going to use a SQL Server database? In this code we used MS Access as the database.

        private void btnSearch_Click(object sender, System.EventArgs e)
        {
            String pcode = txtPcode.Text;
            int ctr = productsDS1.Tables[0].Rows.Count;
            int x;
            bool found = false;
            for (x = 0; x < ctr; x++)
            {
                if (productsDS1.Tables[0].Rows[x][0].ToString() == pcode)
                {
                    found = true;
                    break;
                }
            }
            if (found == true)
            {
                txtPcode.Text = productsDS1.Tables[0].Rows[x][0].ToString();
                txtDesc.Text = productsDS1.Tables[0].Rows[x][1].ToString();
                txtPrice.Text = productsDS1.Tables[0].Rows[x][2].ToString();
            }
            else
            {
                MessageBox.Show("Record Not Found");
            }
        }

        private void btnNew_Click(object sender, System.EventArgs e)
        {
            int cnt = productsDS1.Tables[0].Rows.Count;
            // note: the last row is at index cnt - 1, not cnt
            string lastrec = productsDS1.Tables[0].Rows[cnt - 1][0].ToString();
            int newpcode = int.Parse(lastrec) + 1;
            txtPcode.Text = newpcode.ToString();
            txtDesc.Clear();
            txtPrice.Clear();
            txtDesc.Focus();
        }

    Here's the connection string:

        Jet OLEDB:Global Partial Bulk Ops=2;Jet OLEDB:Registry Path=;
        Jet OLEDB:Database Locking Mode=0;
        Data Source="J:\2009-2010\1st sem\VC#\Sample\WindowsApplication_Products\PointOfSales.mdb"
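
    The search and new-record logic works against a DataSet and doesn't care where the data came from; what changes is how the DataSet gets filled. A sketch using System.Data.SqlClient (server, database, table and column names below are placeholders, and productsDS1 is the existing DataSet from the form):

        using System.Data;
        using System.Data.SqlClient;

        // Fill productsDS1 from SQL Server instead of the Access .mdb.
        string connStr = "Data Source=MYSERVER;Initial Catalog=PointOfSales;" +
                         "Integrated Security=True";

        using (var conn = new SqlConnection(connStr))
        using (var adapter = new SqlDataAdapter(
                   "SELECT ProductCode, Description, Price FROM Products", conn))
        {
            // Fill opens and closes the connection itself.
            adapter.Fill(productsDS1, "Products");
        }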


  • Culture Sensitive GetHashCode

    - by user114928
    Hi, I'm writing a C# application that will process some text and provide basic query functions. In order to ensure the best possible support for other languages, I am allowing the users of the application to specify the System.Globalization.CultureInfo (via the "en-GB" style code) and also the full range of collation options using the System.Globalization.CompareOptions flags enum. For regular string comparison I'm then using a combination of: (a) the String.Compare overload that accepts the culture and options, and (b) for some bulk processes, caching the byte data (KeyData) from the CompareInfo.GetSortKey overload that accepts the options, and using a byte-by-byte comparison of the KeyData. This seemed fine (although please comment if you think these two methods shouldn't be mixed), but then I had reason to use HashSet<T>, which only has an overload accepting an IEqualityComparer<T>. MS documentation seems to suggest that I should use StringComparer (which implements both IEqualityComparer<string> and IComparer<string>), but this only seems to support the "IgnoreCase" option from CompareOptions and not "IgnoreKanaType", "IgnoreSymbols", "IgnoreWidth" etc. I'm assuming that a StringComparer that ignores these other options could produce different hash codes for two strings that might be considered the same under my other comparison options; I'd therefore get incorrect results from my application. My only thought at the moment is to create my own IEqualityComparer<string> that generates a hash code from the SortKey.KeyData and checks equality using the String.Compare overload. Any suggestions?
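
    That last idea might look roughly like the following sketch (the hashing scheme and null handling are my own choices, not a vetted implementation). Because both sides are driven by the same CompareInfo and options, two strings that compare equal always hash identically:

        using System;
        using System.Collections.Generic;
        using System.Globalization;

        sealed class CollationComparer : IEqualityComparer<string>
        {
            private readonly CompareInfo _info;
            private readonly CompareOptions _options;

            public CollationComparer(CultureInfo culture, CompareOptions options)
            {
                _info = culture.CompareInfo;
                _options = options;
            }

            public bool Equals(string x, string y)
            {
                return _info.Compare(x, y, _options) == 0;
            }

            public int GetHashCode(string s)
            {
                // Hash the culture-sensitive sort key bytes, not the raw chars.
                byte[] key = _info.GetSortKey(s, _options).KeyData;
                unchecked
                {
                    int hash = 17;
                    foreach (byte b in key)
                        hash = hash * 31 + b;   // simple multiplicative byte fold
                    return hash;
                }
            }
        }

        // Usage: new HashSet<string>(new CollationComparer(
        //            CultureInfo.GetCultureInfo("en-GB"), CompareOptions.IgnoreKanaType));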


  • C++: Avoiding lots of boolean variables for multiple verification conditions in a trading app

    - by Naveen
    Hi, I am a junior dev on a trading app... we have an order-refresh verification unit. It has to verify order confirmations from the exchange. We send a bunch of different requests in bulk (NEW, MODIFY, CANCEL) to the exchange. Verification has to happen at most N times, at intervals of T, for all orders. If verification succeeds for all the orders before N retries, fine; otherwise we need to flag verification as unsuccessful. I've done a rough first pass, written in a hurry, like below:

        for ( N times )
        {
            for_each ( sent_request_order )   // SENT
            {
                // 1) get all the refreshed orders from DB or shared mem, i.e. REFRESHED
                // 2) find the current sent order in REFRESHED
                if ( not_found )   // not refreshed from exchange, continue to next order
                if ( found )
                    case NEW:    // check for new status, mark verification done
                    case MODIFY: // check for modified status; if not, mark pending,
                                 // go to next order, revisit the same after T time
                    case CANCEL: // check for cancelled status; if not, mark pending,
                                 // go to next order, revisit the same after T time
            }
            if ( all_verified ) exit from verification
            wait ( T sec )
        }

    order_verification_pending, order_verification_done, order_visited, order_not_visited, all_verified, all_not_verified... lots of boolean flags used for indication. Is there a better approach for doing this, such as splitting responsibilities across classes? I know this is not a general question, but all these flags are getting tedious to handle...
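
    One less flag-heavy shape, sketched with illustrative names rather than anything from the real system: give each order a single state enum instead of parallel booleans, and let the loop just count what is still pending.

        #include <vector>
        #include <algorithm>

        enum class VerifyState { Pending, Done, Failed };

        struct SentOrder {
            int         id;
            VerifyState state = VerifyState::Pending;
            // ... request type (NEW/MODIFY/CANCEL), etc.
        };

        bool verify_all(std::vector<SentOrder>& orders, int maxRetries) {
            for (int attempt = 0; attempt < maxRetries; ++attempt) {
                for (auto& order : orders) {
                    if (order.state != VerifyState::Pending) continue;
                    // refresh from DB/shared memory and compare status here;
                    // on a confirmed match: order.state = VerifyState::Done;
                }
                bool allDone = std::all_of(orders.begin(), orders.end(),
                    [](const SentOrder& o){ return o.state == VerifyState::Done; });
                if (allDone) return true;
                // sleep for T here before the next pass
            }
            return false;   // some orders never verified within N retries
        }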


  • How much effort do you have to put in to get gains from using SSE?

    - by John
    Case one: say you have a little class:

        class Point3D
        {
        private:
            float x, y, z;
        public:
            Point3D &operator+=(const Point3D &other);
            // ...etc
        };

        Point3D &Point3D::operator+=(const Point3D &other)
        {
            this->x += other.x;
            this->y += other.y;
            this->z += other.z;
            return *this;
        }

    A naive use of SSE would simply replace these function bodies with a few intrinsics. But would we expect this to make much difference? MMX used to involve costly state changes, IIRC; does SSE, or are its instructions just like any others? And even if there's no direct "use SSE" overhead, would moving the values into SSE registers and back out again really make it any faster?

    Case two: instead, you're working with a less OO-based code base. Rather than an array/vector of Point3D objects, you simply have a big array of floats:

        float coordinateData[NUM_POINTS * 3];

        void add(int i, int j)   // yes it's unsafe, no overlap check... example only
        {
            for (int x = 0; x < 3; ++x)
            {
                coordinateData[i*3 + x] += coordinateData[j*3 + x];
            }
        }

    What about use of SSE here? Any better?

    In conclusion: is trying to optimise single vector operations using SSE actually worthwhile, or is it really only valuable when doing bulk operations?
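
    For the bulk case, a sketch of what the SSE version might look like (it assumes the float buffer's length is a multiple of 4; alignment and tail handling are glossed over):

        #include <xmmintrin.h>   // SSE intrinsics

        // Add src[] into dst[] four floats at a time. The point layout doesn't
        // matter here: treating the whole buffer as one flat stream is exactly
        // what lets the bulk version amortize the load/store overhead that
        // dominates the single Point3D case.
        void add_bulk(float *dst, const float *src, int count /* multiple of 4 */)
        {
            for (int i = 0; i < count; i += 4)
            {
                __m128 a = _mm_loadu_ps(dst + i);
                __m128 b = _mm_loadu_ps(src + i);
                _mm_storeu_ps(dst + i, _mm_add_ps(a, b));
            }
        }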


  • Testing subpackage modules in Python 3

    - by Mitchell Model
    I have been experimenting with various uses of hierarchies like this and the differences between absolute and relative imports, and can't figure out how to do routine things with the package, subpackages, and modules without simply putting everything on sys.path. I have a two-level package hierarchy:

        MyApp/
            __init__.py
            Application/
                __init__.py
                Module1
                Module2
                ...
            Domain/
                __init__.py
                Module1
                Module2
                ...
            UI/
                __init__.py
                Module1
                Module2
                ...

    I want to be able to do the following:

    1. Run test code in a module's "if main" when the module imports from other modules in the same directory.
    2. Have one or more test-code modules in each subpackage that run unit tests on the modules in the subpackage.
    3. Have a set of unit tests that reside someplace reasonable but outside the subpackages, either in a sibling package, at the top-level package, or outside the top-level package (though all these might end up doing is running the tests in each subpackage).
    4. "Enter" the structure from any of the three subpackage levels: e.g. run code that just uses Domain modules; run code that just uses Application modules, where Application uses code from both Application and Domain modules; and run code from UI that uses code from both UI and Application. For instance, Application test code would import Application modules but not Domain modules.
    5. After developing the bulk of the code without subpackages, continue developing and testing after organizing the modules into this hierarchy.

    I know how to use relative imports so that external code that puts MyApp on its sys.path can import MyApp, import any subpackages it wants, and import things from their modules, while the modules in each subpackage can import other modules from the same subpackage or from sibling packages. However, the development needs listed above seem incompatible with subpackage structuring -- in other words, I can't have it both ways: a well-structured multi-level package hierarchy used from the outside and also used from within, in particular for testing, but also because modules from one design level (in particular the UI) should not import modules from a design level below the next one down. Sorry for the long essay, but I think it fairly represents the struggles a lot of people have had adopting the new relative import mechanisms.
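
    One mechanism that addresses the "run a module from inside the package" part (a sketch of standard Python behavior, not a full answer to the layering question): execute modules with -m from the directory above MyApp, so the package context stays intact, relative imports resolve, and an if-main block can still drive ad-hoc tests. Note that unittest's discover command needs Python 2.7 / 3.2 or later.

        # From the directory that contains MyApp/ :
        # run one module's "if main" test code with the package context intact
        python3 -m MyApp.Application.Module1

        # run a subpackage's unit tests, wherever they live
        python3 -m unittest discover -s MyApp/Application -t .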


  • Where to send mails from in an MVC framework, so that there is no duplication of code?

    - by Sabya
    It's an MVC question. Here is the situation: I am writing an application where I have "groups". You can invite other persons to your groups by typing their email and clicking "invite". There are two ways this functionality can be called: (a) the web interface and (b) the API. After the mail sending is over, I want to report to the user which mails were sent successfully (i.e. whether the SMTP send succeeded; currently, I am not interested in reporting mail bounces). So I am thinking about how to design this so that there is no code duplication; that is, the API and the web interface should share the bulk of the code. To do this, I can create the method "invite" inside the model "group", so both the API and the web interface can just call:

        $group->invite($emailList);

    This method can send the emails. But then I have to access the mail templates, create the views for the mails, and send the mails, which should actually be in the "View" part, or at least in the "Controller" part. What is the most elegant design in this situation? Note: I am really tempted to write this in the Model. My only doubt is that previously I thought of sending mails as "presentation" too, since it may be considered a different form of generating output.


  • How to upload files and store them in a server local path when MS SQL Server allows remote connections

    - by user193655
    I am developing a Win32 Windows application with Delphi and MS SQL Server. It works fine in a LAN, but I am trying to add support for SQL Server remote connections (i.e. working with a DB that can be accessed via an external IP, as described in this article: http://support.microsoft.com/default.aspx?scid=kb;EN-US;914277). Basically I have a table in the DB where I keep the DocumentID, the document description and the document path (like \\FILESERVER\MyApplicationDocuments\45.zip). Of course \\FILESERVER is a local (LAN) path for the server, but not for the client (now that I am adding support for remote connections). So I need a way to access \\FILESERVER even though I cannot see it on the LAN. I found the following T-SQL code snippet that is perfect for the "download trick":

        SELECT BulkColumn as MyFile
        FROM OPENROWSET(BULK '\\FILESERVER\MyApplicationDocuments\45.zip', SINGLE_BLOB) AS X

    With the code above I can download a file to the client. But how do I upload one? I need an "upload trick" to be able to insert new files, but also to delete or replace existing files. Can anyone suggest one? If a trick is not available, could you suggest an alternative, like an extended stored procedure or calling some .NET assembly from the server?
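
    An alternative worth sketching (an assumption on my part, and it requires SQL Server 2005 or later for varbinary(max)): skip the file-share path entirely and store the documents in the table, so "upload" becomes a parameterized INSERT that works over any SQL connection, remote or not (the Content column name is illustrative):

        -- One-time schema change:
        ALTER TABLE Documents ADD Content varbinary(max) NULL;

        -- "Upload": the client sends the file bytes as an ordinary parameter;
        -- no UNC path or server filesystem access is involved.
        INSERT INTO Documents (DocumentID, Description, Content)
        VALUES (@DocumentID, @Description, @FileBytes);

        -- "Download" becomes a plain SELECT instead of OPENROWSET:
        SELECT Content FROM Documents WHERE DocumentID = @DocumentID;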



  • Generate and download a text file in javascript

    - by Mark B
    All my research so far suggests this can't be done, but I'm hoping someone here has some cunning ideas. I have a form on a website which allows users to bulk upload lots of URLs to add to a list on the server. There's quite a lot of server-side processing to do on each URL, so to avoid timeouts and to display progress, I've implemented the upload using jQuery to submit the URLs one at a time using ajax. This is all working nicely. However, part of the processing on each URL is deduplicating it against the complete list. The ajax call returns a status indicating either a successful upload or a rejection due to duplication. As the upload progresses, I tell the user how many URLs have been rejected as duplicates (along with overall progress and ETA). The problem now is how to give the user a complete list of the failed duplicate URLs. I've kept them in an array in my jQuery, and would like the user to be able to click on a link on the form to download a text file containing those URLs. Is this possible just using client-side processing? The server-side processing basically handles a single keyword at a time. I'd rather not have to store the duplicates in a database table with some kind of session key which gets sent with every ajax call, and is then used at the end to generate the text file server-side (and then gets cleaned up some time later). I can see how to do this, but it seems very clunky and a bit 20th century.
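
    One client-only approach (a sketch; browser coverage varies, and older IE in particular refuses data: URIs for downloads): build a data: URL from the array and hand it to the user as a link.

        // urls is the array of rejected duplicates kept by the uploader;
        // 'dupe-download' is a hypothetical <a> element on the form.
        function makeDownloadLink(urls) {
            var link = document.getElementById('dupe-download');
            link.href = 'data:text/plain;charset=utf-8,' +
                        encodeURIComponent(urls.join('\r\n'));
            // Suggest a filename where supported (the download attribute is a
            // later addition and not honored by older browsers).
            link.setAttribute('download', 'duplicates.txt');
        }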


  • Framework or design pattern for mailing all users of a webapp

    - by Todd Owen
    My app takes care of user registration (with the option to receive email announcements), and can easily handle the actual template-based rendering of email for a given user. JavaMail provides the mail transport layer. But how should I design the application layer between the business objects (e.g. User) and the mail transport? The straightforward approach would be a simple, synchronous loop: iterate through the users, queue the emails, and be done with it. "Queue" might mean sending them straight to the MTA (mail server), or to an in-memory queue to be consumed by another thread. However, I also plan to implement features like throttling the rate of emails, processing bounced emails (NDRs), and maintaining status across application restarts. My intuition is that a good design would decouple this from both the business layer and the mail transport layer as much as possible. I wondered if others had solved this problem before, but after much searching I haven't found any Java libraries which seem to fit this problem. Standalone mail apps such as James or list servers are too large in scope; packages like Spring's MailSender or Commons Email are too small in scope (being basically drop-in replacements for JavaMail). For other languages I haven't found anything appropriate either. I'm curious about how other developers have gone about adding bulk mailing to their applications.
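
    As a starting point, the decoupling described above is often just a producer/consumer queue with a throttle in front of the transport. A minimal sketch (deliberately JavaMail-free; MailJob and its send() are placeholders for whatever rendering and transport the app already has, and throttling here is the crudest possible form):

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // The business layer enqueues; this consumer owns pacing and, eventually,
        // retry/bounce bookkeeping, so neither side knows about the other.
        class ThrottledMailer implements Runnable {
            private final BlockingQueue<MailJob> queue =
                new LinkedBlockingQueue<MailJob>();
            private final long minMillisBetweenSends;

            ThrottledMailer(long minMillisBetweenSends) {
                this.minMillisBetweenSends = minMillisBetweenSends;
            }

            void enqueue(MailJob job) { queue.add(job); }

            public void run() {
                try {
                    while (true) {
                        MailJob job = queue.take();          // blocks until work arrives
                        job.send();                          // delegate to JavaMail here
                        Thread.sleep(minMillisBetweenSends); // crude rate limit
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();      // shut down politely
                }
            }
        }

        interface MailJob { void send(); }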


  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2-hour period, 5-10 million inserts to a 34 GB table within a single master/slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single-field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two-hour period. So, I have a couple of general questions. 1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem. So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID, though I don't really see what that achieves that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes? Thanks, Ben
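
    On question 1, one property worth knowing (it holds for InnoDB's default auto-increment lock modes; the interleaved mode 2 in later versions can break the guarantee): a single multi-row INSERT is allocated consecutive IDs, and LAST_INSERT_ID() returns the first of them, so a batch of 10 still lets the program recover every PK. A sketch with placeholder table and column names:

        -- One round trip, ten rows:
        INSERT INTO events (field_a, field_b) VALUES
          (1,'a'),(2,'b'),(3,'c'),(4,'d'),(5,'e'),
          (6,'f'),(7,'g'),(8,'h'),(9,'i'),(10,'j');

        -- First generated ID for THIS statement on THIS connection:
        SELECT LAST_INSERT_ID();
        -- With consecutive allocation, the rows received
        -- LAST_INSERT_ID() .. LAST_INSERT_ID() + 9, in VALUES order.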


  • Attach an entity that is not new, perhaps having been loaded from another DataContext. LINQ to SQL -

    - by soldieraman
    Alright, here's how I got this error. I have one application sitting on a server, and two users accessing the application were doing some bulk data processing: e.g. entering values, after which the application works with another system to extract values for them, and then saves. I can't recreate the error. The error logs show:

    - The error happened at the same time in both instances of the application.
    - Both happened on an Attach/Submit (but in two different functions).
    - There is no way they are using the same DataContext object, as I store the DataContext in HttpContext.Items.

    My hunch/guess is one of the following:

    1. One DataContext was not refreshed, i.e. an object was created for the same item twice because it was new in both forms. E.g. a customer number: a customer was created by one DataContext (as one couldn't be found); the other DataContext couldn't find it either (I am using compiled queries to find it in the DataContext), so it created another object, and attaching failed.
    2. HttpContext.Items lost its value somehow (I am using a virtual PC as the server; maybe something went wrong there).

    I am leaning more towards the second, as I can't recreate the error, but it might just be a timing thing (for Attach/Save), and the error itself also makes me think of the second.


  • Confusion about using types instead of GTTs in Oracle

    - by Omnipresent
    I am trying to convert queries like the one below to types, so that I won't have to use a GTT:

        insert into my_gtt_table_1
            (house, lname, fname, MI, fullname, dob)
        (select house, lname, fname, MI, fullname, dob
         from (select 'REG' house,
                      mbr_last_name lname,
                      mbr_first_name fname,
                      mbr_mi MI,
                      mbr_first_name || mbr_mi || mbr_last_name fullname,
                      mbr_dob dob
               from table_1 a, table_b
               where a.head = b.head
                 and mbr_number = '01'
                 and mbr_last_name = v_last_name) c

    The above is just a sample, but the real queries are bigger than this, and they live inside a stored procedure. So, to avoid the GTT (my_gtt_table_1), I did the following:

        create or replace type lname_row as object
        (
            house    varchar2(30),
            lname    varchar2(30),
            fname    varchar2(30),
            MI       char(1),
            fullname varchar2(63),
            dob      date
        )

        create or replace type lname_exact as table of lname_row

    Now in the SP:

        type lname_exact is table of <what_table_should_i_put_here>%rowtype;
        tab_a_recs lname_exact;

    In the above I am not sure what table to put, as my query has nested subqueries. The query in the SP (I am trying this as a sample to see if it works):

        select lname_row('', '', '', '', '', '', sysdate)
        bulk collect into tab_a_recs
        from table_1;

    I am getting errors like:

        ORA-00913: too many values

    I am really confused and stuck with this :(
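
    For what it's worth, the ORA-00913 in the sample is consistent with an arity mismatch: lname_row has six attributes, but the constructor call passes seven values (six empty strings plus sysdate). A sketch of matching usage, keeping the schema-level declarations above (column names are taken from the sample query; since lname_exact already exists at schema level, the SP just declares a variable of it, with no %rowtype needed):

        declare
            tab_a_recs lname_exact;
        begin
            select lname_row('REG',            -- house
                             mbr_last_name,    -- lname
                             mbr_first_name,   -- fname
                             mbr_mi,           -- MI
                             mbr_first_name || mbr_mi || mbr_last_name,  -- fullname
                             mbr_dob)          -- dob: six values for six attributes
            bulk collect into tab_a_recs
            from table_1;
        end;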

