Search Results

Search found 2214 results on 89 pages for 'significant figures'.

  • ORM solutions (JPA, Hibernate) vs. JDBC

    - by Grasper
    I need to be able to insert/update objects at a consistent rate of at least 8000 objects every 5 seconds in an in-memory HSQL database. I have done some comparative performance testing between Spring/Hibernate/JPA and pure JDBC, and have found a significant difference in performance with HSQL. With Spring/Hib/JPA I can insert 3000-4000 of my 1.5 KB objects (with a one-many and a many-many relationship) in 5 seconds, while with direct JDBC calls I can insert 10,000-12,000 of those same objects. I cannot figure out why there is such a huge discrepancy. I have tweaked the Spring/Hib/JPA settings a lot, trying to get close in performance, without luck. I want to use Spring/Hib/JPA for future expandability, and because the foreign-key relationships (one-many and many-many) are difficult to maintain by hand; but the performance requirements seem to point towards pure JDBC. Any ideas why there would be such a huge discrepancy?
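
    For reference, a minimal sketch of the kind of JDBC batching that usually accounts for gaps like this - batched statements, one commit per chunk - against a hypothetical items table; Hibernate can often recover much of the difference with hibernate.jdbc.batch_size and ordered inserts, but rarely all of it:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class BatchInsertSketch {
            public static void main(String[] args) throws Exception {
                // In-memory HSQLDB connection; database and table names are hypothetical,
                // and the items table is assumed to exist already.
                Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
                conn.setAutoCommit(false); // commit per chunk, not per row
                PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO items (id, payload) VALUES (?, ?)");
                for (int i = 1; i <= 10000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "payload-" + i);
                    ps.addBatch();
                    if (i % 500 == 0) {
                        ps.executeBatch(); // flush in chunks to bound memory
                    }
                }
                ps.executeBatch(); // flush any remainder
                conn.commit();
                ps.close();
                conn.close();
            }
        }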

    Read the article

  • Multiple Service Address Configs in WCF Silverlight App

    - by CraigS
    My team is building our first significant Silverlight application, using a three-layered architecture and WCF. We have developed about 10 separate WCF services in the middle layer so far, and this number is only going to grow. Generally, the presentation layer (i.e. the Silverlight app) points to the services as hosted on our dev server. However, there are times when I want it to access the services on localhost, i.e. the developer's machine. Is there an easy way to change where the presentation layer looks for the services? Is there some way of easily switching between the two?
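
    One low-friction option, sketched below under assumptions: generated WCF proxies expose a constructor taking a binding and an explicit EndpointAddress, so the host can be switched in one place (here with a compile-time symbol). MyServiceClient, the hosts, and the .svc path are hypothetical stand-ins for your generated proxies and config:

        using System.ServiceModel;

        public static class ServiceAddresses
        {
        #if DEBUG
            // Hypothetical developer-machine host.
            private const string Host = "http://localhost:1234/";
        #else
            // Hypothetical dev-server host.
            private const string Host = "http://devserver/";
        #endif

            public static MyServiceClient CreateMyServiceClient()
            {
                // Generated proxies accept (Binding, EndpointAddress) directly.
                return new MyServiceClient(
                    new BasicHttpBinding(),
                    new EndpointAddress(Host + "MyService.svc"));
            }
        }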

    Read the article

  • How to skip "Loose Object" popup when running 'git gui'

    - by Michael Donohue
    When I run 'git gui' I get a popup that says: This repository currently has approximately 1500 loose objects. It then suggests compressing the database. I've done this before, and it reduces the loose objects to about 250, but that doesn't suppress the popup. Compressing again doesn't change the number of loose objects. Our current workflow requires significant use of 'rebase', as we are transitioning from Perforce and Perforce is still the canonical SCM. Once Git is the canonical SCM, we will do regular merges and the loose-object problem should be greatly mitigated. In the meantime, I'd really like to make this 'helpful' popup go away.
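
    If your Git version supports it, git-gui reads a boolean configuration key that disables this check entirely; a hedged one-liner (confirm that gui.gcwarning appears in your git-config documentation before relying on it):

        git config --global gui.gcwarning false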

    Read the article

  • How should I handle incomplete packet buffers?

    - by Benjamin Manns
    I am writing a client for a server that typically sends data as strings of 500 bytes or fewer. However, the data will occasionally exceed that, and a single set of data could contain 200,000 bytes, for all the client knows (on initialization or significant events). I would prefer not to give every client a 50 MB socket buffer (if that's even possible). Each set of data is delimited by a null \0 character. What kind of structure should I look at for storing partially received data sets? For example, the server may send ABCDEFGHIJKLMNOPQRSTUV\0WXYZ\0123!\0; I would want to process ABCDEFGHIJKLMNOPQRSTUV, WXYZ, and 123! independently. The server could also send ABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890LOL123HAHATHISISREALLYLONG without the terminating character; I would want that data set stored somewhere for later appending and processing. I'm using the asynchronous socket methods (BeginSend, EndSend, BeginReceive, EndReceive), if that matters.
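
    A minimal sketch of the usual approach, assuming text data: keep one growable buffer per connection, append each chunk as it arrives from EndReceive, and emit only the complete null-terminated messages, carrying any unterminated tail forward. The class and method names are hypothetical:

        using System.Text;

        class DelimitedBuffer
        {
            private readonly StringBuilder _pending = new StringBuilder();

            // Feed each decoded chunk from EndReceive; returns any complete
            // '\0'-terminated messages, keeping the unterminated tail buffered.
            public string[] Append(string chunk)
            {
                _pending.Append(chunk);
                string all = _pending.ToString();
                int lastDelimiter = all.LastIndexOf('\0');
                if (lastDelimiter < 0)
                    return new string[0]; // nothing complete yet; keep buffering

                _pending.Length = 0;
                _pending.Append(all.Substring(lastDelimiter + 1)); // carry the tail over
                return all.Substring(0, lastDelimiter).Split('\0');
            }
        }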

    Read the article

  • How to deal with color loss on GDI+ Image Resize?

    - by user125775
    Hello all. I am resizing images with C#/GDI+ using the following routine:

        bmpOut = new Bitmap(lnNewWidth, lnNewHeight);
        Graphics g = Graphics.FromImage(bmpOut);
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBilinear;
        g.FillRectangle(Brushes.White, 0, 0, lnNewWidth, lnNewHeight);
        g.DrawImage(loBMP, 0, 0, lnNewWidth, lnNewHeight);

    and encoding it with the highest quality:

        System.Drawing.Imaging.Encoder qualityEncoder = System.Drawing.Imaging.Encoder.Quality;
        EncoderParameter myEncoderParameter = new EncoderParameter(qualityEncoder, 100L);

    However, the images I get have significant loss of color (I am using JPG images only). The quality is otherwise fine, but the color is washed out. Do you have any idea what is going on? Thanks a lot in advance.
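
    Two things worth checking, sketched below under assumptions. First, GDI+ ignores an embedded ICC color profile unless you ask for it when loading the source image, which commonly produces exactly this washed-out look; second, the EncoderParameter only takes effect if it is actually attached to the Save call. File names here are placeholders:

        using System.Drawing;
        using System.Drawing.Imaging;

        // Load honoring any embedded color profile
        // (second argument = useEmbeddedColorManagement).
        Image loBMP = Image.FromFile("in.jpg", true);

        // ... resize into bmpOut as in the routine above ...

        // Find the JPEG encoder and pass the quality parameter to Save.
        ImageCodecInfo jpegCodec = null;
        foreach (ImageCodecInfo codec in ImageCodecInfo.GetImageEncoders())
            if (codec.MimeType == "image/jpeg")
                jpegCodec = codec;

        EncoderParameters encoderParams = new EncoderParameters(1);
        encoderParams.Param[0] =
            new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 100L);
        bmpOut.Save("out.jpg", jpegCodec, encoderParams);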

    Read the article

  • What are all the disadvantages of using files as a means of communicating between two processes?

    - by Manny
    I have legacy code which I need to improve for performance reasons. My application comprises two executables that need to exchange certain information. In the legacy code, one exe writes to a file (the file name is passed as an argument to the exe), and the second executable first checks whether such a file exists; if it does not, it checks again, and when it finds the file it reads its contents. This is how information is transferred between the two executables. The way the code is structured, the second executable succeeds on the first try. Now I have to clean this code up, and I was wondering what the disadvantages are of using files as a means of communication rather than some inter-process communication mechanism like pipes. Is opening and reading a file more expensive than a pipe? Are there any other disadvantages? And how significant do you think the performance degradation would be? The legacy code runs on both Windows and Linux.

    Read the article

  • read subprocess stdout line by line

    - by Caspin
    My Python script uses subprocess to call a Linux utility that is very noisy. I want to store all of the output in a log file but show only some of it to the user. I thought the following would work, but the output does not show up in my application until the utility has produced a significant amount of output.

        # fake_utility.py, just generates lots of output over time
        import time
        i = 0
        while True:
            print hex(i)*512
            i += 1
            time.sleep(0.5)

        # filters output
        import subprocess
        proc = subprocess.Popen(['python', 'fake_utility.py'], stdout=subprocess.PIPE)
        for line in proc.stdout:
            # the real code does filtering here
            print "test:", line.rstrip()

    The behavior I really want is for the filter script to print each line as it is received from the subprocess - sorta like what tee does, but with Python code. What am I missing? Is this even possible?
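
    A hedged sketch of the usual fix: in Python 2, "for line in proc.stdout" goes through the file object's internal read-ahead buffer, so lines appear only after a large block has accumulated; calling readline() directly (via iter() with a sentinel) yields each line as it arrives:

        import subprocess

        proc = subprocess.Popen(['python', 'fake_utility.py'],
                                stdout=subprocess.PIPE)
        # iter() with a sentinel stops cleanly at EOF, when readline() returns ''.
        for line in iter(proc.stdout.readline, ''):
            print "test:", line.rstrip()
        proc.wait()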

    Read the article

  • Is it possible to use DLR in a .NET 3.5 website project?

    - by Aplato
    I'm trying to evaluate an expression stored in a database, e.g. "if (Q1 == 2) {result = 3.1;} elseif (Q1 == 3) {result = 4.1;} else result = 5.9;". Rather than parsing it myself, I'm trying to use the DLR. I'm using version 0.92 from the CodePlex repository, and my solution is a .NET 3.5 website; I'm having conflicts between the System.Core and Microsoft.Scripting.ExtensionAttribute DLLs. Error = { Description: "'ExtensionAttribute' is ambiguous in the namespace 'System.Runtime.CompilerServices'.", File: "InternalXmlHelper.vb" }. At this time I cannot upgrade to .NET 4.0, and I make significant use of the .NET 3.5 features (so downgrading is not an option either). Any help greatly appreciated.

    Read the article

  • Performance of .NET ILMerged assemblies

    - by matt
    I have two .NET libraries: "Foo.Bar" and "Foo.Baz". "Foo.Bar" is self-contained, while "Foo.Baz" references "Foo.Bar". Assuming I do the following:

    1. Use ILMerge to merge "Foo.Bar.dll" with "Foo.Baz.dll" into "Foo1.dll".
    2. Create a new solution containing the entirety of both "Foo.Bar" and "Foo.Baz" (since I have access to their source code), and compile this into "Foo2.dll".

    Will there be any differences in the performance of Foo1.dll and Foo2.dll when their functionality is used from an external project? If so, how significant is the performance difference, and is it a one-off cost (on load?) or an ongoing one? Are there any other pros or cons with either approach?

    Read the article

  • Cost to GC of using weak references in C#?

    - by Scott Bilas
    In another question, Stephen C says: A second concern is that there are runtime overheads with using weak references. The obvious costs are those of creating weak references and calling get on them. A less obvious cost is that significant extra work needs to be done each time the GC runs. So what exactly is the cost to the GC of a weak ref? What extra work does it need to do, and how big of a deal is it? I can make some educated guesses, but am interested in the actual mechanics.

    Read the article

  • iphone Memory gets freed in debug mode but not in release mode

    - by gdr
    I have been testing my iPhone debug build on both the device and the simulator with Activity Monitor, Leaks, and Object Allocations. The code is pretty well optimized, so I decided to test the release build. I went into the project menu, set the target build to release, added the necessary header paths my app uses to the header search paths, and ran the release build on the device with the instruments mentioned above. What I have noticed is that memory that was freed in the debug build does not get freed in the release build. There is one place in my app where I remove a scroll view with some images, which frees up a significant amount of memory in the debug build, but no memory is freed at that point in the release build. Does anyone have any ideas about where I should start looking? Did I set up my release build wrong?

    Read the article

  • Perl, efficient parsing of csv file

    - by Mike
    I'm working on a project that involves parsing a large CSV-formatted file in Perl, and am looking to make things more efficient. My approach has been to split() the file by lines first, and then split() each line again by commas to get the fields. But this is suboptimal, since at least two passes on the data are required (once to split by lines, then once again for each line). This is a very large file, so cutting the processing in half would be a significant improvement to the entire application. My question is: what is the most time-efficient way of parsing a large CSV file using only built-in tools? Note: each line has a varying number of tokens, so we can't just ignore lines and split by commas only. Also, we can assume the fields contain only alphanumeric ASCII data (no special characters or other tricks). Also, I don't want to get into parallel processing, although that might work effectively.
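
    A minimal single-pass sketch using only built-ins, assuming the fields really are plain alphanumeric ASCII with no embedded commas or quoting (otherwise Text::CSV from CPAN is the safer tool); the file name is a placeholder:

        open(my $fh, '<', 'data.csv') or die "cannot open: $!";
        while (my $line = <$fh>) {           # one pass: read a line at a time
            chomp $line;
            my @fields = split /,/, $line;   # handles a varying number of tokens per line
            # ... process @fields here ...
        }
        close($fh);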

    Read the article

  • Running out of memory but not seeing excessive object allocation in Instruments

    - by Scotty Allen
    I have an iPad app that's crashing due to low memory. However, Instruments doesn't show any significant amount of memory allocated in ObjectAlloc - it stays under 1 MB for the lifetime of the application. Leaks shows less than 1 kB leaked over the course of the run. Memory Monitor, on the other hand, shows the free memory on the device dropping significantly with use, eventually to the point that it's out of memory. (The original post included a screenshot from Instruments.) I'm totally stumped. As far as I can tell, this basically says that as far as my app is concerned I'm never using more than about 750 kB, but the device is still running out of physical memory, which is causing my app to crash/force-exit. I'm new to debugging memory issues with Xcode. Am I measuring this wrong? Is there another way to see where this memory is going?

    Read the article

  • Is it better to store serialized data or raw HTML in MySQL?

    - by Yegor
    I de-normalized my database, since the application was crawling otherwise, and I'm storing the list of categories for each item in the DB as raw HTML, simply echoing it out in my design. Each category is actually a link, i.e. an anchor tag. Naturally, this is a bit of a pain, especially if I want to change how the category links are displayed, since I have to update all of the old cached entries. What if I were to store this data as a serialized array instead, unserialize it, and then apply the formatting in PHP? Would there be a significant performance decrease compared to simply echoing out the raw HTML?
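
    A minimal sketch of the serialized-array alternative, with hypothetical column and category names; since the markup is generated at read time, a redesign no longer invalidates old rows, and the extra unserialize() and string-building cost is usually trivial next to the database work:

        <?php
        // On write: store only the category names, not their markup.
        $csv = serialize(array('books', 'music', 'film'));
        // ... INSERT $csv into the item's row ...

        // On read: unserialize and apply the current formatting.
        $categories = unserialize($row['categories']);
        $links = array();
        foreach ($categories as $cat) {
            $links[] = '<a href="/category/' . urlencode($cat) . '">'
                     . htmlspecialchars($cat) . '</a>';
        }
        echo implode(', ', $links);
        ?>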

    Read the article

  • Events raised by BackgroundWorker not executed on expected thread

    - by Topdown
    A WinForms dialog uses BackgroundWorker to perform some asynchronous operations, with significant success. On occasion, the async process run by the background worker needs to raise events back to the WinForms app for a user response (a message asking the user if they wish to cancel), with the response captured in the CancelEventArgs of the event. I would have expected the worker's RaiseEvent to fire and the worker to then continue, requiring me to pause the worker until the response is received. Instead, however, the worker waits for the code executed by the raised event to complete. It seems the method I am calling via the event is actually running on the worker thread used by the BackgroundWorker, and I am surprised, since I expected to see it on the main thread, which is where the main form is running. Also surprisingly, no cross-thread exceptions are thrown. Can somebody please explain why this is not as I expect?
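
    That matches the documented behavior: only ProgressChanged and RunWorkerCompleted are marshalled back to the thread that created the BackgroundWorker; an event you raise yourself from inside DoWork runs synchronously on the worker thread (and no cross-thread exception appears until a control is actually touched). A hedged sketch of marshalling the prompt to the UI thread explicitly; mainForm and the surrounding DoWork handler are hypothetical:

        // Inside the DoWork handler: hop to the UI thread for the prompt.
        // Control.Invoke blocks the worker until the delegate returns,
        // so the "pause until answered" behavior comes for free.
        bool cancel = false;
        mainForm.Invoke(new MethodInvoker(delegate
        {
            // Runs on the UI thread; safe to show UI here.
            DialogResult answer = MessageBox.Show(mainForm,
                "Cancel the operation?", "Confirm",
                MessageBoxButtons.YesNo);
            cancel = (answer == DialogResult.Yes);
        }));
        if (cancel)
        {
            e.Cancel = true;  // e is the DoWorkEventArgs
            return;
        }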

    Read the article

  • Delete all records that have no foreign key constraints

    - by Rodney Burton
    I have a SQL 2005 table with millions of rows in it that is being hit by users all day and night. This table is referenced by 20 or so other tables with foreign key constraints. What I need to do on a regular basis is delete all records from this table where the "Active" field is set to false AND no records in any of the child tables reference the parent record. What is the most efficient way of doing this, short of deleting rows one at a time and letting SQL raise errors on the ones that violate constraints? Also, disabling the constraints is not an option, and I cannot hold locks on the parent table for any significant amount of time.
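
    A hedged sketch of one common pattern on SQL Server 2005: delete in small batches so locks stay brief, filtering with one NOT EXISTS per referencing table. All table and column names here are hypothetical, and the NOT EXISTS clause would be repeated for each of the 20 child tables:

        WHILE 1 = 1
        BEGIN
            DELETE TOP (1000) FROM dbo.Parent
            WHERE Active = 0
              AND NOT EXISTS (SELECT 1 FROM dbo.ChildA a WHERE a.ParentId = dbo.Parent.Id)
              AND NOT EXISTS (SELECT 1 FROM dbo.ChildB b WHERE b.ParentId = dbo.Parent.Id);
              -- ...one NOT EXISTS per referencing table...

            IF @@ROWCOUNT = 0 BREAK;  -- nothing deletable remains
        END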

    Read the article

  • WPF DataGrid Vs Windows Forms DataGridView

    - by Mrk Mnl
    I have experience in WPF and Windows Forms, but have only used the Windows Forms DataGridView and not the WPF DataGrid (which, I understand, was only included in .NET 4, or could be added to .NET 3.5 from CodePlex). I am about to develop an app that uses one of these controls heavily for large amounts of data, and have read that performance is an issue with the WPF DataGrid, so I may stick to the Windows Forms DataGridView. Is this the case? I do not want to use a third-party control. Does the Windows Forms DataGridView offer significantly better performance than the WPF DataGrid for large amounts of data? If I were to use WPF, I would prefer to target .NET 3.5 SP1, unless the DataGrid in .NET 4 is significantly better. Also, I want to use ADO.NET with DataTables, which I feel is better suited to Windows Forms.

    Read the article

  • Why shouldn't I use Flash again?

    - by acidzombie24
    I have heard many times that I should avoid Flash for my website, yet no one has given me a good reason. I searched for reasons, and I see many that are not true (such as that text in Flash is not indexable by search engines) or that may not be true or significant enough (eating more bandwidth - would a JS equivalent be bigger or smaller?). My site uses Flash to play back sound (M4A). I don't have to worry about indexing, the back button not working, etc. But I have a feeling there may be other reasons. What are the reasons I shouldn't use Flash on my website? I'll note one myself: the fact that the iPhone/iPod touch and other mobile devices do not support it - not a big deal for most sites, and obvious. What other reasons are there to avoid Flash on my site?

    Read the article

  • Use of LOC to determine project size

    - by acidzombie24
    How many lines of code (LOC) does it take for a project to be considered large? How about for just one person writing it? I know this metric is questionable, but there is a significant difference, for a single developer, between 1k and 10k LOC. I typically use whitespace for readability, especially for SQL statements, and I try to reduce the amount of LOC for maintenance purposes, following as many best practices as I can. For example, I created a unified diff of the code I modified today, and it was over 1k LOC (including comments and blank lines). Is "modified LOC" a better metric? I have ~2k LOC total, so it's surprising that I modified 1k. I guess rewriting counts as both a deletion and an addition, which doubles the stats.

    Read the article

  • Apache modules: C module vs mod_wsgi Python module - Performance

    - by Gopal
    Hi. A client of ours is asking us to implement a module in C for the Apache web server, for performance reasons. The module should handle RESTful URIs, access a database, and return results in JSON format. Many people here have recommended Python and mod_wsgi instead - but for simplicity-of-programming reasons. Can anyone tell me whether there is a significant difference in performance between the mod_wsgi Python solution and an Apache C module? Any anecdotes? Pointers to a study posted online?
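
    For scale, a minimal sketch of what the mod_wsgi side looks like; the routing and payload are hypothetical. With most request time typically spent in the database, the dispatch layer is often not the bottleneck either way:

        import json

        def application(environ, start_response):
            # mod_wsgi calls this entry point for each request.
            path = environ.get('PATH_INFO', '/')
            # ... look up `path` in the database here ...
            body = json.dumps({'path': path, 'status': 'ok'})
            start_response('200 OK', [('Content-Type', 'application/json'),
                                      ('Content-Length', str(len(body)))])
            return [body]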

    Read the article

  • Non-graphical linearity estimation

    - by aL3xa
    In my previous post, I was looking for correlation ratio (η or η²) routines in R. I was surprised that no one uses η for linearity checking in GLM procedures. Let's start from a simple example: how do you check the linearity of a bivariate correlation? Solely with a scatterplot? There are several ways of doing this; one is to compare the R² of linear and non-linear models, then apply an F test to look for a significant difference between them. Finally, the question is: how do you check linearity the "non-graphical" way?
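
    A minimal sketch of that F-test approach in R, with hypothetical variables x and y: fit the linear model, nest it inside a polynomial one, and let anova() test whether the extra curvature is significant:

        fit.lin  <- lm(y ~ x)                # linear model
        fit.poly <- lm(y ~ x + I(x^2))       # adds a quadratic term
        anova(fit.lin, fit.poly)             # F test: is the non-linear term significant?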

    Read the article

  • Can I expect a performance gain from removing this JOIN?

    - by makeee
    I have an "items" table with 1 million rows and a "users" table with 20,000 rows. When I select from the "items" table, I join on the "users" table (items.user_id = users.id) so that I can grab the "username" from the users table. I'm considering adding a username column to the items table and removing the join. Can I expect a decent performance increase from this? It's already quite fast, but it would be nice to decrease my load (which is pretty high). The downside is that if a user changes their username, items will still reflect the old username, but this is okay with me if I can expect a decent performance increase. I'm asking Stack Overflow because benchmarks aren't telling me much - both queries finish very quickly. Regardless, I'm wondering whether removing the join would lighten the load on the database to any significant degree.
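
    For concreteness, a hedged MySQL-flavored sketch of the denormalization being considered (column types and names assumed from the description); after the one-time backfill, the hot read path no longer joins:

        -- One-time schema change and backfill.
        ALTER TABLE items ADD COLUMN username VARCHAR(64);
        UPDATE items
          JOIN users ON items.user_id = users.id
           SET items.username = users.username;

        -- The hot read path no longer touches `users`.
        SELECT id, username /* , ... */ FROM items WHERE id = 12345;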

    Read the article

  • What is the safest way for a PHP script to connect to a local PostgreSQL instance on Linux?

    - by Botond Balázs
    I think if I granted the apache user appropriate privileges and used the ident authentication method, that would make the connection more secure because then the password wouldn't need to be stored in a connection string. Also, that way the security of the connection would depend on how secure the host system is. I disabled root login over ssh and only permit public key authentication so I think it is pretty secure. Does this have any significant security benefits or is it just wishful thinking? Is it necessary at all?
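
    A hedged sketch of the pg_hba.conf side of that setup, assuming the web server runs as the system user apache and a matching database role exists; note that on newer PostgreSQL versions the method for local (Unix-socket) connections is spelled peer rather than ident:

        # TYPE  DATABASE  USER    METHOD
        local   mydb      apache  ident

        <?php
        // No password in the connection string; identity comes from the OS user.
        $db = pg_connect('dbname=mydb user=apache');
        ?>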

    Read the article

  • Why is Silverlight so slow? (Especially when compared to Flash)

    - by Sahat
    I hope I don't have to explain what Silverlight is to the SO community. Anyway, TemplateMonster.com has recently released new Silverlight themes that have been ported from Flash (Silverlight Templates | Template Monster). I've noticed a significant lag on my MacBook Pro 13" in loading the template page in Silverlight - and not just Template Monster's templates, but other Silverlight applications on the web as well. Now why is that? I've been hearing how great Silverlight is and how it's a great business platform, blah blah blah, and now Microsoft plans to build Windows Phone 7 on top of the Silverlight framework. As much as I want to praise Silverlight, so far it's been nothing but a disappointment to me. Could someone enlighten me: what is so great about Silverlight, and why should I put up with that startup lag? Silverlight was really next on my "stuff to learn" list this summer, but now I am not so sure...

    Read the article

  • Safe to KILL a MySQL process REPLACEing records in a large MyISAM table?

    - by threecheeseopera
    I have a REPLACE query that has been running for a few days now on a few MyISAM tables, the largest having 20+ million records. I need it to stop. It is, basically:

        REPLACE INTO really_large_table (a, b, c, d)
        SELECT e, f, g, h
        FROM big_table
        INNER JOIN huge_table ON big_table.x LIKE CONCAT('%', huge_table.y, '%');

    I need to KILL it, and I am worried that I may corrupt really_large_table. Because the sub-query itself takes a significant amount of time, the REPLACEing probably occurs (relatively) infrequently; if this is true, does that make it less likely for the data to become corrupted? For the curious, here is the SO question asked about the query I am trying to kill.

    Read the article
