Search Results

Search found 8328 results on 334 pages for 'dour high arch'.


  • high performance hibernate insert

    - by luke
    I am working on a latency-sensitive part of an application. Basically, I receive a network event, transform the data, and then insert all the data into the DB. After profiling, I see that basically all my time is spent trying to save the data. Here is the code:

    private void insertAllData(Collection<Data> dataItems) {
        long start_time = System.currentTimeMillis();
        long save_time = 0;
        long commit_time = 0;
        Transaction tx = null;
        try {
            Session s = HibernateSessionFactory.getSession();
            s.setCacheMode(CacheMode.IGNORE);
            s.setFlushMode(FlushMode.NEVER);
            tx = s.beginTransaction();
            for (Data data : dataItems) {
                s.saveOrUpdate(data);
            }
            save_time = System.currentTimeMillis();
            tx.commit();
            s.flush();
            s.clear();
        } catch (HibernateException ex) {
            if (tx != null) tx.rollback();
        }
        commit_time = System.currentTimeMillis();
        System.out.println("Save: " + (save_time - start_time));
        System.out.println("Commit: " + (commit_time - save_time));
        System.out.println();
    }

    The size of the collection is always less than 20. Here is the timing data that I see:

    Save: 27   Commit: 9
    Save: 27   Commit: 9
    Save: 26   Commit: 9
    Save: 36   Commit: 9
    Save: 44   Commit: 0

    This is confusing to me. I figured the save would be quick and all the time would be spent on the commit, but clearly I'm wrong. I have also tried removing the transaction (it's not really necessary), but I saw worse times. I have set hibernate.jdbc.batch_size=20. I need this operation to be as fast as possible; ideally there would be only one round trip to the database. How can I do this?
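
    One thing worth checking (an educated guess, not a verified diagnosis): if the Data entity uses a native/identity ID generator, Hibernate must execute each INSERT immediately inside saveOrUpdate() to obtain the generated key, which would explain why the time shows up in the save phase and would also silently disable JDBC batching. Below is a minimal sketch of a batched alternative using a StatelessSession, which bypasses the first-level cache and dirty checking; the HibernateSessionFactory.getSessionFactory() accessor is assumed, and hibernate.jdbc.batch_size must be set for the inserts to be grouped.

    // Sketch only: assumes a non-identity ID generator (e.g. sequence or assigned),
    // otherwise the driver cannot batch the inserts into one round trip.
    private void insertAllDataBatched(Collection<Data> dataItems) {
        StatelessSession s = HibernateSessionFactory.getSessionFactory().openStatelessSession();
        Transaction tx = s.beginTransaction();
        try {
            for (Data data : dataItems) {
                s.insert(data); // plain INSERT: no cache, no dirty checking, no cascades
            }
            tx.commit();        // with batch_size >= 20, ideally one batched round trip
        } catch (HibernateException ex) {
            tx.rollback();
            throw ex;
        } finally {
            s.close();
        }
    }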

    Read the article

  • Using ThreadPool.QueueUserWorkItem in ASP.NET in a high traffic scenario

    - by Michael Hart
    I've always been under the impression that using the ThreadPool for (let's say non-critical) short-lived background tasks was considered best practice, even in ASP.NET, but then I came across this article that seems to suggest otherwise - the argument being that you should leave the ThreadPool to deal with ASP.NET-related requests. So here's how I've been doing small asynchronous tasks so far:

    ThreadPool.QueueUserWorkItem(s => PostLog(logEvent))

    And the article suggests instead creating a thread explicitly, similar to:

    new Thread(() => PostLog(logEvent)) { IsBackground = true }.Start()

    The first method has the advantage of being managed and bounded, but there's the potential (if the article is correct) that the background tasks then vie for threads with ASP.NET request handlers. The second method frees up the ThreadPool, but at the cost of being unbounded and thus potentially using up too many resources. So my question is: is the advice in the article correct? If your site was getting so much traffic that your ThreadPool was getting full, is it better to go out-of-band, or would a full ThreadPool imply that you're getting to the limit of your resources anyway, in which case you shouldn't be trying to start your own threads? Clarification: I'm only asking in the scope of small, non-critical asynchronous tasks (e.g., remote logging), not expensive work items that would require a separate process (in those cases I agree you'd need a more robust solution).

    Read the article

  • Android: Image displayed in WebView from URL with high quality loss

    - by Merlino
    I want to display an image from a URL in a WebView on Android. On Android phones with version 1.5 and 1.6 there is no problem, but with the same pic and the same code on an Android phone with version 2.0, the pic is totally pixelated - as if Android resizes the image to a smaller one first and then resizes it back to "normal" size. Unfortunately it's important to display the pic without any quality loss. I tried putting it in the source folder to show it as a normal image, but on Android 2.0 I get an exception because the image is too big (on Android 1.6 there is no problem). Any ideas how I can display the image without quality loss on Android 2.0?
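
    For what it's worth, a sketch of one workaround to try (an assumption, not a confirmed fix): Android 2.0 appears to apply density-based scaling in WebView, so forcing a 1:1 initial scale and disabling zoom may avoid the resize round trip. The layout ID and URL below are placeholders.

    // Hypothetical snippet: force the WebView to render the image at 100% scale.
    WebView webView = (WebView) findViewById(R.id.imageWebView); // placeholder id
    webView.setInitialScale(100);                     // 100% = no density scaling
    webView.getSettings().setBuiltInZoomControls(false);
    webView.loadUrl("http://example.com/image.png");  // placeholder URL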

    Read the article

  • High level macro not recognized - Beginner MASM

    - by Francisco P.
    main proc
        finit
        .while ang < 91
            invoke func, ang
            fstp res
            print real8$(ang), 13, 10
            print real8$(res), 13, 10
            fld ang
            fld1
            fadd
            fstp ang
        .endw
        ret
    main endp

    What's wrong with this piece of MASM code? I get an error on .endw - the assembler tells me "invalid instruction operands". I have run some tests to confirm that. Thank you for your time!

    Read the article

  • ntpd on Fedora Core 6 with high negative time reset values

    - by Mark White
    The basic problem is that we have an FC6 server instance running on a virtual machine, and the system time seems to have slowly drifted until it is now causing a problem. The server runs 24/7 and has been up for 155 days. It has been changed to show GMT, and reports the time as (for example) 00:15:15 GMT whereas the actual time is 00:00:00 GMT - an offset of 915 seconds. selinux has been changed with 'setenforce 0' for testing, and I am running as root.

    I stop the ntpd service and change the time in System|Administration|Date & Time. The time still shows the same with 'date' in bash. There are no error logs. I change the date with 'date --set' in bash. The response confirms the changed date. I run 'date' and the incorrect date is shown. There are no error logs. I start the ntpd service and /var/log/messages shows success with 'time reset -915.720139s'. The date remains unchanged. ntpq -p shows three time servers, all with offsets of around -915 seconds. I stop the ntpd service and try 'ntpd -gqx' and get the same result as above - success, but a large negative time reset.

    I've tried varying combinations of the above, and a few more settings in System|Administration|Date & Time - no change. I just need to reset the system time to GMT, no offset, but I can't wait for ntpd to slew the time over the next few weeks. Any advice is welcome, cheers! Surely this shouldn't be this difficult... Mark...

    Read the article

  • Compressing High Resolution Satellite Images

    - by Monika
    Please advise on the best way to compress a satellite image. Details:

    - Uncompressed size: 60 GB
    - Uncompressed format: IMG, 4 bands (to be retained after compression)
    - Preferred compression format: JPEG2000
    - Lossy compression is acceptable as long as the result still supports visual analysis

    Thanks, Monika

    Read the article

  • Usage of open source libraries in high governance and risk-averse large organizations (banks, finance, etc.)

    - by bart
    Does anyone have any good stories of these kinds of organizations being open to using open source dependencies (and also tools)? Many staff I've encountered have little or no exposure to open source/systems, and open source is treated with great suspicion. Some reasons given for this are lack of support and robustness, which is ironic given the number of end-of-life, unsupported vendor products that are in production. I'm also interested in any success stories where you've seen open source go into orgs like this and deliver a real benefit!

    Read the article

  • High density Silverlight charting control

    - by ahosie
    I've been looking into Silverlight charting controls to display a large number of samples (~10,000 data points in each of five separate series - ~50,000 points in total). I have found the existing options produced by Dundas, Visifire, Microsoft etc. to be extremely poor performers when displaying more than a few hundred data points. I believe the performance issues with existing chart controls are caused by their heavy use of vector graphics. Ergo, one solution would be a client-side chart control that uses the WriteableBitmap class to generate a raster chart. Before I fall too far down the wheel-reinvention rabbit hole - has anyone found a third-party or OSS control that will manage large numbers of data points on a sparkline?

    Read the article

  • High-quality graph/waveform display component in C#

    - by dlopeztt
    I'm looking for a fast, professional-looking and customizable waveform display component in C#. I want to display mainly real-time audio waveforms (fast!) in both the time and frequency domains. I would like the ability to zoom, change axis settings, display multiple channels, customize the look and colors, etc. Does anybody know of anything, whether commercial or not? Thank you! Diego

    Read the article

  • High-quality PDF to Word conversion in PHP?

    - by cletus
    What's the best way of converting PDF docs to Microsoft Word format in PHP? This can be either a PHP script or a call to a (Linux) executable (via proc_open()). It just needs to be relatively fast and produce quality Word documents (in 97/2000/2003 format). Commercial software is OK.

    Read the article

  • Subsonic Simple Repo for high volume site

    - by kjgilla
    Simple Repo has given me a competitive edge in my consulting; I can finish projects much faster than I could in the "cmd.Parameters.Add(param)" days. As things progress on this end, I'm getting into higher-volume sites and wondering if Simple Repo is still the way to go. I'm wondering what people's experiences have been putting SR into production vs. NHibernate. Any tips or tricks for using SR in production?

    Read the article

  • php csv high load request

    - by msaif
    I have a PHP server and one CSV file. I need to read the CSV file and send the data to the browser. If there are 10,000 or more individual requests (possibly), then reading the CSV file from the hard disk on every request may be costly. How can I efficiently read the CSV file from PHP and send the data to the browser? There is no option to read the data from a relational DB; the only flow pattern is browser <- PHP (Apache) <- CSV.

    Read the article

  • High CPU usage when running several "java -version" in parallel

    - by Prateesh
    This is just out of curiosity, to understand what is going on. I have a small shell script:

    for ((i = 0; i < 50; i++))
    do
        java -version &
    done

    When I run this, the CPU usage reported by sar is as below:

    07:51:25 PM  CPU  %user  %nice  %system  %iowait  %steal  %idle
    07:51:30 PM  all   6.98   0.00     1.75     1.00    0.00  90.27
    07:51:31 PM  all  43.00   0.00    12.00     0.00    0.00  45.00
    07:51:32 PM  all  86.28   0.00    13.72     0.00    0.00   0.00
    07:51:33 PM  all   5.25   0.00     1.75     0.50    0.00  92.50

    As you can see, on the third sample the CPU is at 100%. My Java version is 1.5.0_22-b03.

    Read the article

  • Django admin causes high load for one model...

    - by Joe
    In my Django admin, when I try to view/edit objects of one particular model class, the memory usage and CPU rocket up and I have to restart the server. I can view the list of objects fine; the problem comes when I click on one of the objects. Other models are fine. Working with the object in code (i.e. creating and displaying) is OK; the problem only arises when I try to view an object with the admin interface. The class isn't even particularly exotic:

    class Comment(models.Model):
        user = models.ForeignKey(User)
        thing = models.ForeignKey(Thing)
        date = models.DateTimeField(auto_now_add=True)
        content = models.TextField(blank=True, null=True)
        approved = models.BooleanField(default=True)

        class Meta:
            ordering = ['-date']

    Any ideas? I'm stumped. The only reason I can think of might be that the thing is quite a large object (a few KB), but as I understand it, it wouldn't get loaded until it was needed (correct?).

    Read the article

  • High performance text file parsing in .net

    - by diamandiev
    Here is the situation: I am making a small program to parse server log files. I tested it with a log file with several thousand requests (between 10,000 and 20,000 - I don't know exactly). What I have to do is load the log text files into memory so that I can query them. This is taking the most resources. The methods that take the most CPU time are these (worst culprits first):

    - string.Split - splits the line values into an array of values
    - string.Contains - checks if the user agent contains a specific agent string (to determine the browser ID)
    - string.ToLower - various purposes
    - StreamReader.ReadLine - reads the log file line by line
    - string.StartsWith - determines if a line is a column definition line or a line with values

    There were some others that I was able to replace. For example, the dictionary getter was also taking lots of resources, which I had not expected since it's a dictionary and should have its keys indexed. I replaced it with a multidimensional array and saved some CPU time. Now I am running on a fast dual core, and the total time it takes to load the file I mentioned is about 1 second. This is really bad: imagine a site that has tens of thousands of visits a day - it's going to take minutes to load the log file. So what are my alternatives, if any? Because I think this is just a .NET limitation and I can't do much about it.
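
    The usual remedy for this profile is to scan each line once in place instead of allocating with Split/Contains/ToLower per field. The question is about .NET, but the technique is runtime-agnostic; here is an illustrative sketch in Java, with the file path, separator, field index and "gecko" marker all being made-up examples.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class LogScan {
        // Counts lines whose user-agent field contains "gecko" (case-insensitive),
        // scanning each line in place rather than splitting and lower-casing it.
        public static int countGecko(String path, char sep, int uaField) throws IOException {
            int hits = 0;
            try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    if (line.startsWith("#")) continue;       // skip column definition lines
                    int start = 0;
                    for (int f = 0; f < uaField && start >= 0; f++) {
                        int next = line.indexOf(sep, start);  // walk forward to the UA field
                        start = (next < 0) ? -1 : next + 1;
                    }
                    if (start < 0) continue;                  // malformed line: too few fields
                    int end = line.indexOf(sep, start);
                    if (end < 0) end = line.length();
                    for (int i = start; i + 5 <= end; i++) {  // allocation-free "contains"
                        if (line.regionMatches(true, i, "gecko", 0, 5)) { hits++; break; }
                    }
                }
            }
            return hits;
        }
    }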

    Read the article

  • Quickest algorithm for finding sets with high intersection

    - by conradlee
    I have a large number of user IDs (integers), potentially millions. These users all belong to various groups (sets of integers), such that there are on the order of 10 million groups. To simplify my example and get to the essence of it, let's assume that all groups contain 20 user IDs (i.e., all integer sets have a cardinality of 20). I want to find all pairs of integer sets that have an intersection of 15 or greater.

    Should I compare every pair of sets? (If I keep a data structure that maps user IDs to set membership, this would not be necessary.) What is the quickest way to do this? That is, what should my underlying data structure be for representing the integer sets? Sorted sets, unsorted - can hashing somehow help? And what algorithm should I use to compute the set intersections? I prefer answers that relate to C/C++ (especially STL), but any more general algorithmic insights are welcome too.

    Update: Also, note that I will be running this in parallel in a shared-memory environment, so ideas that cleanly extend to a parallel solution are preferred.
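
    For context, the membership-map idea in the question is the core of one standard approach: invert the data so each user maps to the groups containing them, then count co-occurrences per group pair, so only pairs that actually share a member are ever touched. Below is a rough sketch in Java (the poster prefers C/C++, but the structure carries over directly); it assumes group indices fit in 32 bits and that no single user belongs to a huge number of groups, since the inner loop is quadratic in that count.

    import java.util.*;

    public class IntersectionPairs {
        // groups.get(g) holds the user IDs of group g (about 20 each).
        // Returns co-occurrence counts keyed by the packed pair (g1 << 32 | g2).
        public static Map<Long, Integer> pairCounts(List<int[]> groups) {
            Map<Integer, List<Integer>> userToGroups = new HashMap<>();
            for (int g = 0; g < groups.size(); g++) {
                for (int user : groups.get(g)) {
                    userToGroups.computeIfAbsent(user, k -> new ArrayList<>()).add(g);
                }
            }
            Map<Long, Integer> counts = new HashMap<>();
            for (List<Integer> gs : userToGroups.values()) {
                for (int i = 0; i < gs.size(); i++) {
                    for (int j = i + 1; j < gs.size(); j++) {
                        long key = ((long) gs.get(i) << 32) | gs.get(j);
                        counts.merge(key, 1, Integer::sum);   // one shared member found
                    }
                }
            }
            return counts;  // keep entries whose count is >= 15
        }
    }

    Since each per-user group list is processed independently, the counting loop also shards naturally across threads for the shared-memory parallel case, e.g. by partitioning the key space of the counts map.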

    Read the article

  • Usage of Maven (and open source in general) in high governance and risk-averse large organizations (banks, finance, etc.)

    - by bart
    Does anyone have any good stories of these kinds of organizations being open to using open source (such as tools like Maven, etc.)? Many staff I've encountered have little or no exposure to open source/systems, and open source is treated with great suspicion. Some reasons given for this are lack of support and robustness, which is ironic given the number of end-of-life, unsupported vendor products that are in production. Bonus points for any success stories where you've seen open source go into orgs like this and deliver a real benefit!

    Read the article

  • Storing high precision latitude/longitude numbers in iOS Core Data

    - by Bryan
    I'm trying to store latitudes/longitudes in Core Data. These end up having anywhere from 6 to 20 digits of precision, and for whatever reason - I had them as floats in Core Data - it is rounding them and not giving me the exact values back. I tried the "decimal" type with no luck either. Are NSStrings my only other option?

    EDIT - NSManagedObject:

    @interface Event : NSManagedObject {
    }
    @property (nonatomic, retain) NSDecimalNumber * dec;
    @property (nonatomic, retain) NSDate * timeStamp;
    @property (nonatomic, retain) NSNumber * flo;
    @property (nonatomic, retain) NSNumber * doub;

    Here's the code for a sample number that I store into Core Data:

    NSNumber *n = [NSDecimalNumber decimalNumberWithString:@"-97.12345678901234567890123456789"];

    Code to access it again:

    NSNumber *n = [managedObject valueForKey:@"dec"];
    NSNumber *f = [managedObject valueForKey:@"flo"];
    NSNumber *d = [managedObject valueForKey:@"doub"];

    Printed values:

    Printing description of n: -97.1234567890124
    Printing description of f: <CFNumber 0x603f250 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}
    Printing description of d: <CFNumber 0x6040310 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}

    Read the article

  • Important question about LINQ to SQL performance in high-load web applications

    - by Alex
    I started working with LINQ to SQL several weeks ago. I got really tired of working with SQL Server directly through SQL queries (SqlDataReader, SqlCommand and all this good stuff). After hearing about LINQ to SQL and MVC, I quickly moved all my projects to these technologies. I expected LINQ to SQL to work more slowly, but it surprisingly turned out to be pretty fast - primarily because I always used to forget to close my connections when using data readers. Now I don't have to worry about that.

    But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it and updates it. Primarily the updates are ++ and -- (increase and decrease values). I used to do it like this:

    UPDATE table SET value=value+1 WHERE ID=@Id

    It worked with no problems, obviously. But with LINQ to SQL the data is taken in the beginning, moved to the class, changed and then saved:

    stats.RegisteredUsers++;
    db.SubmitChanges();

    Let's say there were 100,000 users. LINQ will say "let it be 100,001" instead of "let it be increased by 1". But if the value has already been increased by another request (which happens on my site all the time), then LINQ will go "oops, this value is already 100,001. Whatever, I'll throw an exception." You can change this behavior so that it won't throw an exception, but it still will not set the value to 100,002. Like I said, this happened to me all the time - the stats value was increased twice a second on average - and I simply had to rewrite this chunk of code with classic ADO.NET. So my question is: how can you solve this problem with LINQ?

    Read the article

  • Recommendations for a high-volume log event viewer in a Java environment

    - by Thorbjørn Ravn Andersen
    I am in a situation where I would like to accept a LOT of log events controlled by me - notably from the logging agent I am preparing for slf4j - and then analyze them interactively. I am not interested as such in a facility that presents formatted log files, but in one that can accept log events as objects and allow me to sort and display them by e.g. threads and timelines. Chainsaw could maybe be an option, but it is currently not compatible with logback, which I use for technical reasons. Is there any project, with stand-alone viewers or embedded in an IDE, which would be suitable for this kind of log handling? I am aware that I am approaching what might be suitable for a profiler, so if there is a profiler project suitable for this kind of data acquisition and display, where I can feed the event pipe, I would like to hear about it. Thanks for all feedback.

    Update 2009-03-19: I have found that there is no log viewer which allows me to see what I would like (a visual display of events with coordinates determined by day and time, etc.), so I have decided to create a very terse XML format derived from the log4j XMLLayout, adapted to be as readable as possible while still being valid XML snippets, and then use the Microsoft LogParser to extract the information I need for postprocessing in other tools.
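
    On the "accept log events as objects" end, here is a minimal sketch of a custom logback appender that keeps each ILoggingEvent for later analysis (the class name and queue are placeholders; any viewer or postprocessor could drain the queue):

    import ch.qos.logback.classic.spi.ILoggingEvent;
    import ch.qos.logback.core.AppenderBase;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;

    public class CollectingAppender extends AppenderBase<ILoggingEvent> {
        // Keeps events as objects rather than formatted text, so they can
        // later be sorted and grouped by thread, timestamp, logger, etc.
        private final Queue<ILoggingEvent> events = new ConcurrentLinkedQueue<ILoggingEvent>();

        @Override
        protected void append(ILoggingEvent event) {
            event.prepareForDeferredProcessing(); // capture thread name and MDC now
            events.add(event);
        }

        public Queue<ILoggingEvent> getEvents() {
            return events;
        }
    }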

    Read the article

  • MS SQL Server high resource waits and head blocker

    - by MartinHN
    I have an MS SQL Server 2008 Standard installation running a database for a webshop. The current size of the database is 2.5 GB. It is running on Windows 2008 Standard, dual Intel Xeon X5355 @ 2.00 GHz, 4 GB RAM.

    When I open the Activity Monitor, I see a Wait Time (ms/sec) of 5000 in the "Other" category. In the Processes list, for all connections from the webshop, the Head Blocker value is 1. Every day I see that when I try to access the website, it can take 20-30 seconds before it even starts to "work". I know that it is not network latency (I have a 301 redirect from the same server that is executed instantly). Once the first request has been served, it seems as if the server is no longer asleep, and every subsequent request is served instantly, with the speed of light.

    The problem was worse two weeks ago, until I changed every query to include WITH (NOLOCK). But I still experience the problem, and the wait times in the Activity Monitor are about the same. The largest table (Images) has 32,764 rows (448,576 KB). Some tables exceed 300,000 rows, though they're much smaller in size than the Images table. I have only the default clustered index on every primary key column. Any ideas?

    Read the article

  • Large y-axis tickInterval in Highcharts does not work

    - by ckovacs
    I have a chart at this JSFiddle to demonstrate a problem where our charts are not respecting the y-axis tick interval for large values: http://jsfiddle.net/z2cDu/1/

    var plots = {"usBytePlots":[[1362009600000,143663192997],[1362096000000,110184848742],[1362182400000,97694974247],[1362268800000,90764690805],[1362355200000,112436517747],[1362441600000,113563368701],[1362528000000,139579327454],[1362614400000,118406594506],[1362700800000,125366899935],[1362787200000,134189435596],[1362873600000,132873135854],[1362960000000,121002328604],[1363046400000,123138222001],[1363132800000,115667785553],[1363219200000,103746172138],[1363305600000,108602633473],[1363392000000,89133998142],[1363478400000,92170701458],[1363564800000,86696922873],[1363651200000,80980159054],[1363737600000,97604615694],[1363824000000,108011666339],[1363910400000,124419138381],[1363996800000,121704988344],[1364083200000,124337959109],[1364169600000,137495512348],[1364256000000,136017103319],[1364342400000,60867510427]],"dsBytePlots":[[1362009600000,1734982247336],[1362096000000,1471928923201],[1362182400000,1453869593201],[1362268800000,1411787942581],[1362355200000,1460252447519],[1362441600000,1595590020177],[1362528000000,1658007074783],[1362614400000,1411941908699],[1362700800000,1447659369450],[1362787200000,1643008799861],[1362873600000,1792357973023],[1362960000000,1575173242169],[1363046400000,1565139003978],[1363132800000,1549211975554],[1363219200000,1438411448469],[1363305600000,1380445413578],[1363392000000,1298319283929],[1363478400000,1194578344720],[1363564800000,1211409679299],[1363651200000,1142416351471],[1363737600000,1223822672626],[1363824000000,1267692136487],[1363910400000,1384335759541],[1363996800000,1577205919828],[1364083200000,1675715948928],[1364169600000,1517593781592],[1364256000000,1562183018457],[1364342400000,681007264598]],"aggregatedTotalBytes":43476367948896,"aggregatedUsBytes":3150320403841,"aggregatedDsBytes":40326047545055,"maxTotalBytes":328186292129,"maxTotalBitsPerSecond":30387619.641574074};

    $('#container').highcharts({
        yAxis: {
            tickInterval: 53687091200 // 500 gigabytes. Maximum y-axis value is approx 1.8TB
        },
        series: [{
            color: 'rgba(80, 180, 77, 0.7)',
            type: 'areaspline',
            name: 'Downstream',
            data: plots.dsBytePlots,
            total: plots.aggregatedDsBytes
        }, {
            color: 'rgba(33, 143, 197, 0.7)',
            type: 'areaspline',
            name: 'Upstream',
            data: plots.usBytePlots,
            total: plots.aggregatedUsBytes
        }]
    });

    In this example we are charting bandwidth utilization in bytes. The chart has a maximum value of about 1.8 TB. We set the y-axis tick interval to exactly 500 GB, but the rendered y-axis ticks don't make any sense for the given interval.

    Read the article
