Search Results

Search found 80052 results on 3203 pages for 'data load performance'.

Page 68/3203

  • SQL query performance optimization (TimesTen)

    - by Sergey Mikhanov
    Hi community, I need some help with TimesTen DB query optimization. I took some measurements with a Java profiler and found the code section that takes most of the time (this code section executes the SQL query). What is strange is that this query becomes expensive only for some specific input data. Here's the example. We have two tables that we are querying: one represents the objects we want to fetch (T_PROFILEGROUP), the other represents the many-to-many link from some other table (T_PROFILECONTEXT_PROFILEGROUPS). We are not querying the linked table. These are the queries that I executed with the DB profiler running (they are the same except for the ID):

      Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      < 1169655247309537280 >
      < 1169655249792565248 >
      < 1464837997699399681 >
      3 rows found.
      Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
      < 1169655247309537280 >
      1 row found.

    This is what I have in the profiler:

      12:14:31.147 1 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272
      12:14:31.147 2 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:47) cmdType:100, cmdNum:1146695.
      12:14:31.147 3 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      12:14:31.147 4 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      12:14:31.148 5 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      12:14:31.148 6 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      12:14:31.228 7 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      12:14:31.228 8 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      12:14:35.243 9 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928
      12:14:35.243 10 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:44) cmdType:100, cmdNum:1146697.
      12:14:35.243 11 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
      12:14:35.243 12 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
      12:14:35.243 13 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
      12:14:35.243 14 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;

    It's clear that the first query took almost 100 ms, while the second was executed instantly. It's not about query precompilation (the first one is precompiled too, as the same queries happened earlier). We have DB indices for all columns used here: T_PROFILEGROUP.M_ID, T_PROFILECONTEXT_PROFILEGROUPS.M_ID_OID and T_PROFILECONTEXT_PROFILEGROUPS.M_ID_EID. My questions are: Why does querying the same set of tables yield such different performance for different parameters? Which indices are involved here? Is there any way to improve this simple query and/or the DB to make it faster? UPDATE: to give a feeling of the size:

      Command> select count(*) from T_PROFILEGROUP;
      < 183840 >
      1 row found.
      Command> select count(*) from T_PROFILECONTEXT_PROFILEGROUPS;
      < 2279104 >
      1 row found.

    Read the article

  • XmlSerializer Performance Issue when Specifying XmlRootAttribute

    - by Dougc
    I'm currently having a really weird issue and I can't seem to figure out how to resolve it. I've got a fairly complex type which I'm trying to serialize using the XmlSerializer class. This actually functions fine and the type serializes properly, but seems to take a very long time to do so; around 5 seconds depending on the data in the object. After a bit of profiling I've narrowed the issue down - bizarrely - to specifying an XmlRootAttribute when calling XmlSerializer.Serialize. I do this to change the name of a collection being serialized from ArrayOf to something a bit more meaningful. Once I remove the parameter, the operation is almost instant! Any thoughts or suggestions would be excellent as I'm entirely stumped on this one!

    Read the article

  • NSURLConnection receives data even if no data was thrown back

    - by Anna Fortuna
    Let me explain my situation. Currently, I am experimenting with long-polling using NSURLConnection. I found this and I decided to try it. What I do is send a request to the server with a timeout interval of 300 secs (or 5 mins). Here is a code snippet:

      NSURL *url = [NSURL URLWithString:urlString];
      NSURLRequest *request = [NSURLRequest requestWithURL:url cachePolicy:NSURLCacheStorageAllowedInMemoryOnly timeoutInterval:300];
      NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:&resp error:&err];

    Now I want to test if the connection will "hold" the request if no data was thrown back from the server, so what I did was this:

      if (data != nil)
          [self performSelectorOnMainThread:@selector(dataReceived:) withObject:data waitUntilDone:YES];

    And the function dataReceived: looks like this:

      - (void)dataReceived:(NSData *)data {
          NSLog(@"DATA RECEIVED!");
          NSString *string = [NSString stringWithUTF8String:[data bytes]];
          NSLog(@"THE DATA: %@", string);
      }

    Server-side, I created a function that returns data once it fits the arguments and returns nothing if nothing fits. Here is a snippet of the PHP function:

      function retrieveMessages($vardata) {
          if (!empty($vardata)) {
              $result = check_data($vardata); // check_data is the function which returns 1 if $vardata
                                              // fits the arguments, and 0 if it fails to fit
              if ($result == 1) {
                  $jsonArray = array('Data' => $vardata);
                  echo json_encode($jsonArray);
              }
          }
      }

    As you can see, the function will only return data if $result is equal to 1. However, even if the function returns nothing, NSURLConnection will still perform the function dataReceived:, meaning NSURLConnection still receives data, albeit empty. So can anyone help me here? How will I perform long-polling using NSURLConnection? Basically, I want to maintain the connection as long as no data is returned. So how will I do it? NOTE: I am new to PHP, so if my code is wrong, please point it out so I can correct it.

    Read the article

  • How to maintain an ordered table with Core Data (or SQL) with insertions/deletions?

    - by Jean-Denis Muys
    This question is in the context of Core Data, but if I am not mistaken, it applies equally well to a more general SQL case. I want to maintain an ordered table using Core Data, with the possibility for the user to:
      - reorder rows
      - insert new lines anywhere
      - delete any existing line
    What's the best data model to do that? I can see two ways:
    1) Model it as an array: I add an int position property to my entity.
    2) Model it as a linked list: I add two one-to-one relations, next and previous, from my entity to itself.
    1) makes it easy to sort, but painful to insert or delete, as you then have to update the position of all objects that come after. 2) makes it easy to insert or delete, but very difficult to sort. In fact, I don't think I know how to express a Sort Descriptor (SQL ORDER BY clause) for that case. Now I can imagine a variation on 1):
    3) Add an int ordering property to the entity, but instead of having it count one-by-one, have it count 100 by 100 (for example). Then inserting is as simple as finding any number between the ordering of the previous and next existing objects. The expensive renumbering only has to occur when the 100 holes have been filled. Making that property a float rather than an int makes it even better: it's almost always possible to find a new float midway between two floats.
    Am I on the right track with solution 3), or is there something smarter?
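    The gap-numbering idea from option 3) can be sketched roughly as follows. This is a minimal illustration written in Java rather than Core Data; the position attribute, the gap of 100 and the in-memory list are all assumptions standing in for the persisted entities.

      // Sketch of gap-based ordering (option 3): positions start 100 apart, an
      // insert takes the midpoint of its neighbours, and a full renumbering pass
      // only runs when the gap between two neighbours is exhausted.
      import java.util.ArrayList;
      import java.util.List;

      class GapOrdering {
          static final long GAP = 100;
          final List<Long> positions = new ArrayList<>();   // sorted "position" values of the rows

          // insert a new row so that it ends up at index i; returns the value to persist
          long insertAt(int i) {
              long prev = (i == 0) ? 0 : positions.get(i - 1);
              long next = (i == positions.size()) ? prev + 2 * GAP : positions.get(i);
              if (next - prev < 2) {      // neighbours too close: renumber everything, then retry
                  renumber();
                  return insertAt(i);
              }
              long pos = prev + (next - prev) / 2;   // midpoint between the neighbours
              positions.add(i, pos);
              return pos;
          }

          void renumber() {               // the expensive pass, which should run rarely
              for (int i = 0; i < positions.size(); i++) {
                  positions.set(i, (long) (i + 1) * GAP);
              }
          }
      }

    Sorting stays a plain ascending sort on the position attribute, so a normal sort descriptor (or ORDER BY) still works; only inserts into a fully packed neighbourhood pay the renumbering cost.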

    Read the article

  • jquery/ajax load new content when available

    - by Tim
    I know this is quite a vague question -- sorry about that. If I have a forum, shoutbox or something similar where users enter and submit data, running on PHP and MySQL, what is the best way to have newly submitted content automatically display on the page for all users when it is submitted? Much like a live newsfeed, if you like... The effect sort-of works here at Stack Overflow: when you are answering a question, you are told when a new answer is submitted. I want to test for newly submitted content and then display it automatically. Any suggestions? Many thanks :)

    Read the article

  • Performance experiences for running Windows 7 on a Thin-Client?

    - by Peter Bernier
    Has anyone else tried installing Windows 7 on thin-client hardware? I'd be very interested to hear about other people's experiences and what sort of hardware tweaks they had to do to get it to work. (Yes, I realize this is completely unsupported... half the fun of playing with machines and beta/RC versions is trying out unsupported scenarios. :) ) I managed to get Windows 7 installed on a modified Wyse 9450 Thin-Client and while the performance isn't great, it is usable, particularly as an RDP workstation. Before installing 7, I added another 256 MB of RAM (512 total), a 60 GB laptop hard drive and a PCI video card to the 9450 (this was in order to increase the supported screen resolution). I basically did this in order to see whether or not it was possible to get 7 installed on such minimal hardware, and see what the performance would be. For a 550 MHz processor, I was reasonably impressed. I've been using the machine for RDP for the last couple of days and it actually seems slightly snappier than the default Windows XP embedded install (although this is more likely the result of the extra hardware). I'll be running some more tests later on as I'm curious to see particularly whether the streaming video performance will improve. I'd love to hear about anyone's experiences getting 7 to work on extremely low-powered hardware. Particularly any sort of tweaks that you've discovered in order to increase performance...

    Read the article

  • How can I scrape specific data from a website

    - by Stoney
    I'm trying to scrape data from a website for research. The URLs are nicely organized in an example.com/x format, with x as an ascending number, and all of the pages are structured in the same way. I just need to grab certain headings and a few numbers which are always in the same locations. I'll then need to get this data into a structured form for analysis in Excel. I have used wget before to download pages, but I can't figure out how to grab specific lines of text. Excel has a feature to grab data from the web (Data > From Web), but from what I can see it only allows me to download tables. Unfortunately, the data I need is not in tables.
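    If wget (or any HTTP fetch) already retrieves the pages, the extraction step can be done with one regular expression per field and the result written as CSV, which Excel opens directly. Below is a rough, hedged sketch in Java; the example.com/x URL scheme comes from the question, but the <h1> heading and the "Price:" pattern are invented placeholders that would have to be adapted to the real page markup.

      // Fetch example.com/1 .. example.com/50, pull two fields out with regexes,
      // and print CSV. The heading and price patterns are assumptions.
      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;
      import java.util.regex.Matcher;
      import java.util.regex.Pattern;

      public class Scraper {
          public static void main(String[] args) throws Exception {
              HttpClient client = HttpClient.newHttpClient();
              Pattern heading = Pattern.compile("<h1>(.*?)</h1>");       // placeholder markup
              Pattern price = Pattern.compile("Price:\\s*([0-9.]+)");    // placeholder markup
              System.out.println("page,heading,price");                  // CSV header for Excel
              for (int x = 1; x <= 50; x++) {
                  HttpRequest req = HttpRequest.newBuilder(URI.create("http://example.com/" + x)).build();
                  String html = client.send(req, HttpResponse.BodyHandlers.ofString()).body();
                  Matcher h = heading.matcher(html);
                  Matcher p = price.matcher(html);
                  if (h.find() && p.find()) {
                      System.out.println(x + ",\"" + h.group(1) + "\"," + p.group(1));
                  }
              }
          }
      }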

    Read the article

  • To what extent is size a factor in SSD performance?

    - by artif
    To what extent is the size of an SSD a factor in its performance? In my mind, correct me if I'm wrong, a bigger SSD should be, everything else being equal, faster than a smaller one. A bigger SSD would have more erase blocks and thus more leeway for the FTL (flash translation layer) to do garbage collection optimization. Also there would be more time before TRIM became necessary. I see that Wikipedia remarks that "The performance of the SSD can scale with the number of parallel NAND flash chips used in the device", so it seems throughput also increases significantly. Also many SSDs contain internal caches of some sort, and presumably those caches are larger for correspondingly large SSDs. But supposing this effect exists, I would like a quantitative analysis. Does throughput increase linearly? How much is garbage collection impacted, if at all? Does latency stay the same? And so on. Would the performance of an 8 GB SSD be significantly different from, for example, an 80 GB SSD, assuming both used high-quality chips, controllers, etc.? Are there any resources (webpages, research papers, presentations, books, etc.) that discuss correlations between SSD performance (4 KB random write speed, latency, maximum sequential throughput, etc.) and size? I realize this does not really sound like a programming question, but it is relevant for what I'm working on (using flash for caching hard drive data), which does involve programming. If there is a better place to ask this question, e.g. a more hardware-oriented site, what would that be? Something like the equivalent of Stack Overflow (or perhaps a forum) for in-depth questions on hardware interfaces, internals, etc. would be appreciated.

    Read the article

  • How can I pinpoint a USB file transfer bottleneck in Unix?

    - by HankHendrix
    I'm experiencing very slow data transfer speeds over USB 2.0 on my *nix box and was wondering how I can pinpoint the cause of the problem. I've looked into iotop and top, but the CPU and memory figures look normal (compared to guides I have checked). The box which is affected is Ubuntu 12.04 32-bit Server running on an Asus EEE 701 2G model, and I am transferring from the OS over USB 2.0 to an external HDD (which transfers at 30 MB/s+ on Windows 7 on another machine). I get rsync write speeds of 1 MB/s from the OS to the USB HDD, which seems ridiculously slow. These speeds are consistent across other USB HDDs and sticks.
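    One way to separate rsync overhead from raw device throughput is to time a plain sequential write to a file on the USB mount. The sketch below is only illustrative: the mount point is a placeholder, and because it syncs at the end the figure approximates sustained device speed rather than page-cache speed.

      // Rough throughput probe: write 256 MB sequentially to the USB mount,
      // sync, and report MB/s. The target path is an assumption.
      import java.io.FileOutputStream;

      public class UsbWriteProbe {
          public static void main(String[] args) throws Exception {
              String target = "/media/usbdisk/probe.bin";   // placeholder mount point
              byte[] buf = new byte[1 << 20];                // 1 MB of zeroes per write
              int totalMb = 256;
              long start = System.nanoTime();
              try (FileOutputStream out = new FileOutputStream(target)) {
                  for (int i = 0; i < totalMb; i++) {
                      out.write(buf);
                  }
                  out.getFD().sync();                        // flush to the device, not just the page cache
              }
              double secs = (System.nanoTime() - start) / 1e9;
              System.out.printf("%d MB in %.1f s -> %.1f MB/s%n", totalMb, secs, totalMb / secs);
          }
      }

    If this simple write is also stuck around 1 MB/s, the bottleneck is below rsync (USB controller, driver or the drive itself); if it is much faster, the rsync invocation or filesystem options are the first suspects.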

    Read the article

  • How should I architect my Model and Data Access layer objects in my website?

    - by Robin Winslow
    I've been tasked with designing the Data layer for a website at work, and I am very interested in the architecture of the code for the best flexibility, maintainability and readability. I am generally acutely aware of the value in completely separating out my actual Models from the Data Access layer, so that the Models are completely naive when it comes to Data Access. And in this case it's particularly useful to do this as the Models may be built from the Database or may be built from a SOAP web service. So it seems to me to make sense to have Factories in my data access layer which create Model objects. So here's what I have so far (in my made-up pseudocode):

      class DataAccess.ProductsFromXml extends DataAccess.ProductFactory {}
      class DataAccess.ProductsFromDatabase extends DataAccess.ProductFactory {}

    These then get used in the controller in a fashion similar to the following:

      var xmlProductCreator = DataAccess.ProductsFromXml(xmlDataProvider);
      var databaseProductCreator = DataAccess.ProductsFromDatabase(databaseDataProvider);
      // Returns array of Product model objects
      var XmlProducts = xmlProductCreator.Products();
      // Returns array of Product model objects
      var DbProducts = databaseProductCreator.Products();

    So my question is, is this a good structure for my Data Access layer? Is it a good idea to use a Factory for building my Model objects from the data? Do you think I've misunderstood something? And are there any general patterns I should read up on for how to write my data access objects to create my Model objects?

    Read the article

  • SQLite self-join performance

    - by Derk
    What I essentially want is to retrieve all features and values of products which have a particular feature and value. For example: I want to know all available hard drive sizes of products that have an Intel processor. I have three tables:

      product_to_value (product_id, feature_id, value_id)
      features (id, value)   // for example Processor family, Storage size, etc.
      values (id, value)     // for example Intel, 60GB, etc.

    The simplified query I have now:

      SELECT features.name, featurevalues.name, featurevalues.value
      FROM products, products as prod2, features, features as feat2, values, values as val2
      WHERE products.feature = features.id
        AND products.value = values.id
        AND products.product = prod2.product
        AND prod2.feature_id = feat2.id
        AND prod2.value_id = val2.id
        AND features.id = ? AND feat2.id = ?

    All columns have an index. I am using SQLite. The problem is that it's very slow (70 ms per query, without the self-join it's <1 ms). Is there a smarter way to fetch data like this? Or is this too much to ask from SQLite? I personally think I am simply overlooking something, as I am quite new to SQLite.

    Read the article

  • MySQL select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, and each product has 3 different price tags - 3 different loan types. Everything went pretty well during testing time, query execution time was perfect, but today when I transferred the changes to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds, and it happens frequently, and thus it can't keep up and it's hella slow. The table that is used to store the data has approximately 2 million records, and each select looks like this:

      SELECT * FROM products_loans
      WHERE KOD IN("X17/Q30-10", "X17/12", "X17/5-24")
        AND 369.27 BETWEEN CENA_OD AND CENA_DO;

    3 loan types and the price that needs to be in range between CENA_OD and CENA_DO, thus 3 rows are returned. But since I need to display 10 products per page, I need to run it through a modified select using OR, since I didn't find any other solution to this. I have asked about it here, but got no answer. As mentioned in the referencing post, this has to be done separately since there is no column that could be used in a join (except of course price and code, but that ended very, very badly). Here is the SHOW CREATE TABLE; KOD and CENA_OD/CENA_DO are indexed via INDEX.

      CREATE TABLE `products_loans` (
        `KOEF_ID` bigint(20) NOT NULL,
        `KOD` varchar(30) NOT NULL,
        `AKONTACIA` int(11) NOT NULL,
        `POCET_SPLATOK` int(11) NOT NULL,
        `koeficient` decimal(10,2) NOT NULL default '0.00',
        `CENA_OD` decimal(10,2) default NULL,
        `CENA_DO` decimal(10,2) default NULL,
        `PREDAJNA_CENA` decimal(10,2) default NULL,
        `AKONTACIA_SUMA` decimal(10,2) default NULL,
        `TYP_VYHODY` varchar(4) default NULL,
        `stage` smallint(6) NOT NULL default '1',
        PRIMARY KEY (`KOEF_ID`),
        KEY `CENA_OD` (`CENA_OD`),
        KEY `CENA_DO` (`CENA_DO`),
        KEY `KOD` (`KOD`),
        KEY `stage` (`stage`)
      ) ENGINE=InnoDB DEFAULT CHARSET=utf8

    And also selecting all loan types and later filtering them through PHP doesn't work well, since each type has over 50k records and the select takes too much time as well... Any ideas about improving the speed are appreciated.

    Read the article

  • Performance: float to int cast and clipping result to range

    - by durandai
    I'm doing some audio processing with float. The result needs to be converted back to PCM samples, and I noticed that the cast from float to int is surprisingly expensive. What's furthermore frustrating is that I need to clip the result to the range of a short (-32768 to 32767). While I would normally instinctively assume that this could be assured by simply casting float to short, this fails miserably in Java, since on the bytecode level it results in F2I followed by I2S. So instead of a simple:

      int sample = (short) floatVal;

    I needed to resort to this ugly sequence:

      int sample = (int) floatVal;
      if (sample > 32767) {
          sample = 32767;
      } else if (sample < -32768) {
          sample = -32768;
      }

    Is there a faster way to do this? (About ~6% of the total runtime seems to be spent on casting; while 6% may not seem like much at first glance, it's astounding when I consider that the processing part involves a good chunk of matrix multiplications and an IDCT.) EDIT: The cast/clipping code above is (not surprisingly) in the body of a loop that reads float values from a float[] and puts them into a byte[]. I have a test suite that measures total runtime on several test cases (processing about 200 MB of raw audio data). The 6% figure was concluded from the runtime difference when the cast assignment "int sample = (int) floatVal" was replaced by assigning the loop index to sample. EDIT @leopoldkot: I'm aware of the truncation in Java, as stated in the original question (F2I, I2S bytecode sequence). I only tried the cast to short because I assumed that Java had an F2S bytecode, which it unfortunately does not (coming originally from a 68K assembly background, where a simple "fmove.w FP0, D0" would have done exactly what I wanted).
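    For the clipping itself, one common variant is to clamp in float space with Math.min/Math.max and then cast once; whether it is actually faster than the if/else above would have to be measured in the same test suite, so treat this purely as a sketch of the alternative.

      // Clamp-then-cast variant: the value is forced into short range before the
      // single (short) cast, so the explicit branches move into Math.min/Math.max.
      static short floatToPcm(float floatVal) {
          float clamped = Math.max(-32768.0f, Math.min(32767.0f, floatVal));
          return (short) clamped;
      }

    In the loop this would be used as int sample = floatToPcm(floatVal); a NaN input ends up as 0, the same as with the original cast-then-clip sequence.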

    Read the article

  • Database design for a very large amount of data

    - by Hossein
    Hi, I am working on a project involving a large amount of data from the Delicious website. The data is available in files as "Date,UserId,Url,Tags" (one line per bookmark). I normalized my database to 3NF, and because of the nature of the queries that we wanted to use in combination, I came down to 6 tables. The design looks fine; however, now that a large amount of data is in the database, most of the queries need to "join" at least 2 tables together to get the answer, sometimes 3 or 4. At first, we didn't have any performance issues, because for testing purposes we hadn't added too much data to the database. Now that we have a lot of data, simply joining extremely large tables takes a lot of time, and for our project, which has to be real-time, that is a disaster. I was wondering how big companies solve these issues. It looks like normalizing tables just adds complexity, but how do the big companies handle large amounts of data in their databases? Don't they do the normalization? Thanks

    Read the article

  • Why does Joomla debug show 446 queries logged and 446 legacy queries logged?

    - by Darye
    I have been called in to fix the performance of a Joomla site that was already set up. I look at the debug output and it shows the same queries twice, once for queries logged and again for legacy queries logged. My guess is that it is actually running the same queries twice, making for just under 900 queries per page (I hope I am wrong). The Legacy plugin is disabled, so Legacy mode is not on at all. The site uses VirtueMart as well (which, by the way, isn't working properly if the cache in the Global Config is turned on). Besides the fact that I don't think it should be running 446 queries anyway (sometimes even up to 650 per page), has anyone ever experienced this issue, and where would I look to fix this? Thanks

    Read the article

  • Best practice for Stopwatch on a multi-processor machine?

    - by Ahmed Said
    I found a good question about measuring function performance, and the answers recommend using Stopwatch as follows:

      Stopwatch sw = new Stopwatch();
      sw.Start();
      //DoWork
      sw.Stop();
      //take sw.Elapsed

    But is this valid if you are running on a multi-processor machine? The thread can be switched to another processor, can't it? The same question applies to Environment.TickCount. If the answer is yes, should I wrap my code inside BeginThreadAffinity as follows?

      Thread.BeginThreadAffinity();
      Stopwatch sw = new Stopwatch();
      sw.Start();
      //DoWork
      sw.Stop();
      //take sw.Elapsed
      Thread.EndThreadAffinity();

    P.S. The switching can occur at the thread level, not only the processor level; for example, if the function is running in another thread, the system can switch that thread to another processor. If that happens, will the Stopwatch still be valid after the switch? I am not using Stopwatch for performance measurement only, but also to simulate a timer function using Thread.Sleep (to prevent call overlapping).

    Read the article

  • How to determine cpu, ram needed for rails app?

    - by Ben
    What is the most accurate way to determine how much CPU and RAM is needed to run my Rails app? I believe there are stress-testing tools like Tsung, but how do I determine, for example, that I need X more RAM, or X more CPU? I would like to find some way to roughly gauge the performance needs of my application so I can anticipate future needs. I think this data will also be useful for me to decide whether to upgrade one machine, or get another dedicated machine and put all the databases on that one. Essentially, I am concerned about scaling issues, and how to anticipate them. Thanks in advance for the help!

    Read the article

  • SQL Server virtual memory usage and performance

    - by user365035
    Hello, I have a very large DB used mostly for analytics. The performance overall is very sluggish. I just noticed that when running the query below, the amount of virtual memory used greatly exceeds the amount of physical memory available. Currently, physical memory is 10 GB (10238 MB), whereas the virtual memory value returned is significantly larger: 8388607 MB. That seems really wrong, but I'm at a bit of a loss on how to proceed.

      USE [master];
      GO
      select cpu_count
           , hyperthread_ratio
           , physical_memory_in_bytes / 1048576 as 'mem_MB'
           , virtual_memory_in_bytes / 1048576 as 'virtual_mem_MB'
           , max_workers_count
           , os_error_mode
           , os_priority_class
      from sys.dm_os_sys_info

    Read the article

  • How well does Scala perform compared to Java?

    - by Teja Kantamneni
    The question actually says it all. The reason behind this question is that I am about to start a small side project and want to do it in Scala. I have been learning Scala for the past month and now I am comfortable working with it. The Scala compiler itself is pretty slow (unless you use fsc). So how well does it perform on the JVM? I previously worked with Groovy and had seen it sometimes outperform Java. My question is how well Scala performs on the JVM compared to Java. I know Scala has some very good features (FP, dynamic lang, statically typed...), but at the end of the day we need the performance...

    Read the article

  • MySQL: Is it faster to use inserts and updates instead of insert on duplicate key update?

    - by Nir
    I have a cron job that updates a large number of rows in a database. Some of the rows are new and therefore inserted, and some are updates of existing ones and therefore updated. I use INSERT ... ON DUPLICATE KEY UPDATE for the whole data set and get it done in one call. But I actually know which rows are new and which are updated, so I could also do the inserts and updates separately. Will separating the inserts and updates have an advantage in terms of performance? What are the mechanics behind this? Thanks!
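    For reference, the two strategies look roughly like this through JDBC. Everything here is an assumption made up for illustration (a table items(id, qty) with id as the unique key); only measuring both against the real schema and data mix can say which one wins.

      // Sketch of both strategies against a hypothetical table items(id, qty).
      import java.sql.Connection;
      import java.sql.PreparedStatement;
      import java.util.Map;

      class UpsertVersusSplit {
          // Strategy A: one statement for every row, INSERT ... ON DUPLICATE KEY UPDATE.
          void upsertAll(Connection c, Map<Long, Integer> rows) throws Exception {
              try (PreparedStatement ps = c.prepareStatement(
                      "INSERT INTO items (id, qty) VALUES (?, ?) "
                    + "ON DUPLICATE KEY UPDATE qty = VALUES(qty)")) {
                  for (Map.Entry<Long, Integer> e : rows.entrySet()) {
                      ps.setLong(1, e.getKey());
                      ps.setInt(2, e.getValue());
                      ps.addBatch();
                  }
                  ps.executeBatch();
              }
          }

          // Strategy B: rows already known to be new get plain INSERTs, the rest get UPDATEs.
          void splitInsertUpdate(Connection c, Map<Long, Integer> newRows,
                                 Map<Long, Integer> existingRows) throws Exception {
              try (PreparedStatement ins = c.prepareStatement("INSERT INTO items (id, qty) VALUES (?, ?)");
                   PreparedStatement upd = c.prepareStatement("UPDATE items SET qty = ? WHERE id = ?")) {
                  for (Map.Entry<Long, Integer> e : newRows.entrySet()) {
                      ins.setLong(1, e.getKey());
                      ins.setInt(2, e.getValue());
                      ins.addBatch();
                  }
                  for (Map.Entry<Long, Integer> e : existingRows.entrySet()) {
                      upd.setInt(1, e.getValue());
                      upd.setLong(2, e.getKey());
                      upd.addBatch();
                  }
                  ins.executeBatch();
                  upd.executeBatch();
              }
          }
      }

    Both versions batch the statements, so any difference comes from how MySQL executes the SQL (unique-key lookups on the upsert path versus the separate insert and update paths) rather than from round trips.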

    Read the article

  • Image size guidelines

    - by user502014
    Hi all, This may well be a little of an open-ended question. The site I am working on needs to be optimised for performance. One of the key areas is to optimise the file sizes of the images used on the site. Unfortunately these images are being created by employees who do not have the required knowledge for creating images for the web, and it is my job to produce a set of guidelines for them to use. I was wondering whether there is any resource/guidelines/literature regarding typical image file sizes for images of different dimensions, as I would like to include something like this to help them ensure their images are being created properly. Any info would be greatly appreciated. Thanks in advance

    Read the article

  • Poor performance / speed of regex with lookahead

    - by Hugo Zaragoza
    I have been observing extremely slow execution times with expressions with several lookaheads. I suppose that this is due to the underlying data structures, but it seems pretty extreme and I wonder if I am doing something wrong or if there are known work-arounds. The problem is determining if a set of words is present in a string, in any order. For example, we want to find out if two terms "term1" AND "term2" are somewhere in a string. I do this with the expression: (?=.*\bterm1\b)(?=.*\bterm2\b) But what I observe is that this is an order of magnitude slower than first checking just \bterm1\b and then just \bterm2\b. This seems to indicate that I should use an array of patterns instead of a single pattern with lookaheads... is this right? It seems wrong... Here is an example test code and the resulting times:

      public static void speedLookAhead() {
        Matcher m, m1, m2;
        boolean find;
        int its = 1000000;
        // create long non-matching string
        char[] str = new char[2000];
        for (int i = 0; i < str.length; i++) {
          str[i] = 'x';
        }
        String test = str.toString();
        // First method: use one expression with lookaheads
        m = Pattern.compile("(?=.*\\bterm1\\b)(?=.*\\bterm2\\b)").matcher(test);
        long time = System.currentTimeMillis();
        for (int i = 0; i < its; i++) {
          m.reset(test);
          find = m.find();
        }
        time = System.currentTimeMillis() - time;
        System.out.println(time);
        // Second method: use two expressions and AND the results
        m1 = Pattern.compile("\\bterm1\\b").matcher(test);
        m2 = Pattern.compile("\\bterm2\\b").matcher(test);
        time = System.currentTimeMillis();
        for (int i = 0; i < its; i++) {
          m1.reset(test);
          m2.reset(test);
          find = m1.find() && m2.find();
        }
        time = System.currentTimeMillis() - time;
        System.out.println(time);
      }

    This outputs on my computer:

      1754
      150

    Read the article

  • Calculating percentiles in Excel with "buckets" data instead of the data list itself

    - by G B
    I have a bunch of data in Excel that I need to get certain percentile information from. The problem is that instead of having the data set made up of each individual value, I instead have counts of occurrences ("bucket" data). For example, imagine that my actual data set looks like this:

      1,1,2,2,2,2,3,3,4,4,4

    The data set that I have is this:

      Value   No. of occurrences
      1       2
      2       4
      3       2
      4       3

    Is there an easy way for me to calculate percentile information (as well as the median) without having to explode the summary data out to the full data set? (Once I did that, I know that I could just use the Percentile(A1:A5, p) function.) This is important because my data set is very large. If I exploded the data out, I would have hundreds of thousands of rows, and I would have to do it for a couple of hundred data sets. Help!
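    The median and other percentiles can be read straight off the cumulative counts without expanding anything. The sketch below uses the nearest-rank definition, which is not exactly what Excel's PERCENTILE function computes, so treat it as an illustration of the logic rather than a drop-in replacement.

      // Nearest-rank percentile from (value, count) pairs, without expanding the data.
      // For the question's example (1x2, 2x4, 3x2, 4x3 -> 11 values) the median is 2.
      static double percentileFromBuckets(double[] values, long[] counts, double p) {
          long total = 0;
          for (long c : counts) {
              total += c;
          }
          long rank = Math.max(1, (long) Math.ceil(p * total));   // 1-based rank in the expanded data
          long cumulative = 0;
          for (int i = 0; i < values.length; i++) {               // values assumed sorted ascending
              cumulative += counts[i];
              if (cumulative >= rank) {
                  return values[i];
              }
          }
          return values[values.length - 1];
      }

    For example, percentileFromBuckets(new double[]{1, 2, 3, 4}, new long[]{2, 4, 2, 3}, 0.5) returns 2.0, matching the median of the expanded list 1,1,2,2,2,2,3,3,4,4,4. The same cumulative-count idea can be expressed with Excel formulas over the two summary columns.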

    Read the article
