Search Results

Search found 13249 results on 530 pages for 'virtualized performance'.

  • Unfamiliar Javascript Syntax

    - by user1051643
    The long and short of it is that, whilst reading John Resig's blog (specifically http://ejohn.org/blog/javascript-trie-performance-analysis/), I came across a line which makes absolutely no sense to me. Essentially it boils down to object = object[key] = something; (this can be found in the first code block of the article I've linked). This has proven rather difficult to google, so if anyone can offer some insight, or a good online resource where I can learn this for myself, I'd much appreciate it.
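
    For what it's worth, assignment is an expression in JavaScript (as it is in Java) and chains right to left, so object = object[key] = something stores something under key and then makes object refer to that same value. Below is a rough Java analogue of the idea; the map-based trie names are made up for illustration and are not Resig's code.

        import java.util.HashMap;
        import java.util.Map;

        // Rough analogue of the JS idiom `object = object[key] = something`:
        // assignment evaluates to the assigned value, so one JS line both stores
        // the new child under `key` and descends into it. Spelled out in two
        // steps here, because Java's Map.put returns the *previous* value.
        public class ChainedAssignDemo {
            public static void main(String[] args) {
                int a, b;
                a = b = 5;                        // chained assignment: both are 5
                System.out.println(a + " " + b);

                Map<Character, Object> node = new HashMap<>();
                Map<Character, Object> child = new HashMap<>();
                node.put('t', child);             // object[key] = something
                node = child;                     // object = (that same value)
                System.out.println(node.isEmpty());
            }
        }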

    Read the article

  • Simple way to do SURBL lookup from Java?

    - by StaxMan
    I am reading about SURBLs (known spam hosts) for the purpose of classifying spam as a batch process. The main access method seems to be via DNS lookups. I was wondering what the usual way is to do such lookups from Java code. Since this is a batch process with no strict performance requirements, I think the most important feature would just be simplicity.
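
    For reference, a SURBL lookup is just a DNS A-record query for "<domain>.multi.surbl.org", so plain InetAddress is usually enough. A minimal sketch, assuming the public multi.surbl.org list and treating any successful resolution as "listed" (real code would also inspect the returned 127.0.0.x address to see which list matched):

        import java.net.InetAddress;
        import java.net.UnknownHostException;

        // Minimal SURBL check: resolve "<domain>.multi.surbl.org". A successful
        // resolution means the domain is listed; NXDOMAIN means it is not.
        public class SurblCheck {
            public static boolean isListed(String domain) {
                try {
                    InetAddress.getByName(domain + ".multi.surbl.org");
                    return true;
                } catch (UnknownHostException e) {
                    return false;
                }
            }

            public static void main(String[] args) {
                System.out.println(isListed("example.com"));
            }
        }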

    Read the article

  • Using many mutex locks

    - by hanno
    I have a large tree structure on which several threads are working at the same time. Ideally, I would like to have an individual mutex lock for each cell. I looked at the definition of pthread_mutex_t in bits/pthreadtypes.h and it is fairly short, so the memory usage should not be an issue in my case. However, is there any performance penalty when using many (let's say a few thousand) different pthread_mutex_ts for only 8 threads?
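
    One common alternative to a mutex per cell is a fixed pool of locks shared by hashing, usually called lock striping. The idea is language-independent; here is a minimal Java sketch of it rather than pthreads (the pool size and hashing scheme are arbitrary for illustration), since the trade-off is the same:

        import java.util.concurrent.locks.ReentrantLock;

        // Lock striping: map each tree cell to one of a fixed number of locks
        // by hashing its identity, instead of allocating one lock per cell.
        public class LockStripes {
            private final ReentrantLock[] stripes;

            public LockStripes(int count) {
                stripes = new ReentrantLock[count];
                for (int i = 0; i < count; i++) {
                    stripes[i] = new ReentrantLock();
                }
            }

            public ReentrantLock lockFor(Object cell) {
                int index = (System.identityHashCode(cell) & 0x7fffffff) % stripes.length;
                return stripes[index];
            }

            public static void main(String[] args) {
                LockStripes locks = new LockStripes(64);
                Object cell = new Object();
                ReentrantLock lock = locks.lockFor(cell);
                lock.lock();
                try {
                    // ... mutate the cell ...
                } finally {
                    lock.unlock();
                }
            }
        }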

    Read the article

  • SQL Design Question regarding schema and if Name value pair is the best solution

    - by Aur
    I am having a small problem trying to decide on a database schema for a current project. I am by no means a DBA. The application parses through a file based on user input and enters that data into the database. The number of fields that can be parsed is currently between 1 and 42.

    The current design is entirely flat, with 42 columns; some are repeated columns such as address1, address2, address3, etc. That suggests I should normalize the data. However, data integrity is not needed at the moment, and given the way the data is shaped I'm looking at several joins. Not a bad thing, but the data is still in a 1-to-1 relationship and I still see a lot of empty fields per row. My concern is that this does not allow the database or the application to be very extensible: if they want to add more fields to be parsed (which they do), I'd need to create another table and add another foreign key to the linking table.

    The third option is a table where the fields are defined and a table for each record, with a value table that links to those two. The problem is that I can picture that value table growing large depending on the input size: if someone gives me a file with 300,000 records, then 300,000 x 40 = 12 million rows, so I have some reservations. However, if I get to that point I should probably be happy it is being used. This option also allows for more custom display of information, albeit with a bit more work, but little rework even if more fields are added.

    So the problem boils down to:

    1. The current design is a flat table, which makes extending it hard, and it is not normalized.
    2. Normalize the tables, although there is no real benefit for the moment; but requirements change.
    3. Normalize it down into name/value pairs and hope the size doesn't hurt.

    There are a large number of inserts, updates, and selects against that table, so performance is a worry, but I believe the saying is "design now, performance test later"? I'm probably just missing something practical, so any comments would be appreciated, even if it's a quick sanity check. Thank you for your time.

    Read the article

  • Heap Behavior in C++

    - by wowus
    Is there anything wrong with the optimization of overloading the global operator new to round up all allocations to the next power of two? Theoretically, this would lower fragmentation at the cost of higher worst-case memory consumption, but does the OS already have redundant behavior with this technique, or does it do its best to conserve memory? Basically, given that memory usage isn't as much of an issue as performance, should I do this?

    Read the article

  • Most efficient way to animate a path?

    - by mystify
    I have a fullscreen path which consists of about 20 lines. Currently I am animating changes in this path using an NSTimer which frequently calls -setNeedsDisplay. Believe me, performance is terrible. I vaguely remember that there was a better way to animate paths on the iPhone, some kind of special CA layer, but I can't remember its exact name. Does anyone know?

    Read the article

  • Detect non-closed connections to SQL

    - by JoeJoe
    I've inherited a very large ASP.NET / SQL Server 2005 project and have found places where SQL connections are not closed, which is bad. Without going through every line of code, is there a way to detect whether connections are not being closed? A performance counter? As a follow-up: how does SQL Server reclaim unclosed connections? I'm using a non-pooled connection string.

    Read the article

  • memcached vs. internal caching in PHP?

    - by waitinforatrain
    Hey, I'm working on some old(ish) software in PHP that maintains a $cache array to reduce the number of SQL queries. I was thinking of just putting memcached in its place, and I'm wondering whether or not to get rid of the internal caching. Would there still be a worthwhile performance increase if I keep the internal caching, or would memcached suffice?

    Read the article

  • Serialize() not using .XmlSerializers.dll produced with Sgen

    - by MDE
    I have an sgen step in my .NET 3.5 library build, producing a correct XYZ.XmlSerializers.dll in the output directory. Since serialization performance was still poor, I discovered that .NET was still invoking csc at runtime. Using Process Monitor, I saw that .NET was searching for a DLL named "XYZ.XmlSerializers.-1378521009.dll". Why is there a '-1378521009' in the filename? How do I tell .NET to use the 'normal' DLL produced by sgen?

    Read the article

  • How can I use one stream and save result to many places?

    - by plasticrabbit
    I am using a servlet and Apache ServletFileUpload, which provides a stream to the uploaded image. All I want to do is store that image in the db and also store a resized version (I am using JAI) in the db. How can I achieve this without saving the image to disk? As I understand it, a stream can be read only once, so do I need to hold the whole image in memory? Is that expensive performance-wise, or is there another way?
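
    A common approach is to copy the upload stream into a byte array once and then wrap it in as many ByteArrayInputStreams as needed (one for storing the original, one for the JAI resize). The memory cost is roughly the size of the uploaded file, which is usually fine for images. A minimal sketch; the class and method names are illustrative:

        import java.io.ByteArrayInputStream;
        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;

        // Buffer an InputStream into memory so it can be "read" more than once.
        public class StreamBuffer {
            public static byte[] toBytes(InputStream in) throws IOException {
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                return out.toByteArray();
            }
        }

        // Usage:
        //   byte[] data = StreamBuffer.toBytes(uploadStream);
        //   InputStream forDb     = new ByteArrayInputStream(data);
        //   InputStream forResize = new ByteArrayInputStream(data);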

    Read the article

  • Java template classes using generator or similar?

    - by Hugh Perkins
    Is there some library or generator that I can use to generate multiple templated Java classes from a single template? Obviously Java has a generics implementation itself, but since it uses type erasure, there are lots of situations where it is less than adequate. For example, if I want to make a self-growing array like this:

        class EasyArray {
            T[] backingarray;
        }

    (where T is a primitive type), then this isn't possible. The same is true for anything which needs an array, for example high-performance templated matrix and vector classes. It should be possible to write a code generator which takes a templated class and generates multiple instantiations for different types, e.g. for 'double', 'float', 'int' and 'String'. Is there something that already exists that does this?

    Edit: note that using an array of Object is not what I'm looking for, since it's no longer an array of primitives. An array of primitives is very fast and uses only as much space as sizeof(primitive) * length-of-array. An array of objects is an array of references that point to Double objects (or similar), which could be scattered all over memory, require garbage collection and allocation, and imply a double indirection on access.

    Edit2: good god, voted down for asking for something that probably doesn't currently exist, but is technically possible and feasible? Does that mean that people looking for ways to improve things have already left the Java community?

    Edit3: Here is code to show the difference in performance between primitive and boxed arrays:

        int N = 10 * 1000 * 1000;
        double[] primArray = new double[N];
        for (int i = 0; i < N; i++) {
            primArray[i] = 123.0;
        }
        Object[] objArray = new Double[N];
        for (int i = 0; i < N; i++) {
            objArray[i] = 123.0;
        }
        tic();
        primArray = new double[N];
        for (int i = 0; i < N; i++) {
            primArray[i] = 123.0;
        }
        toc();
        tic();
        objArray = new Double[N];
        for (int i = 0; i < N; i++) {
            objArray[i] = 123.0;
        }
        toc();

    Results:

        double[] array: 148 ms
        Double[] array: 4614 ms

    Not even close!
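
    No existing tool is named here, but a minimal sketch of the kind of source generator the question describes is straightforward with plain string substitution; the template text, class names and output layout below are all illustrative assumptions, not an existing library:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        // Generate one specialized EasyArray_<type>.java file per primitive type
        // by substituting the type name into a source template.
        public class EasyArrayGenerator {
            private static final String TEMPLATE =
                "public class EasyArray_%1$s {\n" +
                "    private %1$s[] backing = new %1$s[16];\n" +
                "    private int size;\n" +
                "    public void add(%1$s value) {\n" +
                "        if (size == backing.length) {\n" +
                "            %1$s[] bigger = new %1$s[backing.length * 2];\n" +
                "            System.arraycopy(backing, 0, bigger, 0, size);\n" +
                "            backing = bigger;\n" +
                "        }\n" +
                "        backing[size++] = value;\n" +
                "    }\n" +
                "    public %1$s get(int i) { return backing[i]; }\n" +
                "    public int size() { return size; }\n" +
                "}\n";

            public static void main(String[] args) throws IOException {
                for (String type : new String[] {"int", "long", "float", "double"}) {
                    String source = String.format(TEMPLATE, type);
                    Files.write(Paths.get("EasyArray_" + type + ".java"), source.getBytes());
                }
            }
        }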

    Read the article

  • Which mysql construct is faster?

    - by Olaseni
    SELECT ... WHERE COL IN (A, B) or SELECT ... WHERE (COL = A OR COL = B)? I'm trying to find out what the differences between the two constructs are. Would there be significant performance gains either way when used on result sets nearing the 1 million mark?

    Read the article

  • Do compiled PHP scripts exist?

    - by dabito
    Hi, I am wondering if anyone has used or read about PHP scripts compiled as a .so extension for Apache... The thing is, I think I remember reading about it somewhere but don't know if such a thing exists. This looks promising, but incomplete and abandoned: http://phpcompiler.org/ I'm interested because I think it could improve performance... Perhaps someone could point out a framework or Apache extension that does this. Thanks!!

    Read the article

  • .NET 4.0 slower than earlier versions, is that true?

    - by Nauman
    Hello all, we are planning to move to the .NET Framework 4.0 sometime soon... I don't remember the reference or link, but recently I read about the latest framework being a little slow in performance when compared to its predecessors. Is that true? Has anybody done any tests, or does anyone have valid arguments to support this?

    Read the article

  • Is it a good practice to always use smart pointers ?

    - by Dony Borris
    Hi, I find smart pointers to be a lot more comfortable than raw pointers. So is it a good idea to always use smart pointers? (Please note that I am from a Java background and hence don't much like the idea of explicit memory management. So unless there are some serious performance issues with smart pointers, I'd like to stick with them.) Any advice would be greatly appreciated. Thanks.

    Read the article

  • SQL Server stored procedure in multi threaded environments

    - by Shamika
    Hi, I need to execute some SQL Server stored procedures in a thread-safe manner. At the moment I'm using software locks (C# locks) to achieve this, but I wonder what features SQL Server itself provides for thread safety. It seems there are table- and row-locking features built into SQL Server. Also, from a performance perspective, which is the better approach: software locks or SQL Server's built-in locks? Thanks, Shamika

    Read the article

  • Zend_Search_Lucene vs SOLR

    - by spacemonkey
    Hi, I have recently stumbled onto Zend_Search_Lucene, the Zend port of the Lucene project. I have a little experience with SOLR, so I would like to know what the differences between the two are, especially on the performance and installation side. As far as I know, SOLR requires a Tomcat servlet container running on the web host in order to work; what about the Zend Lucene library? I am also a bit confused about what "being implemented on top of Lucene" means.

    Read the article

  • Java JIT compiler compiles at compile time or runtime ?

    - by Tony
    From Wikipedia: "In computing, just-in-time compilation (JIT), also known as dynamic translation, is a technique for improving the runtime performance of a computer program." So I guess the JVM has another compiler, not javac, that compiles bytecode to machine code at runtime, while javac compiles sources to bytecode. Is that right?

    Read the article

  • where should I do the calculating stuff,PHP or Mysql?

    - by SpawnCxy
    I've been doing a lot of calculation work lately. Usually I prefer to do this job in PHP rather than MySQL; even though I know PHP is not good at it, I thought MySQL might be worse. But I've run into performance problems: some pages load so slowly that the 30-second time limit is not enough for them! So I wonder which is the better place to do the calculations, and are there any principles for deciding? Suggestions would be appreciated.

    Read the article

  • new items on GRUB screen in ubuntu/linux

    - by artsince
    I regularly update my Ubuntu (10.04) system, and new minor versions keep accumulating on the GRUB screen. Right now I have 5 different versions listed in GRUB, even though I always select the latest one to work with. Am I supposed to do anything to get rid of the old version entries? Do these old versions affect disk space or performance?

    Read the article
