Search Results

Search found 13151 results on 527 pages for 'performance counters'.

Page 235 of 527

  • How to reduce java concurrent mode failure and excessive gc

    - by jimx
    In Java, a concurrent mode failure means that the concurrent collector failed to free enough memory from the tenured and permanent generations and has to give up, letting the full stop-the-world GC kick in. The end result can be very expensive. I understand this concept but have never had a good, comprehensive understanding of (a) what can cause a concurrent mode failure and (b) what the solutions are. This lack of clarity leads me to write and debug code without many hints in mind, and I often have to shop around among performance flags, switching from Foo to Bar without any particular reason, just to try them. I'd like to learn from developers here what your experience has been. If you have encountered such performance issues before, what was the cause and how did you address it? If you have coding recommendations, please don't be too general. Thanks!

    Read the article

  • Is there a difference between Perl's shift versus assignment from @_ for subroutine parameters?

    - by cowgod
    Let us ignore for a moment Damian Conway's best practice of no more than three positional parameters for any given subroutine. Is there any difference between the two examples below with regard to performance or functionality?

    Using shift:

        sub do_something_fantastical {
            my $foo   = shift;
            my $bar   = shift;
            my $baz   = shift;
            my $qux   = shift;
            my $quux  = shift;
            my $corge = shift;
        }

    Using @_:

        sub do_something_fantastical {
            my ($foo, $bar, $baz, $qux, $quux, $corge) = @_;
        }

    Provided that both examples are the same in terms of performance and functionality, what do people think about one format over the other? Obviously the example using @_ is fewer lines of code, but isn't it more legible to use shift as shown in the other example? Opinions with good reasoning are welcome.

    Read the article

  • from ggplot2 to OOo workflow?

    - by Andreas
    This is not really a programming question, but I'll try here nonetheless. I once used LaTeX for my reports, but the people I work with need to make small edits and do not have LaTeX skills. OpenOffice is then the way to go. But saving ggplot images with dpi = 100 makes for really ugly graphs, and dpi = 600 is a no-go (e.g. huge legend). So what to do? I currently save (still via ggsave) to EPS, which OpenOffice can import, but performance is not good at all. Googling, I found a bug report for the poor EPS performance in OOo, and also talk about a non-implemented SVG feature, but none of that helps me right now. If you work with ggplot2 and OOo, what do you do? I have been unsuccessful with PDF conversion for some reason.

    Read the article

  • Compression Array of Bytes

    - by Pedro Magalhaes
    Hi, my problem is: I want to store an array of bytes in a compressed file and then read it back with good performance. So I create an array of bytes, pass it to a ZLIB algorithm, and store the result in the file. To my surprise the algorithm doesn't work well, probably because the array is a random sample. With this approach it would be very easy to read: just copy the stream to memory, decompress it, and copy it into an array of bytes. But I need to compress the file. Do I have to use an algorithm, like RLE, to compress the byte array? I think I could store the byte array like a string and then compress it, but I think I would get poor performance when reading the data. Sorry for my poor English. Thanks
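    The question doesn't name a language; purely as an illustration of the round trip, here is a minimal C# sketch using DeflateStream (the deflate algorithm that zlib is built on). Note that genuinely random bytes will not shrink regardless of the algorithm chosen:

        using System;
        using System.IO;
        using System.IO.Compression;

        class CompressDemo
        {
            // Compress a byte array with DEFLATE.
            static byte[] Compress(byte[] data)
            {
                using (var output = new MemoryStream())
                {
                    using (var deflate = new DeflateStream(output, CompressionMode.Compress))
                    {
                        deflate.Write(data, 0, data.Length);
                    }
                    return output.ToArray();
                }
            }

            // Decompress it back into a byte array.
            static byte[] Decompress(byte[] compressed)
            {
                using (var input = new MemoryStream(compressed))
                using (var deflate = new DeflateStream(input, CompressionMode.Decompress))
                using (var output = new MemoryStream())
                {
                    deflate.CopyTo(output);
                    return output.ToArray();
                }
            }

            static void Main()
            {
                // Repetitive data compresses well; random data will not.
                byte[] data = new byte[100000];
                for (int i = 0; i < data.Length; i++) data[i] = (byte)(i % 16);

                byte[] packed = Compress(data);
                byte[] restored = Decompress(packed);
                Console.WriteLine("{0} -> {1} bytes, round trip ok: {2}",
                                  data.Length, packed.Length, restored.Length == data.Length);
            }
        }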

    Read the article

  • Why do people still use C these days? [closed]

    - by Joshua
    C++ is clearly a far superior language to C, since it has many features that C lacks (although C++'s object model isn't as ideal as, say, C#'s). With the arrival of the new C++0x standard, why hasn't C been phased out into obscurity? C++ has been around for so long, since the '80s. The Linux kernel has already been ported to C++ with negligible performance differences. I believe, with no evidence, that larger program structures benefit in performance if written in C++ rather than in C, if only because of object interaction. Don't get me started on "objects-in-C" libraries, which are all a terrible hack. (Not that C++'s object model is the most ideal, but it is almost up to snuff with C# using common ad hoc techniques.)

    Read the article

  • Comparison of collection datatypes in C#

    - by Joel in Gö
    Does anyone know of a good overview of the different C# collection types? I am looking for something showing which basic operations such as Add, Remove, RemoveLast, etc. are supported, and giving their relative performance. It would be particularly interesting for the various generic classes, and even better if it showed, e.g., whether there is a difference in performance between a List<T> where T is a class and one where T is a struct. A start would be a nice cheat sheet for the abstract data structures, comparing linked lists, hash tables, etc. Thanks!
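    A rough, hedged sketch of how one might measure the List<T> class-versus-struct question directly; the element types and iteration count below are invented for illustration, and real numbers depend on the runtime and hardware:

        using System;
        using System.Collections.Generic;
        using System.Diagnostics;

        class ListBenchmark
        {
            // Hypothetical element types: one reference type, one value type.
            class PointClass { public int X; }
            struct PointStruct { public int X; }

            static void Main()
            {
                const int N = 5000000; // arbitrary iteration count

                var sw = Stopwatch.StartNew();
                var classes = new List<PointClass>(N);
                for (int i = 0; i < N; i++) classes.Add(new PointClass { X = i });
                sw.Stop();
                Console.WriteLine("List<class>  Add: {0} ms", sw.ElapsedMilliseconds);

                sw.Restart();
                var structs = new List<PointStruct>(N);
                for (int i = 0; i < N; i++) structs.Add(new PointStruct { X = i });
                sw.Stop();
                Console.WriteLine("List<struct> Add: {0} ms", sw.ElapsedMilliseconds);
            }
        }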

    Read the article

  • How does one modify the thread scheduling behavior when using Threading Building Blocks (TBB)?

    - by J Teller
    Does anyone know how to modify the thread scheduling (specifically affinity) when using TBB? Doing a high-level analysis on a simple parallel-for application, it seems like TBB is setting the underlying threads' affinity in a way that reduces performance. Specifically, the cores I'm running on have hyper-threading enabled, and it looks like TBB is affinitizing threads to the same core even when a different core is left completely unloaded. FWIW, I realize it's likely that TBB is doing the "right thing" and that changing the threads' affinity will only reduce performance. I'd just like to experiment with it to see if that's really the case.

    Read the article

  • Extract a sentence out of sentences separated by delimiters

    - by Laura
    Below is a sample line I have extracted from a website: below a satisfactory level; &quot;an off year for tennis&quot;; &quot;his performance was off&quot; The output displays as: below a satisfactory level; "an off year for tennis"; "his performance was off" I want to get only the first sentence, "below a satisfactory level". Here is the code I have tried after exploring many Stack Overflow posts: $data = explode('; ', $str); echo $data[0]; But somehow it is not working. Thanks in advance.

    Read the article

  • How to sum up an array of integers in C#

    - by Filburt
    Is there a better, shorter way than iterating over the array? int[] arr = new int[] { 1, 2, 3 }; int sum = 0; for (int i = 0; i < arr.Length; i++) { sum += arr[i]; } Clarification: better primarily means cleaner code, but hints on performance improvement are also welcome (like the already mentioned splitting of large arrays). It's not like I was looking for a killer performance improvement; I just wondered if this very kind of syntactic sugar wasn't already available: "There's String.Join - what the heck about int[]?".
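    For reference, a minimal sketch of the LINQ route (System.Linq, available since .NET 3.5), where Enumerable.Sum replaces the manual loop:

        using System;
        using System.Linq;

        class SumDemo
        {
            static void Main()
            {
                int[] arr = new int[] { 1, 2, 3 };

                // Enumerable.Sum from System.Linq does the iteration internally.
                int sum = arr.Sum();

                Console.WriteLine(sum); // prints 6
            }
        }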

    Read the article

  • Remove items from SWT tables

    - by Dima
    This is more of an answer I'd like to share for a problem I was chasing for some time in an RCP application using large SWT tables. The problem is the performance of the SWT Table.remove(int start, int end) method. It gives really bad performance: about 50 msec per 100 items on my Windows XP machine. But the real show stopper was on Vista and Windows 7, where deleting 100 items would take up to 5 seconds! Looking into the source code of Table shows that a huge number of windowing events fly around in this call, which brings the windowing system to its knees. The solution was to hide the damn thing during the call: table.setVisible(false); table.remove(from, to); table.setVisible(true); That does wonders: deleting 500 items on both XP and Windows 7 takes ~15 msec, which is just the overhead of printing out the time stamps I used. Nice :)

    Read the article

  • Database indexes and their Big-O notation

    - by miket2e
    I'm trying to understand the performance of database indexes in terms of Big-O notation. Without knowing much about it, I would guess that: Querying on a primary key or unique index will give you an O(1) lookup time. Querying on a non-unique index will also give an O(1) time, albeit maybe the '1' is slower than for the unique index (?). Querying on a column without an index will give an O(N) lookup time (full table scan). Is this generally correct? Will querying on a primary key ever give worse performance than O(1)? My specific concern is SQLite, but I'd be interested in knowing to what extent this varies between different databases too.

    Read the article

  • which is better, creating a materialized view or a new table?

    - by Carson
    I have some demanding MySQL queries that need to grab the same up-to-date datasets from 5-7 MySQL tables. I am thinking of creating a table or materialized view to gather all the demanded columns from the other tables, so as to increase performance. If I create that table, I may need to do extra insert/update/delete operations each time the other tables are updated. If I create a materialized view, I am not sure the performance can be greatly improved, because the data in the other tables changes very frequently; most likely, the view would need to be rebuilt every time before selecting from it. Any ideas? E.g., how to cache? What other extra measures can I take?

    Read the article

  • Is there a module that implements an efficient array type in Erlang?

    - by dsmith
    I have been looking for an array type with the following characteristics in Erlang:

        append(vector(), term())       O(1)
        nth(Idx, vector())             O(1)
        set(Idx, vector(), term())     O(1)
        insert(Idx, vector(), term())  O(N)
        remove(Idx, vector())          O(N)

    I normally use a tuple for this purpose, but the performance characteristics are not what I would want for large N. My testing shows the following performance characteristics:

        erlang:append_element/2  O(N)
        erlang:setelement/3      O(N)

    I have started on a module based on the clojure.lang.PersistentVector implementation, but if it's already been done I won't reinvent the wheel.

    Read the article

  • What Use Are Threads Outside of Parallel Problems on Multicore Systems?

    - by Robert S. Barnes
    Threads make the design, implementation and debugging of a program significantly more difficult. Yet many people seem to think that every task in a program that can be threaded should be threaded, even on a single-core system. I can understand threading something like an MPEG2 decoder that's going to run on a multicore CPU (which I've done), but what can justify the significant development costs threading entails when you're talking about a single-core system, or even a multicore system if your task doesn't gain significant performance from a parallel implementation? Or more succinctly, what kinds of non-performance-related problems justify threading? Edit: Well, I just ran across one instance that's not CPU-limited but where threads make a big difference: TCP, HTTP and the Multi-Threading Sweet Spot. Multiple threads are pretty useful when trying to max out your bandwidth to another peer over a high-latency network connection. Non-blocking I/O would use significantly fewer local CPU resources, but would be much more difficult to design and implement.

    Read the article

  • Inline function and calling cost in C

    - by Eonil
    I'm making a vector/matrix library (GCC, ARM NEON, iPhone).

        typedef struct { float v[4]; } Vector;
        typedef struct { Vector v[4]; } Matrix;

    I pass struct data as pointers to avoid the performance penalty of copying data when calling a function. So I designed the function like this:

        void makeTranslation(const Vector* factor, Matrix* restrict result);

    But if the function is inline, is there any reason to pass values as pointers for performance? Are those variables copied too? What about registers and caches?

        inline Matrix makeTranslation(Vector factor) __attribute__ ((always_inline));

    What do you think about the calling costs of each case?

    Read the article

  • Model of hql query firing at back end by hql engine?

    - by Maddy.Shik
    I want to understand how Hibernate executes an HQL query internally, or in other words how the HQL query engine works. Please suggest some good links for this. One reason for asking is the following problem:

        class Branch {
            // lazily loaded
            @JoinColumn(name="company_id")
            Company company;
        }

    Since Company is a heavy object, it is lazily loaded. Now I have the HQL query "from Branch as branch where branch.Company.id=:companyId". My concern is that if, in order to execute this query, the HQL engine has to retrieve the Company object, then it is a performance hit, and I would prefer to add one more property to the Branch class, i.e. companyId. In that case the HQL query would be "from Branch as branch where branch.companyId=:companyId". If the HQL engine first generates SQL from the HQL and then fires the SQL query itself, there should be no performance issue. Please let me know if the problem is not understandable.

    Read the article

  • How does C# type safety affect garbage collection?

    - by Indeera
    I'm dealing with code that handles large buffers (100 MB) and manipulates them in unsafe blocks. I'd like to refactor these to avoid unsafe code. I'm wondering about the likely memory/performance gains (positive, negative or neutral) before I embark on that. I assert that if the compiler can verify types, it could possibly generate better code, and that could also mean good GC performance. Is this a valid assertion? What is your experience? Thanks.
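    To make that kind of refactoring concrete, a hedged before/after sketch; the buffer size and the transformation are invented for illustration, and any GC or JIT benefit of the safe version would have to be measured:

        using System;

        class BufferDemo
        {
            // Unsafe version: raw pointer arithmetic (requires compiling with /unsafe).
            static unsafe void FillUnsafe(byte[] buffer)
            {
                fixed (byte* p = buffer)
                {
                    for (int i = 0; i < buffer.Length; i++)
                        p[i] = (byte)(i & 0xFF);
                }
            }

            // Safe version: plain array indexing; the JIT can often elide the
            // bounds check in a simple loop over buffer.Length.
            static void FillSafe(byte[] buffer)
            {
                for (int i = 0; i < buffer.Length; i++)
                    buffer[i] = (byte)(i & 0xFF);
            }

            static void Main()
            {
                byte[] buffer = new byte[100 * 1024 * 1024]; // ~100 MB buffer
                FillSafe(buffer);
                Console.WriteLine(buffer[12345]);
            }
        }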

    Read the article

  • Icons in Silverlight: Images vs. Vectors

    - by Shnitzel
    I like using the vector drawing feature of Expression Blend to create icons. That way I can change colors easily on my icons without having to resort to an image editor. But my question is: say I have a treeview control that has an icon next to each tree element, and say I have hundreds of elements. Do you think using images is faster, performance-wise, than using vector icons? Because I'd rather use vectors, but I'm wondering about the performance concerns.

    Read the article

  • Is a program compiled with -g gcc flag slower than the same program compiled without -g?

    - by e271p314
    I'm compiling a program with -O3 for performance and -g for debug symbols (in case of a crash I can use the core dump). One thing bothers me a lot: does the -g option result in a performance penalty? When I look at the output of the compilation with and without -g, I see that the output without -g is 80% smaller than the output with -g. If the extra space goes to the debug symbols, I don't care about it (I guess), since this part is not used during runtime. But if for each instruction in the compilation output without -g I need to execute 4 more instructions in the compilation output with -g, then I'd certainly prefer to stop using the -g option, even at the cost of not being able to process core dumps. How can I find the size of the debug-symbol sections inside the program, and, in general, does compiling with -g create a program that runs slower than the same code compiled without -g?

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). The total received file size can vary from as small as 10 KB to over 4 GB. One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in an out-of-memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out-of-memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available). I'd like to avoid disk access as much as possible, but I don't know of a way to control this. I did some testing and most of the time there seems to be little performance difference between, say, 10 000 consecutive calls to MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor. Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to a memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to a FileStream; then read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose it, then read from the FileStream)? Another solution is to use a custom memory stream implementation that doesn't require continuous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out-of-memory exceptions should no longer be an issue. Con: extra work, more room for mistakes. So how do FileStream vs MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access a disk when being written to. Any help would be appreciated.
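    A hedged sketch of the spill-to-disk idea raised above: buffer incoming packets in a MemoryStream until a threshold is crossed, then move everything to a temporary FileStream and keep writing there. The class name and threshold are made up, and error handling and temp-file cleanup are omitted:

        using System;
        using System.IO;

        // Buffers writes in memory until a size threshold, then spills to a temp file.
        class SpillOverBuffer : IDisposable
        {
            private readonly long _threshold;
            private MemoryStream _memory = new MemoryStream();
            private FileStream _file; // null until we spill to disk

            public SpillOverBuffer(long thresholdBytes)
            {
                _threshold = thresholdBytes;
            }

            public void Write(byte[] buffer, int offset, int count)
            {
                if (_file == null && _memory.Length + count > _threshold)
                {
                    // Spill: copy what has been buffered so far into a temporary file.
                    _file = new FileStream(Path.GetTempFileName(), FileMode.Create,
                                           FileAccess.ReadWrite, FileShare.None);
                    _memory.Position = 0;
                    _memory.CopyTo(_file);
                    _memory.Dispose();
                    _memory = null;
                }

                if (_file != null)
                    _file.Write(buffer, offset, count);
                else
                    _memory.Write(buffer, offset, count);
            }

            // Returns a readable stream positioned at the start of the received data.
            public Stream OpenForReading()
            {
                Stream s = (Stream)_file ?? _memory;
                s.Position = 0;
                return s;
            }

            public void Dispose()
            {
                if (_memory != null) _memory.Dispose();
                if (_file != null) _file.Dispose();
            }
        }

    Usage would be, e.g., new SpillOverBuffer(500L * 1024 * 1024) to keep up to roughly 500 MB in memory before falling back to disk.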

    Read the article

  • What is the optimal number of threads for performing IO operations in java?

    - by marc
    In Goetz's "Java Concurrency in Practice", in a footnote on page 101, he writes: "For computational problems like this that do no I/O and access no shared data, Ncpu or Ncpu+1 threads yield optimal throughput; more threads do not help, and may in fact degrade performance..." My question is: when performing I/O operations such as file writing, file reading, file deleting, etc., are there guidelines for the number of threads to use to achieve maximum performance? I understand this will be just a guide number, since disk speeds and a host of other factors play into this. Still, I'm wondering: can 20 threads write 1000 separate files to disk faster than 4 threads can on a 4-CPU machine?

    Read the article
