Search Results

Search found 562 results on 23 pages for 'profiling'.

Page 8 of 23

  • xperf won't use my DLL's pdb

    - by dauphic
    I'm attempting to use xperf to profile my (unmanaged) DLL, which is loaded by another (managed) application. Basically, xperf isn't loading the symbols for my DLL; in the CPU usage summary, my DLL's function column says 'Unknown'. xperf gives me:
      xperf: Using symbol path: C:\app\mydll\releaseu;SRV*c:\symbols*http://msdl.microsoft.com/download/symbols
      xperf: Using executable path: C:\app\mydll\releaseu;SRV*c:\symbols*http://msdl.microsoft.com/download/symbols
      xperf: Using SymCache path: \SymCache
    where C:\app\mydll\releaseu is where my DLL's .pdb file is located. Using xperf -symbols or clicking Load Symbols in the viewer doesn't load my DLL's symbols, and a folder for mydll isn't created in C:\Symbols; I'm only getting the symbols from Microsoft. Any idea how to make this work?

    Read the article

  • Using WPFPerf to profile a WPF 4.0 application doesn't show me any information

    - by Adrian
    I am trying to use WPFPerf to profile a WPF 4.0 application (I have the latest WPFPerf, which should work on WPF 4.0 apps). I start the Visual Profiler tool from WPFPerf and then start my application, but after that nothing happens and the element tree in the Visual Profiler stays empty. No error message is shown. Can anyone tell me what I am doing wrong? As additional information, when I try to analyze my .exe assembly or any other assembly from my application, I get a BadImageFormatException saying that the assembly was built with a newer version of .NET. From the download page http://go.microsoft.com/fwlink/?LinkID=191420 I see that this version of WPFPerf should be fine for my app.

    Read the article

  • Good .NET 4 profiler

    - by Martin
    What is a good profiler for .NET 4.0? I'm willing to spend some money, but not too much (say up to £50). I'm developing games for Windows Phone and Xbox using XNA, which means the projects are commercial; I mention this because I've seen several profilers which are free for non-commercial use only. Extra points for Visual Studio integration. NB: I'm using Visual Studio Professional 2010.

    Read the article

  • Performance of std::pow - cache misses?

    - by Eamon Nerbonne
    I've been trying to optimize a numeric program of mine and have run into something of a mystery. I'm looping over code that performs thousands of floating point operations and just one call to pow; nevertheless, that single call takes 5% of the time. That's not necessarily a critical issue, but it is odd, so I'd like to understand what's happening. When I profiled for cache misses, VS.NET 2010RC's profiler reported that virtually all cache misses are occurring in std::pow... so what's up with that? Is there a faster alternative? I tried powf, but that's only slightly faster; it's still responsible for an abnormal number of cache misses. Why would a basic function like pow cause cache misses?

    Read the article

  • Failing to profile a remote Java app using TPTP

    - by rperez
    Hi all, I am trying to profile CPU usage using TPTP. The application to profile runs on Linux RH AS5. I installed and configured the Agent Controller as described here, and I ran the Java application using the command java '-agentlib:JPIBootLoader=JPIAgent:server=standalone,file=log.trcxml;CGProf' MyApp. The monitoring station is the all-in-one TPTP, version 4.6.2. I followed the steps described here in Eclipse: in the "Profile Configuration" dialog I chose a new configuration for "Attach to Agent" and set the host to my remote Linux machine where MyApp is running; the connection test succeeds, but when I get to the "Agents" tab I see "Pending...", a background process "Fetching children for host" runs and never finds anything, which makes it impossible to profile. Any idea?

    Read the article

  • How to profile a Silverlight MVVM application with a lot of custom controls

    - by tomo
    There is a quite big LOB Silverlight application and we wrote a lot of custom controls which are rather heavy in drawing. All data is loaded by a RIA service, processed and bound (using the INotifyPropertyChanged interface) to the view. The problem is that the first drawing takes a lot of time; subsequent calls to the service (server) and redrawing are quite fast. I used the EQATEC profiler to track the problem and saw that processing takes only a couple of milliseconds, so my idea is that the drawing by the SL engine is slow. I'm wondering if it is possible to somehow profile processes inside SL to check which drawing operations are taking too much time. Are there any guidelines on how to implement faster drawing of complex custom controls?

    Read the article

  • How to profile the execution of an OSGi deployment?

    - by Jaime Soriano
    I'm starting the development of an OSGi bundle for an application that will be deployed on a device with some hardware limitations. I'd like to know how I could profile the execution of that bundle to be sure it is always going to fit, with its dependencies, on the final device. It would be nice to have a profiler to know how much memory each bundle is using, to localize bottlenecks, and to compare different implementations of the same service. Is there any profiler for OSGi deployments, or should I use a general Java profiler? For development I'm using Pax Runner with Apache Felix to run the bundle and Maven to manage project dependencies and building.

    Read the article

  • Performance profiler for a Java application

    - by Nitin Garg
    I need to optimize a Java application. It makes some third-party calls, and I need a good tool to accurately measure the time taken by the individual API calls. To give an idea of the complexity: the application takes a data source file containing 10 lakh (one million) rows and takes around one hour to complete the processing. As part of the processing it makes some third-party calls (including some network calls). I need to identify which calls are taking more time than others and, based on that, find a way to optimize the application. Any suggestions would be appreciated.

    Read the article

  • gprof: "in free(): error: junk pointer, too high to make sense", Segmentation fault: 11 (core dumped)

    - by Mayank
    I am trying to profile my application. For this I compiled my code with the -pg and -lc_p options, and it compiled successfully. At run time I get the following error:
      in free(): error: junk pointer, too high to make sense
      Segmentation fault: 11 (core dumped)
    Running it under GDB gives:
      (gdb) b main
      Breakpoint 1 at 0x5124d4:
      (gdb) r
      warning: Unable to get location for thread creation breakpoint: generic error
      [New LWP 100085]
      cacheIp in free(): error: junk pointer, too high to make sense
      Program received signal SIGSEGV, Segmentation fault.
      [Switching to LWP 100085]
      0x00000000006c3a1f in pthread_sigmask ()
    My application is multi-threaded and is a combination of C and C++ code. uname -a reports: FreeBSD 6.3-RELEASE FreeBSD 6.3-RELEASE #0: Wed Jan 16 01:43:02 UTC 2008 [email protected]:/usr/obj/usr/src/sys/SMP amd64. Why is the code crashing? Am I missing something?

    Read the article

  • JetBrains dotTrace: is it possible to profile source code line by line? Otherwise I need another tool

    - by m3ntat
    I am using JetBrains dotTrace and have profiled my app, which is entirely CPU bound. But the results as you walk down the tree don't sum to the level above, and I only see method calls, not the body lines of the method of the node in question. Is it possible to profile the source code line by line? For example, for one node:
      SimulatePair() 99.04%
        -- nextUniform() 30.12%
        -- IDCF() 24.08%
    So the calls to nextUniform and IDCF use 54% of the time in SimulatePair (or 54% of total execution time; I'm not sure how to read this). Regardless, I need some detail, on a line-by-line basis, about what is happening in the other 46% of SimulatePair. Any help or alternative tools are much appreciated. Thanks

    Read the article

  • Where is my Python script spending its time? Is there "missing time" in my cProfile / pstats trace?

    - by fmark
    I am attempting to profile a long-running Python script. The script does some spatial analysis on a raster GIS data set using the gdal module. The script currently uses three files: the main script, find_pixel_pairs.py, which loops over the raster pixels; a simple cache in lrucache.py; and some miscellaneous classes in utils.py. I have profiled the code on a moderately sized dataset. pstats returns:
      p.sort_stats('cumulative').print_stats(20)
      Thu May 6 19:16:50 2010    phes.profile

      355483738 function calls in 11644.421 CPU seconds

      Ordered by: cumulative time
      List reduced from 86 to 20 due to restriction <20>

      ncalls     tottime    percall    cumtime    percall    filename:lineno(function)
      1          0.008      0.008      11644.421  11644.421  <string>:1(<module>)
      1          11064.926  11064.926  11644.413  11644.413  find_pixel_pairs.py:49(phes)
      340135349  544.143    0.000      572.481    0.000      utils.py:173(extent_iterator)
      8831020    18.492     0.000      18.492     0.000      {range}
      231922     3.414      0.000      8.128      0.000      utils.py:152(get_block_in_bands)
      142739     1.303      0.000      4.173      0.000      utils.py:97(search_extent_rect)
      745181     1.936      0.000      2.500      0.000      find_pixel_pairs.py:40(is_no_data)
      285478     1.801      0.000      2.271      0.000      utils.py:98(intify)
      231922     1.198      0.000      2.013      0.000      utils.py:116(block_to_pixel_extent)
      695766     1.990      0.000      1.990      0.000      lrucache.py:42(get)
      1213166    1.265      0.000      1.265      0.000      {min}
      1031737    1.034      0.000      1.034      0.000      {isinstance}
      142740     0.563      0.000      0.909      0.000      utils.py:122(find_block_extent)
      463844     0.611      0.000      0.611      0.000      utils.py:112(block_to_pixel_coord)
      745274     0.565      0.000      0.565      0.000      {method 'append' of 'list' objects}
      285478     0.346      0.000      0.346      0.000      {max}
      285480     0.346      0.000      0.346      0.000      utils.py:109(pixel_coord_to_block_coord)
      324        0.002      0.000      0.188      0.001      utils.py:27(__init__)
      324        0.016      0.000      0.186      0.001      gdal.py:848(ReadAsArray)
      1          0.000      0.000      0.160      0.160      utils.py:50(__init__)
    The top two calls contain the main loop, i.e. the entire analysis. The remaining calls sum to less than 625 of the 11644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?
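    One way to answer that last question without extra tooling is to time labelled blocks of the loop body by hand. Below is a minimal stdlib-only sketch of that idea; the block labels and their placement inside phes() are hypothetical, and a dedicated line-level profiler would give the same information more precisely.
      import time
      from collections import defaultdict
      from contextlib import contextmanager

      block_times = defaultdict(float)  # cumulative wall-clock seconds per labelled block

      @contextmanager
      def timed(label):
          """Add the elapsed time of the enclosed block to block_times[label]."""
          start = time.time()
          try:
              yield
          finally:
              block_times[label] += time.time() - start

      # Inside the pixel loop of phes(), wrap each suspect block (labels are made up):
      #     with timed("neighbour lookup"):
      #         ...  # existing code
      #     with timed("cache update"):
      #         ...  # existing code

      def report():
          """Print per-block totals, heaviest first."""
          for label, secs in sorted(block_times.items(), key=lambda kv: -kv[1]):
              print("%-20s %10.1f s" % (label, secs))
    Because cProfile only charges time to function calls, time spent in plain statements of the loop body shows up as tottime of phes() itself; per-block timing like this is one way to see where inside that loop it goes.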

    Read the article

  • jmap -histo is missing a lot of memory

    - by ripper234
    I have a JVM with 12 gigs of total RAM, of which 7 GB is allocated to the old generation. There seems to be a memory leak, because almost the entire old gen is full and is not released when I trigger a GC (the process is not doing anything else at that time). However, a jmap -histo dump only reveals less than 1 gigabyte worth of objects. Where are the missing 6 gigs? What better tool do you propose for diagnosing this? Here is the top of the jmap output:
      num   #instances   #bytes      class name
      ----------------------------------------------
      1:    429853       68725736    <constMethodKlass>
      2:    429853       51594040    <methodKlass>
      3:    37503        49611368    <constantPoolKlass>
      4:    37503        31109576    <instanceKlassKlass>
      5:    191716       28019968    [C
      6:    32573        26933152    <constantPoolCacheKlass>
      7:    86158        13789560    [I
      8:    53532        11244232    [B
      9:    284          10507216    [J
      10:   137608       7210664     <symbolKlass>
      11:   203072       6498304     java.lang.String
      12:   10132        5219512     <methodDataKlass>
      13:   39694        4128176     java.lang.Class
      14:   55713        3792816     [S
      15:   61816        3141936     [[I
      16:   90109        2883488     java.util.HashMap$Entry

    Read the article

  • Profile memory performance for part of a Rails project

    - by Florian Pilz
    I want to profile the memory usage of an important library class of my Rails project. It uses ActiveRecord, so I need all the Rails dependencies in order to profile it. As far as I know, I need a patched Ruby (rubygc) so that script/profile and script/benchmark can track memory usage. I tried to follow this official guide to patch the source code of Ruby 1.8.6 (p399) and 1.8.7 (p248), but both fail with the following message:
      patching file gc.c
      Hunk #2 succeeded at 50 with fuzz 2 (offset 2 lines).
      Hunk #3 succeeded at 87 with fuzz 2 (offset 6 lines).
      Hunk #4 succeeded at 153 with fuzz 1 (offset 45 lines).
      Hunk #5 succeeded at 409 with fuzz 2 (offset 274 lines).
      Hunk #6 FAILED at 462.
      Hunk #7 FAILED at 506.
      Hunk #8 FAILED at 520.
      Hunk #9 FAILED at 745.
      Hunk #10 FAILED at 754.
      Hunk #11 FAILED at 923.
      Hunk #12 succeeded at 711 (offset 46 lines).
      Hunk #13 succeeded at 730 (offset 46 lines).
      Hunk #14 succeeded at 766 (offset 55 lines).
      Hunk #15 succeeded at 1428 (offset 87 lines).
      Hunk #16 succeeded at 1492 (offset 89 lines).
      Hunk #17 FAILED at 1541.
      Hunk #18 FAILED at 1551.
      Hunk #19 succeeded at 1571 (offset 91 lines).
      Hunk #20 succeeded at 1592 (offset 91 lines).
      Hunk #21 succeeded at 1601 (offset 91 lines).
      Hunk #22 succeeded at 1826 (offset 108 lines).
      Hunk #23 succeeded at 1843 (offset 108 lines).
      Hunk #24 succeeded at 1926 (offset 108 lines).
      Hunk #25 succeeded at 2118 (offset 108 lines).
      Hunk #26 succeeded at 2563 (offset 100 lines).
      Hunk #27 succeeded at 2611 with fuzz 1 (offset 102 lines).
      Hunk #28 succeeded at 2628 (offset 102 lines).
      8 out of 28 hunks FAILED -- saving rejects to file gc.c.rej
      patching file intern.h
      Hunk #1 succeeded at 268 (offset 15 lines).
    I also tried to use ruby-prof, but I always get the error "uninitialized constant RubyProf::Test". I don't know how to use the gem "memory", and neither "memprof" nor "bleak_house" could be installed successfully. If I get a patched Ruby running, I should be fine, but any other way to profile the memory of library classes is welcome. Thanks for helping!

    Read the article

  • Running out of memory while analyzing a Java Heap Dump

    - by Abel Morelos
    I have a curious problem: I need to analyze a Java heap dump (from an IBM JRE) which is 1.5 GB in size. The problem is that while analyzing the dump (I've tried HeapAnalyzer and the IBM Memory Analyzer 0.5) the tools run out of memory, so I can't really analyze the dump. I have 3 GB of RAM in my machine, but it seems that's not enough for the 1.5 GB dump. My question is: do you know a specific tool for heap dump analysis (supporting IBM JRE dumps) that I could run with the amount of memory I have? Thanks.

    Read the article

  • What are some of the best JavaScript memory leak detection tools?

    - by Philip Fourie
    Our team is faced with a slow but serious JavaScript memory leak. We have read up on the usual causes of memory leaks in JavaScript (e.g. closures and circular references) and tried to avoid those pitfalls in the code, but it is likely we still have unknown mistakes left. I have started searching for available tools, but would like input from people with actual experience of them. Some of the tools I have found so far (though I have no idea how good and useful they would be for our problem) are Sieve, Drip and the JavaScript Memory Leak Detector. Our search is not limited to free tools; free would be a bonus, but more importantly we need something that gets the job done. Our JavaScript code does the following: makes AJAX calls to a .NET WCF back-end that sends back JSON data, manipulates the DOM, and keeps a fairly sized object model in the JavaScript to store current state.

    Read the article

  • PL/SQL profiler missing data

    - by user289429
    We are using the PL/SQL profiler to collect metrics. We noticed that in one of our environments the plsql_profiler_runs table is populated with the total execution time, but the finer detail that gets collected in the plsql_profiler_data table is missing. Any idea why this would be happening? We do call dbms_profiler.flush_data() before stopping the profiler and have seen this work fine in another environment.

    Read the article

  • Access cost of dynamically created objects with dynamically allocated members

    - by user343547
    I'm building an application which will have dynamically allocated objects of type A, each with a dynamically allocated member (v), similar to the class below:
      class A { int a; int b; int* v; };
    where: the memory for v will be allocated in the constructor; v will be allocated once, when an object of type A is created, and will never need to be resized; and the size of v will vary across instances of A. The application will potentially have a huge number of such objects and mostly needs to stream a large number of them through the CPU, but only needs to perform very simple computations on the member variables. Could having v dynamically allocated mean that an instance of A and its member v are not located together in memory? What tools and techniques can be used to test whether this fragmentation is a performance bottleneck? If such fragmentation is a performance issue, are there any techniques that could allow A and v to be allocated in a contiguous region of memory? Or are there any techniques to aid memory access, such as a pre-fetching scheme, for example getting an object of type A and operating on the other member variables whilst pre-fetching v? If the size of v, or an acceptable maximum size, could be known at compile time, would replacing v with a fixed-size array like int v[max_length] lead to better performance? The target platforms are standard desktop machines with x86/AMD64 processors, Windows or Linux OSes, compiled using either GCC or MSVC.

    Read the article

  • Optimizing Python code with many attribute and dictionary lookups

    - by gotgenes
    I have written a program in Python which spends a large amount of time looking up attributes of objects and values from dictionary keys. I would like to know whether there is any way I can optimize these lookup times, potentially with a C extension, to reduce the time of execution, or whether I simply need to re-implement the program in a compiled language. The program implements some algorithms using a graph. It runs prohibitively slowly on our data sets, so I profiled the code with cProfile using a reduced data set that could actually complete. The vast majority of the time is being burned in one function, and specifically in two statements, generator expressions, within that function. The generator expression at line 202 is
      neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors
              if neighbor in selected_nodes)
    and the generator expression at line 204 is
      neighbor_z_scores = (interaction_graph.node[neighbor]['weight']
              for neighbor in neighbors_in_selected_nodes)
    selected_nodes is a set of nodes in interaction_graph, which is a NetworkX Graph instance. node_neighbors is an iterator from Graph.neighbors_iter(). Graph itself uses dictionaries for storing nodes and edges; its Graph.node attribute is a dictionary which stores nodes and their attributes (e.g., 'weight') in dictionaries belonging to each node. Each of these lookups should be amortized constant time (i.e., O(1)); however, I am still paying a large penalty for them. Is there some way in which I can speed up these lookups (e.g., by writing parts of this as a C extension), or do I need to move the program to a compiled language? Below is the full source code for the function that provides the context; the vast majority of execution time is spent within it.
      def calculate_node_z_prime(node, interaction_graph, selected_nodes):
          """Calculates a z'-score for a given node.

          The z'-score is based on the z-scores (weights) of the neighbors of
          the given node, and proportional to the z-score (weight) of the given
          node. Specifically, we find the maximum z-score of all neighbors of
          the given node that are also members of the given set of selected
          nodes, multiply this z-score by the z-score of the given node, and
          return this value as the z'-score for the given node.

          If the given node has no neighbors in the interaction graph, the
          z'-score is defined as zero.

          Returns the z'-score as zero or a positive floating point value.

          :Parameters:
          - `node`: the node for which to compute the z-prime score
          - `interaction_graph`: graph containing the gene-gene or gene
            product-gene product interactions
          - `selected_nodes`: a `set` of nodes fitting some criterion of
            interest (e.g., annotated with a term of interest)

          """
          node_neighbors = interaction_graph.neighbors_iter(node)
          neighbors_in_selected_nodes = (neighbor for neighbor in node_neighbors
                  if neighbor in selected_nodes)
          neighbor_z_scores = (interaction_graph.node[neighbor]['weight']
                  for neighbor in neighbors_in_selected_nodes)
          try:
              max_z_score = max(neighbor_z_scores)
          # max() throws a ValueError if its argument has no elements; in this
          # case, we need to set the max_z_score to zero
          except ValueError, e:
              # Check to make certain max() raised this error
              if 'max()' in e.args[0]:
                  max_z_score = 0
              else:
                  raise e
          z_prime = interaction_graph.node[node]['weight'] * max_z_score
          return z_prime
    Here are the top few calls according to cProfile, sorted by time:
      ncalls     tottime  percall  cumtime  percall  filename:lineno(function)
      156067701  352.313  0.000    642.072  0.000    bpln_contextual.py:204(<genexpr>)
      156067701  289.759  0.000    289.759  0.000    bpln_contextual.py:202(<genexpr>)
      13963893   174.047  0.000    816.119  0.000    {max}
      13963885   69.804   0.000    936.754  0.000    bpln_contextual.py:171(calculate_node_z_prime)
      7116883    61.982   0.000    61.982   0.000    {method 'update' of 'set' objects}
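    Since each per-neighbor iteration pays for an attribute lookup on interaction_graph, a dictionary lookup on .node, and a second dictionary lookup for 'weight', one common micro-optimization is to hoist the invariant lookups into local variables and fuse the two generator expressions into a single loop. Below is a minimal sketch of that idea, assuming the same NetworkX-style structures and non-negative weights as in the original function; it is an illustration to measure against, not a guaranteed fix.
      def calculate_node_z_prime_fast(node, interaction_graph, selected_nodes):
          """Same computation as calculate_node_z_prime, with hoisted lookups.

          interaction_graph.node is bound once to a local name, and the two
          generator expressions are fused into one loop so each neighbor is
          handled in a single pass. Assumes weights are non-negative (as the
          original docstring implies), so 0 is a safe initial maximum.
          """
          node_attrs = interaction_graph.node      # hoist the attribute lookup
          max_z_score = 0
          for neighbor in interaction_graph.neighbors_iter(node):
              if neighbor in selected_nodes:
                  weight = node_attrs[neighbor]['weight']
                  if weight > max_z_score:
                      max_z_score = weight
          return node_attrs[node]['weight'] * max_z_score
    Profiling both versions on the reduced data set would show whether the generator-expression overhead or the repeated lookups dominate.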

    Read the article

  • Add properties to users in Google App Engine

    - by juanefren
    What is the best way to save a user profile with Google App Engine (Python)? What I did to solve this is create another model with a UserProperty, but to get the profile for a user I have to do something like this:
      if user:
          profile = Profile.all().filter('user =', user).fetch(1)
          if profile:
              property = s.get().property
    Any ideas?
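    A minimal sketch of a common alternative, assuming the old db API: key the Profile entity on the user's stable user_id() so the lookup becomes a direct get (or get_or_insert) by key instead of a query over a UserProperty. The Profile fields and the key-name prefix below are only illustrative.
      from google.appengine.api import users
      from google.appengine.ext import db

      class Profile(db.Model):
          """Per-user profile data; the fields here are placeholders."""
          user = db.UserProperty(required=True)
          nickname = db.StringProperty()

      def get_profile(user):
          """Return the profile for `user`, creating it on first access."""
          if user is None:
              return None
          # Keyed on the stable user id, so this is a get by key, not a query.
          return Profile.get_or_insert('user:%s' % user.user_id(), user=user)

      # Usage in a request handler:
      #     profile = get_profile(users.get_current_user())
      #     if profile:
      #         nickname = profile.nickname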

    Read the article

  • How to monitor MySQL query errors, timeouts and logon attempts?

    - by Abel
    While setting up a third-party closed-source CMS (Sitefinity), the setup doesn't create all the tables and procedures necessary to run it. The software lacks a logging system of its own, which made me wonder: could I trace and monitor failing SQL statements from MySQL itself? This serves more than just solving my issue with Sitefinity; more often I wonder what is being sent to the MySQL server, without wanting to dive into the software products or set up a debugging environment, etc. I tried JetProfiler (performance only) and looked through a few others, but although they monitor a lot, they don't monitor query failures, timeouts or logon attempts. Does anyone know a profiler, tracer or monitoring tool, commercial or free, that can show me this information?

    Read the article
