Search Results

Search found 13403 results on 537 pages for 'epm performance tuning'.

  • Bullet Physics, when to choose which DynamicsWorld?

    - by Sqeaky
    I have a few general questions about the Bullet physics library. Here is my current understanding in a nutshell:
    btDiscreteDynamicsWorld - the simplest physics world; it only handles rigid bodies, and maybe it has better performance.
    btSoftRigidDynamicsWorld - the only physics world that can work with large jello moulds (i.e. soft bodies).
    btContinuousDynamicsWorld - if you have really fast objects, this will prevent them from penetrating each other or flying through each other, but it is otherwise like a btDiscreteDynamicsWorld.
    Is my understanding of the btDiscreteDynamicsWorld, btContinuousDynamicsWorld and btSoftRigidDynamicsWorld classes correct in terms of functionality, purpose, and performance? Why does the user manual recommend the btDiscreteDynamicsWorld class? btSoftRigidDynamicsWorld appears to be the only world that can handle soft bodies, so what if we wanted continuous physics integration and soft bodies? How fast is fast enough to consider using a btContinuousDynamicsWorld, and what are the drawbacks of using one? Edit: my buddy Mako also posted this question on the Bullet forums: http://www.bulletphysics.org/Bullet/phpBB3/viewtopic.php?f=9&t=4863

  • Face recognition Library

    - by Janusz
    I'm looking for a free face recognition library for a university project. I'm not looking for face detection; I'm looking for actual recognition, meaning finding images that contain specified faces, or libraries that calculate distances between specific faces. I'm currently using OpenCV for detecting the faces and a rough Eigenfaces algorithm for the recognition. But I thought there should be something out there with better performance than a self-written Eigenfaces algorithm. I don't mean speed when I say performance; I'm looking for a library with better results than a simple Eigenfaces approach. I took a look at faint, but it seems the library is not very reusable in my own applications. I'm happy with a library in Python, Java, C++, C or something like that. The best thing would be if it can run on a Windows machine.

  • CPU consumption of my process

    - by Abruzzo Forte e Gentile
    Hi all, I would like to use Performance Monitor to check the CPU consumption of my process. I am working on a multi-core machine. If I look at my process in Task Manager, I see that it consumes 20% of the CPU. If I start Performance Monitor and select Process → % Processor Time, I see values peaking at over 100%. Do you know why, and how to get the real measure? I also looked at the CPU consumption for each of my 4 cores, but I don't know exactly how to attribute that consumption to my process. If you can suggest a link or URL about how to read CPU usage, I would really appreciate it! Thanks a lot! AFG
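
    (For context on the counter itself: perfmon's per-process "% Processor Time" is summed across all cores, so on a 4-core machine it can read up to 400%, while Task Manager divides by the number of logical processors. A minimal Java sketch of that relationship, measuring its own JVM process; getProcessCpuLoad() needs JDK 7+ and the com.sun.management extension, and its first reading may be negative while the sampler warms up:)

        import java.lang.management.ManagementFactory;

        public class CpuSampler {
            public static void main(String[] args) throws InterruptedException {
                com.sun.management.OperatingSystemMXBean os =
                        (com.sun.management.OperatingSystemMXBean)
                                ManagementFactory.getOperatingSystemMXBean();
                int cores = Runtime.getRuntime().availableProcessors();
                for (int i = 0; i < 10; i++) {
                    Thread.sleep(1000);
                    // getProcessCpuLoad() is normalized to total machine capacity
                    // (0.0-1.0), i.e. the Task-Manager-style figure; multiplying by
                    // the core count gives the perfmon-style value that can exceed 100%.
                    double taskMgrStyle = os.getProcessCpuLoad() * 100.0;
                    System.out.printf("Task Manager style: %.1f%%, perfmon style: %.1f%%%n",
                            taskMgrStyle, taskMgrStyle * cores);
                }
            }
        }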

  • C or C++ to write a compiler?

    - by H.Josef
    I want to write a compiler for a custom markup language. I want optimum performance, and I also want a good, scalable design. A multi-paradigm language like C++ is more suitable for implementing modern design patterns, but I think that will degrade performance a little (think of RTTI, for example), which might make C a better choice. I wonder what the best language is (C, C++, or even Objective-C) if someone wants to create a modern compiler (in the sense of complying with modern software engineering principles as a piece of software) that is fast, efficient, and well designed.

  • Managed language for scientific computing software

    - by heisen
    Scientific computing is algorithm intensive and can also be data intensive. It often needs a lot of memory to run an analysis, and must release it before continuing with the next. Sometimes it also uses a memory pool to recycle memory between analyses. A managed language is interesting here because it allows the developer to concentrate on the application logic. Since it might need to deal with huge datasets, performance is important too. But how can we control memory and performance with a managed language?
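
    (One common way to keep control of memory in a managed language, as hinted at above, is to pool and recycle the large buffers between analyses so they stay out of the garbage collector's way. A minimal Java sketch; the class and buffer sizes are illustrative, not from any particular framework:)

        import java.util.ArrayDeque;

        // A minimal buffer pool: analyses borrow large arrays and return them,
        // so each run reuses memory instead of stressing the garbage collector.
        public final class BufferPool {
            private final ArrayDeque<double[]> free = new ArrayDeque<>();
            private final int bufferSize;

            public BufferPool(int bufferSize) { this.bufferSize = bufferSize; }

            public double[] acquire() {
                double[] buf = free.poll();
                return (buf != null) ? buf : new double[bufferSize];
            }

            public void release(double[] buf) {
                if (buf.length == bufferSize) free.push(buf); // recycle for the next analysis
            }
        }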

  • Server-side image processing

    - by spol
    I am designing a web application that does server-side image processing in real time. Processing tasks include applying different effects such as grayscale, blur, oil paint, and pencil sketch to images in various formats. I want to build it using Java/servlets, which I am already familiar with. I found 3 options: 1) use pure Java imaging libraries like java.awt or http://www.jhlabs.com/ip/index.html; 2) use command line tools like GIMP/ImageMagick; 3) use C/C++ image libraries that have Java bindings. I don't know which of the above options is good, keeping performance in mind. It looks like options 2) and 3) are good performance-wise, but I want to be sure before I rule out 1). I have also heard GIMP cannot be run from the command line unless GTK or X Windows is already installed on the server. Will there be any such problems with 2) or 3) when running them server-side? Also, please suggest any good image processing libraries for this purpose.
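
    (As a reference point for option 1, a grayscale conversion can be done in pure java.awt with no native tools or X server on the host. A minimal sketch, file names hypothetical:)

        import java.awt.color.ColorSpace;
        import java.awt.image.BufferedImage;
        import java.awt.image.ColorConvertOp;
        import java.io.File;
        import javax.imageio.ImageIO;

        public class GrayscaleDemo {
            public static void main(String[] args) throws Exception {
                BufferedImage src = ImageIO.read(new File("in.jpg"));
                // ColorConvertOp does the colour-space conversion entirely in Java.
                ColorConvertOp toGray =
                        new ColorConvertOp(ColorSpace.getInstance(ColorSpace.CS_GRAY), null);
                BufferedImage gray = toGray.filter(src, null);
                ImageIO.write(gray, "png", new File("out.png"));
            }
        }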

  • OutOfMemoryException - out of ideas

    - by Captain Comic
    Hi, I have a .NET Windows service that is constantly throwing OutOfMemoryException. The service has two builds, for x86 and x64 Windows, but on x64 it consumes a lot more memory. I have tried profiling it with various memory profilers, but I cannot get a clue what the problem is. The diagnosis so far: the service consumes a lot of VM Size. I also tried looking at performance counters (perfmon.exe). What I can see is that the heap size is growing and % Time in GC is 19%. My application has threads and locking objects, DB connections and a WCF interface. My service is the first app in the list in this screenshot of the performance counters view: http://s006.radikal.ru/i215/1003/0b/ddb3d6c80809.jpg

  • Compile AMR-NB codec with RVCT for WinCE/Windows Mobile

    - by pps
    Hello everybody, I'm working on the AMR speech codec (porting/optimization). I have an ARM-optimized version for WinCE from VoiceAge and I use it as a reference in performance testing. So far, the binary produced with my lib beats the other one by around 20-30%! I use VS2008, and I have limited access to the ARM instruction set I can generate with the Microsoft compiler. So I tried to look for an alternative compiler to see what the performance difference would be. I have the RVCT compiler, but it produces ELF binaries/object files. However, I run my test on a WinCE mobile phone (TyTn 2), so I need to find a way to run code compiled with RVCT on WinCE. Some of the options are:
    1) produce an assembly listing (the -S option of armcc) and try to assemble it with some other assembler that can create COFF (the MS assembler for ARM);
    2) compile, then convert the generated ELF object file to a COFF object (it seems like objcopy from GNU binutils could help me with that);
    3) create a BIN file using the fromelf utility supplied with RVCT and somehow try to mangle the bits so I can execute them ;)
    My first attempt was to create a simple C++ file with one exported function, compile it with RVCT, and then try to run that function on the smartphone. The emitted assembly cannot be assembled by the MS assembler (not only are the two incompatible, but the MS assembler also rejects some of the instructions generated by the RVCT compiler; the ASR opcode in my case). Then I tried to convert the ELF object to COFF format, and I can't find any information on that. There is a GCC port for CE, and objcopy from that toolset is supposed to be able to do the task. However, I can't get it working. I tried different switches, but I have no idea what exactly I need to specify as the bfdname for the input and output formats, so I couldn't get that working either. Dumping with fromelf and using the generated BIN file seems to be overkill, so I decided to ask you guys if there is anything else I should try, or maybe someone has already done a similar task and could help me. Basically, all I want to do is compile my code with the RVCT compiler and see what the performance difference is. My code has zero dependencies on any C runtime functions. Thanks!

  • C++ project type: unicode vs multi-byte; pros and cons

    - by Stefan Valianu
    I'm wondering what the Stack Overflow community thinks when it comes to creating a project (thinking primarily of C++ here) with a Unicode or a multi-byte character set. Are there pros to going Unicode straight from the start, implying all your strings will be in wide format? Are there performance issues or larger memory requirements because of the standard use of a larger character type? Is there an advantage to this method? Do some processor architectures handle wide characters better? Are there any reasons to make your project Unicode if you don't plan on supporting additional languages? What reasons would one have for creating a project with a multi-byte character set? How do all of the factors above collide in a high-performance environment (such as a modern video game)?

  • UITableViewCell with selectable/copyable text that also detects URLs on the iPhone

    - by Jasarien
    Hi guys, I have a problem. Part of my app requires text to be shown in a table. The text needs to be selectable/copyable (but not editable), and any URLs within the text need to be highlighted and, when tapped, allow me to take that URL and open my embedded browser. I have seen a couple of solutions that solve one or other of these problems, but not both.
    Solution 1: Icon Factory's IFTweetLabel. The first solution I tried was to use the IFTweetLabel class made possible by Icon Factory and used in Twitterrific. While this solution allows links (or anything you can find with a regex) to be detected and handled on a case-by-case basis, it doesn't allow for selecting and copying. There is also an issue where, if a URL is long enough to be wrapped, the button that the class overlays above the URL to make it interactive cannot wrap and draws off screen, looking very odd.
    Solution 2: Use IFTweetLabel and handle copy manually. The second thing I tried was to keep IFTweetLabel in place to handle the links, but to implement the copying using a long-tap gesture, like how the SMS app handles it. This was just about working, but it doesn't allow for arbitrary selection of text; either the whole text is copied, or none is copied at all... pretty black and white.
    Solution 3: UITextView. My third attempt was to add a UITextView as a subview of the table cell. The only thing this doesn't solve is the fact that detected URLs cannot be handled by me. The text view uses UIApplication's openURL: method, which quits my app and launches Safari. Also, as the table view can get quite large, the UITextViews added as subviews cause a noticeable performance drag when scrolling through the table, especially on iPhone 3G era devices (because of the creation, layout, and compositing whenever a cell is scrolled on screen, etc).
    So my question to all you knowledgeable folk out there is: what can I do? Would a UIWebView be the best option? Aside from a performance drag, I think a web view would solve all the above issues, and if I remember correctly, back in the 2.0 days the Apple documentation actually recommended web views where text formatting/hyperlinks were required. Can anyone think of a way to achieve this without a performance drag? Many thanks in advance to everyone who can help.

  • Large number array compression

    - by gatapia
    Hi all, I've got a JavaScript application that sends a large amount of numerical data down the wire. This data is then stored in a database. I am having size issues (too much bandwidth, database getting too big). I am now ready to sacrifice some performance for compression. I was thinking of implementing a base-62 number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead and do this I thought I would put it to the folks here, as I know there must be some outside-the-box solution I have not considered. The basic specs are:
    - Compress large number arrays into strings for JSONP transfer (so I think UTF is out)
    - Be relatively fast - look, I'm not expecting the same performance as I have now, but I also don't want gzip compression either.
    Any ideas would be greatly appreciated. Thanks, Guido Tapia
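
    (One detail worth noting: JavaScript's built-in toString and parseInt only accept a radix between 2 and 36, so base 62 needs a hand-rolled alphabet either way. A minimal sketch of that encoding - written in Java here, but the same two loops translate directly to JavaScript; the alphabet order is a free choice as long as encoder and decoder agree:)

        public final class Base62 {
            private static final String ALPHABET =
                    "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

            // Encode a non-negative number as a base-62 string.
            public static String encode(long n) {
                if (n == 0) return "0";
                StringBuilder sb = new StringBuilder();
                while (n > 0) {
                    sb.append(ALPHABET.charAt((int) (n % 62)));
                    n /= 62;
                }
                return sb.reverse().toString();
            }

            // Decode a base-62 string back into a number.
            public static long decode(String s) {
                long n = 0;
                for (int i = 0; i < s.length(); i++) {
                    n = n * 62 + ALPHABET.indexOf(s.charAt(i));
                }
                return n;
            }
        }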

  • Keeping files or database records? Java and Python

    - by danpalmer
    My website will use a neural network to predict things based on user data. The user can select the data to be used in training the network and then use their trained network to predict things. I am using a framework to create, train and query the networks. This uses Java. The framework has persistence for saving a network to an XML file. What is the best way to store these files? I can see several potential ideas, but I need help choosing which is best:
    1. Save each network to a separate XML file, with a name that is stored in the database. Load this file each time.
    2. Save all the networks to the same XML file, with each network having a different name that is stored in the database.
    3. Somehow pass what would normally be written to an XML file to the Django site for writing to the database. This would need to be returned to the Java code when a prediction needs to be made.
    I am able to do 1 or 2, but I think their performance will be quite limited, and I am on shared hosting at the moment, so I don't know how pleased they would be with thousands of files. Also, after adding a few thousand records to one XML file, I noticed a massive performance hit when saving to it. If I were able to implement version 3 somehow, I think it would be best: no issues with separate processes accessing the database, I think performance would be better, and there would be no files lying around. However, the part of the neural network framework I am using (Encog) that saves to a file needs access to a Java File object, not a string that could be saved to a database. Unless there is some Java magic I can do here (I know very little Java), the only way I can see of doing this would be with temporary files, but I don't know if this is the correct way to do it. I would appreciate any ideas on the best way to implement any of the above 3 ideas, or any alternatives. Thanks!
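
    (For what it's worth, the temporary-file route for idea 3 is only a few lines; saveNetworkToFile below is a hypothetical stand-in for whatever file-based save call the framework actually exposes, and if the framework can write to an OutputStream instead, the temp file can be skipped entirely:)

        import java.io.File;
        import java.nio.file.Files;

        public class NetworkXmlStore {
            // Hypothetical stand-in for the framework's file-based persistence call.
            static void saveNetworkToFile(Object network, File target) { /* framework call */ }

            // Round-trip the framework's XML through a temp file so the bytes can be
            // stored in a database BLOB/TEXT column instead of on disk.
            public static byte[] networkToBytes(Object network) throws Exception {
                File tmp = File.createTempFile("network", ".xml");
                try {
                    saveNetworkToFile(network, tmp);
                    return Files.readAllBytes(tmp.toPath());
                } finally {
                    tmp.delete();
                }
            }
        }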

  • Improve mysql JDBC insert call

    - by richs
    I have a legacy Java system that, every time it gets an order, makes a JDBC call to a stored procedure for each field in the order. Generally the stored procedure gets called 20 to 30 times for each order, and it is just doing an insert into a table for each field. I need to improve the performance of this operation. One thought I had was to create an insert query string that does multiple inserts in one JDBC call. MySQL supports a multiple-insert statement:
    INSERT INTO PersonAge (name, age) VALUES ('Helen', 24), ('Katrina', 21), ('Samia', 22), ('Hui Ling', 25), ('Yumie', 29)
    This has the advantage of only requiring one JDBC call per order. Any other ideas on how to improve performance?
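
    (A closely related option is JDBC batching with a PreparedStatement: with MySQL Connector/J, adding rewriteBatchedStatements=true to the URL lets the driver collapse the batch into a single multi-row INSERT on the wire. A minimal sketch; connection details and table are illustrative:)

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class BatchInsertDemo {
            public static void main(String[] args) throws Exception {
                String url = "jdbc:mysql://localhost/orders?rewriteBatchedStatements=true";
                try (Connection con = DriverManager.getConnection(url, "user", "pass");
                     PreparedStatement ps = con.prepareStatement(
                             "INSERT INTO PersonAge (name, age) VALUES (?, ?)")) {
                    String[] names = {"Helen", "Katrina", "Samia"};
                    int[] ages = {24, 21, 22};
                    for (int i = 0; i < names.length; i++) {
                        ps.setString(1, names[i]);
                        ps.setInt(2, ages[i]);
                        ps.addBatch();     // queue the row instead of executing it now
                    }
                    ps.executeBatch();     // one round trip for the whole batch
                }
            }
        }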

  • Better to build or buy a compute grid platform?

    - by James B
    I am looking to do some quite processor-intensive brute-force processing for string matching. I have run my prototype in a multi-threaded environment and compared the performance to an implementation using GridGain with a couple of nodes (also multithreaded). The performance I observed was that my GridGain implementation performed slower than my multithreaded implementation. It could be that there was a flaw in my GridGain implementation, but it was only a prototype and I thought the results were indicative. So my question is this: what are the advantages of having to learn and then build an implementation for a particular grid platform (Hadoop, GridGain, or EC2 if going hosted - other suggestions welcome), when one could fairly easily put together a lightweight compute grid platform with a much shallower learning curve? I.e., what do we get for free with these cloud/grid platforms that is worth having or tricky to implement? (Please note, I don't have any need for a data grid.) Cheers, James (p.s. happy to make this community wiki if need be)

  • Eager loading vs. many queries with PHP, SQLite

    - by Mike
    I have an application that has an n+1 query problem, but when I implemented a way to load the data eagerly, I found absolutely no performance gain. I use an identity map, so objects are only created once. Here's a benchmark with ~3000 objects.
    Without eager loading:
    - first query + first object creation: 0.00636100769043 sec, memory usage: 190008 bytes
    - iterate through all objects (queries + object creation): 1.98003697395 sec, memory usage: 7717116 bytes
    With eager loading:
    - query: 0.0881109237671 sec, memory usage: 6948004 bytes
    - object creation: 1.91053009033 sec, memory usage: 12650368 bytes
    - iterate through all objects: 1.96605396271 sec, memory usage: 12686836 bytes
    So my questions are: Is SQLite just magically lightning fast when it comes to small queries? (I'm used to working with MySQL.) Does this just seem wrong to anyone? Shouldn't eager loading have given much better performance?
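
    (For comparison, the shape of an eager load is a single JOIN in place of one query per parent row. The question is PHP, but a sketch in Java shows the idea, assuming the xerial sqlite-jdbc driver on the classpath and a hypothetical posts/comments schema:)

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class EagerLoadDemo {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection("jdbc:sqlite:sample.db");
                     Statement st = con.createStatement();
                     // One statement replaces the n+1 pattern of one query per post.
                     // The per-query overhead disappears, but the cost of hydrating
                     // ~3000 objects remains - consistent with the benchmark above
                     // being dominated by object creation rather than by the queries.
                     ResultSet rs = st.executeQuery(
                             "SELECT p.id, p.title, c.id, c.body " +
                             "FROM posts p LEFT JOIN comments c ON c.post_id = p.id")) {
                    while (rs.next()) {
                        // hydrate objects here; the identity map keeps them unique
                    }
                }
            }
        }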

  • To store images from UIGetScreenImage() in an NSMutableArray

    - by sujyanarayan
    Hi, I'm getting images from UIGetScreenImage() and storing them directly in a mutable array, like:
    image = [UIImage imageWithScreenContents]; [array addObject:image]; [image release];
    I've put this code in a timer, so I can't use UIImagePNGRepresentation() to store the images as NSData, as it reduces performance. I want to use this array directly after some time, i.e. after capturing 1000 images in 100 seconds. When I use the code below:
    UIImage *im = [[UIImage alloc] init]; im = [array objectAtIndex:i]; UIImageWriteToSavedPhotosAlbum(im, nil, nil, nil);
    the application crashes. And I don't want to use UIImagePNGRepresentation() or UIImageJPEGRepresentation() in the timer, as it reduces performance. My problem is how to use this array so that it is converted back into images. If anybody has an idea related to this, please share it with me. Thanks in advance.

  • Extracting data from multiple servers SQL 2005 SSIS

    - by Raj
    I have created an SSIS package to connect to multiple SQL Servers and create a database, a table and a stored procedure on each. The package also creates a job and schedules it to run every 5 minutes. The requirement is to collect performance metrics. I am using an ADO object variable to get the server names; all the above tasks are in a Foreach Loop, and everything works fine. Now the problem: I need to create a Data Flow task which will connect to each of these servers in turn, copy the performance metrics data over to a central server, and purge the source table. I am unable to get this task to work; it fails with an "Unable to obtain Connection" error. Any help will be greatly appreciated. SQL Server version: 2005. Thanks, Raj

  • Adjacency List Tree Using Recursive WITH (Postgres 8.4) instead of Nested Set

    - by Koobz
    I'm looking for a Django tree library and doing my best to avoid nested sets (they're a nightmare to maintain). The con of the adjacency list model has always been the inability to fetch descendants without resorting to multiple queries. The recursive WITH clause in Postgres 8.4 seems like a solid solution to this problem. Has anyone seen any performance reports comparing WITH vs. nested sets? I assume the nested set will still be faster, but as long as they're in the same complexity class, I could swallow a 2x performance discrepancy. django-treebeard interests me. Does anyone know if it implements the WITH clause when running under Postgres? Has anyone here made the switch away from nested sets in light of the WITH clause?
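
    (For anyone who hasn't seen the feature, WITH RECURSIVE in Postgres 8.4 fetches an entire subtree from an adjacency list in one query. A sketch via JDBC; the node table and connection details are hypothetical:)

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;

        public class SubtreeDemo {
            public static void main(String[] args) throws Exception {
                // Adjacency-list table assumed: node(id, parent_id, name).
                String sql =
                        "WITH RECURSIVE subtree AS ( " +
                        "  SELECT id, parent_id, name FROM node WHERE id = ? " +
                        "  UNION ALL " +
                        "  SELECT n.id, n.parent_id, n.name " +
                        "  FROM node n JOIN subtree s ON n.parent_id = s.id " +
                        ") SELECT id, name FROM subtree";
                try (Connection con = DriverManager.getConnection(
                             "jdbc:postgresql://localhost/treedb", "user", "pass");
                     PreparedStatement ps = con.prepareStatement(sql)) {
                    ps.setInt(1, 42);  // root of the subtree to fetch
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            System.out.println(rs.getInt(1) + " " + rs.getString(2));
                        }
                    }
                }
            }
        }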

  • What Simple Changes Made the Biggest Improvements to Your Delphi Programs

    - by lkessler
    I have a Delphi 2009 program that handles a lot of data and needs to be as fast as possible without using too much memory. What small, simple changes have you made to your Delphi code that had the biggest impact on the performance of your program, by noticeably reducing execution time or memory use? Thanks, everyone, for all your answers - many great tips. For completeness, I'll post a few important articles on Delphi optimization that I found: Before you start optimizing Delphi code, at About.com; Speed and Size: Top 10 Tricks, also at About.com; and Code Optimization Fundamentals and Delphi Optimization Guidelines at High Performance Delphi (relating to Delphi 7 but still very pertinent).

  • Optimizing a large iteration of PHP objects (EAV-based)

    - by Aron Rotteveel
    I am currently working on a project that utilizes the EAV model. This turns out to work quite well, but like many others I am now stumbling upon some performance issues. The data set in this particular case consists of approximately 2500 entities, each with approx. 150 attributes. Each entity and each attribute is represented by a PHP object. Since most parts of the application only iterate through a filtered set of entities, we have not had very large issues yet. Now, however, I am working on an algorithm that requires iteration over the entire dataset, which causes a major impact on performance. This information is perhaps not very much to work with, but since this is an architectural problem, I am hoping for an architectural pattern to help me on the way. Each entity, including its attributes, takes up approx. 500KB of memory.

  • Silverlight for .NET/CLR-based numerical computing on OS X

    - by Jonathan Shore
    I'm interested in using F# for numerical work, but my platforms are not Windows-based. Mono still has a significant performance penalty for programs that generate a significant number of short-lived objects (as would be typical for functional languages). Silverlight is available on OS X. I have seen some references indicating that assemblies compiled in the usual way cannot be referenced, but I am not clear on the details. I'm not interested in UIs, but I am wondering whether I could use the VM bundled with Silverlight effectively for execution. I would want to be able to reference a large library of numerical models I already have in Java (cross-compiled via IKVM to .NET assemblies) and a new codebase written in F#. My hope is that the Silverlight VM on OS X has good performance and can reference external assemblies and native libraries. Is this doable?

  • DDD: Client-side script to enforce invariants

    - by Mosh
    Hello, one thing that I'm confused about with regard to DDD is that our domain is supposed to handle all business logic and enforce invariants. I have noticed that some people (me included) handle certain invariants in the presentation layer (i.e. WebForms, Views, etc.) with JavaScript. This is mainly done to improve performance, so the server is not hit for every request that may be invalid. Even though this approach may be beneficial performance-wise, it violates DDD principles. What if the business rules change? This way we don't have a rich domain where all the business rules are captured; in case of a change, we would have to change the domain as well as the presentation layer. Has anyone come across this situation before? I'd like to know your thoughts on this. Cheers, Mosh

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third-party web sites. This is its structure:
    id, site_id, unixtime, unixtime_last, ip_address, uid
    There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid. There are many different ways that we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as to determine whether this is a new visitor or a returning visitor. Obviously, storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient? I don't really understand B-trees beyond some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? Because of this I considered making site_id the second column of the index for both ip_address and uid, but I think that would make the index less efficient, since the IP and UID are going to vary more than the site ID will: we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis. I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine whether this is a new visitor to this site_id or not. The query would be something like:
    select id from sessions where uid = 'value' and site_id = 123 limit 1
    ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't necessarily be super fast, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID. Any insight on making this as efficient as possible would be appreciated :)
    Update: this is a MyISAM table with MySQL 5.0. My concerns are with both performance and storage space. This table is both read- and write-heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse not to care about the database design. I want the database to be as efficient as possible.
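
    (For illustration, the new-or-returning check described above against a composite index. Either column order can serve the two-column equality lookup, and putting uid first additionally lets uid-only lookups use the same index; all names here are hypothetical:)

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class VisitorLookup {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                             "jdbc:mysql://localhost/analytics", "user", "pass")) {
                    try (Statement st = con.createStatement()) {
                        // One-time DDL: uid leads, so the same index also serves
                        // lookups on uid alone, across all sites.
                        st.execute("CREATE INDEX idx_uid_site ON sessions (uid, site_id)");
                    }
                    try (PreparedStatement ps = con.prepareStatement(
                            "SELECT id FROM sessions WHERE uid = ? AND site_id = ? LIMIT 1")) {
                        ps.setString(1, "some-cookie-value");
                        ps.setInt(2, 123);
                        try (ResultSet rs = ps.executeQuery()) {
                            System.out.println(rs.next() ? "returning visitor" : "new visitor");
                        }
                    }
                }
            }
        }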

  • Difference between Apache Tapestry and Apache Wicket

    - by Stephan Schmidt
    Apache Wicket ( http://wicket.apache.org/ ) and Apache Tapestry ( http://tapestry.apache.org/ ) are both component-oriented web frameworks - contrary to action-based frameworks like Stripes - from the Apache Foundation. Both allow you to build your application from components in Java, and they both look very similar to me. What are the differences between these two frameworks? Does anyone have experience with both? Specifically: how is their performance, how much can state handling be customized, and can they be used stateless? What is the difference in their component models? What would you choose for which applications? How do they integrate with Guice, Spring, and JSR 299? Edit: I have read the documentation for both and I have used both. The questions cannot be answered sufficiently from reading the documentation, but rather from the experience of using them for some time, e.g. how to use Wicket in a stateless mode for high-performance sites. Thanks.
