Search Results

Search found 23103 results on 925 pages for 'performance issues and ha'.


  • Large number array compression

    - by gatapia
    Hi all, I've got a JavaScript application that sends a large amount of numerical data down the wire, and this data is then stored in a database. I am having size issues (too much bandwidth, and the database is getting too big), so I am now ready to sacrifice some performance for compression. I was thinking of implementing base-62 encoding, along the lines of number.toString(62) and parseInt(compressed, 62). This would certainly reduce the size of the data, but before I go ahead and do it I thought I would put it to the folks here, as there may be some outside-the-box solution I have not considered. The basic specs are:
    - Compress large number arrays into strings suitable for JSONP transfer (so I think UTF is out).
    - Be relatively fast. I'm not expecting the same performance I have now, but I don't want anything as heavy as gzip compression either.
    Any ideas would be greatly appreciated. Thanks, Guido Tapia
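
    One note before rolling a custom codec: JavaScript's built-in number.toString(radix) and parseInt(string, radix) only accept radices up to 36, so true base 62 has to be hand-rolled. A minimal sketch of what that might look like (the alphabet and function names are illustrative, not from the post):

        // Base-62 digits: 0-9, A-Z, a-z.
        var BASE62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

        // Encode a non-negative integer as a base-62 string.
        function encode62(n) {
          var s = "";
          do {
            s = BASE62.charAt(n % 62) + s;
            n = Math.floor(n / 62);
          } while (n > 0);
          return s;
        }

        // Decode a base-62 string back into an integer.
        function decode62(s) {
          var n = 0;
          for (var i = 0; i < s.length; i++) {
            n = n * 62 + BASE62.indexOf(s.charAt(i));
          }
          return n;
        }

        // An array can then be joined for transfer:
        // [123456, 789012].map(encode62).join(",")  ->  "W7E,3JG0"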

    Read the article

  • Keeping files or database records? Java and Python

    - by danpalmer
    My website will use a neural network to predict things based on user data. The user can select the data to be used in training the network and then use the trained network to make predictions. I am using a framework (Encog) to create, train and query the networks; it runs on Java and can persist a network by saving it to an XML file. What is the best way to store these files? I can see several potential approaches, but I need help choosing the best one:
    1. Save each network to a separate XML file whose name is stored in the database, and load that file each time.
    2. Save all the networks to one XML file, with each network having a distinct name that is stored in the database.
    3. Somehow pass what would normally be written to an XML file to the Django site for writing to the database, and hand it back to the Java code whenever a prediction needs to be made.
    I am able to do 1 or 2, but I think their performance will be quite limited, and I am on shared hosting at the moment, so I don't know how pleased the host would be with thousands of files. Also, after adding a few thousand records to one XML file, I noticed a massive performance hit when saving to it. If I could implement option 3 somehow, I think it would be best: no issues with separate processes accessing the database, probably better performance, and no files lying around. However, the part of Encog that saves to a file needs access to a Java File object, not a string that could be saved to a database. Unless there is some Java magic I can do here (I know very little Java), the only way I can see of doing this is with temporary files, but I don't know if that is the correct way to do it. I would appreciate any ideas on the best way to implement any of the above three ideas, or any alternatives. Thanks!
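
    Option 3 can indeed be bridged with a temporary file whenever a framework insists on a File-based API. A rough Java sketch of the idea - the saveNetwork/loadNetwork placeholders stand in for whatever persistence calls the framework actually exposes:

        import java.io.File;
        import java.nio.file.Files;

        public class NetworkDbBridge {
            // Serialize a network to XML via a temp file, returning the XML
            // as a String that can be stored in a database column.
            public static String toXml(Object network) throws Exception {
                File tmp = File.createTempFile("network", ".xml");
                try {
                    saveNetwork(network, tmp);  // hypothetical framework call
                    return new String(Files.readAllBytes(tmp.toPath()), "UTF-8");
                } finally {
                    tmp.delete();               // no files left lying around
                }
            }

            // Rebuild a network from XML previously stored in the database.
            public static Object fromXml(String xml) throws Exception {
                File tmp = File.createTempFile("network", ".xml");
                try {
                    Files.write(tmp.toPath(), xml.getBytes("UTF-8"));
                    return loadNetwork(tmp);    // hypothetical framework call
                } finally {
                    tmp.delete();
                }
            }

            // Placeholders for the framework's real save/load methods.
            private static void saveNetwork(Object network, File f) { }
            private static Object loadNetwork(File f) { return null; }
        }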

    Read the article

  • Improve MySQL JDBC insert call

    - by richs
    I have a legacy Java system that, every time it gets an order, makes a JDBC call to a stored procedure for each field in the order. The stored procedure just inserts a row into a table per field, so it generally gets called 20 to 30 times for each order, and I need to improve the performance of this operation. One thought I had was to build an insert string that does multiple inserts in one JDBC call, since MySQL supports a multi-row insert:

        INSERT INTO PersonAge (name, age)
        VALUES ('Helen', 24), ('Katrina', 21), ('Samia', 22), ('Hui Ling', 25), ('Yumie', 29)

    This has the advantage of requiring only one JDBC call per order. Any other ideas on how to improve performance?
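
    A closely related option is JDBC batching with a PreparedStatement; with MySQL Connector/J, adding rewriteBatchedStatements=true to the connection URL makes the driver rewrite the batch into a multi-row INSERT much like the hand-built string above. A sketch, assuming the simple two-column table from the example (URL and credentials are hypothetical):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class BatchInsert {
            public static void main(String[] args) throws Exception {
                // The flag lets Connector/J collapse the batch into one
                // multi-row INSERT on the wire.
                Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost/orders?rewriteBatchedStatements=true",
                        "user", "password");
                Object[][] rows = { { "Helen", 24 }, { "Katrina", 21 }, { "Samia", 22 } };
                PreparedStatement ps = con.prepareStatement(
                        "INSERT INTO PersonAge (name, age) VALUES (?, ?)");
                try {
                    for (Object[] row : rows) {
                        ps.setString(1, (String) row[0]);
                        ps.setInt(2, (Integer) row[1]);
                        ps.addBatch();      // queue the row instead of executing it
                    }
                    ps.executeBatch();      // one round trip for the whole order
                } finally {
                    ps.close();
                    con.close();
                }
            }
        }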

    Read the article

  • Using VB6 + WSH with Windows Compression

    - by OneNerd
    Having trouble with WSH and Windows compression. My goal is to be able to zip up files (not folders, but individual files from various locations, which I have stored in an array) using the built-in Windows compression. I am using VB6. Here is my routine:

        Dim objShell
        Dim objFolder
        Set objShell = CreateObject("Shell.Application")
        Set objFolder = objShell.NameSpace(savePath & "\export.zip")
        ' loop through array holding files to zip
        For i = 0 To filePointer
            objFolder.CopyHere (filesToZip(i))
        Next
        Set objShell = Nothing
        Set objFolder = Nothing

    It works, but issues arise when there are more than a few files: I start getting errors from Windows (presumably it is calling the compression too fast and the zip file is locked). I can't figure out how to wait until the CopyHere call completes before starting the next one, to avoid these errors. Does anyone have any experience with this? Thanks.
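
    A common workaround - a sketch only, since CopyHere's behavior varies across Windows versions - is to poll the zip's item count after each copy, because CopyHere returns before the compression finishes:

        ' Requires a Sleep declaration at module level, e.g.:
        ' Private Declare Sub Sleep Lib "kernel32" (ByVal ms As Long)
        For i = 0 To filePointer
            objFolder.CopyHere (filesToZip(i))
            ' Wait until the file has actually landed in the zip
            ' (assumes exactly one item is added per call).
            Do Until objFolder.Items().Count = i + 1
                Sleep 100
                DoEvents
            Loop
        Next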

    Read the article

  • Better to build or buy a compute grid platform?

    - by James B
    I am looking to do some quite processor-intensive brute-force processing for string matching. I have run my prototype in a multi-threaded environment and compared its performance to an implementation using GridGain with a couple of nodes (also multithreaded). The GridGain implementation performed slower than my multithreaded implementation. It could be that there was a flaw in my GridGain implementation, but it was only a prototype and I thought the results were indicative. So my question is this: what are the advantages of having to learn and then build an implementation for a particular grid platform (Hadoop, GridGain, or EC2 if going hosted - other suggestions welcome), when one could fairly easily put together a lightweight compute grid platform with a much shallower learning curve? In other words, what do we get for free with these cloud/grid platforms that is worth having or tricky to implement? (Please note, I don't have any need for a data grid.) Cheers, -James (p.s. happy to make this community wiki if need be)

    Read the article

  • Eager loading vs. many queries with PHP, SQLite

    - by Mike
    I have an application that has an n+1 query problem, but when I implemented a way to load the data eagerly, I found absolutely no performance gain. I do use an identity map, so objects are only created once. Here's a benchmark of ~3000 objects:

        first query + first object creation: 0.00636100769043 sec, memory usage: 190008 bytes
        iterate through all objects (queries + object creation): 1.98003697395 sec, memory usage: 7717116 bytes

    And here's one where I use eager loading:

        query: 0.0881109237671 sec, memory usage: 6948004 bytes
        object creation: 1.91053009033 sec, memory usage: 12650368 bytes
        iterate through all objects: 1.96605396271 sec, memory usage: 12686836 bytes

    So my questions are: Is SQLite just magically lightning fast when it comes to small queries? (I'm used to working with MySQL.) Does this seem wrong to anyone else? Shouldn't eager loading have given much better performance?
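
    For reference, the two patterns being compared look roughly like the PDO sketch below (table and column names are hypothetical). The numbers above suggest the time is going into object hydration either way, which would explain why collapsing the queries barely helps:

        <?php
        $pdo = new PDO('sqlite:app.db');

        // n+1 pattern: one extra query per parent row.
        $parents = $pdo->query('SELECT id FROM parents')->fetchAll(PDO::FETCH_ASSOC);
        foreach ($parents as $p) {
            $stmt = $pdo->prepare('SELECT * FROM children WHERE parent_id = ?');
            $stmt->execute(array($p['id']));
            $children = $stmt->fetchAll(PDO::FETCH_ASSOC); // objects hydrated here
        }

        // Eager pattern: one query for everything, grouped afterwards in PHP.
        $rows = $pdo->query(
            'SELECT p.id AS pid, c.* FROM parents p
             JOIN children c ON c.parent_id = p.id'
        )->fetchAll(PDO::FETCH_ASSOC);
        $byParent = array();
        foreach ($rows as $r) {
            $byParent[$r['pid']][] = $r; // same hydration cost as above
        }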

    Read the article

  • Extracting data from multiple servers with SQL Server 2005 SSIS

    - by Raj
    I have created an SSIS package to connect to multiple SQL Servers and create a database, a table and a stored procedure on each. The package also creates a job and schedules it to run every 5 minutes; the requirement is to collect performance metrics. I am using an ADO object variable to get the server names, all of the above tasks sit in a Foreach Loop, and everything works fine. Now the problem: I need to create a Data Flow task which will connect to each of these servers in turn, copy the performance-metrics data over to a central server, and purge the source table. I am unable to get this task to work; it fails with an "Unable to obtain Connection" error. Any help will be greatly appreciated. SQL Server version: 2005. Thanks, Raj

    Read the article

  • What to look for in estimating a PowerBuilder Conversion Project?

    - by tekiegreg
    Hi there, I've been trying to write a spec for a PowerBuilder 9 to 11.5 migration of a relatively complex application. Granted, PowerBuilder is not really my specialty, so I'm having trouble justifying an estimate for this part of the project (and the PowerBuilder people I've been talking with have had some personal issues lately and are out of communication). These are the metrics we have and can evaluate:
    - PBL files
    - main windows
    - DataWindows
    - functions (no, we don't have the source available on this project)
    Which metrics in particular are helpful, and how long would any given "unit", such as a DataWindow, take?

    Read the article

  • To store images from UIGetScreenImage() in an NSMutableArray

    - by sujyanarayan
    Hi, I'm getting images from UIGetScreenImage() and storing them directly in a mutable array, like so:

        image = [UIImage imageWithScreenContents];
        [array addObject:image];
        [image release];

    This code runs in a timer, so I can't use UIImagePNGRepresentation() to store the images as NSData, as that hurts performance. I want to use the array directly some time later, i.e. after capturing 1000 images in 100 seconds. When I use the code below:

        UIImage *im = [[UIImage alloc] init];
        im = [array objectAtIndex:i];
        UIImageWriteToSavedPhotosAlbum(im, nil, nil, nil);

    the application crashes. And I don't want to use UIImagePNGRepresentation() or UIImageJPEGRepresentation() in the timer either, as they reduce performance. My problem is how to use this array so that its contents can be turned back into images. If anybody has an idea, please share it with me. Thanks in advance.
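
    One likely culprit, judging only from the snippets above: imageWithScreenContents is a convenience constructor, so it returns an autoreleased object, and the explicit release over-releases it. The array is then left holding a pointer to a deallocated image, which would crash exactly where it is read back. A hedged correction:

        // The convenience constructor is autoreleased and addObject: retains,
        // so no release belongs here.
        image = [UIImage imageWithScreenContents];
        [array addObject:image];
        // [image release];   <-- remove this line

        // Later: no need to alloc a throwaway UIImage (which also leaked);
        // just read the element out of the array.
        UIImage *im = [array objectAtIndex:i];
        UIImageWriteToSavedPhotosAlbum(im, nil, nil, nil);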

    Read the article

  • Adjacency List Tree Using Recursive WITH (Postgres 8.4) instead of Nested Set

    - by Koobz
    I'm looking for a Django tree library and doing my best to avoid nested sets (they're a nightmare to maintain). The con of the adjacency-list model has always been the inability to fetch descendants without resorting to multiple queries. The recursive WITH clause in PostgreSQL 8.4 seems like a solid solution to this problem. Has anyone seen any performance comparisons of recursive WITH vs. nested sets? I assume the nested set will still be faster, but as long as they're in the same complexity class I could swallow a 2x performance discrepancy. django-treebeard interests me: does anyone know if it uses the WITH clause when running under PostgreSQL? Has anyone here moved away from nested sets in light of the WITH clause?
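
    For what it's worth, fetching a whole subtree from a plain adjacency-list table is a single query in PostgreSQL 8.4+. A sketch against a hypothetical nodes(id, parent_id) table:

        -- All descendants of node 42, with their depth below it.
        WITH RECURSIVE subtree AS (
            SELECT id, parent_id, 0 AS depth
            FROM nodes
            WHERE id = 42
          UNION ALL
            SELECT n.id, n.parent_id, s.depth + 1
            FROM nodes n
            JOIN subtree s ON n.parent_id = s.id
        )
        SELECT * FROM subtree WHERE depth > 0;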

    Read the article

  • What Simple Changes Made the Biggest Improvements to Your Delphi Programs

    - by lkessler
    I have a Delphi 2009 program that handles a lot of data and needs to be as fast as possible without using too much memory. What small, simple changes have you made to your Delphi code that had the biggest impact on the performance of your program, by noticeably reducing execution time or memory use? Thanks everyone for all your answers - many great tips. For completeness, I'll post a few important articles on Delphi optimization that I found:
    - Before you start optimizing Delphi code, at About.com
    - Speed and Size: Top 10 Tricks, also at About.com
    - Code Optimization Fundamentals and Delphi Optimization Guidelines at High Performance Delphi; these relate to Delphi 7 but are still very pertinent.

    Read the article

  • Optimizing a large iteration of PHP objects (EAV-based)

    - by Aron Rotteveel
    I am currently working on a project that utilizes the EAV model. This turns out to work quite well, but like many others I am now stumbling upon some performance issues. The data set in this particular case consists of approximately 2500 entities, each with approximately 150 attributes, and each entity and each attribute is represented by a PHP object. Since most parts of the application only iterate through a filtered set of entities, we have not had very large issues yet. Now, however, I am working on an algorithm that requires iteration over the entire dataset, which has a major impact on performance. This information is perhaps not very much to work with, but since this is an architectural problem, I am hoping for an architectural pattern to help me on the way as well. Each entity, including its attributes, takes up approximately 500KB of memory.
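
    One common mitigation for full-dataset passes over an EAV schema is to bypass per-entity objects entirely and stream a single attribute-ordered query into plain arrays. A rough PDO sketch with hypothetical table and column names:

        <?php
        function processEntity($id, $attrs) { /* one algorithm step per entity */ }

        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'password');

        // One query for the whole dataset instead of one (or more) per entity;
        // ordering by entity_id lets us assemble each entity in a single pass.
        $stmt = $pdo->query(
            'SELECT entity_id, attribute_code, value
             FROM entity_values
             ORDER BY entity_id'
        );

        $current = null;
        $attrs = array();
        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            if ($row['entity_id'] !== $current) {
                if ($current !== null) {
                    processEntity($current, $attrs);
                }
                $current = $row['entity_id'];
                $attrs = array();
            }
            $attrs[$row['attribute_code']] = $row['value']; // plain array, no object
        }
        if ($current !== null) {
            processEntity($current, $attrs);
        }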

    Read the article

  • Silverlight for .NET / CLR-based numerical computing on OS X

    - by Jonathan Shore
    I'm interested in using F# for numerical work, but my platforms are not Windows-based. Mono still has a significant performance penalty for programs that generate a large number of short-lived objects (as is typical for functional languages). Silverlight is available on OS X. I have seen references indicating that assemblies compiled in the usual way cannot be referenced from Silverlight, but I am not clear on the details. I'm not interested in UIs; I am wondering whether I could effectively use the VM bundled with Silverlight for execution. I would want to be able to reference a large library of numerical models I already have in Java (cross-compiled via IKVM to .NET assemblies) and a new codebase written in F#. My hope is that the Silverlight VM on OS X has good performance and can reference external assemblies and native libraries. Is this doable?

    Read the article

  • MySQL index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third-party web sites. This is its structure:

        id, site_id, unixtime, unixtime_last, ip_address, uid

    There are four indexes: id, (site_id, unixtime), (site_id, ip_address), and (site_id, uid). There are many different ways we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as to determine whether a visitor is new or returning.

    Obviously storing site_id inside three indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given site_id. Any ideas on making this more efficient?

    I don't really understand B-trees beyond some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? I considered making site_id the second column of the index for both ip_address and uid, but I think that would make the index less efficient, since the IP and UID will vary more than the site ID: we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis.

    I've also considered removing site_id from the IP and UID indexes completely, since the chance of the same visitor going to multiple sites that share the same database server is quite small. But in cases where that does happen, I fear it could be quite slow to determine whether the visitor is new to a given site_id or not. The query would be something like:

        select id from sessions where uid = 'value' and site_id = 123 limit 1

    ... so if this visitor had visited this site before, it would only need to find one row with this site_id before stopping. That wouldn't necessarily be super fast, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time searching through the potentially thousands of rows for this UID, scattered all over the disk, since it would never find one with this site ID. Any insight on making this as efficient as possible would be appreciated :)

    Update: this is a MyISAM table with MySQL 5.0. My concerns are with both performance and storage space, and the table is both read- and write-heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse not to care about the database design. I want the database to be as efficient as possible.
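
    For concreteness, the table and indexes described above correspond to something like this (a sketch - the column types are assumptions):

        CREATE TABLE sessions (
            id            INT UNSIGNED NOT NULL AUTO_INCREMENT,
            site_id       INT UNSIGNED NOT NULL,
            unixtime      INT UNSIGNED NOT NULL,
            unixtime_last INT UNSIGNED NOT NULL,
            ip_address    INT UNSIGNED NOT NULL,  -- assumes IPv4 stored numerically
            uid           CHAR(32)     NOT NULL,  -- assumes a fixed-width cookie token
            PRIMARY KEY (id),
            KEY idx_site_time (site_id, unixtime),
            KEY idx_site_ip   (site_id, ip_address),
            KEY idx_site_uid  (site_id, uid)
        ) ENGINE=MyISAM;

    With (site_id, uid) leading on site_id, the "is this visitor new to this site" probe is a single index seek regardless of how many sites share the server.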

    Read the article

  • DDD: Client-side script to enforce invariants

    - by Mosh
    Hello, one thing that I'm confused about in regard to DDD is that our domain is supposed to handle all business logic and enforce invariants. I have noticed that some people (me included) handle certain invariants in the presentation layer (i.e. WebForms, Views, etc.) with JavaScript. This is mainly done to improve performance, so that the server is not hit for every request that may be invalid. Even though this approach may be beneficial performance-wise, it violates DDD principles: what if the business rules change? This way we don't have a rich domain where all the business rules are captured, and in case of a change we would have to modify the presentation layer as well as the domain. Has anyone come across this situation before? I'd like to know your thoughts on this. Cheers, Mosh

    Read the article

  • Difference between Apache Tapestry and Apache Wicket

    - by Stephan Schmidt
    Apache Wicket ( http://wicket.apache.org/ ) and Apache Tapestry ( http://tapestry.apache.org/ ) are both component-oriented web frameworks - contrary to action-based frameworks like Stripes - from the Apache Foundation. Both allow you to build your application from components in Java, and they look very similar to me. What are the differences between these two frameworks? Does someone have experience with both? Specifically:
    - How is their performance, how much can state handling be customized, and can they be used stateless?
    - What is the difference in their component models?
    - Which would you choose for which applications?
    - How do they integrate with Guice, Spring and JSR 299?
    Edit: I have read the documentation for both and I have used both. The questions cannot be answered sufficiently from reading the documentation, but rather from the experience of using the frameworks for some time, e.g. how to use Wicket in a stateless mode for high-performance sites. Thanks.

    Read the article

  • TinyMCE or HTML5's contentEditable attribute?

    - by Mike
    I have always hated WYSIWYG editors, but in most of the applications I develop they are necessary for our clients to edit their content. After trying out a few different ones, I liked TinyMCE the best. Although it is powerful and seems to generate fairly good HTML, it is not without its issues. Recently I have been thinking about creating a custom WYSIWYG that suits my needs perfectly, taking advantage of the contentEditable attribute. Is this HTML5 feature ready? Will I have many cross-browser issues? What are its limitations? I guess my question finally boils down to: will it be worth throwing in the towel on third-party WYSIWYGs and moving to contentEditable regions?
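
    For reference, the raw building blocks are small - an editable region plus document.execCommand for formatting. A minimal sketch:

        <!-- A bare-bones editable region with one formatting control. -->
        <button onclick="document.execCommand('bold', false, null)">Bold</button>
        <div id="editor" contenteditable="true" style="border: 1px solid #ccc; min-height: 8em;">
          Select some text and click Bold.
        </div>
        <script>
          // Reading the markup back out is just innerHTML; the cross-browser
          // differences the post worries about show up here, since each
          // browser serializes the applied formatting differently.
          function getContent() {
            return document.getElementById('editor').innerHTML;
          }
        </script>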

    Read the article

  • How to analyze a dump file from a Delphi DLL?

    - by Yann
    I'm an escalation engineer on a product which uses both C# and Delphi 2006 code. In most cases, C# issues are debugged with WinDbg and Delphi 2006 issues with EurekaLog. But when the issue is Delphi memory usage, EurekaLog doesn't give enough information to fix it, and the only thing I have to debug with is a full memory dump file. I cannot (or don't know how to) load a symbol file in WinDbg, because Delphi produces a .map file and not a .pdb file. So my questions are: Does anyone know how to load symbols from a .map file in WinDbg (converting .map to .pdb, or otherwise)? Does anyone know a tool for analyzing a dump file from a Delphi application?

    Read the article

  • JBoss logging issue

    - by balaji
    I work as a deployer and server administrator. We use JBoss 4.0.x AS to deploy our applications. The issue I'm facing: whenever we redeploy or restart the server, server.log gets created, but after some time the logging stops - the server.log file is not updated at all. Because of this, we cannot trace the other critical issues we have. We have two separate nodes, and we deploy/restart the server separately on each node. We are facing the issue in both our test and production environments, and I cannot work out where exactly the problem lies. Could you please help me resolve it? When we have other issues, we check the log files - but if the log itself is not being updated, how can we analyze those issues without recent logs? Below are the entries found in stdout.log:

        18:55:50,303 INFO [Server] Core system initialized
        18:55:52,296 INFO [WebService] Using RMI server codebase: http://kl121tez.is.klmcorp.net:8083/
        18:55:52,313 INFO [Log4jService$URLWatchTimerTask] Configuring from URL: resource:log4j.xml
        18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFTraceLogger without a class name.
        18:55:52,860 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasRBPFMessageLogger without a class name.
        18:55:54,273 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheTraceLogger without a class name.
        18:55:54,274 ERROR [STDERR] LOG0026E The Log Manager cannot create the object AmasCacheMessageLogger without a class name.
        18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCTraceLogger without a class name.
        18:55:54,334 ERROR [STDERR] LOG0026E The Log Manager cannot create the object JACCMessageLogger without a class name.
        18:55:56,059 INFO [ServiceEndpointManager] WebServices: jbossws-1.0.3.SP1 (date=200609291417)
        18:55:56,635 INFO [Embedded] Catalina naming disabled
        18:55:56,671 INFO [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set.
        18:55:56,672 INFO [ClusterRuleSetFactory] Unable to find a cluster rule set in the classpath. Will load the default rule set.
        18:55:56,843 INFO [Http11BaseProtocol] Initializing Coyote HTTP/1.1 on http-0.0.0.0-8180
        18:55:56,844 INFO [Catalina] Initialization processed in 172 ms
        18:55:56,844 INFO [StandardService] Starting service jboss.web

    Read the article

  • Schemas and tables versus user-ids in a single table using PostgreSQL

    - by gvkv
    I'm developing a web app and I've come to a fork in the road with respect to database structure, and I don't know which direction to take. I have a database with user information that I can structure in one of two ways: the first is to create a schema and a set of tables for each user (duplicating the structure for each user); the second is to create a single set of tables and query information based on user id. Suppose 100,000 users. Here are my questions:
    - Considering security, performance, scalability and administration, where does each choice fall?
    - Would the answers change for 1,000,000 users, or for 10,000?
    - Is there a set of best practices that leads to one choice or the other?
    It seems to me that multiple schemas are more secure, since it's trivial to restrict user privileges, but what about performance and scalability? Administration seems like a wash, since dumping (and restoring) lots of schemas isn't any more difficult than dumping a few.
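
    Concretely, the two layouts look something like this (a sketch; table and column names are hypothetical):

        -- Option 1: one schema per user, identical tables inside each.
        CREATE SCHEMA user_42;
        CREATE TABLE user_42.documents (
            id    serial PRIMARY KEY,
            title text,
            body  text
        );

        -- Option 2: one shared set of tables, rows scoped by user_id.
        CREATE TABLE users (
            id serial PRIMARY KEY
        );
        CREATE TABLE documents (
            id      serial PRIMARY KEY,
            user_id integer NOT NULL REFERENCES users(id),
            title   text,
            body    text
        );
        CREATE INDEX documents_user_idx ON documents (user_id);
        -- every query then filters on user_id:
        -- SELECT * FROM documents WHERE user_id = 42;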

    Read the article

  • Any good interview questions to ask a Sybase DBA?

    - by scot
    Hi, I am a Java developer and I will be interviewing Sybase DBAs along with my boss. I know some basic things about Sybase, and I am looking for good interview questions to ask a Sybase DBA; the candidates will have a minimum of 4 years of experience. I am looking for them to have really good knowledge of performance- and tuning-related areas, such as how to measure database performance and how to suggest improvements to the database design or the Sybase configuration. Help much appreciated. BR

    Read the article

  • Is there anything for Python that is like readability.js?

    - by Emre Sevinç
    Hi, I'm looking for a package/module/function, etc. that is approximately the Python equivalent of Arc90's readability.js ( http://lab.arc90.com/experiments/readability , http://lab.arc90.com/experiments/readability/js/readability.js ), so that I can give it some input HTML and get back a cleaned-up version of that page's "main text". I want this so that I can use it on the server side (unlike the JS version, which runs only on the browser side). Any ideas? PS: I have tried Rhino + env.js, and that combination works, but the performance is unacceptable - it takes minutes to clean up most of the HTML content (and I still couldn't find out why there is such a big performance difference).

    Read the article

  • Using SqlCacheDependency to get real time updates? - ASP.NET

    - by user102533
    I would like to display real-time updates on a web page, based on a status field in a database table that is altered by an external process. Based on my research, there are several ways of doing this:
    - Long polling (Comet) - this seems complex to implement.
    - Regular polling - I can have an AJAX method trigger a database hit every 5 seconds to get the current status, but I fear this will have performance issues.
    - SqlCacheDependency - basically, the cache gets invalidated based on a field in the table. I am assuming I can use the event triggered when the cache is invalidated to show the new update to the user?
    What's an easy solution that will not have performance issues? Anyone?
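
    The SqlCacheDependency route looks roughly like this under SQL Server 2005's polling model (a sketch - the "StatusDb" entry and StatusTable name are assumptions, and the database and table must first be enabled for notifications, e.g. with aspnet_regsql):

        // Assumes web.config defines a <sqlCacheDependency> database entry
        // named "StatusDb" pointing at the table's database.
        using System;
        using System.Web;
        using System.Web.Caching;

        public static class StatusCache
        {
            public static string GetStatus(HttpContext ctx)
            {
                string status = (string)ctx.Cache["status"];
                if (status == null)
                {
                    status = LoadStatusFromDatabase(); // hypothetical data-access call
                    // The entry is evicted whenever StatusTable changes, so the
                    // next request re-reads a fresh value from the database.
                    ctx.Cache.Insert("status", status,
                        new SqlCacheDependency("StatusDb", "StatusTable"));
                }
                return status;
            }

            private static string LoadStatusFromDatabase()
            {
                return "unknown"; // placeholder for the real query
            }
        }

    An AJAX poll against a page backed by this cache then costs a database hit only when the status has actually changed.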

    Read the article

  • Popularity Algorithm - SQL / Django

    - by RadiantHex
    Hi folks, I've been looking into the popularity algorithms used on sites such as Reddit, Digg and even Stack Overflow. The Reddit algorithm:

        t = (time of entry post) - (Dec 8, 2005)
        x = upvotes - downvotes
        y = {1 if x > 0, 0 if x = 0, -1 if x < 0}
        z = {1 if x < 0, otherwise x}
        score = log(z) + (y * t)/45000

    I have always performed simple ordering within SQL, and I'm wondering how I should deal with this kind of ordering. Should it be used to define a table, or could I build the ordering into the SQL itself (without hindering performance)? I am also wondering whether it is possible to use multiple ordering algorithms on different occasions without running into performance problems. I'm using Django and PostgreSQL. Help would be much appreciated! ^^
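
    In PostgreSQL the formula can go straight into the ORDER BY - or, if sorting on every request proves too slow, into a score column maintained on write and backed by an index. A sketch against a hypothetical entries(id, ups, downs, posted_at) table; GREATEST(..., 1) guards against taking log(0) when the score is zero:

        SELECT id, ups - downs AS score
        FROM entries
        ORDER BY
            LOG(GREATEST(ABS(ups - downs), 1))
            + SIGN(ups - downs)
              * EXTRACT(EPOCH FROM posted_at - TIMESTAMP '2005-12-08') / 45000.0
            DESC;

    Swapping in a different popularity algorithm is then just a different ORDER BY expression (or a second precomputed score column).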

    Read the article
