Search Results

Search found 12267 results on 491 pages for 'out of memory'.

Page 285 of 491

  • Is using the Class instance as a Map key a best practice?

    - by Pangea
    I have read somewhere that using class instances as keys, as below, is not a good idea because they might cause memory leaks. Can someone tell me if that is a valid statement? Or are there any problems with using them this way? Map<Class<?>, String> classToInstanceMap = new HashMap<Class<?>, String>(); classToInstanceMap.put(String.class, "Test obj");
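
    The usual leak concern is that a long-lived, strongly referenced map keyed by Class objects keeps those classes (and their ClassLoader) reachable, which matters mostly in containers that redeploy applications. A minimal sketch of one way around that, using weakly held keys (class and method names here are made up for illustration):

      import java.util.Map;
      import java.util.WeakHashMap;

      public final class ClassKeyedRegistry {
          // WeakHashMap references its keys weakly, so the map alone does not
          // prevent an otherwise-unreferenced Class (and its ClassLoader) from
          // being garbage collected.
          private static final Map<Class<?>, String> REGISTRY =
                  new WeakHashMap<Class<?>, String>();

          public static void put(Class<?> key, String value) {
              // Caveat: if the value itself references the key's class or its
              // loader, the entry can still be kept alive indirectly.
              REGISTRY.put(key, value);
          }

          public static String get(Class<?> key) {
              return REGISTRY.get(key);
          }
      }

    Usage would mirror the snippet in the question: ClassKeyedRegistry.put(String.class, "Test obj").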

    Read the article

  • Is there any IDE integration for JBoss AS 6?

    - by Jonathan Frank
    We have switched to JBoss 6 to make it possible to use a wider range of Java EE technologies. We chose JBoss because of its small memory footprint compared to other application servers, so we have no other choice. Do you know of any developer tools that can be integrated with JBoss AS 6? Thanks in advance, Jonathan Frank

    Read the article

  • C++ Professional Code Analysis Tools

    - by Voulnet
    Hello there, I would like to ask about the available (free or not) static and dynamic code analysis tools that can be used for C++ applications, especially COM and ActiveX. I am currently using Visual Studio's /analyze compiler option, which is good, but I still feel there is a lot of analysis left to be done. I'm talking about a C++ application where memory management and code security are of utmost importance.

    Read the article

  • Faster integer division when denominator is known?

    - by aaa
    Hi, I am working on a GPU device which has very high integer division latency, several hundred cycles, and I am looking to optimize divisions. Every division is by a denominator from the set {1, 3, 6, 10}, but the numerator is a runtime positive value, roughly 32000 or less. Due to memory constraints, a lookup table is not an option. Can you think of alternatives? I have thought of computing floating-point inverses and using those to multiply the numerator. Thanks
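
    Since the denominators are fixed and the numerator fits comfortably in 16 bits, the standard alternative is a fixed-point reciprocal: multiply by roughly 2^S / d and shift right by S, turning each division into one integer multiply plus one shift. The sketch below uses Java purely to illustrate the arithmetic (the same pattern ports directly to GPU C code); the constants and the 0..32767 range are assumptions based on the question, and the loop brute-force verifies them.

      public final class DivByConstant {
          // n / d == (n * M) >>> S, with M = ceil(2^S / d) and S chosen large
          // enough that the rounding error never crosses an integer boundary
          // for the numerator range in question. Division by 1 is just n.
          static int div3(int n)  { return (n * 43691) >>> 17; }   // 43691 = ceil(2^17 / 3)
          static int div6(int n)  { return (n * 43691) >>> 18; }   // 43691 = ceil(2^18 / 6)
          static int div10(int n) { return (n * 52429) >>> 19; }   // 52429 = ceil(2^19 / 10)

          public static void main(String[] args) {
              for (int n = 0; n <= 32767; n++) {
                  if (div3(n) != n / 3 || div6(n) != n / 6 || div10(n) != n / 10) {
                      throw new AssertionError("mismatch at n = " + n);
                  }
              }
              System.out.println("multiply-shift matches integer division for 0..32767");
          }
      }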

    Read the article

  • How is hashCode() calculated in Java?

    - by Jothi
    What value does the hashCode() method return in Java? I read that it is the memory reference of an object, but when I print the hash code for new Integer(1) it's 1, and for the String "a" it's 97, so I'm confused. Is it ASCII, or what type of value is it?
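
    For the two examples in the question, the values are not memory addresses: Integer and String override hashCode() with documented formulas, while the default Object.hashCode() is an identity-based value chosen by the JVM. A short demonstration:

      public class HashCodeDemo {
          public static void main(String[] args) {
              // Integer.hashCode() is specified to return the wrapped int value
              // (same result as the question's new Integer(1)).
              System.out.println(Integer.valueOf(1).hashCode());   // 1

              // String.hashCode() is specified as s[0]*31^(n-1) + ... + s[n-1],
              // so a one-character string hashes to that character's code point.
              System.out.println("a".hashCode());                  // 97 ('a' is U+0061)
              System.out.println("ab".hashCode());                 // 97*31 + 98 = 3105

              // Object.hashCode() (the default) is typically derived from the
              // object's identity, not a raw memory address you can rely on.
              System.out.println(new Object().hashCode());
          }
      }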

    Read the article

  • Caching in mmap

    - by myahya
    I am using the mmap call to read from a very big file using simple pointer arithmetic in C++. The problem is that when I read small chunks of data (on the order of KBs) multiple times, each read takes the same amount of time as the previous one. How can I know whether the disk is being accessed to fulfill my request, or whether the request is being fulfilled from main memory (the page cache), on calls after the first one?

    Read the article

  • Char* vs std::string

    - by Lockyer
    Is there any advantage to using char*'s instead of std::string? I know char*'s are usually defined on the stack, so we know exactly how much memory we'll use; is this actually a good argument for their use? Or is std::string better in every way?

    Read the article

  • Max file size for File.ReadAllLines

    - by user283897
    Hi, I need to read and process a text file. My processing would be easier if I could use the File.ReadAllLines method, but I'm not sure what the maximum size of file is that can be read with this method without reading in chunks. I understand that the limit depends on the computer's memory, but are there still any recommendations for an average machine? I would greatly appreciate your fast response. Thanks, Lev

    Read the article

  • Are there any drawbacks to using helper :all in Rails?

    - by Rob Jones
    In Rails, 'helper :all' makes all your helpers 'available' to all your controllers. This is concise and convenient, but does it have any memory and/or performance implications compared to explicitly listing the helpers that each controller actually needs? It's unclear from the docs whether using it involves 'require'-ing all those files, or whether autoload is being used. I can't tell from the source in the Rails framework docs. Thanks

    Read the article

  • SQLite connection pooling with Fluent NHibernate

    - by Groo
    Is there a way to set up SQLite connection pooling using the Fluent NHibernate configuration? E.g., the equivalent of DataSource=:memory: would be: var sessionFactory = Fluently .Configure() .Database(SQLiteConfiguration.Standard.InMemory) (etc.) Is there something equivalent to "Pooling=True;Max Pool Size=1;"?

    Read the article

  • I want to create an adjacency matrix using Python

    - by A A
    I have a very large data set, almost 450,000 lines with two columns. I want to compute an adjacency matrix using Python, because I previously tried to do it in MATLAB and it gave a memory error due to the large data values. My values also start from 100 and go up to 450,000. Can anyone help me with this issue, as I am new to Python? I first have to import the file into Python from an Excel sheet or Notepad, and then compute the adjacency matrix.
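
    For context on the memory error: a dense 450,000 x 450,000 matrix has about 2 x 10^11 cells, far more than a typical machine can hold as a dense array, while 450,000 edges stored sparsely (only the pairs that actually occur) are tiny. Below is a sketch of the sparse-adjacency idea, shown in Java; the file name and the whitespace-separated two-column format are assumptions. In Python, sparse matrix types such as those in scipy.sparse apply the same principle.

      import java.io.BufferedReader;
      import java.io.FileReader;
      import java.io.IOException;
      import java.util.HashMap;
      import java.util.HashSet;
      import java.util.Map;
      import java.util.Set;

      public final class SparseAdjacency {
          public static void main(String[] args) throws IOException {
              // Maps each node to the set of nodes it connects to; only the
              // edges that exist are stored, never a dense matrix.
              Map<Integer, Set<Integer>> adjacency = new HashMap<Integer, Set<Integer>>();

              BufferedReader in = new BufferedReader(new FileReader("edges.txt")); // hypothetical file
              String line;
              while ((line = in.readLine()) != null) {
                  String[] parts = line.trim().split("\\s+");   // two node ids per line
                  if (parts.length < 2) {
                      continue;
                  }
                  int from = Integer.parseInt(parts[0]);
                  int to = Integer.parseInt(parts[1]);
                  if (!adjacency.containsKey(from)) {
                      adjacency.put(from, new HashSet<Integer>());
                  }
                  adjacency.get(from).add(to);
              }
              in.close();

              System.out.println("nodes with outgoing edges: " + adjacency.size());
          }
      }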

    Read the article

  • Rmagick on BitNami not working

    - by Steven
    I am running a Rails app on BitNami RubyStack, and when I try to upload a picture using file_column I get this error: LoadError (998: Invalid access to memory location. - C:/Program Files (x86)/BitNami RubyStack/ruby/lib/ruby/gems/1.8/gems/rmagick-2.9.0-x86-mswin32/ext/RMagick2.so): C:/Program Files (x86)/BitNami RubyStack/ruby/lib/ruby/gems/1.8/gems/rmagick-2.9.0-x86-mswin32/ext/RMagick2.so C:/Program Files (x86)/BitNami RubyStack/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:156:in `require' C:/Program Files (x86)/BitNami RubyStack/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:521:in `new_constants_in'

    Read the article

  • Best Practice for Utilities Class?

    - by Sonny Boy
    Hey all, we currently have a utilities class that handles a lot of string formatting, date displays, and similar functionality, and it's a shared/static class. Is this the "correct" way of doing things, or should we be instantiating the utility class as and when we need it? Our main goal here is to reduce memory footprint, but performance of the application is also a consideration. Thanks, Matt. PS: We're using .NET 2.0.

    Read the article

  • An affordable way to use multiple Delayed::Job queues

    - by NudeCanalTroll
    I have a Ruby on Rails app that needs to process many background jobs simultaneously: anywhere from 5-6 at a time up to 50-60 at a time, depending on the time of day. Right now my app is running on Heroku, which charges $.05/hour per worker, regardless of how much CPU or memory the worker is using. This is costing me a boatload each month... up to $1200/mo. Are there any hosts that will allow me to do what I'm doing for significantly cheaper?

    Read the article

  • Handling the national language prefix for check constraints

    - by Chris Chilvers
    I'm trying to create a check constraint such as CHECK Type IN (N'Create', N'Remove') for an enumeration's value. SQLite complains about this syntax and only accepts CHECK Type IN ('Create', 'Remove'). The main database will be SQL Server 2005, but I use SQLite's in-memory database for unit tests. Is there any way to get SQLite to recognise the national language (N) prefix? Alternatively, is there an easy way, when using Fluent NHibernate, to adapt an nvarchar constant to match the database's dialect?

    Read the article

  • What's the fastest way to determine if a file adheres to a particular class's NSCoding implementation?

    - by Justin Searls
    Given: An application that accesses a directory of files: some plain text, some binary files that adhere to a particular NSCoding implementation, and perhaps other binary files it simply doesn't understand how to process. I want to be able to figure out which of the files in that directory adhere to my NSCoding class, and I'd prefer not to have to fall back on the naïve approach of loading the entirety of each file into memory, attempting to unarchive each. Anyone have an elegant approach or pattern to this problem?

    Read the article

  • Efficiency questions

    - by rayman
    Hi, I have to manage XML documents and Strings in my app. In terms of efficiency and memory usage, will a collection like ArrayList be much more expensive than String[]? Also, I could store the content as a regular String or XML. Is working with XML also more expensive? (When I say expensive, I am referring to the use of system resources.) If there are differences, are they significant? Thanks, Ray.
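
    On the ArrayList versus String[] part of the question, the cost difference is a small constant, not something proportional to the data: ArrayList keeps its elements in an internal Object[] and adds only a little bookkeeping plus possible spare capacity. A minimal sketch of the point:

      import java.util.ArrayList;

      public final class ArrayVsList {
          public static void main(String[] args) {
              String[] array = new String[1000];                 // exactly 1000 references

              // ArrayList also stores references in an internal Object[], but it
              // grows that array by about 50% when full, so it can carry unused
              // capacity plus a small per-object overhead (header, size field).
              ArrayList<String> list = new ArrayList<String>(1000);
              for (int i = 0; i < 1000; i++) {
                  array[i] = "item-" + i;
                  list.add("item-" + i);
              }

              list.trimToSize();   // drops any spare capacity if memory is the main concern
              System.out.println(array.length + " array slots vs " + list.size() + " list elements");
          }
      }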

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>Looking for a reference to a book or other undeniably authoritative source that gives reasons for when you should choose a database versus when you should choose other storage methods. I have provided an un-authoritative list of reasons about two-thirds of the way down this post.</tldr>

    I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary search). Normally, a database would be an OK solution even though the problem does not require one: none of the elements of ACID are needed, the data is read-only, it is updated about every 3-5 years (also requiring other source-code changes), it fits in memory, and it can be keyed into via binary search (a tad faster than a DB, but speed is not an issue). The problem is that this code runs on our enterprise server but is shared with several PC platforms (some disconnected, some using a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department. These hit their own platform's version of their databases (containing their own copy of the static data). What happens is that with every implementation, every little change, something different goes wrong. There are many other issues as well. I can't even use a flat file, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course its own literal storage, e.g., an in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts.

    So why don't we just change it? I don't have the administrative ability to force a change. But I'm affected, because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers, and d*mmit that makes me mad! The reason neither management nor the designers of the system can see the problem is that they propose a solution that won't work: increase communication, implement more safeguards and standards, etc. But every time, in a different part of the already-pared-down but still multi-step process, a few different diligent, hard-working, top-performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! In general these are not single-person failures but understandable miscommunications, and communication at our company is actually better than most; people just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with human beings in the loop, and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue, and are not convincible on the matter.

    The reason putting the static data in source code will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of source code already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems.

    OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a third-party component that you can't change. So what I have to do is exploit the unsuitableness of the database solution, and not do it using logic, but rather authority. I am aware of many reasons, and of posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

    WHY USE A DATABASE instead of a flat file? If you need...
      - Random reads / transparent search optimization
      - Advanced, varied, customizable searching and sorting capabilities
      - Transactions / rollback
      - Locks, semaphores
      - Concurrency control / shared users
      - Security
      - Easier one-to-many / many-to-many modeling
      - Easy modification
      - Scalability
      - Load balancing
      - Random updates / inserts / deletes
      - Advanced queries
      - Administrative control of design, etc.
      - SQL / learning curve
      - Debugging / logging
      - Centralized / live backup capabilities
      - Cached queries / develop and cache execution plans
      - Interleaved update/read
      - Referential integrity; avoiding redundant/missing/corrupt/out-of-sync data
      - Reporting (from an OLAP or OLTP DB) / turnkey generation tools
    Disadvantages:
      - Important to get right the first time - professional design - but only because it's meant to last
      - Software and hardware cost
      - Usually over a network, so a speed issue (best case vs. best design vs. local; even then, a separate process requires marshalling, network layers, inter-process communication)
      - Indices and query processing can stand in the way of simple processing (vs. a flat file)

    WHY USE A FLAT FILE? If you only need...
      - Sequential row processing only
      - Limited usage: append only (no reading, no master key/update)
      - Only update the record you're reading (fixed-length records only)
      - Too big to fit into memory
      - Local disk / read-ahead network connection
      - Portability / small system
      - Email / cut & paste / store as a document by a novice - simple format
      - Low design learning curve but high cost later

    WHY USE AN IN-MEMORY TABLE (tables, arrays, etc.)? If you need...
      - Processing a single DB/flat-file record that was imported
      - Known size of data
      - Static data, if hardcoding the table
      - Narrow, unchanging use (e.g., one program or procedure) - includes a class that will be shared but encapsulates its data manipulation
      - Extreme speed needed / high transaction frequency
      - Random access - but search is dependent on implementation

    Following are some other posts about the topic:
    http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
    http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
    http://stackoverflow.com/questions/2356851/database-vs-flat-files
    http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

    What I'd like to know is whether anybody can recommend a hard, authoritative source containing these reasons. I'm looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites.
    This will have a greater chance of eliciting the change that I'm looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
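
    The in-source alternative the poster describes (an auto-generated static lookup table searched by binary search) could look something like the minimal Java sketch below; the class name, keys, and values are made up for illustration.

      import java.util.Arrays;

      // Hypothetical auto-generated class: a sorted key array plus a parallel
      // value array, regenerated every few years along with the rest of the code.
      public final class RateTable {
          private static final int[] KEYS = { 100, 250, 400, 900 };        // sorted
          private static final double[] VALUES = { 0.10, 0.25, 0.40, 0.90 };

          private RateTable() {}

          public static double lookup(int key) {
              int i = Arrays.binarySearch(KEYS, key);                      // O(log n)
              if (i < 0) {
                  throw new IllegalArgumentException("unknown key: " + key);
              }
              return VALUES[i];
          }
      }

    Because the table ships inside the source, it needs no database, no file-read permission, and no per-platform copy of the data, which is the property the poster is after.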

    Read the article

  • Convert a large raster graphics image (bitmap, PNG, JPEG, etc.) to non-vector PostScript in C#

    - by Dennis Cheung
    How do I convert a large image and embed it into PostScript? I used to convert the bitmap into hex codes and render it with colorimage. That works for small icons, but I hit a /limitcheck error in Ghostscript when I try to embed slightly larger images. It seems there is a memory limit for bitmaps in Ghostscript. I am looking for a solution which can run without third-party tools or pre-processing other than Ghostscript itself.

    Read the article

  • How do Relational Databases Work Under the Hood?

    - by Pierreten
    I've always been interested in how you can throw some SQL at a database and have it nearly instantaneously return your results in an orderly manner, without thinking about it as anything other than a black box. What is really going on? I'm pretty sure it has something to do with how values are laid out regularly in memory, similar to an array; but aside from that, I don't know much else. How is SQL parsed in a manner that facilitates all of this?

    Read the article
