Search Results

Search found 13869 results on 555 pages for 'memory dump'.


  • How to implement "char * ftoa(float num)" without the sprintf() library function in C, C++ and Java

    - by SIVA
    Today I appeared for an interview, and one of the questions was to write my own "char * ftoa(float num)" in C, C++ and Java. Yes, I know float numbers follow the IEEE standard in how their memory is laid out, but I don't know how to convert a float to characters using the mantissa and exponent in C, and I have no idea how to solve the problem in C++ or Java. Input to ftoa(): 1.23. Output from ftoa(): "1.23" (in char format). Thanks in advance ...
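    A minimal sketch of one common approach, in C-style C++ (not from the question): split the value into an integer part and a fractional part and emit digits by hand. The fixed precision, the fixed buffer size, and the lack of rounding/NaN/Inf handling are simplifications for illustration only.

        #include <cstdlib>

        /* Reverse a span of characters in place (helper for digit emission). */
        static void reverse_span(char *s, int len) {
            for (int i = 0, j = len - 1; i < j; ++i, --j) {
                char t = s[i]; s[i] = s[j]; s[j] = t;
            }
        }

        /* Naive float-to-string: integer digits, a '.', then PRECISION
           fractional digits. Caller frees the returned buffer. */
        char *ftoa(float num) {
            const int PRECISION = 2;            /* assumed, e.g. 1.23 -> "1.23" */
            char *buf = (char *)std::malloc(64);
            int pos = 0;
            if (num < 0) { buf[pos++] = '-'; num = -num; }
            long ipart = (long)num;             /* integer part */
            float fpart = num - (float)ipart;   /* fractional part */
            int start = pos;
            do {                                /* integer digits, least significant first */
                buf[pos++] = (char)('0' + (ipart % 10));
                ipart /= 10;
            } while (ipart > 0);
            reverse_span(buf + start, pos - start);
            buf[pos++] = '.';
            for (int i = 0; i < PRECISION; ++i) {   /* fractional digits, no rounding */
                fpart *= 10.0f;
                int digit = (int)fpart;
                buf[pos++] = (char)('0' + digit);
                fpart -= (float)digit;
            }
            buf[pos] = '\0';
            return buf;
        }

    With this sketch, ftoa(1.23f) yields "1.23"; a real implementation would also handle rounding, precision selection, and special values (NaN, infinities).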

    Read the article

  • fread of a binary file with dynamic-size strings [C]

    - by Blackbinary
    I've been working on an assignment where I need to read "records" in, write them to a file, and then be able to read/find them later. On each run of the program the user can decide to write a new record or read an old record (either by name or by number). The file is binary; here is the record definition:

        typedef struct {
            char *name;
            char *address;
            short addressLength, nameLength;
            int phoneNumber;
        } employeeRecord;
        employeeRecord record;

    The program stores the structure, then the name, then the address. Name and address are dynamically allocated, which is why it is necessary to read the structure first: it gives the lengths needed to allocate memory for the strings before reading them in. For debugging purposes I have two programs at the moment, one that writes the file and one that reads it.

    My actual problem is this: when I read a file I have written, I read in the structure and print out the phone number to make sure it works (which works fine), and then fread the name (record.nameLength now reports the proper value too). fread, however, does not return a usable name; it comes back blank. I see two possibilities: either I haven't written the name to the file correctly, or I haven't read it in correctly.

    Here is how I write to the file, where fp is the file pointer. record.name holds a proper value, as does record.nameLength, and I write the name including the null terminator (e.g. 'Jack\0'):

        fwrite(&record, sizeof record, 1, fp);
        fwrite(record.name, sizeof(char), record.nameLength, fp);
        fwrite(record.address, sizeof(char), record.addressLength, fp);

    I then close the file. Here is how I read it back:

        fp = fopen("employeeRecord", "r");
        fread(&record, sizeof record, 1, fp);
        printf("Number: %d\n", record.phoneNumber);
        char *nameString = malloc(sizeof(char) * record.nameLength);
        printf("\nName Length: %d", record.nameLength);
        fread(nameString, sizeof(char), record.nameLength, fp);
        printf("\nName: %s", nameString);

    Notice there is some debug output in there (name length and number, both of which are correct), so I know the file opened properly and I can use the name length fine. Why then is my output blank, or a newline, or something like that? (The output is just "Name:" with nothing after it, and the program finishes just fine.) Thanks for the help.
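    For comparison, here is a self-contained round-trip sketch (not the poster's code) of the layout described above: struct first, then the two length-prefixed strings, written and read back with explicit binary modes ("wb"/"rb"). The file name and sample values are made up for illustration.

        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        typedef struct {
            char *name;      /* pointer values stored in the file are meaningless on read */
            char *address;
            short addressLength, nameLength;
            int phoneNumber;
        } employeeRecord;

        int main(void) {
            employeeRecord rec;
            rec.name = (char *)"Jack";
            rec.address = (char *)"12 Example St";
            rec.nameLength = (short)(std::strlen(rec.name) + 1);       /* include '\0' */
            rec.addressLength = (short)(std::strlen(rec.address) + 1);
            rec.phoneNumber = 5551234;

            FILE *fp = std::fopen("employeeRecord", "wb");   /* binary mode for writing */
            std::fwrite(&rec, sizeof rec, 1, fp);
            std::fwrite(rec.name, 1, rec.nameLength, fp);
            std::fwrite(rec.address, 1, rec.addressLength, fp);
            std::fclose(fp);

            employeeRecord in;
            fp = std::fopen("employeeRecord", "rb");         /* binary mode for reading */
            std::fread(&in, sizeof in, 1, fp);
            char *name = (char *)std::malloc(in.nameLength);
            char *address = (char *)std::malloc(in.addressLength);
            std::fread(name, 1, in.nameLength, fp);          /* lengths include the '\0' */
            std::fread(address, 1, in.addressLength, fp);
            std::fclose(fp);

            std::printf("number: %d\nname: %s\naddress: %s\n", in.phoneNumber, name, address);
            std::free(name);
            std::free(address);
            return 0;
        }

    One general note: on Windows the "wb"/"rb" modes matter, because in text mode newline translation and an embedded 0x1A byte can silently corrupt or truncate binary data.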

    Read the article

  • C++ Professional Code Analysis Tools

    - by Voulnet
    Hello there, I would like to ask about the available static and dynamic code analysis tools (free or not) that can be used on C++ applications, especially COM and ActiveX. I am currently using Visual Studio's /analyze compiler option, which is good and all, but I still feel there is a lot more analysis to be done. I'm talking about a C++ application where memory management and code security are of the utmost importance.

    Read the article

  • .NET: Adding files and folders to a Setup project programmatically

    - by Manish
    So, here is the scenario: I want to create an installer that just dumps a few files and folders at a location specified by the user. The problem is that these files have to be picked up from a fixed source folder before the installer is built. Also, these files may change at any time, and then a new version of the installer has to be created again, so this needs to be done programmatically. Also, how can I add some code of my own to setup projects? (I'm new to setup projects.) How? Any ideas/comments appreciated...

    Read the article

  • Version control on large files

    - by Dustin Getz
    We happily use SVN for SCM at work. Currently I've got our binary assets in the same SVN repository as our code. SVN supports very large files (it transmits them in a streaming fashion to keep memory usage sane), but it is slooow. What asset management software do you recommend for about a GB (and growing) worth of assets? We would prefer branching and merging (different assets & config files go to different customers).

    Read the article

  • PDB file from different versions of Visual Studio

    - by m3rLinEz
    I have an old DLL file which was built with VC++ 6. Now I need to investigate a dump file, but I don't have the DLL's PDB available, and the stack trace reported by WinDbg is inaccurate. Is it possible to rebuild the project with a later version of Visual Studio (i.e. 2003, 2005, 2008), have the PDB generated, and use this to map addresses to symbols in the old DLL? Is there something like a VC++ 6.0-compatible mode for building the project? Obtaining VC++ 6 is one option, but it looks like VS 6.0 has already vanished from the MSDN subscriber download page :( Thanks!

    Read the article

  • Is using a Class instance as a Map key a best practice?

    - by Pangea
    I have read somewhere that using Class instances as keys, as below, is not a good idea because they might cause memory leaks. Can someone tell me if that is a valid statement? Or are there any other problems with using it this way?

        Map<Class<?>, String> classToInstanceMap = new HashMap<Class<?>, String>();
        classToInstanceMap.put(String.class, "Test obj");

    Read the article

  • Is there any IDE integration for JBoss AS 6?

    - by Jonathan Frank
    We have switched to JBoss 6 to make it possible to use a wider range of Java EE technologies. We chose JBoss because of its small memory footprint compared to other application servers, so we have no other choice. Do you know of any developer tools that can be integrated with JBoss AS 6? Thanks in advance, Jonathan Frank

    Read the article

  • Producer/consumer for disk I/O

    - by Pedro Magalhaes
    Hi, I have a compressed file on disk that is partitioned into blocks. I read a block from disk, decompress it into memory, and then read the data. Is it possible to create a producer/consumer setup: one thread that fetches compressed blocks from disk and puts them in a queue, and another thread that decompresses and reads the data? Would the performance be better? Thanks!
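    A minimal sketch of that split using standard C++ threads (an assumption; the question does not name a language or threading library). The disk read is simulated by fabricating a block, and decompress_and_process() is a placeholder for whatever codec the real program uses.

        #include <condition_variable>
        #include <cstdio>
        #include <mutex>
        #include <queue>
        #include <thread>
        #include <vector>

        using Block = std::vector<char>;

        std::queue<Block> queue_;
        std::mutex mtx_;
        std::condition_variable not_empty_, not_full_;
        bool done_ = false;
        const size_t kMaxQueued = 8;   // bound the queue so the reader can't race ahead

        void decompress_and_process(const Block &b) {
            // placeholder: real code would inflate the block and consume the data
            std::printf("consumed block of %zu bytes\n", b.size());
        }

        void producer(int blocks) {
            for (int i = 0; i < blocks; ++i) {
                Block b(4096, static_cast<char>(i));   // stand-in for a disk read
                std::unique_lock<std::mutex> lk(mtx_);
                not_full_.wait(lk, [] { return queue_.size() < kMaxQueued; });
                queue_.push(std::move(b));
                not_empty_.notify_one();
            }
            std::lock_guard<std::mutex> lk(mtx_);
            done_ = true;
            not_empty_.notify_all();
        }

        void consumer() {
            for (;;) {
                std::unique_lock<std::mutex> lk(mtx_);
                not_empty_.wait(lk, [] { return !queue_.empty() || done_; });
                if (queue_.empty()) return;            // producer finished
                Block b = std::move(queue_.front());
                queue_.pop();
                not_full_.notify_one();
                lk.unlock();
                decompress_and_process(b);             // CPU work outside the lock
            }
        }

        int main() {
            std::thread p(producer, 32), c(consumer);
            p.join();
            c.join();
            return 0;
        }

    Whether it actually helps depends on whether the workload is I/O-bound or CPU-bound; overlapping the read with the decompression only wins when neither side alone saturates the machine.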

    Read the article

  • Looking for an alternative to cfdump

    - by invertedSpear
    I think I just realized how restrictive my web host is when they wouldn't let me use cfdump. This actually kind of angers me, because really, what harm is dump going to do? Anyway, my question is: has anyone written a cfdump alternative that will output complex data types, or can you link me to a site with a code example? I can't really use CFCs or UDFs either because, guess what, they're blocked too. I'm looking for something simple that I can just paste into my CFML and be happy. It's sad that I used to be able to do this, but I have forgotten a lot of that skill set since I moved into Flex and ActionScript. Oh, and they're running CF7, so no CF8 or 9 tricks ;-) Thanks in advance.

    Read the article

  • Android service killed

    - by Erdal
    I have a Service running in the same process as my Application. Sometimes the Android OS decides to kill my service (probably due to low memory). My question is: does my Application get killed along with the Service? or how does it work exactly? Thanks!

    Read the article

  • Deleting an object

    - by Nima
    Hi, first: when you want to free the memory assigned to an object in C++, which one is preferred, explicitly calling the destructor or using delete?

        Object* object = new Object(...);
        ...
        delete object;

    OR

        object->~Object();

    Second, does the delete operator call the destructor implicitly? Thanks,
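    A small sketch contrasting the two (the Object class here is hypothetical, just to make the lifetime visible): delete runs the destructor and then releases the memory obtained from new, while an explicit destructor call only runs the destructor and is normally needed only with placement new.

        #include <cstdio>
        #include <new>

        struct Object {
            Object()  { std::puts("constructed"); }
            ~Object() { std::puts("destroyed"); }
        };

        int main() {
            // Normal case: delete runs the destructor AND frees the memory
            // that new allocated. This is what you almost always want.
            Object *a = new Object();
            delete a;

            // An explicit destructor call runs ~Object() but does NOT free
            // any allocation; it belongs with placement new, where you
            // manage the storage yourself.
            alignas(Object) unsigned char storage[sizeof(Object)];
            Object *b = new (storage) Object();   // placement new into `storage`
            b->~Object();                         // explicit destructor call
            // no delete here: `storage` is automatic and needs no deallocation
            return 0;
        }

    So yes, delete calls the destructor implicitly; calling the destructor yourself without also releasing the storage (outside a placement-new scenario) leads to leaks or undefined behaviour.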

    Read the article

  • Disabling Xdebug's dumping of caught exceptions

    - by nuqqsa
    By default Xdebug will dump any exception, regardless of whether it is caught or not:

        try {
            throw new Exception();
        } catch (Exception $e) {
        }
        echo 'life goes on';

    With Xdebug enabled and the default settings, this piece of code will actually output something like the following (nicely formatted):

        ( ! ) Exception: in /test.php on line 3
        Call Stack
        #   Time     Memory   Function    Location
        1   0.0003   52596    {main}( )   ../test.php:0
        life goes on

    Is it possible to disable this behaviour and have it dump only the uncaught exceptions? Thanks in advance.

    UPDATE: I'm about to conclude that this is a bug, since xdebug.show_exception_trace is disabled by default yet it doesn't behave as expected (using Xdebug v2.0.5 with PHP 5.2.10 on Ubuntu 9.10).

    Read the article

  • Getting a file's MIME type in Java

    - by Lee Theobald
    I was just wondering how most people fetch a MIME type from a file in Java? So far I've tried two utilities: JMimeMagic and Mime-Util. The first gave me memory exceptions, and the second doesn't close its streams properly. I was just wondering if anyone else has a method/library that they used and that worked correctly?

    Read the article

  • Best free flowchart software?

    - by Click Upvote
    I need to map out a complex algorithm with lots of conditional branches. I need easy-to-use flowchart software, preferably free since I need it for just a one-time use. I would prefer something lightweight which doesn't eat up all the CPU and memory. Any ideas?

    Read the article

  • How is hashCode() calculated in Java?

    - by Jothi
    What value does the hashCode() method return in Java? I read that it is the memory reference of an object, but when I print the hash code of new Integer(1) it's 1, and for the String "a" it's 97, so I'm confused. Is it the ASCII value, or what kind of value is it?

    Read the article

  • [Gdata] GetAuthSubToken returns None

    - by Matt
    Hey guys, I am a little lost on how to get the auth token. Here is the code I am using on the return from authorizing my app:

        client = gdata.service.GDataService()
        gdata.alt.appengine.run_on_appengine(client)
        sessionToken = gdata.auth.extract_auth_sub_token_from_url(self.request.uri)
        client.UpgradeToSessionToken(sessionToken)
        logging.info(client.GetAuthSubToken())

    What gets logged is "None", so that doesn't seem right :-( If I use this:

        temp = client.upgrade_to_session_token(sessionToken)
        logging.info(dump(temp))

    I get this:

        {'scopes': ['http://www.google.com/calendar/feeds/'], 'auth_header': 'AuthSub token=CNKe7drpFRDzp8uVARjD-s-wAg'}

    So I can see that I am getting an AuthSub token, and I guess I could just parse that and grab the token, but that doesn't seem like the way things should work. If I try to use AuthSubTokenInfo I get this:

        Traceback (most recent call last):
          File "/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/google/appengine/ext/webapp/__init__.py", line 507, in __call__
            handler.get(*groups)
          File "controllers/indexController.py", line 47, in get
            logging.info(client.AuthSubTokenInfo())
          File "/Users/matthusby/Dropbox/appengine/projects/FBCal/gdata/service.py", line 938, in AuthSubTokenInfo
            token = self.token_store.find_token(scopes[0])
        TypeError: 'NoneType' object is unsubscriptable

    So it looks like my token_store is not getting filled in correctly; is that something I should be doing myself? Also, I am using gdata 2.0.9. Thanks, Matt

    Read the article

  • Faster integer division when denominator is known?

    - by aaa
    Hi, I am working on a GPU device which has very high integer-division latency, several hundred cycles, and I am looking to optimize the divisions. All divisions are by a denominator from the set {1, 3, 6, 10}; the numerator is a runtime positive value, roughly 32000 or less. Due to memory constraints, a lookup table is not an option. Can you think of alternatives? I have thought of computing floating-point inverses and using those to multiply the numerator. Thanks
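    A minimal sketch of one common alternative (not from the question): replace each division by a multiplication with a precomputed fixed-point reciprocal, q = (x * M) >> S with M = ceil(2^S / d). The magic pairs below were picked so the result is exact for every x in [0, 32767] and every divisor in {1, 3, 6, 10}, and the loop in main verifies that claim on the CPU; on an actual GPU the table would live in constant memory or be hard-coded per divisor.

        #include <cassert>
        #include <cstdint>
        #include <cstdio>

        struct Magic { uint32_t d, M, S; };

        static const Magic kMagic[] = {
            {1,      1,  0},
            {3,  43691, 17},   // ceil(2^17 / 3)
            {6,  21846, 17},   // ceil(2^17 / 6)
            {10, 52429, 19},   // ceil(2^19 / 10)
        };

        static inline uint32_t div_by_magic(uint32_t x, const Magic &m) {
            return (x * m.M) >> m.S;   // one multiply and one shift, no divide
        }

        int main() {
            // exhaustively check the claimed input range against plain division
            for (const Magic &m : kMagic) {
                for (uint32_t x = 0; x < 32768; ++x) {
                    assert(div_by_magic(x, m) == x / m.d);
                }
            }
            std::printf("all magic divisions match plain division\n");
            return 0;
        }

    Compared with multiplying by a precomputed float reciprocal (the idea in the question), the multiply-and-shift form avoids float-to-integer rounding edge cases, at the cost of verifying the magic constants for the required input range.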

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>Looking for a reference to a book or other undeniably authoritative source that gives reasons for when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr>

    I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary sort). Normally, a database would be an OK solution even though the problem does not require a database; e.g., none of the elements of ACID are needed, it is read-only data, updated about every 3-5 years (also requiring other source code changes), it fits in memory, and it can be keyed into via binary search (a tad faster than a db, but speed is not an issue).

    The problem is that this code runs on our enterprise server but is shared with several PC platforms (some disconnected, some use a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform's version of their databases (containing their own copy of the static data). What happens is that with every implementation, every little change, something different goes wrong. There are many other issues as well. I can't even use a flat file, because one mode of running on our enterprise server does not have permission to read files (only databases and, of course, its own literal storage, e.g., an in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts.

    So why don't we just change it? I don't have the administrative ability to force a change. But I'm affected because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers, and d*mmit that makes me mad! The reason neither management nor the designers of the system can see the problem is that they propose a solution that won't work: increase communication; implement more safeguards and standards; etc. But every time, in a different part of the already-pared-down but still multi-step process, a few different diligent, hard-working, top-performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most; people just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with human beings in the loop, and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue, and are not convincible on the matter.

    The reason putting the static data in source code will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of source code already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd-party component that you can't change. So what I have to do is exploit the unsuitableness of the database solution, and do it not using logic, but rather authority.

    I am aware of many reasons, and of posts on this site giving reasons for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

    WHY USE A DATABASE? (instead of a flat file / other) - if you need...
    - Random read / transparent search optimization
    - Advanced / varied / customizable searching and sorting capabilities
    - Transaction / rollback
    - Locks, semaphores
    - Concurrency control / shared users
    - Security
    - 1-many / m-m is easier
    - Easy modification
    - Scalability
    - Load balancing
    - Random updates / inserts / deletes
    - Advanced queries
    - Administrative control of design, etc.
    - SQL / learning curve
    - Debugging / logging
    - Centralized / live backup capabilities
    - Cached queries / develop & cache execution plans
    - Interleaved update/read
    - Referential integrity; avoid redundant/missing/corrupt/out-of-sync data
    - Reporting (from an OLAP or OLTP db) / turnkey generation tools

    [Disadvantages:]
    - Important to get right the first time - professional design - but only because it's meant to last
    - Software & hardware cost
    - Usually over a network, so a speed issue (best vs. best design vs. local = even then a separate process requires marshalling / network layers / inter-process communication)
    - Indices and query processing can stand in the way of simple processing (vs. a flat file)

    WHY USE A FLAT FILE? - if you only need...
    - Sequential row processing only
    - Limited usage: append only (no reading, no master key/update)
    - Only update the record you're reading (fixed-length records only)
    - Too big to fit into memory
    - Local disk / read-ahead network connection
    - Portability / small system
    - Email / cut & paste / store as a document by a novice - simple format
    - Low design learning curve but high cost later

    WHY USE AN IN-MEMORY TABLE (tables, arrays, etc.)? - if you need...
    - Processing a single db/flat-file record that was imported
    - Known size of data
    - Static data, if hardcoding the table
    - Narrow, unchanging use (e.g., one program or proc) - includes a class that will be shared but encapsulates its data manipulation
    - Extreme speed needed / high transaction frequency
    - Random access - but search is dependent on implementation

    Following are some other posts about the topic:
    http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
    http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
    http://stackoverflow.com/questions/2356851/database-vs-flat-files
    http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

    What I'd like to know is whether anybody can recommend a hard, authoritative source containing these reasons. I'm looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. This will have a greater chance of eliciting the change I'm looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!

    Read the article

  • Caching in mmap

    - by myahya
    I am using an mmap call to read from a very big file using simple pointer arithmetic in C++. The problem is that when I read small chunks of data (on the order of KBs) multiple times, each read takes the same amount of time as the previous one. How can I know whether the disk is being accessed to fulfill my request, or whether the request is being fulfilled from main memory (the page cache) on calls after the first one?
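    One way to observe this directly, sketched below for Linux/POSIX (an assumption; the question does not name the platform): mincore() reports which pages of a mapped region are currently resident in the page cache, so a read of a resident page is served from memory while a non-resident page will go to disk.

        #include <fcntl.h>
        #include <sys/mman.h>
        #include <sys/stat.h>
        #include <unistd.h>
        #include <cstdio>
        #include <vector>

        int main(int argc, char **argv) {
            if (argc < 2) { std::fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

            int fd = open(argv[1], O_RDONLY);
            if (fd < 0) { std::perror("open"); return 1; }

            struct stat st;
            if (fstat(fd, &st) < 0) { std::perror("fstat"); return 1; }

            void *base = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
            if (base == MAP_FAILED) { std::perror("mmap"); return 1; }

            long page = sysconf(_SC_PAGESIZE);
            size_t pages = (size_t)((st.st_size + page - 1) / page);
            std::vector<unsigned char> vec(pages);

            // ask the kernel which pages of the mapping are resident right now
            if (mincore(base, st.st_size, vec.data()) == 0) {
                size_t resident = 0;
                for (unsigned char v : vec) resident += (v & 1);
                std::printf("%zu of %zu pages resident in the page cache\n",
                            resident, pages);
            }
            munmap(base, st.st_size);
            close(fd);
            return 0;
        }

    A coarser alternative is to time the reads and watch major page fault counts (e.g. via getrusage), but mincore() answers the residency question directly.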

    Read the article

  • An affordable way to use multiple Delayed::Job queues

    - by NudeCanalTroll
    I have a Ruby on Rails app that needs to process many background jobs simultaneously: anywhere from 5-6 at a time up to 50-60 at a time, depending on the time of day. Right now my app is running on Heroku, which charges $.05/hour per worker, regardless of how much CPU or memory the worker is using. This is costing me a boatload each month... up to $1200/mo. Are there any hosts that will allow me to do what I'm doing for significantly less?

    Read the article

  • Max file size for File.ReadAllLines

    - by user283897
    Hi, I need to read and process a text file. My processing would be easier if I could use the File.ReadAllLines method, but I'm not sure what the maximum size of file is that can be read with this method without resorting to reading in chunks. I understand that it depends on the computer's memory, but are there still any recommendations for an average machine? I would greatly appreciate your fast response. Thanks, Lev

    Read the article

  • Are there any drawbacks to using helper :all in Rails?

    - by Rob Jones
    In Rails, 'helper :all' makes all your helpers 'available' to all your controllers. This is concise and convenient, but does it have any memory and/or performance implications compared to explicitly declaring the helpers that each controller actually needs? It's unclear from the docs whether using it involves 'require'-ing all those files, or whether autoloading is used; I can't tell from the source in the Rails framework docs. Thanks

    Read the article

  • RMagick on BitNami not working

    - by Steven
    I am running a Rails app on BitNami RubyStack, and when I try to upload a picture using file_column I get the error below:

        LoadError (998: Invalid access to memory location. - C:/Program Files (x86)/BitNami RubyStack/ruby/lib/ruby/gems/1.8/gems/rmagick-2.9.0-x86-mswin32/ext/RMagick2.so):
          C:/Program Files (x86)/BitNami RubyStack/ruby/lib/ruby/gems/1.8/gems/rmagick-2.9.0-x86-mswin32/ext/RMagick2.so
          C:/Program Files (x86)/BitNami RubyStack/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:156:in `require'
          C:/Program Files (x86)/BitNami RubyStack/ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:521:in `new_constants_in'

    Read the article
