Search Results

Search found 12588 results on 504 pages for 'memory allocation'.


  • How can one use multi-threading in PHP applications?

    - by Steve Obbayi
    Is there a realistic way of implementing a multi-threaded model in PHP, whether genuinely or just by simulating it? Some time back it was suggested that you can force the operating system to load another instance of the PHP executable to handle other simultaneous processes. The problem with this is that when the PHP code finishes executing, the PHP instance remains in memory, because there is no way to kill it from within PHP. So if you are simulating several threads, you can imagine what is going to happen. I am still looking for a way multi-threading can be done or simulated effectively from within PHP. Any ideas?

    Read the article

  • Problem with increment in inline ARM assembly

    - by tech74
    Hi, I have the following bit of inline ARM assembly. It works in a debug build but crashes in a release build on iPhone SDK 3.1. The problem is the add instructions where I am incrementing the address of the C variables output and x by 4 bytes, which is supposed to increment by the size of a float. I think that at some stage when I increment I am overwriting something; can anyone say what the best way to handle this is? Thanks. The C code that the asm is replacing (sum, output and x are all floats):

        for(int i = 0; i < count; i++)
            sum += output[i] * (*x++);

        asm volatile(
            ".align 4 \n\t"
            "mov r4,%3 \n\t"
            "flds s0,[%0] \n\t"
            "0: \n\t"
            "flds s1,[%2] \n\t"
            //"add %3,%3,#4 \n\t"
            "flds s2,[%1] \n\t"
            //"add %2,%2,#4 \n\t"
            "subs r4,r4, #1 \n\t"
            "fmacs s0, s1, s2 \n\t"
            "bne 0b \n\t"
            "fsts s0,[%0] \n\t"
            :
            : "r" (&sum), "r" (output), "r" (x), "r" (count)
            : "r0","r4","cc", "memory", "s0","s1","s2"
        );

    Read the article

  • Do you know of a C dictionary that supports COW transactions?

    - by Tim Post
    I'm looking for a key-value dictionary library written in C that supports a theoretically unlimited number of cheap transactions. I'd like to have one dictionary in memory, with hundreds of threads starting transactions, possibly modifying the dictionary, ending (completing) the transaction or potentially aborting the transaction. Only 50% of the time will these threads actually modify the dictionary. Most dictionary transaction implementations that I've seen always copy whenever a transaction is started, instead of copying on write. Given the expected size ( 1GB) of the dictionary, I'm hoping to find something that COWs only when something is actually changed during a transaction. I'm also hoping for something that is packaged by most major GNU/Linux distributions. Any suggestions or links are very much appreciated.
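
    As an illustration of the copy-on-write behaviour being asked for (not a library recommendation), here is a minimal C++ sketch in which a transaction shares the current table and clones it only on its first write; the Table and Txn names are made up, and a real implementation would share structure (e.g. a persistent tree) instead of deep-copying:

        #include <map>
        #include <memory>
        #include <string>

        using Table = std::map<std::string, std::string>;

        struct Txn {
            std::shared_ptr<const Table> base;   // shared, read-only snapshot
            std::shared_ptr<Table> copy;         // created lazily, on first write

            explicit Txn(std::shared_ptr<const Table> t) : base(std::move(t)) {}

            const std::string* get(const std::string& k) const {
                const Table& t = copy ? *copy : *base;
                auto it = t.find(k);
                return it == t.end() ? nullptr : &it->second;
            }

            void put(const std::string& k, const std::string& v) {
                if (!copy) copy = std::make_shared<Table>(*base);   // the COW step
                (*copy)[k] = v;
            }
        };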

    Read the article

  • Hibernate Hql find result size for paginator

    - by KCore
    Hi, I need to add a paginator to my Hibernate application. I applied it to some of my database operations, which I perform using Criteria, by setting Projection.count(); this works fine. But when I query with HQL, I can't seem to find an efficient way to get the result count. If I do query.list().size() it takes a lot of time, and I think Hibernate loads all the objects into memory. Can anyone please suggest an efficient method to retrieve the result count when using HQL?

    Read the article

  • Using UIImageViews for 'pages' in an iPhone/iPad storybook app?

    - by outtoplayinc
    I'm new to iPhone programming, and well, what seems obvious to me may seem silly to a seasoned coder. I did a few 'switching views' tutorials on YouTube, and basically they seem to work nicely for adding pages to a storybook-type app: you add a UIViewController and associated view for each page. My question is: would this become insanely slow, or a memory hog, if I continued this method for, say, 35+ pages? Each page would also have a sound file associated with it that plays narration when the page loads and stops when we leave. Basically, think of a PowerPoint-type app with sound, possibly animated image elements, and next and back buttons. I'm probably thinking of this very simplistically, but that's where my experience is at for the moment. Any insight or tips as to better or more efficient ways to proceed would be greatly appreciated.

    Read the article

  • Microsoft Enterprise Logging Application Block - Reading Log File

    - by Or A
    Hi, I'm using the MS Logging Application Block for logging my application events into a file called app-trace.log, which is located in the c:\temp folder. I'm trying to find the best way to read this file at runtime and display it when the user asks for it. Now I have two issues: It seems that this kind of feature is not supported by the framework, hence I have to write this reader myself. Am I missing something here? Is there any better way of getting this data (without buffering it in memory or saving it into another file)? If I'm left with the only alternative of implementing the reader myself, then when I try to do:

        System.IO.FileStream fs = new System.IO.FileStream(@"c:\temp\app-trace.log", FileMode.Open, FileAccess.Read);

    I get "File being used by another process"; probably the file is locked by the application block. Is there any way to access and read it anyhow? Thanks

    Read the article

  • Replacing the Import Table in a PE file with standard LoadLibrary calls

    - by user308368
    Hello. I have an executable (PE) file that loads a DLL as represented in the import table; let's say PEFile.exe imports Modules.dll. My question is: how can I remove Modules.dll's import descriptor from the imports and do its work via LoadLibrary, without relying on the import table and without destroying the file? My bigger problem is that I could not understand exactly how the import mechanism works. After the loader reads the information it needs to perform the imports, I believe it uses the LoadLibrary and GetProcAddress APIs, but I couldn't understand what it does with the pointers it gets: it puts them somewhere in memory, and then what, just calls them? All the papers I found on the net explain the structure of the import table, but I didn't find one that explains how it really works and gets used. I hope you can understand my broken English. Thank you!
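
    Conceptually, the loader walks each import descriptor, maps the DLL (the equivalent of LoadLibrary), resolves every imported name (the equivalent of GetProcAddress), and writes the resulting function pointers into the Import Address Table, which the code then calls through. Doing the same work by hand for a single export might look roughly like this minimal sketch (Modules.dll comes from the question; DoWork and its signature are hypothetical):

        #include <windows.h>

        typedef int (WINAPI *DoWorkFn)(int);

        int CallDoWork(int arg)
        {
            HMODULE mod = LoadLibraryA("Modules.dll");              // what the loader would have done
            if (!mod)
                return -1;
            DoWorkFn fn = (DoWorkFn)GetProcAddress(mod, "DoWork");  // resolve the export by name
            int result = fn ? fn(arg) : -1;                         // call through the pointer
            FreeLibrary(mod);
            return result;
        }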

    Read the article

  • SQL Server Compact Edition 3.5 performance

    - by Wili
    I am using SQL Server CE 3.5 SP1 in one of my client applications. When a user loads the program and starts using it, performance is fine. If the user lets the program sit idle for a while, it takes a considerable amount of time (10 or more seconds) for the program to respond. Every time the user asks for a new screen, a call is made to the SQL CE database to get the data for that screen. It seems like the hard drive may be going to sleep and then when the database is accessed, the hard drive has to wake back up. Is it possible to load the entire database into memory and work from that? Are there any other suggestions on how to increase performance?

    Read the article

  • Why does C++ allow variable length arrays that aren't dynamically allocated?

    - by Maulrus
    I'm relatively new to C++, and from the beginning it's been drilled into me that you can't do something like

        int x;
        cin >> x;
        int array[x];

    Instead, you must use dynamic memory. However, I recently discovered that the above will compile (though I get a -pedantic warning saying it's forbidden by ISO C++). I know that it's obviously a bad idea to do it if it's not allowed by the standard, but I previously didn't even know this was possible. My question is: why does g++ allow variable-length arrays that aren't dynamically allocated if they're not allowed by the standard? Also, if it's possible for the compiler to do it, why isn't it in the standard?
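
    For reference, a minimal sketch of the standard-conforming alternatives the question alludes to, using dynamic allocation:

        #include <iostream>
        #include <vector>

        int main()
        {
            int x;
            std::cin >> x;

            std::vector<int> v(x);    // size chosen at run time, storage on the heap
            int* raw = new int[x];    // manual equivalent; must be released with delete[]
            // ... use v[i] or raw[i] ...
            delete[] raw;
        }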

    Read the article

  • Reading files with Java

    - by sikas
    I would like to know how I can read a file byte by byte and then perform some operation every n bytes. For example: say I have a file of size 50 bytes and I want to divide it into blocks of n bytes each. Each block is then sent to a function for some operations to be done on those bytes. The blocks are to be created during the read process and sent to the function when a block reaches n bytes, so that I don't use much memory storing all the blocks. I want the output of the function to be written/appended to a new file. This is what I've got so far for reading, yet I don't know if it is right:

        fc = new JFileChooser();
        File f = fc.getSelectedFile();
        FileInputStream in = new FileInputStream(f);
        byte[] b = new byte[16];
        in.read(b);

    I haven't done anything yet for the write process.

    Read the article

  • Clojure for a lisp illiterate

    - by dbyrne
    I am a lifelong object-oriented programmer. My job is primarily java development, but I have experience in a number of languages. Ruby gave me my first real taste of functional programming. I loved the features Ruby borrowed from the functional paradigm such as closures and continuations. Eventually, I graduated to Scala. This has been a great way to gradually learn to approach non-trivial problems in a functional manner. Now I am interested in Clojure. I know all the sexy features that make it enticing (software transactional memory, macros, etc.), but I just can't get used to "thinking in lisp". I've seen Rich Hickey's screencasts aimed at java programmers, but they are geared towards explaining language features and not approaching real world problems. I am looking for any advice or resources which have made this transition easier for others.

    Read the article

  • Perl: getting handle for stdin to be used in cgi-bin script

    - by Daniel
    Using Perl 5.8.8 on a Windows server, I am writing a Perl CGI script that uses Archive::Zip to create, on the fly, a zip that will be downloaded by users: no issues on that side. The zip is managed in memory; no physical file is written to disk using temporary files or anything like that. I am wondering how to allow the zip to be downloaded by writing the stream to the browser. What I have done is something like:

        binmode(STDOUT);
        $zip->writeToFileHandle(*STDOUT, 0);

    but I feel insecure about this way of getting STDOUT as a file handle. Is it correct and robust? Is there a better way? Many thanks for your advice.

    Read the article

  • Which class will be instantiated

    - by Michael
    Say I have two subclasses of UIViewController, class A and class B. In the Main nib file an object represents class A, and it is set to load from the Secondary nib file. The owner of the Secondary nib is of class B. The question is: from which class will the object in the Main nib file be instantiated once the nib files are unarchived into memory? The reason this question arises is that, if such a reference to an external NIB file is present, I have to take care myself to ensure that the first nib's object and the second nib's owner are the same class. Please correct me if my statement is wrong.

    Read the article

  • What is a cross-platform way to get the current directory?

    - by rubenvb
    I need a cross-platform way to get the current working directory (yes, getcwd does what I want). I thought this might do the trick:

        #ifdef _WIN32
        #include <direct.h>
        #define getcwd _getcwd // stupid MSFT "deprecation" warning
        #elif
        #include <unistd.h>
        #endif
        #include <string>
        #include <iostream>
        using namespace std;

        int main()
        {
            string s_cwd(getcwd(NULL,0));
            cout << "CWD is: " << s_cwd << endl;
        }

    I got this from reading: _getcwd at MSDN, getcwd at Kernel.org, getcwd at Apple.com. There should be no memory leaks, and it should work on a Mac as well, correct?
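
    A minimal sketch of a variant that sidesteps the allocation question by letting getcwd fill a caller-owned buffer (it assumes the path fits in the fixed buffer, and uses #else where the snippet above has a bare #elif):

        #include <string>
        #ifdef _WIN32
          #include <direct.h>
          #define getcwd _getcwd
        #else
          #include <unistd.h>
        #endif

        std::string current_dir()
        {
            char buf[4096];                       // assumes the path fits; grow or retry in real code
            if (getcwd(buf, sizeof(buf)) == NULL)
                return std::string();             // empty string on failure
            return std::string(buf);
        }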

    Read the article

  • DataSet XML export is empty

    - by Shaine
    I've got an in-memory DataSet with a couple of tables that is populated in code. Data-bound grids on the GUI show the table contents without a problem. Then I try to export the dataset to XML:

        ds.WriteXml(fdSave.FileName, XmlWriteMode.WriteSchema);

    and get empty XML (with a couple of lines regarding dataset names but without any tables). If I export a table directly I get all the data, but the dataset name is obviously wrong:

        ds.Fields.WriteXml(fdSave.FileName, XmlWriteMode.WriteSchema);

    What am I missing? Is there any reasonable way to write the whole dataset to a file?

    Read the article

  • Is the use of union in this matrix class completely safe?

    - by identitycrisisuk
    Unions aren't something I've used that often, and after looking at a few other questions on them here it seems like there is almost always some kind of caveat where they might not work, e.g. structs possibly having unexpected padding or endian differences. I came across this in a math library I'm using, though, and I wondered if it is a totally safe usage. I assume that multidimensional arrays don't have any extra padding, and since the type is the same for both definitions they are guaranteed to take up exactly the same amount of memory?

        template<typename T>
        class Matrix44T {
            ...
            union {
                T M[16];
                T m[4][4];
            } m;
        };

    Are there any downsides to this setup? Would the order of definition make any difference to how this works?
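
    As a sketch of how the size assumption can at least be checked at compile time (C++11 static_assert; the names mirror the question):

        template<typename T>
        struct Matrix44T {
            union {
                T M[16];
                T m[4][4];
            } m;
        };

        // Both array forms must occupy identical storage, and the union must add no padding.
        static_assert(sizeof(float[4][4]) == sizeof(float[16]),
                      "flat and two-dimensional arrays of the same element type match in size");
        static_assert(sizeof(Matrix44T<float>) == 16 * sizeof(float),
                      "the union adds no padding for float");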

    Read the article

  • How do I read UTF-8 characters via a pointer?

    - by Jen
    Suppose I have UTF-8 content stored in memory; how do I read the characters using a pointer? I presume I need to watch for the 8th bit indicating a multi-byte character, but how exactly do I turn the sequence into a valid Unicode character? Also, is wchar_t the proper type to store a single Unicode character? This is what I have in mind:

        wchar_t readNextChar (char** p)
        {
            char ch = *p++;
            if (ch & 128) {
                // This is a multi-byte character, what do I do now?
                // char chNext = *p++;
                // ... but how do I assemble the Unicode character? ...
            }
            ...
        }
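
    For the assembly step, a minimal decoding sketch is shown below. It assumes well-formed UTF-8 (no validation of continuation bytes or overlong forms) and returns the code point as a 32-bit integer rather than wchar_t, since wchar_t is only 16 bits wide on some platforms:

        #include <cstdint>

        // Decode one UTF-8 sequence starting at *p, advance *p past it,
        // and return the Unicode code point.
        uint32_t readNextChar(const char** p)
        {
            const unsigned char* s = reinterpret_cast<const unsigned char*>(*p);
            uint32_t cp;
            int extra;                                    // number of continuation bytes

            if      (s[0] < 0x80) { cp = s[0];        extra = 0; }   // 0xxxxxxx
            else if (s[0] < 0xE0) { cp = s[0] & 0x1F; extra = 1; }   // 110xxxxx
            else if (s[0] < 0xF0) { cp = s[0] & 0x0F; extra = 2; }   // 1110xxxx
            else                  { cp = s[0] & 0x07; extra = 3; }   // 11110xxx

            for (int i = 1; i <= extra; ++i)
                cp = (cp << 6) | (s[i] & 0x3F);           // each trailing byte carries 6 payload bits

            *p += 1 + extra;
            return cp;
        }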

    Read the article

  • Can I force MySQL to output results before query is completed?

    - by Gordon Royle
    I have a large MySQL table (about 750 million rows) and I just want to extract a couple of columns:

        SELECT id, delid FROM tbl_name;

    No joins or selection criteria or anything. There is an index on both fields (separately). In principle, it could just start reading the table and spitting out the values immediately, but in practice the whole system just chews up memory and basically grinds to a halt. It seems like the entire query is being executed and the output stored somewhere before ANY output is produced... I've searched on unbuffering, turning off caches etc, but just cannot find the answer. (mysqldump is almost what I want except it dumps the whole table - but at least it just starts producing output immediately)

    Read the article

  • Save NSCache Contents to Disk

    - by Cory Imdieke
    I'm writing an app that needs to keep an in-memory cache of a bunch of objects, but that doesn't get out of hand so I'm planning on using NSCache to store it all. Looks like it will take care of purging and such for me, which is fantastic. I'd also like to persist the cache between launches, so I need to write the cache data to disk. Is there an easy way to save the NSCache contents to a plist or something? Are there perhaps better ways to accomplish this using something other than NSCache? This app will be on the iPhone, so I'll need only classes that are available in iOS 4+ and not just OS X. Thanks!

    Read the article

  • Will Emacs --batch run from cron hang when it requires user input?

    - by J Spen
    I have a job in crontab that runs emacs --batch, but if the file is currently open it prompts (s, p, q) for (steal, quit, etc.). It's fine not to run the script while the file is being edited, but I want to make sure the cron-run script gets killed so it's not sitting in the background taking up memory. I have the output set to go to a log file, so I can see this happening, but there is no way to tell whether the script was terminated even though it asked for user input. Does cron terminate these scripts, and how can I check the PID to make sure?

    Read the article

  • Cache bandwidth per tick for modern CPUs

    - by osgx
    Hello. What is the speed of cache access for modern CPUs? How many bytes can be read from or written to memory every processor clock tick by an Intel P4, Core2, Core i7, or AMD chip? Please answer with both theoretical numbers (width of the load/store unit and its throughput in uOPs/tick) and practical numbers (even memcpy speed tests or the STREAM benchmark), if any. PS: this question relates to the maximal rate of load/store instructions in assembler. There is a theoretical rate of loading (all instructions per tick being the widest loads), but the processor can deliver only part of that, a practical limit on loading.
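
    A crude sketch of the kind of practical measurement mentioned above (a memcpy throughput test; real measurements need pinned threads, warm-up runs, buffer sizes matched to each cache level, and many repetitions):

        #include <chrono>
        #include <cstddef>
        #include <cstdio>
        #include <cstring>
        #include <vector>

        int main()
        {
            const std::size_t bytes = 1 << 20;            // 1 MiB buffer; vary to target L1/L2/L3
            const int reps = 2000;
            std::vector<char> src(bytes, 1), dst(bytes);

            auto t0 = std::chrono::steady_clock::now();
            for (int i = 0; i < reps; ++i)
                std::memcpy(dst.data(), src.data(), bytes);
            auto t1 = std::chrono::steady_clock::now();

            double secs = std::chrono::duration<double>(t1 - t0).count();
            std::printf("copy bandwidth: %.2f GB/s\n", (double)bytes * reps / secs / 1e9);
            return (int)dst[0];                           // observe dst so the copies are not optimized away
        }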

    Read the article

  • Preparing for the next C++ standard

    - by Neil Butterworth
    The spate of questions regarding BOOST_FOREACH prompts me to ask users of the Boost library what (if anything) they are doing to prepare their code for portability to the proposed new C++ standard (aka C++0x). For example, do you write code like this if you use shared_ptr:

        #ifdef CPPOX
        #include <memory>
        #else
        #include "boost/shared_ptr.hpp"
        #endif

    There is also the namespace issue: in the future, shared_ptr will be part of the std namespace. How do you deal with that? I'm interested in these questions because I've decided to bite the bullet and start learning Boost seriously, and I'd like to use best practices in my code. Not exactly a flood of answers; does this mean it's a non-issue? Anyway, thanks to those who replied; I'm accepting jalf's answer because I like being advised to do nothing!
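
    One common way to paper over the namespace difference is a project-local using-declaration behind the same macro, so client code names one alias rather than boost:: or std:: directly; a sketch only, with a hypothetical mylib namespace:

        #ifdef CPPOX
          #include <memory>
          namespace mylib { using std::shared_ptr; }
        #else
          #include "boost/shared_ptr.hpp"
          namespace mylib { using boost::shared_ptr; }
        #endif

        // Client code then writes mylib::shared_ptr<Widget> everywhere.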

    Read the article

  • Improving the performance of an nHibernate Data Access Layer.

    - by Amitabh
    I am working on improving the performance of the data access layer of an existing ASP.NET web application. The scenario: It's a web-based application in ASP.NET. The data access layer is built using NHibernate 1.2 and exposed as a WCF service. The entity classes are marked with DataContract. Lazy loading is not used, and because of the eager fetching of relations a huge number of database objects are loaded into memory. The number of hits to the database is also high; for example, I profiled the application using NHProfiler and there were about 50+ SQL calls to load one of the entity objects by its primary key. I also cannot change the code much, as it's an existing live application with no NUnit test cases at all. Can I please get some suggestions here?

    Read the article

  • Object for storing strings captured from prints

    - by evg
        class MyWriter:
            def __init__(self, stdout):
                self.stdout = stdout
                self.dumps = []

            def write(self, text):
                self.stdout.write(smart_unicode(text).encode('cp1251'))
                self.dumps.append(text)

            def close(self):
                self.stdout.close()

        writer = MyWriter(sys.stdout)
        save = sys.stdout
        sys.stdout = writer

    I use the self.dumps list to store the data captured from prints. Is there a more convenient object for storing string lines in memory? Ideally I want to dump it to one big string, which I can get from the code above like this: "\n".join(self.dumps). Maybe it's better to just concatenate strings, i.e. self.dumps += text?

    Read the article

  • Backbone JS central model that all views can use

    - by chchrist
    I am new to Backbone.js and RequireJS. I use RequireJS to organize my Backbone code into modules, though I don't know if that has any bearing on what I want. I want a central model that all my views have access to; they should be able to get and set values on it. I don't want to use it as each view's own model, though. I need to keep search options, user status (logged in/out), etc. in memory. Any ideas? EDIT: Maybe the answer is here? Share resources across different AMD modules

    Read the article
