Search Results

Search found 21089 results on 844 pages for 'virtual memory'.

Page 376/844 | < Previous Page | 372 373 374 375 376 377 378 379 380 381 382 383  | Next Page >

  • Singleton on iPhone Simulator vs Singleton on real Device

    - by Helge Becker
    I am using a Singleton for some shared stuff. In the simulator, the app crashes occasionally. Tracking the crash down shows that the properties of my Singleton became deallocated. Those crashes never happened on a real device. Does the iPhone Simulator handle memory management differently? GC, maybe? I changed the singleton to match this pattern. The iPhone Simulator doesn't crash now, but I am not sure about the memory handling on the real device. I assume that this solution will cause problems. What do you think?

    Read the article

  • Resolving Assemblies, the fuzzy way

    - by David Rutten
    Here's the setup: a pure DotNET class library is loaded by an unmanaged desktop application. The class library acts as a plugin. This plugin loads little baby plugins of its own (all DotNET class libraries), and it does so by reading the dll into memory as a byte stream, then calling

        Assembly asm = Assembly.Load(COFF_Image);

    The problem arises when those little baby plugins have references to other dlls. Since they are loaded from memory rather than directly from the disk, the framework often cannot find these referenced assemblies and is thus incapable of loading them. I can add an AssemblyResolve handler to my project and I can see these referenced assemblies come past. I have a reasonably good idea about where to find these referenced assemblies on the disk, but how can I make sure that the Assembly I load is the correct one? In short, how do I reliably go from the System.ResolveEventArgs.Name field to a dll file path (presuming I have a list of all the folders where this dll could be hiding)?

    Read the article

  • Send Keyboard Input to VMware from C#

    - by Robert
    Hi. I want to send mouse clicks and keyboard keys to a window running a virtual machine such as VMware or VirtualBox. I want to drive it from the host OS, from an application written in C#. I can move and click the mouse, but I can't send keyboard input. I tried with SendKeys but it doesn't work, although it works with every other "normal" window. I think it's related to how VMware or VirtualBox intercepts keyboard events. Any idea?

    Read the article

  • C# generic overloaded method dispatching is ambiguous

    - by sebgod
    Hello, I just hit a situation where a method dispatch was ambiguous and wondered if anyone could explain on what basis the compiler (.NET 4.0.30319) chooses which overload to call:

        interface IfaceA { }

        interface IfaceB<T>
        {
            void Add(IfaceA a);
            T Add(T t);
        }

        class ConcreteA : IfaceA { }

        abstract class BaseClassB<T> : IfaceB<T>
        {
            public virtual T Add(T t) { ... }
            public virtual void Add(IfaceA a) { ... }
        }

        class ConcreteB : BaseClassB<IfaceA>
        {
            // does not override one of the relevant methods
        }

        void code()
        {
            var concreteB = new ConcreteB();
            // it will call void Add(IfaceA a)
            concreteB.Add(new ConcreteA());
        }

    In any case, why does the compiler not warn me, or why does it even compile? Thank you very much for any answers.

    Read the article

  • Replacing objects, handling clones, dealing with write logs

    - by Alix
    Hi everyone, I'm dealing with a problem I can't figure out how to solve, and I'd love to hear some suggestions. [NOTE: I realise I'm asking several questions; however, answers need to take into account all of the issues, so I cannot split this into several questions]

    Here's the deal: I'm implementing a system that underlies user applications and that protects shared objects from concurrent accesses. The application programmer (whose application will run on top of my system) defines such shared objects like this:

        public class MyAtomicObject
        {
            // These are just examples of fields you may want to have in your class.
            public virtual int x { get; set; }
            public virtual List<int> list { get; set; }
            public virtual MyClassA objA { get; set; }
            public virtual MyClassB objB { get; set; }
        }

    As you can see, they declare the fields of their class as auto-generated properties (auto-generated means they don't need to implement get and set). This is so that I can go in and extend their class and implement each get and set myself in order to handle possible concurrent accesses, etc. This is all well and good, but now it starts to get ugly: the application threads run transactions, like this:

    1. The thread signals it's starting a transaction. This means we now need to monitor its accesses to the fields of the atomic objects.
    2. The thread runs its code, possibly accessing fields for reading or writing. If there are accesses for writing, we'll hide them from the other transactions (other threads), and only make them visible in step 3. This is because the transaction may fail and have to roll back (undo) its updates, and in that case we don't want other threads to see its "dirty" data.
    3. The thread signals it wants to commit the transaction. If the commit is successful, the updates it made will now become visible to everyone else. Otherwise, the transaction will abort, the updates will remain invisible, and no one will ever know the transaction was there.

    So basically the concept of a transaction is a series of accesses that appear to have happened atomically, that is, all at the same time, in the same instant, which would be the moment of successful commit. (This is as opposed to its updates becoming visible as it makes them.)

    In order to hide the write accesses in step 2, I clone the accessed field (let's say it's the field list) and put it in the transaction's write log. After that, any time the transaction accesses list, it will actually be accessing the clone in its write log, and not the global copy everyone else sees. This way, any changes it makes will be done to the (invisible) clone, not to the global copy. If in step 3 the commit is successful, the transaction should replace the global copy with the updated list it has in its write log, and then the changes become visible to everyone else at once. It would be something like this:

        myAtomicObject.list = updatedCloneOfListInTheWriteLog;

    Problem #1: possible references to the list. Let's say someone puts a reference to the global list in a dictionary. When I do...

        myAtomicObject.list = updatedCloneOfListInTheWriteLog;

    ...I'm just replacing the reference in the field list, but not the real object (I'm not overwriting the data), so in the dictionary we'll still have a reference to the old version of the list. A possible solution would be to overwrite the data (in the case of a list, empty the global list and add all the elements of the clone). More generically, I would need to copy the fields of one list to the other. I can do this with reflection, but that's not very pretty. Is there any other way to do it?

    Problem #2: even if problem #1 is solved, I still have a similar problem with the clone: the application programmer doesn't know I'm giving him a clone and not the global copy. What if he puts the clone in a dictionary? Then at commit there will be some references to the global copy and some to the clone, when in truth they should all point to the same object. I thought about providing a wrapper object that contains both the cloned list and a pointer to the global copy, but the programmer doesn't know about this wrapper, so they're not going to use the pointer at all. The wrapper would be like this:

        public class Wrapper<T> : T
        {
            // This would be the pointer to the global copy. The local data is
            // contained in whatever fields the wrapper inherits from T.
            private T thisPtr;
        }

    I do need this wrapper for comparisons: if I have a dictionary that has an entry with the global copy as key, and I look it up with the clone, like this:

        dictionary[updatedCloneOfListInTheWriteLog]

    I need it to return the entry, that is, to think that updatedCloneOfListInTheWriteLog and the global copy are the same thing. For this, I can just override Equals, GetHashCode, operator== and operator!=, no problem. However, I still don't know how to solve the case in which the programmer unknowingly inserts a reference to the clone in a dictionary.

    Problem #3: the wrapper must extend the class of the object it wraps (if it's wrapping MyClassA, it must extend MyClassA) so that it's accepted wherever an object of that class (MyClassA) would be accepted. However, that class (MyClassA) may be final. This is pretty horrible :$. Any suggestions? I don't need to use a wrapper, anything you can think of is fine. What I cannot change is the write log (I need to have a write log) and the fact that the programmer doesn't know about the clone.

    I hope I've made some sense. Feel free to ask for more info if something needs some clearing up. Thanks so much!

    Read the article

  • SQLite: How to create an index in an attached DB?

    - by kappa
    I have a problem with adding an index to a memory database attached to the main database.

    1) I open the database (F) from a file
    2) Attach the :memory: (M) database
    3) Create tables in database M
    4) Copy data from F to M

    I would also like to create an index in database M, but don't know how to do that. This code creates the index, but in the F database:

        sQuery = "CREATE INDEX IF NOT EXISTS [INDID] ON [PANEL]([ID] ASC);";

    I tried to add the name qualifier before the table name like this:

        sQuery = "CREATE INDEX IF NOT EXISTS [INDID] ON [M.PANEL]([ID] ASC);";

    but SQLite returns a message that column main.M.PANEL does not exist. What can I do?
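
    A minimal sketch of one approach, not taken from the question (it assumes the SQLite C API and the attachment name M used above): in SQLite the schema qualifier belongs on the index name rather than on the table name, so the statement names the index as M.INDID and leaves PANEL unqualified.

        #include <sqlite3.h>
        #include <cstdio>

        // Create the index inside the attached :memory: database "M".
        void create_index_in_attached_db(sqlite3 *db) {
            const char *sql = "CREATE INDEX IF NOT EXISTS M.INDID ON PANEL([ID] ASC);";
            char *errMsg = nullptr;
            if (sqlite3_exec(db, sql, nullptr, nullptr, &errMsg) != SQLITE_OK) {
                std::fprintf(stderr, "CREATE INDEX failed: %s\n", errMsg);
                sqlite3_free(errMsg);
            }
        }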

    Read the article

  • Capturing stdout within the same process in Python

    - by danben
    I've got a Python script that calls a bunch of functions, each of which writes output to stdout. Sometimes when I run it, I'd like to send the output in an e-mail (along with a generated file). I'd like to know how I can capture the output in memory so I can use the email module to build the e-mail. My ideas so far were:

    - use a memory-mapped file (but it seems like I have to reserve space on disk for this, and I don't know how long the output will be)
    - bypass all this and pipe the output to sendmail (but this may be difficult if I also want to attach the file)

    Read the article

  • C++'s std::string pools, debug builds? std::string and valgrind problems

    - by Den.Jekk
    Hello, I have a problem with many valgrind warnings about possible memory leaks in std::string, like this one:

        120 bytes in 4 blocks are possibly lost in loss record 4,192 of 4,687
           at 0x4A06819: operator new(unsigned long) (vg_replace_malloc.c:230)
           by 0x383B89B8B0: std::string::_Rep::_S_create(unsigned long, unsigned long, std::allocator<char> const&) (in /usr/lib64/libstdc++.so.6.0.8)
           by 0x383B89C3B4: (within /usr/lib64/libstdc++.so.6.0.8)
           by 0x383B89C4A9: std::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string(char const*, unsigned long, std::allocator<char> const&) (in /usr/lib64/libstdc++.so.6.0.8)

    I'm wondering:
    - does std::string (GCC 4.1.2) use any memory pools?
    - if so, is there any way to disable the pools (in the form of a debug build, etc.)?

    Regards, Den

    Read the article

  • Core Data data type for just the date - not including time

    - by Jason
    I am new to Core Data, and it seems like it is a great way to manage the data store. However, I am also very memory-conscious, due to the fact that the iPhone doesn't have that much of it. I was a little surprised to see that the data types are so limited - e.g. there is a Date type which also includes the time, but no Date type for just the date! All the time information takes up precious bytes of memory. If I just wanted an attribute with the date (e.g. 2/15/2010 rather than 2/15/2010 02:34:48), how could I do this? Is it possible?

    Read the article

  • Anyone Know a Great Sparse One Dimensional Array Library in Python?

    - by TheJacobTaylor
    I am working on an algorithm in Python that uses arrays heavily. The arrays are typically sparse and are read from and written to constantly. I am currently using relatively large native arrays and the performance is good, but the memory usage is high (as expected). I would like the array implementation not to waste space for values that are not used, and to allow an index offset other than zero. As an example, if my numbers start at 1,000,000 I would like to be able to index my array starting at 1,000,000 and not be required to waste memory with a million unused values. Array reads and writes need to be fast. Expanding into new territory can be a small delay, but reads and writes should be O(1) if possible. Does anybody know of a library that can do it? Thanks!

    Read the article

  • How to write a DataTable to an Excel workbook at the client side

    - by senthilkumar
    Hi guys, I am currently doing a project in which we need to generate a report from a database. Since my server memory is too low, I'm getting an 'Out of Memory' exception when I write it at the server side. When I instead write directly to an Excel file using an HTTP header marking it as an Excel file, I'm not able to create multiple sheets, and my database table is huge - more than 65536 rows. I saw many solutions using a third-party tool, but I can't use those in mine. If anyone has already worked on this, please give me some direction. I also tried using JavaScript, but for that I would need to use a DataGrid at the server side, and in my project I'm not allowed to do it like this.

    Read the article

  • NHibernate and objects with value-semantics

    - by Groo
    Problem: if I pass a class with value semantics (Equals method overridden) to NHibernate, NHibernate tries to save it to the db even though it just saved an entity equal by value (but not by reference) to the database. What am I doing wrong?

    Here is a simplified example model for my problem: let's say I have a Person entity and a City entity. One thread (producer) is creating new Person objects which belong to a specific existing City, and another thread (consumer) is saving them to a repository (using NHibernate as the DAL). Since there are a lot of objects being flushed at a time, I am using Guid.Comb ids to ensure that each insert is made using a single SQL command. City is an object with value-type semantics (equal by name only -- for this example's purposes only):

        public class City : IEquatable<City>
        {
            public virtual Guid Id { get; private set; }
            public virtual string Name { get; set; }

            public virtual bool Equals(City other)
            {
                if (other == null) return false;
                return this.Name == other.Name;
            }

            public override bool Equals(object obj)
            {
                return Equals(obj as City);
            }

            public override int GetHashCode()
            {
                return this.Name.GetHashCode();
            }
        }

    The Fluent NH mapping is something like:

        public class PersonMap : ClassMap<Person>
        {
            public PersonMap()
            {
                Id(x => x.Id)
                    .GeneratedBy.GuidComb();
                References(x => x.City)
                    .Cascade.SaveUpdate();
            }
        }

        public class CityMap : ClassMap<City>
        {
            public CityMap()
            {
                Id(x => x.Id)
                    .GeneratedBy.GuidComb();
                Map(x => x.Name);
            }
        }

    Right now (with my current NHibernate mapping config), my consumer thread maintains a dictionary of cities and replaces their references in incoming Person objects (otherwise NHibernate will see a new, non-cached City object and try to save it as well), and I need to do it for every produced Person object. Since I have implemented the City class to behave like a value type, I hoped that NHibernate would compare Cities by value and not try to save them each time -- i.e. I would only need to do a lookup once per session and not care about them anymore. Is this possible, and if yes, what am I doing wrong here?

    Read the article

  • Read random lines from huge CSV file in Python

    - by jbssm
    I have this quite big CSV file (15 GB) and I need to read about 1 million random lines from it. As far as I can see - and implement - the CSV utility in Python only allows iterating sequentially over the file. It's very memory-consuming to read the whole file into memory in order to do some random choosing, and it's very time-consuming to go through the whole file discarding some values and choosing others, so is there any way to choose some random lines from the CSV file and read only those lines?

    I tried without success:

        import csv
        with open('linear_e_LAN2A_F_0_435keV.csv') as file:
            reader = csv.reader(file)
            print reader[someRandomInteger]

    A sample of the CSV file:

        331.093,329.735
        251.188,249.994
        374.468,373.782
        295.643,295.159
        83.9058,0
        380.709,116.221
        352.238,351.891
        183.809,182.615
        257.277,201.302
        61.4598,40.7106

    Read the article

  • NGINX/PHP downloading instead of executing

    - by Travis D
    I have an NGINX server with fastcgi/PHP running on it. I need to add userdirs to it, but I can't get PHP to execute the files - it just asks me if I want to download them. It does work without the userdir (e.g. it works on physibots.info/hugs.php, but not physibots.info/~kisses/hugs.php). Any help is greatly appreciated. Config:

        server {
            listen 80;
            server_name physibots.info;
            access_log /home/virtual/physibots.info/logs/access.log;
            root /home/virtual/physibots.info/public_html;

            location ~ ^/~(.+?)(/.*)?\.php$ {
                fastcgi_param SCRIPT_FILENAME /home/$1/public_html$fastcgi_script_name;
                fastcgi_pass unix:/tmp/php.socket;
            }

            location ~ ^/~(.+?)(/.*)?$ {
                alias /home/$1/public_html$2;
                autoindex on;
            }

            location ~ \.php$ {
                try_files $uri /error.html/$uri?null;
                fastcgi_pass unix:/tmp/php.socket;
            }
        }

    Read the article

  • C++ conditional compilation

    - by Shaown
    I have the following code snippet:

        #ifdef DO_LOG
        #define log(p) record(p)
        #else
        #define log(p)
        #endif

        void record(char *data){
            .....
            .....
        }

    Now, if I call log("hello world") in my code and DO_LOG isn't defined, will the line be compiled? In other words, will it eat up the memory for the string "hello world"? P.S. There are a lot of record calls in the program and it is memory sensitive, so is there any other way to conditionally compile so that it only depends on the #define DO_LOG? Thanks in advance.
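
    A minimal sketch of what typically happens, not taken from the question: with DO_LOG undefined, the preprocessor replaces each call with nothing before compilation proper, so the "hello world" literal never reaches the compiler and occupies no memory in the binary. Expanding to ((void)0) instead of nothing is a common variant that keeps the macro usable as a statement; note that any side effects in the argument are discarded as well.

        #ifdef DO_LOG
        #define log(p) record(p)
        #else
        #define log(p) ((void)0)   // the argument, including any string literal, vanishes at preprocessing time
        #endif

        void record(const char *data);   // real implementation only needed when DO_LOG is defined

        void example() {
            log("hello world");          // with DO_LOG undefined this line compiles to a no-op
        }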

    Read the article

  • Type Declaration - Pointer Asterisk Position

    - by sahs
    Hello, in C++, the following means "allocate memory for an int pointer":

        int* number;

    So the asterisk is part of the variable type; without it, that would mean something else (that's why I usually don't separate the asterisk from the variable type). Then what is the reason the asterisk is considered something else, instead of being part of the type? For example, it seems better if the following meant "allocate memory for two int pointers":

        int* number1, number2;

    Am I wrong?
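
    A short sketch of the usual explanation, added here for illustration: C declaration syntax binds the asterisk to each declarator (the variable name), not to the base type, so each pointer needs its own asterisk. A type alias makes the pointer genuinely part of the type name and avoids the surprise.

        int *number1, *number2;       // two int pointers: the '*' belongs to each name

        using int_ptr = int*;         // or: typedef int* int_ptr;
        int_ptr p1, p2;               // both p1 and p2 are int*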

    Read the article

  • How to return a string literal from a function

    - by skydoor
    Hi, I am always confused about returning a string literal or a string from a function. I was told that there might be a memory leak because you don't know when the memory will be deleted. For example, in the code below, how should foo() be implemented so that the output of the code is "Hello World"?

        void foo ( ) // you can add parameters here.
        {
        }

        int main ()
        {
            char *c;
            foo ( );
            printf ("%s",c);
            return 0;
        }

    Also, if the return type of foo() is not void but you can return char*, what should it be?
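
    A hedged sketch of how this is commonly answered, not from the original post (it uses the parameter the question says we may add): a string literal has static storage duration, so handing out a pointer to it neither leaks nor dangles; only pointers to local buffers must not be returned.

        #include <cstdio>

        const char *hello() {            // char* return variant: the literal lives in static storage
            return "Hello World";        // valid for the whole program, nothing to free
        }

        void foo(const char **out) {     // void variant: pass the pointer back through a parameter
            *out = "Hello World";
        }

        int main() {
            const char *c = nullptr;
            foo(&c);
            std::printf("%s\n", c);      // prints "Hello World"
            return 0;
        }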

    Read the article

  • Can my app arrange a gdb breakpoint or watch?

    - by Larry Gritz
    Is there a way for my code to be instrumented to insert a breakpoint or watch on a memory location that will be honored by gdb? (And presumably have no effect when gdb is not attached.) I know how to do such things as gdb commands within the gdb session, but for certain types of debugging it would be really handy to do it "programmatically", if you know what I mean -- for example, the bug only happens under a particular circumstance, not in any of the first 11,024 times the crashing routine is called, or the first 43,028,503 times that memory location is modified, so setting a simple breakpoint on the routine or watchpoint on the variable is not helpful -- it's all false positives. I'm concerned mostly about Linux, but I'm curious whether similar solutions exist for OS X (or Windows, though obviously not with gdb).
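
    A minimal Linux-only sketch of one common approach, added as an illustration (the helper names are made up): raise SIGTRAP only when a tracer is attached, which gdb treats like a breakpoint, and detect attachment by reading the TracerPid field of /proc/self/status so the call is a no-op otherwise. The counting condition from the question ("only on the 11,025th call") then becomes an ordinary if around the helper.

        #include <csignal>
        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        // True if a debugger (or any ptrace-based tracer) is attached to this process.
        static bool debugger_attached() {
            std::FILE *f = std::fopen("/proc/self/status", "r");
            if (!f) return false;
            char line[256];
            bool attached = false;
            while (std::fgets(line, sizeof line, f)) {
                if (std::strncmp(line, "TracerPid:", 10) == 0) {
                    attached = std::atoi(line + 10) != 0;
                    break;
                }
            }
            std::fclose(f);
            return attached;
        }

        // Call this where you would like gdb to stop; it does nothing when gdb is absent.
        inline void maybe_break() {
            if (debugger_attached())
                std::raise(SIGTRAP);   // gdb stops here as if a breakpoint had been hit
        }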

    Read the article

  • Side effects of reordering columns in PostgreSQL

    - by Summer
    I sometimes re-order the columns in my Postgres DB. Since Postgres can only add columns at the end of tables, I end up re-ordering by adding new columns at the end of the table, setting them equal to existing columns, and then dropping the original columns. My question is: what does PostgreSQL do with the memory that's freed by dropped columns? Does it automatically re-use the memory, so a single record consumes the same amount of space as it did beforehand? But that would require a re-write of the whole table, so to avoid that, does it just keep a bunch of blank space around in each record? Thanks! ~S

    Read the article

  • Speed-up of a read-only MyISAM table

    - by Ozzy
    We have a large MyISAM table that is used to archive old data. This archiving is performed every month, and except on these occasions data is never written to the table. Is there any way to "tell" MySQL that this table is read-only, so that MySQL might optimize the performance of reads from this table? I've looked at the MEMORY storage engine, but the problem is that this table is so large that it would take a large portion of the server's memory, which I don't want. Hope my question is clear enough; I'm a novice when it comes to db administration, so any input or suggestions are welcome.

    Read the article

  • Difference between Cloud and Virtualization

    - by Akash Kava
    Ops: This does not belong on ServerFault because it focuses on programming architecture. I have the following questions regarding the differences between Cloud and Virtualization.

    1. How is Cloud different from Virtualization?
    2. I tried to find out the pricing of Rackspace, Amazon and all similar cloud providers, and I found that our current 6 dedicated servers came out cheaper than their pricing. So how can one claim the cloud is cheaper? Is it cheaper only in comparison to normal hosting?
    3. We reorganized our infrastructure in a virtual environment to reduce our configuration overhead at times of failure, and we did not have to rewrite any piece of code that was already written for the earlier setup. So moving to virtualization does not require any reprogramming. But the cloud is absolutely different and will require entire reprogramming, right? Is it really worth recoding when our current IT costs are 3-4 times lower than cloud hosting, including RAID backups and all sorts of clustering for high availability?
    4. A new programming architecture means new overheads of training staff, new methods of testing and new deployment schemes; does the cloud's promise of "on demand resource usage" justify that?
    5. Our current development architecture is simple server-side ASP.NET WebServices with no local context, and on the client side Flex/Silverlight, which offers a pretty good REST architecture and is highly scalable. How does the cloud differ from this REST model of deployment?
    6. On storage, SQL Server or MySQL offer pretty good replication and high availability, so what is the advantage of the cloud?
    7. Data guarantee: one of our vendors, hosting some other customer's app on a cloud (one of the most used), lost an entire (virtual) hard disk and an entire module in the first 6 months. A second provider said it is your duty to take backups; fine, I agree, but no provider gives an SLA for data guarantee, they give 99% uptime. However, in most business apps uptime is less important than data integrity. In our 10 years of dedicated hosting experience we have had only one hard disk crash. This makes me a little skeptical about going to the cloud and losing control over data. And I feel it's just a big marketing buzz to sell virtualization in a different form.
    8. Size of data: currently all providers charge very heavily for large data. If you are hosting only below 100GB the cloud can be a good alternative, but I think virtual servers and dedicated servers above 100GB to a few TBs are still cheaper. Why would I want to pay so much for the cloud when there is no data guarantee and nothing is said about redundancy?

    (I wish SO had something for spell check for Internet Explorer, sorry for wrong spellings in my post)

    Read the article

  • thread management in nbody code of cuda-sdk

    - by xnov
    When I read the nbody code in the CUDA SDK, I went through some lines of the code and found that it is a little bit different from their paper in GPU Gems 3, "Fast N-Body Simulation with CUDA". My questions are: first, why is blockIdx.x still involved in loading memory from global to shared memory, as written in the following code?

        for (int tile = blockIdx.y; tile < numTiles + blockIdx.y; tile++)
        {
            sharedPos[threadIdx.x+blockDim.x*threadIdx.y] =
                multithreadBodies ?
                positions[WRAP(blockIdx.x + q * tile + threadIdx.y, gridDim.x) * p + threadIdx.x] : //this line
                positions[WRAP(blockIdx.x + tile, gridDim.x) * p + threadIdx.x];                    //this line

            __syncthreads();

            // This is the "tile_calculation" function from the GPUG3 article.
            acc = gravitation(bodyPos, acc);

            __syncthreads();
        }

    Isn't it supposed to be like this according to the paper? I wonder why:

        sharedPos[threadIdx.x+blockDim.x*threadIdx.y] =
            multithreadBodies ?
            positions[WRAP(q * tile + threadIdx.y, gridDim.x) * p + threadIdx.x] :
            positions[WRAP(tile, gridDim.x) * p + threadIdx.x];

    Second, in the multiple-threads-per-body case, why is threadIdx.x still involved? Isn't it supposed to be a fixed value, or not involved at all, because the sum is only over threadIdx.y?

        if (multithreadBodies)
        {
            SX_SUM(threadIdx.x, threadIdx.y).x = acc.x; //this line
            SX_SUM(threadIdx.x, threadIdx.y).y = acc.y; //this line
            SX_SUM(threadIdx.x, threadIdx.y).z = acc.z; //this line

            __syncthreads();

            // Save the result in global memory for the integration step
            if (threadIdx.y == 0)
            {
                for (int i = 1; i < blockDim.y; i++)
                {
                    acc.x += SX_SUM(threadIdx.x,i).x; //this line
                    acc.y += SX_SUM(threadIdx.x,i).y; //this line
                    acc.z += SX_SUM(threadIdx.x,i).z; //this line
                }
            }
        }

    Can anyone explain this to me? Is it some kind of optimization for faster code?

    Read the article

  • Advice on "Invalid Pointer Operation" when using complex records

    - by Xaz
    Env: Delphi 2007

    <Justification>I tend to use complex records quite frequently as they offer almost all of the advantages of classes but with much simpler handling.</Justification>

    Anyhoo, one particularly complex record I have just implemented is trashing memory (later leading to an "Invalid Pointer Operation" error). This is an example of the memory-trashing code:

        sSignature := gProfiles.Profile[_stPrimary].Signature.Formatted(True);

    The second time I call it I get "Invalid Pointer Operation". It works OK if I call it like this:

        AProfile := gProfiles.Profile[_stPrimary];
        ASignature := AProfile.Signature;
        sSignature := ASignature.Formatted(True);

    Background code:

        gProfiles: TProfiles;

        TProfiles = Record
        private
          FPrimaryProfileID: Integer;
          FCachedProfile: TProfile;
          ...
        public
          < much code removed >
          property Profile[ProfileType: TProfileType]: TProfile Read GetProfile;
        end;

        function TProfiles.GetProfile(ProfileType: TProfileType): TProfile;
        begin
          case ProfileType of
            _stPrimary : Result := ProfileByID(FPrimaryProfileID);
            ...
          end;
        end;

        function TProfiles.ProfileByID(iID: Integer): TProfile;
        begin
          <snip>
          if LoadProfileOfID(iID, FCachedProfile) then
          begin
            Result := FCachedProfile;
          end
          else
            ...
        end;

        TProfile = Record
        private
          ...
        public
          ...
          Signature: TSignature;
          ...
        end;

        TSignature = Record
        private
        public
          PlainTextFormat : string;
          HTMLFormat : string;
          // The text to insert into a message when using this profile
          function Formatted(bHTML: boolean): string;
        end;

        function TSignature.Formatted(bHTML: boolean): string;
        begin
          if bHTML then
            result := HTMLFormat
          else
            result := PlainTextFormat;
          < SNIP MUCH CODE >
        end;

    OK, so I have a record within a record within a record, which is approaching Inception-level confusion and I'm the first to admit is not really a good model. Clearly I am going to have to restructure it. What I would like from you gurus is a better understanding of why it is trashing the memory (something to do with the string object that is created then freed...) so that I can avoid making these kinds of errors in future. Thanks

    Read the article

  • are runtime linking library globals shared among plugins loaded with dlopen?

    - by conejoroy
    I've got a C++ program that links at runtime with, let's say, mylib.so. Then the same program uses dlopen()/dlsym() to load a function from myplugin.so, a dynamic library that in turn has dependencies on mylib.so. My question is: will the program AND the function in the plugin access the same globals defined in mylib.so, in the same memory area reserved for the program, or will each be assigned different, unrelated copies in its own memory space? If the latter is the default behaviour, is it possible to change that? Thanks in advance =)!
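
    A brief sketch of how this usually plays out, added for illustration (plugin_entry is a hypothetical symbol name): since both the host program and myplugin.so declare a dependency on mylib.so, the dynamic loader normally maps mylib.so once per process and resolves both users against that single copy, so its globals are shared. The dlopen flags control symbol visibility for later loads rather than duplication; RTLD_LOCAL (the default) keeps the plugin's own symbols private, while RTLD_GLOBAL exports them.

        #include <dlfcn.h>
        #include <cstdio>

        int main() {
            // myplugin.so pulls in mylib.so; an already-loaded copy is reused, not duplicated.
            void *handle = dlopen("./myplugin.so", RTLD_NOW);
            if (!handle) {
                std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
                return 1;
            }
            // Hypothetical entry point exported by the plugin.
            void (*entry)() = reinterpret_cast<void (*)()>(dlsym(handle, "plugin_entry"));
            if (entry) entry();   // sees the same mylib.so globals as the host program
            dlclose(handle);
            return 0;
        }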

    Read the article
