Search Results

Search found 18119 results on 725 pages for 'shared memory'.

  • iPhone SDK - Leaking Memory with performSelectorInBackground

    - by Steblo
    Hi. Maybe someone can help me with this strange thing: If a user clicks on a button, a new UITableView is pushed to the navigation controller. This new view is doing some database querying which takes some time. Therefore I wanted to do the loading in background. What works WITHOUT leaking memory (but freezes the screen until everything is done): WorkController *tmp=[[WorkController alloc] initWithStyle:UITableViewStyleGrouped]; self.workController=tmp; [tmp release]; [self.workController loadList]; // Does the DB Query [self.workController pushViewController:self.workController animated:YES]; Now I tried to do this: // Show Wait indicator .... WorkController *tmp=[[WorkController alloc] initWithStyle:UITableViewStyleGrouped]; self.workController=tmp; [tmp release]; [self performSelectorInBackground:@selector(getController) withObject:nil]; } -(void) getController { [self.workController loadList]; // Does the DB Query [self.navigationController pushViewController:self.workController animated:YES]; } This also works but is leaking memory and I don't know why ! Can you help ? By the way - is it possible for an App to get into AppStore with a small memory leak ? Or will this be checked first of all ? Thanks in advance !

  • C#: Accessing PerformanceCounters for the ".NET CLR Memory category"

    - by Mads Ravn
    I'm trying to access the performance counters located in the ".NET CLR Memory" category through C# using the PerformanceCounter class. However, I cannot instantiate the counter with what I would expect to be the correct category/counter name: new PerformanceCounter(".NET CLR Memory", "# bytes in all heaps", Process.GetCurrentProcess().ProcessName); I tried looping through categories and counters using the following code: string[] categories = PerformanceCounterCategory.GetCategories().Select(c => c.CategoryName).OrderBy(s => s).ToArray(); string toInspect = string.Join(",\r\n", categories); System.Text.StringBuilder interestingToInspect = new System.Text.StringBuilder(); string[] interestingCategories = categories.Where(s => s.StartsWith(".NET") || s.Contains("Memory")).ToArray(); foreach (string interestingCategory in interestingCategories) { PerformanceCounterCategory cat = new PerformanceCounterCategory(interestingCategory); foreach (PerformanceCounter counter in cat.GetCounters()) { interestingToInspect.AppendLine(interestingCategory + ":" + counter.CounterName); } } toInspect = interestingToInspect.ToString(); but could not find anything that seems to match. Is it not possible to observe these values from within the CLR, or am I doing something wrong? The environment, should it matter, is .NET 4.0 running on a 64-bit Windows 7 box.

  • Free Memory Occupied by std::list, std::vector, std::map, etc.

    - by Graviton
    Coming from a C# background, I have only the vaguest idea of memory management in C++ -- all I know is that I have to free memory manually. As a result my C++ code is written in such a way that objects of type std::vector, std::list and std::map are freely instantiated and used, but never freed. I didn't realize this until I was almost done with my program; now my code consists of the following kinds of patterns: struct Point_2 { double x; double y; }; struct Point_3 { double x; double y; double z; }; list<list<Point_2>> Computation::ComputationJob(list<Point_3> pts3D, vector<Point_2> vectors) { map<Point_2, double> pt2DMap=ConstructPointMap(pts3D); vector<Point_2> vectorList = ConstructVectors(vectors); list<list<Point_2>> faceList2D=ConstructPoints(vectorList, pt2DMap); return faceList2D; } My question is: must I free every single one of these container uses (in the above example, that would mean freeing pt2DMap, vectorList and faceList2D)? That would be very tedious! I might just as well rewrite my Computation class so that it is less prone to memory leaks. Any idea how to fix this?
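
    A minimal C++ sketch of the point at issue (names are illustrative, not taken from the original code): standard containers such as std::vector, std::list and std::map own their storage and release it in their destructors, so locals like the ones above need no explicit freeing -- only memory obtained with new/new[] or malloc has to be released by hand.

        #include <list>
        #include <map>
        #include <vector>

        struct Point2 { double x; double y; };

        std::list<std::list<Point2>> computationJob()
        {
            std::map<int, double>        pt2DMap;     // owns its nodes
            std::vector<Point2>          vectorList;  // owns its buffer
            std::list<std::list<Point2>> faceList2D;  // handed back to the caller

            // ... fill the containers ...

            return faceList2D;
        }   // pt2DMap and vectorList are destroyed here; their memory is freed automatically

        int main()
        {
            std::list<std::list<Point2>> faces = computationJob();
            return 0;
        }   // faces is destroyed here as well -- nothing to delete by hand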

  • Boost Unit testing memory reuse causing tests that should fail to pass

    - by Knyphe
    We have started using the boost unit testing library for a large existing code base, and I have run into some trouble with unit tests incorrectly passing, seemingly due to the reuse of memory on the stack. Here is my situation: BOOST_AUTO_TEST_CASE(test_select_base_instantiation_default) { SelectBase selectBase(); BOOST_CHECK_EQUAL( selectBase.getSelectType(), false); BOOST_CHECK_EQUAL( selectBase.getTypeName(_T("")); BOOST_CHECK_EQUAL( selectBase.getEntityType(), -1); BOOST_CHECK_EQUAL( selectBase.getDataPos(), -1); } BOOST_AUTO_TEST_CASE(test_select_base_instantiation_default) { SelectBase selectBase(true, _T("abc")); BOOST_CHECK_EQUAL( selectBase.getSelectType(), false); BOOST_CHECK_EQUAL( selectBase.getTypeName(_T("abc")); BOOST_CHECK_EQUAL( selectBase.getEntityType(), -1); BOOST_CHECK_EQUAL( selectBase.getDataPos(), -1); } The first test passed correctly, initializing all the variables. The constructor in the second unit test did not correctly set EntityType or DataPosition, but the unit test passed. I was able to get it to fail by placing some variables on the stack in the second test, like so: BOOST_AUTO_TEST_CASE(test_select_base_instantiation_default) { int a, b; SelectBase selectBase(true, _T("abc")); BOOST_CHECK_EQUAL( selectBase.getSelectType(), false); BOOST_CHECK_EQUAL( selectBase.getTypeName(_T("abc")); BOOST_CHECK_EQUAL( selectBase.getEntityType(), -1); BOOST_CHECK_EQUAL( selectBase.getDataPos(), -1); } If there is only one int, only the dataPos CHECK_EQUAL fails, but if there are two, both EntityType and DataPos fail, so it seems pretty clear that this is an issue with the variables being created on the same stack memory or some such. Is there a good way to clear the memory between each unit test, or am I potentially using the library incorrectly or writing bad tests? Any help would be appreciated.
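
    One plausible reading, offered only as a guess since the SelectBase source is not shown: if the two-argument constructor leaves members such as the entity type and data position uninitialized, they simply contain whatever bytes are already on the stack -- often the -1 values left behind by the previous test -- so the checks pass by accident. A C++ sketch of that fix, with invented member names:

        #include <string>

        class SelectBase {
        public:
            SelectBase()
                : selectType(false), typeName(), entityType(-1), dataPos(-1) {}

            SelectBase(bool type, const std::wstring &name)
                : selectType(type), typeName(name),
                  entityType(-1), dataPos(-1) {}   // initialize *every* member

            bool getSelectType() const { return selectType; }
            const std::wstring &getTypeName() const { return typeName; }
            int getEntityType() const { return entityType; }
            int getDataPos() const { return dataPos; }

        private:
            bool selectType;
            std::wstring typeName;
            int entityType;
            int dataPos;
        };

    As an aside, "SelectBase selectBase();" in the first test declares a function rather than constructing an object (the "most vexing parse"), so the snippets as quoted probably differ slightly from the code that was actually compiled.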

  • Linux C debugging library to detect memory corruptions

    - by calandoa
    When working some time ago on an embedded system with a simple MMU, I used to program this MMU dynamically to detect memory corruptions. For instance, at some moment at runtime, the foo variable was overwritten with some unexpected data (probably by a dangling pointer or whatever). So I added some additional debugging code: at init, the memory used by foo was marked as a forbidden region to the MMU; each time foo was accessed on purpose, access to the region was allowed just before and forbidden again just after; an MMU IRQ handler was added to dump the bus master and the address responsible for the violation. This was actually a kind of watchpoint, but handled directly by the code itself. Now, I would like to reuse the same trick, but on an x86 platform. The problem is that I am very far from understanding how the MMU works on this platform and how it is used by Linux, but I wonder if any library/tool/system call already exists to deal with this problem. Note that I am aware that various tools such as Valgrind or GDB exist to manage memory problems, but as far as I know, none of these tools can be dynamically reconfigured by the debugged code. I am mainly interested in user space under Linux, but any info on kernel mode or on Windows is also welcome!
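
    On Linux the closest user-space counterpart to that trick is mprotect(2) plus a SIGSEGV handler: the watched variable is placed in a page of its own, the page is set to PROT_NONE while access is forbidden, and the handler reports (and can re-allow) the faulting access. A minimal sketch, assuming the usual glibc/POSIX APIs:

        #include <csignal>
        #include <cstdio>
        #include <sys/mman.h>
        #include <unistd.h>

        static long  pagesize;
        static void *guarded;                       // page holding the watched variable

        static void on_violation(int, siginfo_t *si, void *)
        {
            fprintf(stderr, "forbidden access at %p\n", si->si_addr);
            // re-allow access so the faulting instruction can be restarted
            mprotect(guarded, pagesize, PROT_READ | PROT_WRITE);
        }

        int main()
        {
            pagesize = sysconf(_SC_PAGESIZE);
            guarded  = mmap(nullptr, pagesize, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

            struct sigaction sa = {};
            sa.sa_flags     = SA_SIGINFO;
            sa.sa_sigaction = on_violation;
            sigaction(SIGSEGV, &sa, nullptr);

            int *foo = static_cast<int *>(guarded);
            *foo = 42;                                  // intended access: allowed
            mprotect(guarded, pagesize, PROT_NONE);     // forbid the region
            *foo = 43;                                  // unexpected access: trapped above
            return 0;
        }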

  • Memory leaks while using array of double

    - by Gacek
    I have a piece of code that operates on large arrays of double (containing at least about 6000 elements) and executes several hundred times (usually 800). When I use a standard loop, like this: double[] singleRow = new double[6000]; int maxI = 800; for(int i=0; i<maxI; i++) { singleRow = someObject.producesOutput(); //... // do something with singleRow // ... } the memory usage rises by about 40MB (from 40MB at the beginning of the loop to 80MB at the end). When I force the garbage collector to run at every iteration, the memory usage stays at the level of 40MB (the rise is insignificant): double[] singleRow = new double[6000]; int maxI = 800; for(int i=0; i<maxI; i++) { singleRow = someObject.producesOutput(); //... // do something with singleRow // ... GC.Collect() } But the execution time is 3 times longer (and that is crucial)! How can I force C# to reuse the same area of memory instead of allocating new ones? Note: I have access to the code of the someObject class, so I can change it if needed.

  • Usage of Visual Memory Leak Detector

    - by Yan Cheng CHEOK
    I found a very interesting memory leak detector for Visual C++: http://www.codeproject.com/KB/applications/visualleakdetector.aspx I tried it out, but cannot make it detect a memory leak in the following code. I am using MS Visual Studio 2008. Is there any step I have missed? #include "stdafx.h" #include "vld.h" #include <iostream> void fun() { new int[1000]; } int _tmain(int argc, _TCHAR* argv[]) { fun(); std::cout << "lead?" << std::endl; getchar(); return 0; } The output when I run in debug mode is: ... ... 'Test.exe': Loaded 'C:\WINDOWS\WinSxS\x86_Microsoft.VC80.CRT_1fc8b3b9a1e18e3b_8.0.50727.4053_x-ww_e6967989\msvcr80.dll', Symbols loaded. 'Test.exe': Loaded 'C:\WINDOWS\system32\msvcrt.dll', Symbols loaded (source information stripped). 'Test.exe': Loaded 'C:\WINDOWS\WinSxS\x86_Microsoft.VC90.DebugCRT_1fc8b3b9a1e18e3b_9.0.30729.1_x-ww_f863c71f\msvcp90d.dll', Symbols loaded. 'Test.exe': Loaded 'C:\Program Files\Visual Leak Detector\bin\dbghelp.dll', Symbols loaded (source information stripped). Visual Leak Detector Version 1.9d installed. No memory leaks detected. Visual Leak Detector is now exiting. The program '[5468] Test.exe: Native' has exited with code 0 (0x0).

  • iPhone: Low memory crash...

    - by MacTouch
    Once again I'm hunting memory leaks and other crazy mistakes in my code. :) I have a cache with frequently used files (images, data records etc. with a TTL of about one week and a size limited cache (100MB)). There are sometimes more then 15000 files in a directory. On application exit the cache writes an control file with the current cache size along with other useful information. If the applications crashes for some reason (sh.. happens sometimes) I have in such case to calculate the size of all files on application start to make sure I know the cache size. My app crashes at this point because of low memory and I have no clue why. Memory leak detector does not show any leaks at all. I do not see any too. What's wrong with the code below? Is there any other fast way to calculate the total size of all files within a directory on iPhone? Maybe without to enumerate the whole contents of the directory? The code is executed on the main thread. NSUInteger result = 0; NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; NSDirectoryEnumerator *dirEnum = [[[NSFileManager defaultManager] enumeratorAtPath:path] retain]; int i = 0; while ([dirEnum nextObject]) { NSDictionary *attributes = [dirEnum fileAttributes]; NSNumber* fileSize = [attributes objectForKey:NSFileSize]; result += [fileSize unsignedIntValue]; if (++i % 500 == 0) { // I tried lower values too [pool drain]; } } [dirEnum release]; dirEnum = nil; [pool release]; pool = nil; Thanks, MacTouch

  • FileConnection Blackberry memory usage

    - by Dean
    Hello, I'm writing a blackberry application that reads ints and strings out of a database. This is my first time dealing with reading/writing on the blackberry, so forgive me if this is a dumb question. The database file I'm reading is only about 4kB I open the file with the following code fconn = (FileConnection) Connector.open("file_path_here", Connector.READ); if(fconn.exists()==false){fconn.close();return;} is = fconn.openDataInputStream(); while(!eof){ //etc... } is.close(); fconn.close(); The problem is, this code appears to be eating a LOT of memory. Using breakpoints and the "Memory Statistics" view, I determined the following: calling "Connector.open" creates 71 objects and changes "RAM Bytes in use" by 5376 calling "fconn.openDataInputStream();" increases RAM usage by a whopping 75920 Is this normal? Or am I doing something wrong? And how can I fix this? 75MB of RAM is a LOT of memory to waste on a handheld device, especially when the file I'm reading is only 4kB and I haven't even begun reading any data! How is this even possible?

  • Activator.CreateInstance uses a huge amount of memory

    - by Marco
    I have been playing a bit with Silverlight and am trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and initial loading acceptable I have built some helper classes to load in the background all pages and user controls that might be used later on. On Silverlight 3.0 everything ran smoothly without any problem. Switching to SL 4.0, I have noticed that when the process gets to creating the instances of the user controls using Activator.CreateInstance, the layout freezes unexpectedly for a minute and sometimes more. Looking at the task manager, the memory usage of IE jumps from 50MB to 400MB and sometimes to 1.5GB. If the process doesn't take that much, the layout is rendered properly and the memory falls back to 50MB. Otherwise everything crashes due to an out-of-memory exception. Has anybody encountered the same problem? Or does anybody have a solution to this tricky issue?

  • Proper Memory Management for Objective-C Method

    - by Justin
    Hi, I'm programming an iPhone app and I had a question about memory management in one of my methods. I'm still a little new to managing memory manually, so I'm sorry if this question seems elementary. Below is a method designed to allow a number pad to place buttons in a label based on their tag, this way I don't need to make a method for each button. The method works fine, I'm just wondering if I'm responsible for releasing any of the variables I make in the function. The application crashes if I try to release any of the variables, so I'm a little confused about my responsibility regarding memory. Here's the method: FYI the variable firstValue is my label, it's the only variable not declared in the method. -(IBAction)inputNumbersFromButtons:(id)sender { UIButton *placeHolderButton = [[UIButton alloc] init]; placeHolderButton = sender; NSString *placeHolderString = [[NSString alloc] init]; placeHolderString = [placeHolderString stringByAppendingString:firstValue.text]; NSString *addThisNumber = [[NSString alloc] init]; int i = placeHolderButton.tag; addThisNumber = [NSString stringWithFormat:@"%i", i]; NSString *newLabelText = [[NSString alloc] init]; newLabelText = [placeHolderString stringByAppendingString:addThisNumber]; [firstValue setText:newLabelText]; //[placeHolderButton release]; //[placeHolderString release]; //[addThisNumber release]; //[newLabelText release]; } The application works fine with those last four lines commented out, but it seems to me like I should be releasing these variables here. If I'm wrong about that I'd welcome a quick explanation about when it's necessary to release variables declared in functions and when it's not. Thanks.

  • Memory Management with returning char* function

    - by RageD
    Hello all, Today, without much thought, I wrote a simple function returning a char* based on a switch statement over given enum values. This, however, made me wonder how I could release that memory. What I did was something like this: char* func() { char* retval = new char; // Switch blah blah - will always return some value other than NULL since default: return retval; } I apologize if this is a naive question, but what is the best way to release the memory, seeing as I cannot delete the memory after the return and, obviously, if I delete it before, I won't have a returned value? What I was thinking of as a viable solution was something like this: void func(char*& in) { // blah blah switch make it do something } int main() { char* val = new char; func(val); // Do whatever with func (normally func within a data structure with specific enum set so could run multiple times to change output) val = NULL; delete val; val = NULL; return 0; } Would anyone have any more insight on this and/or an explanation of which to use? Regards, Dennis M.
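
    A small C++ sketch of two common alternatives (function names are invented): returning std::string removes the ownership question entirely; if a raw pointer really has to be returned, ownership passes to the caller and the allocation and deallocation forms must match (new[] paired with delete[]). Note, incidentally, that the "val = NULL; delete val;" sequence in the second snippet deletes a null pointer rather than the allocated char, so that allocation would be leaked.

        #include <algorithm>
        #include <string>

        // Preferred: the caller never manages memory at all.
        std::string name_of(int value)
        {
            switch (value) {
                case 0:  return "zero";
                case 1:  return "one";
                default: return "unknown";
            }
        }

        // If a raw pointer is really required, ownership passes to the caller,
        // and new[] here must be matched by delete[] there.
        char *name_copy(int value)
        {
            const std::string s = name_of(value);
            char *ret = new char[s.size() + 1];
            std::copy(s.begin(), s.end(), ret);
            ret[s.size()] = '\0';
            return ret;            // caller: delete[] ptr; when done
        }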

  • PHP memory exhausted when running through thousands of records

    - by James Skidmore
    I'm running the following code over a set of 5,000 results. It's failing due to the memory being exhausted. foreach ($data as $key => $report) { $data[$key]['data'] = unserialize($report['serialized_values']); } I know I can up the memory limit, but I'd like to run this without a problem instead. I'm not going to be able to keep upping the memory forever. EDIT The $data is in this format: [1] => Array ( [0] => 127654619178790249 [report_id] => 127654619178790249 [1] => 1 [user_id] => 1 [2] => 2010-12-31 19:43:24 [sent_on] => 2010-12-31 19:43:24 [3] => [fax_trans_id] => [4] => 1234567890 [fax_to_nums] => 1234567890 [5] => ' long html string here', [html_content] => 'long html string here', [6] => 'serialization_string_here', [serialized_values] => 'serialization_string_here', [7] => 70 [id] => 70 )

  • Does ReleaseStringUTF do more than free memory?

    - by Bayou Bob
    Consider the following C code segments. Segment 1: char * getSomeString(JNIEnv *env, jstring jstr) { char * retString; retString = (*env)->GetStringUTFChars(env, jstr, NULL); return retString; } void useSomeString(JNIEnv *env, jobject jobj, char *mName) { jclass cl = (*env)->GetObjectClass(env, jobj); jmethodId mId = (*env)->GetMethodID(env, cl, mName, "()Ljava/lang/String;"); jstring jstr = (*env)->CallObjectMethod(env, obj, id, NULL); char * myString = getSomeString(env, jstr); /* ... use myString without modifing it */ free(myString); } Because myString is freed in useSomeString, I do not think I am creating a memory leak; however, I am not sure. The JNI spec specifically requires the use of ReleaseStringUTFChars. Since I am getting a C style 'char *' pointer from GetStringUTFChars, I believe the memory reference exists on the C stack and not in the JAVA heap so it is not in danger of being Garbage Collected; however, I am not sure. I know that changing getSomeString as follows would be safer (and probably preferable). Segment 2: char * getSomeString(JNIEnv *env, jstring jstr) { char * retString; char * intermedString; intermedString = (*env)->GetStringUTFChars(env, jstr, NULL); retString = strdup(intermedString); (*env)->ReleaseStringUTFChars(env, jstr, intermedString); return retString; } Because of our 'process' I need to build an argument on why getSomeString in Segment 2 is preferable to Segment 1. Is anyone aware of any documentation or references which detail the behavior of GetStringUTFChars and ReleaseStringUTFChars in relation to where memory is allocated or what (if any) additional bookkeeping is done (i.e. local Reference Pointer to the Java Heap being created, etc). What are the specific consequences of ignoring that bookkeeping. Thanks in advance.
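
    For reference, a hedged sketch (written with the C++ JNI syntax, but using the same calls as above) of why Segment 2 is preferable: the buffer returned by GetStringUTFChars is owned and tracked by the JVM, not obtained with malloc(), so it must be handed back with ReleaseStringUTFChars rather than free(); copying it first and releasing immediately keeps ownership simple for the caller.

        #include <cstring>
        #include <jni.h>

        char *getSomeString(JNIEnv *env, jstring jstr)
        {
            const char *utf = env->GetStringUTFChars(jstr, nullptr);  // JVM-owned buffer
            if (utf == nullptr)
                return nullptr;                 // OutOfMemoryError already pending

            char *copy = strdup(utf);           // plain malloc'd copy owned by our code
            env->ReleaseStringUTFChars(jstr, utf);   // required: give the buffer back
            return copy;                        // caller may free() this one
        }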

  • PHP memory: how much is too much?

    - by Rob
    I'm currently re-writing my site using my own framework (it's very simple and does exactly what I need, i've no need for something like Zend or Cake PHP). I've done alot of work in making sure everything is cached properly, caching pages in files so avoid sql queries and generally limiting the number of sql queries. Overall it looks like it's very speedy. The average time taken for the front page (taken over 100 times) is 0.046152 microseconds. But one thing i'm not sure about is whether i've done enough to reduce php memory usage. The only time i've ever encountered problems with it is when uploading large files. Using memory_get_peak_usage(TRUE), which I THINK returns the highest amount of memory used whilst the script has been running, the average (taken over 100 times) is 1572864 bytes. Is that good? I realise you don't know what it is i'm doing (it's rather simple, get the 10 latest articles, the comment count for each, get the user controls, popular tags in the sidebar etc). But would you be at all worried with a script using that sort of memory getting hit 50,000 times a day? Or once every second at peak times? I realise that this is a very open ended question. Hopefully you can understand that it's a bit of a stab in the dark and i'm really just looking for some re-assurance that it's not going to die horribly come re-launch day.

  • Error -12 hibernation image. Not enough free memory (sometimes)

    - by user99306
    I am having a problem with hibernation in Ubuntu 12.10; it worked fine in 12.04. When I try to hibernate, it sometimes appears to start hibernating, then throws up an error and returns me to the desktop. The error I get is this: PM: Creating hibernation image: PM: Need to copy 375021 pages PM: Normal pages needed: 117957 + 1024, available pages: 110205 PM: Not enough free memory PM: Error -12 creating hibernation image Now I understand what the error means, but it doesn't make sense. My swap file is 5GB and is seldom used, as I have 4GB of RAM. I know it is recommended to have 1.5 times as much swap as RAM etc., but space doesn't seem to be the problem, despite the error. For example, I rarely use more than about 25%-30% of my RAM, yet I still have the problem above. Moreover, on a fresh boot and login, with no programs open and using only about 12% of RAM, I can get the above error - yet at other times I can hibernate whilst using 25% of my RAM. Also, if I keep trying to hibernate, it eventually works after throwing up the above error four or five times. A successful hibernation looks like this: PM: Creating hibernation image: PM: Need to copy 295511 pages PM: Normal pages needed: 95534 + 1024, available pages: 132627 Is there some setting that I need to tweak or something I need to do before hibernating to avoid this problem? I guess the question could be better phrased as: is there some way of safely flushing the buffers and the cache before hibernation, other than attempting to hibernate several times until it is successful? Thanks in advance.

  • Static vs. dynamic memory allocation - lots of constant objects, only small part of them used at runtime

    - by k29
    Here are two options: Option 1: enum QuizCategory { CATEGORY_1(new MyCollection<Question>() .add(Question.QUESTION_A) .add(Question.QUESTION_B) .add...), CATEGORY_2(new MyCollection<Question>() .add(Question.QUESTION_B) .add(Question.QUESTION_C) .add...), ... ; public MyCollection<Question> collection; private QuizCategory(MyCollection<Question> collection) { this.collection = collection; } public Question getRandom() { return collection.getRandomQuestion(); } } Option 2: enum QuizCategory2 { CATEGORY_1 { @Override protected MyCollection<Question> populateWithQuestions() { return new MyCollection<Question>() .add(Question.QUESTION_A) .add(Question.QUESTION_B) .add...; } }, CATEGORY_2 { @Override protected MyCollection<Question> populateWithQuestions() { return new MyCollection<Question>() .add(Question.QUESTION_B) .add(Question.QUESTION_C) .add...; } }; public Question getRandom() { MyCollection<Question> collection = populateWithQuestions(); return collection.getRandomQuestion(); } protected abstract MyCollection<Question> populateWithQuestions(); } There will be around 1000 categories, each containing 10 - 300 questions (100 on average). At runtime typically only 10 categories and 30 questions will be used. Each question is itself an enum constant (with its fields and methods). I'm trying to decide between those two options in the mobile application context. I haven't done any measurements since I have yet to write the questions and would like to gather more information before committing to one or another option. As far as I understand: (a) Option 1 will perform better since there will be no need to populate the collection and then garbage-collect the questions; (b) Option 1 will require extra memory: 1000 categories x 100 questions x 4 bytes for each reference = 400 Kb, which is not significant. So I'm leaning to Option 1, but just wondered if I'm correct in my assumptions and not missing something important? Perhaps someone has faced a similar dilemma? Or perhaps it doesn't actually matter that much?

  • MapViewOfFile shared between 32bit and 64bit processes

    - by MK
    Hi, I'm trying to use MapViewOfFile in a 64 bit process on a file that is already mapped to memory of another 32 bit process. It fails and gives me an "access denied" error. Is this a known Windows limitation or am I doing something wrong? Same code works fine with 2 32bit processes. The code sort of looks like this: hMapFile = OpenFileMapping(FILE_MAP_ALL_ACCESS, FALSE, szShmName); if (NULL == hMapFile) { /* failed to open - create new (this happens in the 32 bit app) */ SECURITY_ATTRIBUTES sa; sa.nLength = sizeof(SECURITY_ATTRIBUTES); sa.bInheritHandle = FALSE; /* give access to members of administrators group */ BOOL success = ConvertStringSecurityDescriptorToSecurityDescriptor( "D:(A;OICI;GA;;;BA)", SDDL_REVISION_1, &(sa.lpSecurityDescriptor), NULL); HANDLE hShmFile = CreateFile(FILE_FAXCOM_SHM, FILE_ALL_ACCESS, 0, &sa, OPEN_ALWAYS, 0, NULL); hMapFile = CreateFileMapping(hShmFile, &sa, PAGE_READWRITE, 0, SHM_SIZE, szShmName); CloseHandle(hShmFile); } // this one fails in 64 bit app pShm = MapViewOfFile(hMapFile, FILE_MAP_ALL_ACCESS, 0, 0, SHM_SIZE);
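
    For what it's worth, a minimal sketch of the 64-bit side (the section name and size are placeholders, not the original szShmName/SHM_SIZE): Windows does support sharing a file mapping between 32-bit and 64-bit processes, so an access-denied error here usually points at an access-rights or namespace mismatch (the DACL, UAC/integrity level, or a session-local versus Global\ name) rather than at the bitness difference itself. Requesting only the rights actually needed is one way to narrow the problem down.

        #include <windows.h>
        #include <cstdio>

        int main()
        {
            const wchar_t *name = L"MySharedSection";   // placeholder for szShmName
            const DWORD    size = 0x10000;              // placeholder for SHM_SIZE

            HANDLE hMap = OpenFileMappingW(FILE_MAP_READ | FILE_MAP_WRITE, FALSE, name);
            if (hMap == nullptr) {
                printf("OpenFileMapping failed: %lu\n", GetLastError());
                return 1;
            }

            void *view = MapViewOfFile(hMap, FILE_MAP_READ | FILE_MAP_WRITE, 0, 0, size);
            if (view == nullptr)
                printf("MapViewOfFile failed: %lu\n", GetLastError());
            else
                UnmapViewOfFile(view);

            CloseHandle(hMap);
            return 0;
        }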

  • Fluent NHibernate - subclasses with shared reference

    - by ollie
    Edit: changed class names. I'm using Fluent NHibernate (v 1.0.0.614) automapping on the following set of classes (where Entity is the base class provided in the S#arp Architecture framework): public class Car : Entity { public virtual int ModelYear { get; set; } public virtual Company Manufacturer { get; set; } } public class Sedan : Car { public virtual bool WonSedanOfYear { get; set; } } public class Company : Entity { public virtual IList<Sedan> Sedans { get; set; } } This results in the following Configuration (as written to hbm.xml): <class name="Company" table="Companies"> <id name="Id" type="System.Int32" unsaved-value="0"> <column name="`ID`" /> <generator class="identity" /> </id> <bag cascade="all" inverse="true" name="Sedans" mutable="true"> <key> <column name="`CompanyID`" /> </key> <one-to-many class="Sedan" /> </bag> </class> <class name="Car" table="Cars"> <id name="Id" type="System.Int32" unsaved-value="0"> <column name="`ID`" /> <generator class="identity" /> </id> <property name="ModelYear" type="System.Int32"> <column name="`ModelYear`" /> </property> <many-to-one cascade="save-update" class="Company" name="Manufacturer"> <column name="`CompanyID`" /> </many-to-one> <joined-subclass name="Sedan"> <key> <column name="`CarID`" /> </key> <property name="WonSedanOfYear" type="System.Boolean"> <column name="`WonSedanOfYear`" /> </property> </joined-subclass> </class> So far so good! But now comes the ugly part. The generated database tables: Table: Companies Columns: ID (PK, int, not null) Table: Cars Columns: ID (PK, int, not null) ModelYear (int, null) CompanyID (FK, int, null) Table: Sedan Columns: CarID (PK, FK, int, not null) WonSedanOfYear (bit, null) CompanyID (FK, int, null) Instead of one FK for Company, I get two! How can I ensure I only get one FK for Company? Override the automapping? Put a convention in place? Or is this a bug? Your thoughts are appreciated.

  • classes and static variables in shared libraries

    - by abel
    I am trying to write something in C++ with an architecture like: App -- Core (.so) <-- Plugins (.so's) for Linux, Mac and Windows. The Core is implicitly linked to the App, and Plugins are explicitly linked to the App with dlopen/LoadLibrary. The problems I have: static variables in Core are duplicated at run-time -- Plugins and App have different copies of them; and, at least on Mac, when a Plugin returns a pointer to the App, dynamic casting that pointer in the App always results in NULL. Can anyone give me some explanations and instructions for the different platforms, please? I know it may seem lazy to ask them all here, but I really cannot find a systematic answer to this question.
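
    A hedged sketch of the usual remedy on Linux/macOS (file and symbol names are invented): duplicated statics and dynamic_cast returning NULL across module boundaries are the classic symptoms of each module carrying its own copy of the class definition and its typeinfo. Making the App and every plugin resolve Core's symbols from one and the same libcore shared object (link the plugins against Core instead of compiling Core's sources into each of them), and loading plugins so their symbols are unified with the already-loaded ones, normally removes both problems.

        #include <dlfcn.h>
        #include <cstdio>

        typedef void *(*create_fn)();

        int main()
        {
            // RTLD_GLOBAL exposes the plugin's symbols for later resolution; the key
            // point, though, is that libcore is linked (and therefore loaded) only once.
            void *plugin = dlopen("./libplugin.so", RTLD_NOW | RTLD_GLOBAL);
            if (!plugin) { fprintf(stderr, "%s\n", dlerror()); return 1; }

            create_fn create = reinterpret_cast<create_fn>(dlsym(plugin, "create_object"));
            if (!create) { fprintf(stderr, "%s\n", dlerror()); dlclose(plugin); return 1; }

            void *obj = create();   // dynamic_cast on this object succeeds in App only if
            (void)obj;              // both sides share one definition (one typeinfo) of its class

            dlclose(plugin);
            return 0;
        }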

  • Optimizing a shared buffer in a producer/consumer multithreaded environment

    - by Etan
    I have a project where a single producer thread writes events into a buffer and an additional single consumer thread takes events from the buffer. My goal is to optimize this thing for a single machine to achieve maximum throughput. Currently, I am using a simple lock-free ring buffer (lock-free is possible since I have only one consumer and one producer thread, and therefore each position is only updated by a single thread). #define BUF_SIZE 32768 struct buf_t { volatile int writepos; volatile void * buffer[BUF_SIZE]; volatile int readpos; }; void produce (buf_t *b, void * e) { int next = (b->writepos+1) % BUF_SIZE; while (b->readpos == next); // queue is full. wait b->buffer[b->writepos] = e; b->writepos = next; } void * consume (buf_t *b) { while (b->readpos == b->writepos); // nothing to consume. wait int next = (b->readpos+1) % BUF_SIZE; void * res = b->buffer[b->readpos]; b->readpos = next; return res; } buf_t *alloc () { buf_t *b = (buf_t *)malloc(sizeof(buf_t)); b->writepos = 0; b->readpos = 0; return b; } However, this implementation is not yet fast enough and should be optimized further. I've tried different BUF_SIZE values and got some speed-up. Additionally, I've moved writepos before the buffer and readpos after the buffer to ensure that both variables are on different cache lines, which also resulted in some speedup. What I need is a speedup of about 400%. Do you have any ideas on how I could achieve this using things like padding etc.?
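
    A short C++11 sketch of the padding idea (the constants are reused from the question, the rest is illustrative): keeping the producer-written and consumer-written fields on separate cache lines avoids the two threads invalidating each other's line on every update ("false sharing"), and using explicit atomics instead of volatile gives the position updates the acquire/release ordering this pattern actually relies on.

        #include <atomic>

        constexpr int BUF_SIZE   = 32768;
        constexpr int CACHE_LINE = 64;

        struct buf_t {
            alignas(CACHE_LINE) std::atomic<int> writepos{0};   // written by the producer only
            alignas(CACHE_LINE) std::atomic<int> readpos{0};    // written by the consumer only
            alignas(CACHE_LINE) void *buffer[BUF_SIZE];
        };

        void produce(buf_t *b, void *e)
        {
            int w    = b->writepos.load(std::memory_order_relaxed);
            int next = (w + 1) % BUF_SIZE;
            while (b->readpos.load(std::memory_order_acquire) == next)
                ;                                                // queue full: spin
            b->buffer[w] = e;
            b->writepos.store(next, std::memory_order_release);  // publish the slot
        }

        void *consume(buf_t *b)
        {
            int r = b->readpos.load(std::memory_order_relaxed);
            while (b->writepos.load(std::memory_order_acquire) == r)
                ;                                                // queue empty: spin
            void *res = b->buffer[r];
            b->readpos.store((r + 1) % BUF_SIZE, std::memory_order_release);
            return res;
        }

    Batching the re-reads of the other thread's counter (only re-checking it when the queue looks full or empty) is the other common win, since it further reduces traffic on the shared cache lines.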

  • malloc()/free() behavior differs between Debian and Redhat

    - by StasM
    I have a Linux app (written in C) that allocates large amount of memory (~60M) in small chunks through malloc() and then frees it (the app continues to run then). This memory is not returned to the OS but stays allocated to the process. Now, the interesting thing here is that this behavior happens only on RedHat Linux and clones (Fedora, Centos, etc.) while on Debian systems the memory is returned back to the OS after all freeing is done. Any ideas why there could be the difference between the two or which setting may control it, etc.?
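
    A hedged illustration (glibc-specific, so behaviour may differ between the glibc builds the two distributions ship): glibc's malloc keeps freed small chunks on its own free lists instead of returning them to the kernel, and the mmap threshold -- which decides when an allocation gets its own mapping that is unmapped immediately on free() -- is one of the tunables that can differ. mallopt() and malloc_trim() let the program influence this directly.

        #include <malloc.h>
        #include <stdlib.h>

        int main(void)
        {
            /* lower the threshold so larger allocations bypass the heap entirely
               (the ~6000-byte chunks below still stay under it, so for them it is
               malloc_trim() that matters) */
            mallopt(M_MMAP_THRESHOLD, 64 * 1024);

            enum { N = 10000, SZ = 6000 };
            void *blocks[N];
            for (int i = 0; i < N; i++)
                blocks[i] = malloc(SZ);
            for (int i = 0; i < N; i++)
                free(blocks[i]);

            /* hint to glibc to hand unused heap pages back to the kernel */
            malloc_trim(0);
            return 0;
        }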

  • .Net OutOfMemory on Server but not Desktop

    - by Jörg Battermann
    Is it possible that the .NET framework behaves differently when it comes to garbage collection / memory limitations in server environments? I am running explicitly x86-compiled apps on a 64-bit server machine with 32GB of physical RAM, and I am running out of memory (System.OutOfMemoryException) even though nothing but that particular app is running and the server/all other apps use 520MB in total... but I cannot reproduce that behaviour on my own (client Win7) machine. Now I know that the app -is- memory intensive, but why is it causing problems on the server and not on the client?

  • Has the `message-passing/shared-state' dilemma (concurrency & distribution) taken form of a `Holywar

    - by Bubba88
    I'm not too well-informed about the state of the discussion about which model is better, so I would like to ask a pretty straight question: does it look like two opposing camps having a really heated dispute? E.g. like prototype-based vs. class-based OOP, or dynamic vs. static typing (though these are not really fitting examples; I just do not know how to express my question more clearly).
