Search Results

Search found 12872 results on 515 pages for 'memory alignment'.


  • iOS - when to remove subviews from UITableViewCell

    - by Milad Ghattavi
    I have a UITableView where a UIImage is added as a subview to each cell. The images are transparent PNGs. The problem is that when I scroll through the UITableView, the images overlap and eventually I get a memory warning. Here's the current code for configuring a cell:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier];
            }
            int cellNumber = indexPath.row + 1;
            NSString *cellImage1 = [NSString stringWithFormat:@"c%i.png", cellNumber];
            UIImage *theImage = [UIImage imageNamed:cellImage1];
            UIImageView *cellImage = [[UIImageView alloc] initWithImage:theImage];
            [cell.contentView addSubview:cellImage];
            return cell;
        }

    I know the code for removing the UIImage subview would be something like [cell.imageView removeFromSuperview]; but I don't know where to put it. I've placed it between all the lines, and even added an else to the if statement; it didn't seem to work!
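
    A hedged sketch of a reuse-safe version (not from the original post; the tag value 99 is arbitrary): tag the image view so the leftover from the cell's previous row can be found and removed before a new one is added.

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier];
            }
            // Remove the image view left over from this cell's previous use, if any
            [[cell.contentView viewWithTag:99] removeFromSuperview];
            NSString *imageName = [NSString stringWithFormat:@"c%i.png", indexPath.row + 1];
            UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:imageName]];
            imageView.tag = 99;
            [cell.contentView addSubview:imageView];
            [imageView release]; // balance the alloc (pre-ARC, matching the question)
            return cell;
        }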


  • Why is my UITableView not being set?

    - by Jamie L
    I checked with the debugger in the viewDidLoad method, and tracerTableView is 0x0, which I assume means it is nil. I don't understand. I should go ahead and say: yes, I have already checked my nib file, and yes, all the connections are correct. Here is the header file and the beginning of the .m file.

        // .h file
        @interface TrackerListController : UITableViewController {
            // The mutable (modifiable) dictionary days holds all the data for the days tab
            NSMutableArray *trackerList;
            UITableView *tracerTableView;
        }
        @property (nonatomic, retain) NSMutableArray *trackerList;
        @property (nonatomic, retain) IBOutlet UITableView *tracerTableView;
        // The addPackage: method is invoked when the user taps the add button created at runtime.
        -(void) addPackage : (id) sender;
        @end

        // .m file
        @implementation TrackerListController
        @synthesize trackerList, tracerTableView;

        - (void)viewDidLoad {
            [super viewDidLoad];
            self.title = @"Package Tracker";
            self.navigationItem.leftBarButtonItem = self.editButtonItem;
            // Set up the Add custom button on the right of the navigation bar
            UIBarButtonItem *addButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(addPackage:)];
            self.navigationItem.rightBarButtonItem = addButton;
            [addButton release]; // Release the addButton since it is no longer needed
        }
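
    A hedged debugging sketch (the assertion message is illustrative): since TrackerListController subclasses UITableViewController, the controller already owns a table view through self.tableView, so a separate outlet is only needed if the nib contains a second table view.

        - (void)viewDidLoad {
            [super viewDidLoad];
            // A UITableViewController's view *is* its table view; if the nib's view
            // outlet is wired to this class, self.tableView is non-nil here
            NSAssert(self.tableView != nil, @"view outlet not connected in the nib");
        }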


  • activemessaging with stomp and activemq.prefetchSize=1

    - by Clint Miller
    I have a situation where I have a single ActiveMQ broker with two queues, Q1 and Q2. I have two Ruby-based consumers using activemessaging. Let's call them C1 and C2. Both consumers subscribe to each queue. I'm setting activemq.prefetchSize=1 when subscribing to each queue. I'm also setting ack=client. Consider the following sequence of events:

    1) A message that triggers a long-running job is published to queue Q1. Call this M1.
    2) M1 is dispatched to consumer C1, kicking off a long operation.
    3) Two messages that trigger short jobs are published to queue Q2. Call these M2 and M3.
    4) M2 is dispatched to C2, which quickly runs the short job.
    5) M3 is dispatched to C1, even though C1 is still running M1. It's able to dispatch to C1 because prefetchSize=1 is set on the queue subscription, not on the connection. So the fact that a Q1 message has already been dispatched doesn't stop one Q2 message from being dispatched.

    Since activemessaging consumers are single-threaded, the net result is that M3 sits and waits on C1 for a long time, until C1 finishes processing M1. So M3 is not processed for a long time, despite the fact that consumer C2 is sitting idle (since it quickly finishes with message M2). Essentially, whenever a long Q1 job is run and then a whole bunch of short Q2 jobs are created, exactly one of the short Q2 jobs gets stuck on a consumer waiting for the long Q1 job to finish.

    Is there a way to set prefetchSize at the connection level rather than at the subscription level? I really don't want any messages dispatched to C1 while it is processing M1. The other alternative is that I could create a consumer dedicated to processing Q1 and then have other consumers dedicated to processing Q2. But I'd rather not do that, since Q1 messages are infrequent; Q1's dedicated consumers would sit idle most of the day, tying up memory.
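
    A hedged sketch of the dedicated-consumer workaround (processor classes are the standard ActiveMessaging pattern, but the per-subscription headers hash and poller grouping vary by version, so treat the details as assumptions to verify):

        # app/processors/long_job_processor.rb -- run in its own dedicated poller
        # so Q2 traffic never queues behind a long Q1 job
        class LongJobProcessor < ApplicationProcessor
          subscribes_to :q1, { 'activemq.prefetchSize' => '1', 'ack' => 'client' }

          def on_message(message)
            run_long_job(message)  # hypothetical handler
          end
        end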


  • Webcam video stream processing.

    - by vikramtheone
    Hi guys, I'm working on an image processing project; my final goal is to detect features in real-time video and then track those features. I will be working with an embedded processor platform called Freescale's i.MX515; it is a 32-bit media processor running Ubuntu 9.04. Right now I'm working on the algorithms to locate the features, so I'm using still images. When I'm satisfied with the results I will have to start using a video stream, and I don't want to use a video file as the source stream, because then I would have to worry about video decoders. Instead I would like to plug a USB webcam into the embedded platform (it has USB ports), take the frames directly as they are captured, and send them to my application. I will take care to buy a webcam that is supported on Linux (device driver). But my question is: will I be able to capture the incoming video stream from the webcam and send it to my application? Will I be able to configure the webcam and DMA to write the incoming frames to a particular memory location whose pointer I can simply pass to my application? I hope I could convey my doubts. Can anyone guide me through the steps I have to take to achieve all of this easily? Do you foresee any impossibility here? Regards, Vikram
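
    A hedged sketch of frame capture through OpenCV's old C API (current at the time of the question; assumes a V4L2-supported UVC webcam at index 0 -- the lower-level alternative is the V4L2 ioctl interface with memory-mapped buffers):

        #include <highgui.h>

        int main(void) {
            CvCapture *capture = cvCaptureFromCAM(0);   /* first USB webcam */
            if (!capture) return 1;
            IplImage *frame;
            while ((frame = cvQueryFrame(capture)) != NULL) {
                /* frame->imageData points at the raw pixels; hand it to the
                   feature detector without copying */
            }
            cvReleaseCapture(&capture);
            return 0;
        }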


  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some Dell workstations (running WinXP Pro SP2 and DeepFreeze) for development, but something was recently loaded onto these machines that prevents any OpenGL call from completing (the call locks). I know the code works, as I have tested it on 'clean' machines; I also tested with simple OpenGL apps generated by Dev-C++, which also lock on the Dell machines. I have tried to debug my own apps to see exactly where the GL calls freeze, but there is a global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread (used by ExitThread), preventing me from debugging at all (it causes the debugger, OllyDbg, to go into an access-violation reporting loop, or the program to crash if the exception is passed along).

    The hook:

        ntdll.ZwQueryInformationProcess
        7C90D7E0   B8 9A000000     MOV EAX,9A
        7C90D7E5   BA 0003FE7F     MOV EDX,7FFE0300
        7C90D7EA   FF12            CALL DWORD PTR DS:[EDX]
        7C90D7EC - E9 0F28448D     JMP 09D50000
        7C90D7F1   9B              WAIT
        7C90D7F2   0000            ADD BYTE PTR DS:[EAX],AL
        7C90D7F4   00BA 0003FE7F   ADD BYTE PTR DS:[EDX+7FFE0300],BH
        7C90D7FA   FF12            CALL DWORD PTR DS:[EDX]
        7C90D7FC   C2 1400         RETN 14
        7C90D7FF   90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000     MOV EAX,9C

    The messed-up function + call:

        ntdll.ZwQueryInformationThread
        7C90D7F0   8D9B 000000BA   LEA EBX,DWORD PTR DS:[EBX+BA000000]
        7C90D7F6   0003            ADD BYTE PTR DS:[EBX],AL
        7C90D7F8   FE              ???             ; Unknown command
        7C90D7F9   7F FF           JG SHORT ntdll.7C90D7FA
        7C90D7FB   12C2            ADC AL,DL
        7C90D7FD   14 00           ADC AL,0
        7C90D7FF   90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000     MOV EAX,9C

    So firstly, does anyone know what, if anything, would lead to OpenGL calls causing an infinite lock, and whether there are any ways around it? And what would be creating such a hook in kernel memory?

    Update: After some more fiddling, I have discovered a few more kernel hooks; a lot of them are used to nullify data returned by system information calls (such as the remote debugging port). I also managed to find out that whatever is doing this uses madchook.dll (by madshi), and this DLL is also injected into every running process (this looks like anti-debugging code). Also, on the OpenGL side, DirectX seems fine/unaffected (I ran one of the DX9 demos without problems), so could one of these kernel hooks somehow affect OpenGL?


  • Computing "average" of two colors

    - by Francisco P.
    This is only marginally programming-related; it has much more to do with colors and their representation. I am working on a very low-level app. I have an array of bytes in memory representing characters. They were rendered with anti-aliasing: they have values from 0 to 255, 0 being fully transparent and 255 totally opaque (alpha, if you wish). I am having trouble conceiving an algorithm for the rendering of this font. I'm doing the following for each pixel:

        // intensity is the weight I talked about: 0 to 255
        intensity = glyphs[text[i]][x + GLYPH_WIDTH*y];
        if (intensity == 255)
            continue;  // Don't draw it, fully transparent
        else if (intensity == 0)
            setPixel(x + xi, y + yi, color, base);  // Fully opaque, can draw original color
        else {
            // Here's the tricky part:
            // get the pixel in the destination for averaging purposes
            pixel = getPixel(x + xi, y + yi, base);
            // transfer is an int for calculations; this is my attempt at averaging
            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.red + (float) pixel.red)/2);
            newPixel.red = (Byte) transfer;
            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.green + (float) pixel.green)/2);
            newPixel.green = (Byte) transfer;
            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.blue + (float) pixel.blue)/2);
            newPixel.blue = (Byte) transfer;
            // Set the new pixel in the desired memory position
            setPixel(x+xi, y+yi, newPixel, base);
        }

    The results are less than desirable. (The attached screenshot was very zoomed in; at 1:1 scale the text looks like it has a green "aura".) Any idea of how to properly compute this would be greatly appreciated. Thanks for your time!
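
    A hedged sketch of conventional source-over alpha blending, which avoids the halo (assuming intensity is glyph coverage with 255 = fully opaque, matching the prose description above rather than the branch conditions; integer math sidesteps the float casts entirely):

        // out = (src * a + dst * (255 - a)) / 255, per channel
        static unsigned char blend(unsigned char src, unsigned char dst, unsigned char a) {
            return (unsigned char)((src * a + dst * (255 - a)) / 255);
        }

        newPixel.red   = blend(color.red,   pixel.red,   intensity);
        newPixel.green = blend(color.green, pixel.green, intensity);
        newPixel.blue  = blend(color.blue,  pixel.blue,  intensity);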


  • smallest mysql type that accommodates a single decimal

    - by donpal
    Database newbie here. I'm setting up a MySQL table. One of the fields will accept a value in increments of 0.5, e.g. 0.5, 1.0, 1.5, 2.0, ... 200.5, etc. I've tried int, but it doesn't capture the decimals:

        `value` int(10),

    What would be the smallest type that can accommodate this value, considering it's only a single decimal? I was also considering that, because the decimal will always be 0.5 if present at all, I could store it in a separate boolean field, so I would have two fields instead. Is this a stupid or somewhat overcomplicated idea? I don't know if it really saves me any memory, and it might get slower now that I'm accessing two fields instead of one:

        `value` int(10),
        `half` bool,  // or something similar to boolean

    What are your suggestions, guys? Is the first option better, and what's the smallest data type in that case that would get me the 0.5?
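
    A hedged sketch of two common options (assuming the 0-200.5 range implied by the question; DECIMAL stores exact fixed-point values, and SMALLINT is a 2-byte integer):

        -- exact fixed-point, one decimal place
        `value` DECIMAL(4,1) UNSIGNED NOT NULL,

        -- or store the value doubled (0.5 -> 1, 200.5 -> 401) in one small integer
        `value_x2` SMALLINT UNSIGNED NOT NULL  -- divide by 2 when reading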


  • Why are references compacted inside Perl lists?

    - by parkan
    Putting a precompiled regex inside two different hashes referenced in a list:

        my @list = ();
        my $regex = qr/ABC/;
        push @list, { 'one' => $regex };
        push @list, { 'two' => $regex };
        use Data::Dumper;
        print Dumper(\@list);

    I'd expect:

        $VAR1 = [
          { 'one' => qr/(?-xism:ABC)/ },
          { 'two' => qr/(?-xism:ABC)/ }
        ];

    But instead we get a circular reference:

        $VAR1 = [
          { 'one' => qr/(?-xism:ABC)/ },
          { 'two' => $VAR1->[0]{'one'} }
        ];

    This will happen with indefinitely nested hash references and a shallowly copied $regex. I'm assuming the basic reason is that precompiled regexes are actually references, and references inside the same list structure are compacted as an optimization (\$scalar behaves the same way). I don't entirely see the utility of doing this (presumably a reference to a reference has the same memory footprint), but maybe there's a reason based on the internal representation. Is this the correct behavior? Can I stop it from happening? Aside from probably making GC more difficult, these circular structures create pretty serious headaches. For example, iterating over a list of queries that may sometimes contain the same regular expression will crash the MongoDB driver with a nasty segfault (see https://rt.cpan.org/Public/Bug/Display.html?id=58500).
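
    A hedged note: the $VAR1->[0]{'one'} notation is Data::Dumper describing a shared (not strictly circular) reference; the data itself simply holds the same qr// twice. Dumper can be told to expand repeats instead (Deepcopy is a standard Data::Dumper flag), and genuinely independent values come from compiling one regex per hash:

        use Data::Dumper;
        $Data::Dumper::Deepcopy = 1;   # print each occurrence in full
        print Dumper(\@list);

        # separate qr// compilations share nothing:
        push @list, { 'one' => qr/ABC/ };
        push @list, { 'two' => qr/ABC/ };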


  • Silverlight Socket Constantly Returns With Empty Buffer

    - by Benny
    I am using Silverlight to interact with a proxy application that I have developed, but without the proxy sending a message to the Silverlight application, it executes the receive-completed handler with an empty buffer (all '\0's). Is there something I'm doing wrong? It is causing a major memory leak.

        this._rawBuffer = new Byte[this.BUFFER_SIZE];
        SocketAsyncEventArgs receiveArgs = new SocketAsyncEventArgs();
        receiveArgs.SetBuffer(_rawBuffer, 0, _rawBuffer.Length);
        receiveArgs.Completed += new EventHandler<SocketAsyncEventArgs>(ReceiveComplete);
        this._client.ReceiveAsync(receiveArgs);

        if (args.SocketError == SocketError.Success && args.LastOperation == SocketAsyncOperation.Receive)
        {
            // Read the current bytes from the stream buffer
            int bytesRecieved = this._client.ReceiveBufferSize;
            // If there are bytes to process, else the connection is lost
            if (bytesRecieved > 0)
            {
                try
                {
                    // Find out what we just received
                    string messagePart = UTF8Encoding.UTF8.GetString(_rawBuffer, 0, _rawBuffer.GetLength(0));
                    // Take out any trailing empty characters from the message
                    messagePart = messagePart.Replace('\0'.ToString(), "");
                    // Concatenate our current message with any leftovers from previous receipts
                    string fullMessage = _theRest + messagePart;
                    int seperator;
                    // While the index of the seperator (LINE_END defined & initiated as private member)
                    while ((seperator = fullMessage.IndexOf((char)Messages.MessageSeperator.Terminator)) > 0)
                    {
                        // Pull out the first message available (up to the seperator index)
                        string message = fullMessage.Substring(0, seperator);
                        // Queue up our new message
                        _messageQueue.Enqueue(message);
                        // Take out our line end character
                        fullMessage = fullMessage.Remove(0, seperator + 1);
                    }
                    // Save whatever was NOT a full message to the private variable used to store the rest
                    _theRest = fullMessage;
                    // Empty the queue of messages if there are any
                    while (this._messageQueue.Count > 0) { ... }
                }
                catch (Exception e)
                {
                    throw e;
                }
                // Wait for a new message
                if (this._isClosing != true)
                    Receive();
            }
        }

    Thanks in advance.
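
    A hedged sketch of the completion handler using the bytes actually transferred: BytesTransferred is the SocketAsyncEventArgs member for the bytes read, whereas ReceiveBufferSize (used above) is the socket buffer's capacity, which is why the handler always sees "data". A completion with zero bytes means the peer closed, so re-posting a receive there spins forever on empty buffers.

        private void ReceiveComplete(object sender, SocketAsyncEventArgs args)
        {
            if (args.SocketError != SocketError.Success ||
                args.LastOperation != SocketAsyncOperation.Receive)
                return;

            int bytesReceived = args.BytesTransferred;   // bytes actually read
            if (bytesReceived == 0)
                return;                                  // peer closed; stop receiving

            string messagePart = Encoding.UTF8.GetString(_rawBuffer, 0, bytesReceived);
            // ... assemble complete messages as before, then post another ReceiveAsync
        }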


  • Are there pitfalls to using static class/event as an application message bus

    - by Doug Clutter
    I have a static generic class that helps me move events around with very little overhead:

        public static class MessageBus<T> where T : EventArgs
        {
            public static event EventHandler<T> MessageReceived;

            public static void SendMessage(object sender, T message)
            {
                if (MessageReceived != null)
                    MessageReceived(sender, message);
            }
        }

    To create a system-wide message bus, I simply need to define an EventArgs class to pass around any arbitrary bits of information:

        class MyEventArgs : EventArgs
        {
            public string Message { get; set; }
        }

    Anywhere I'm interested in this event, I just wire up a handler:

        MessageBus<MyEventArgs>.MessageReceived += (s,e) => DoSomething();

    Likewise, triggering the event is just as easy:

        MessageBus<MyEventArgs>.SendMessage(this, new MyEventArgs() { Message = "hi mom" });

    Using MessageBus and a custom EventArgs class lets me have an application-wide message sink for a specific type of message. This comes in handy when you have several forms that, for example, display customer information, and maybe a couple of forms that update that information. None of the forms know about each other, and none of them need to be wired to a static "super class". I have a couple of questions:

    FxCop complains about using static methods with generics, but this is exactly what I'm after here. I want there to be exactly one MessageBus for each type of message handled. Using a static with a generic saves me from writing all the code that would maintain the list of MessageBus objects.

    Are the listening objects being kept "alive" via the MessageReceived event? For instance, perhaps I have this code in a Form.Load event:

        MessageBus<CustomerChangedEventArgs>.MessageReceived += (s,e) => DoReload();

    When the Form is closed, is the Form being retained in memory because MessageReceived has a reference to its DoReload method? Should I be removing the reference when the form closes?

        MessageBus<CustomerChangedEventArgs>.MessageReceived -= (s,e) => DoReload();
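
    A hedged note on the last snippet: yes, the static event roots the form (the lambda captures this, and the static field keeps the delegate alive), and the -= shown removes nothing, because the second lambda compiles to a different delegate instance. A minimal sketch of the usual fix is to keep a reference to the delegate you subscribed:

        private EventHandler<CustomerChangedEventArgs> _reloadHandler;

        private void Form_Load(object sender, EventArgs e)
        {
            _reloadHandler = (s, args) => DoReload();
            MessageBus<CustomerChangedEventArgs>.MessageReceived += _reloadHandler;
        }

        private void Form_FormClosed(object sender, FormClosedEventArgs e)
        {
            // same delegate instance, so unsubscription actually detaches it
            MessageBus<CustomerChangedEventArgs>.MessageReceived -= _reloadHandler;
        }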


  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server serialises the graph and sends it to the client (presumably using WCF); the client then makes changes to this graph and sends it back to the server, which receives the graph and proceeds to make the modifications.

    However, I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy, and many have parametrised constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialisation is that it requires either XmlSerializer or DataContractSerializer, but DCS places restrictions on the design of my object graph (immutable data seems right out; it also requires parameterless constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the de-serialisation constructor. That is fine by me.

    I spoke to a friend of mine about this, and he advocates going for a Data Transfer Object-only route, where I'd have to maintain a separate DataContract object graph for the transport of data and re-implement my server objects on the client. Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile skipping WCF entirely and implementing my own server using sockets.

    So my questions are:

    1. Has anyone faced a similar problem before, and if so, are there better approaches?
    2. Is it wise to send an entire object graph to the client for processing? Should I instead break it down, so that the client requests a part of the object graph as it needs it and sends back only the bits that have changed (thus reducing concurrency-related risks)?
    3. If sending the object graph down is the right way, is WCF the right tool?
    4. And if WCF is right, what's the best way to get WCF to serialise my object graph?
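
    A hedged note on the DataContractSerializer restrictions: DCS does not invoke constructors on deserialisation (it creates uninitialised instances and writes members directly), and [DataMember] can target private fields, so publicly-immutable types can often round-trip as-is. A minimal sketch (type and member names illustrative):

        [DataContract]
        public class ZoneRecord
        {
            [DataMember] private readonly string _name;   // written by the serializer via reflection

            public ZoneRecord(string name) { _name = name; }
            public string Name { get { return _name; } }
        }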


  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: we have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query), etc. This service is consumed by 70-100 instances of a Windows Forms .NET client app. Typically the service gets 50-100 inbound calls/minute to add or query jobs, which are stored in a table on SQL Server. The same service is also responsible for processing these jobs in real time: it queries the database every 5 seconds, picks up the queued jobs, and starts processing them. A job has seven states: Queued, Pre-processing, Processing, Post-processing, Completed, Failed, and Locked. Another responsibility of this service is to update all clients on every state change of every job. This means almost 200+ callbacks to clients per second.

    Question: this whole implementation is done using WCF duplex bindings and works perfectly fine on a small number of parallel jobs. The problem arises when we scale up to 1000 jobs at a time: the notifications don't work as expected, it leads to memory overflow, etc. Is there any standard framework that can provide a clean infrastructure for handling this scenario? Apologies for the long explanation!


  • Very slow guards in my monadic random implementation (haskell)

    - by danpriduha
    Hi! I tried to write a random number generator implementation based on a number class. I also added Monad and MonadPlus instances there. What does "MonadPlus" mean, and why did I add this instance? Because I want to use guards, like here:

        -- test.hs --
        import RandomMonad
        import Control.Monad
        import System.Random

        x = Rand (randomR (1 ::Integer, 3)) ::Rand StdGen Integer

        y = do
          a <- x
          guard (a /= 2)
          guard (a /= 1)
          return a

    Here are the contents of the RandomMonad.hs file:

        -- RandomMonad.hs --
        module RandomMonad where
        import Control.Monad
        import System.Random
        import Data.List

        data RandomGen g => Rand g a = Rand (g -> (a,g)) | RandZero

        instance (Show g, RandomGen g) => Monad (Rand g) where
          return x = Rand (\g -> (x,g))
          (RandZero) >>= _ = RandZero
          (Rand argTransformer) >>= (parametricRandom) = Rand funTransformer
            where
              funTransformer g
                | isZero x  = funTransformer g1
                | otherwise = (getRandom x g1, getGen x g1)
                where
                  x        = parametricRandom val
                  (val,g1) = argTransformer g
                  isZero RandZero = True
                  isZero _        = False

        instance (Show g, RandomGen g) => MonadPlus (Rand g) where
          mzero = RandZero
          RandZero `mplus` x = x
          x `mplus` RandZero = x
          x `mplus` y = x

        getRandom :: RandomGen g => Rand g a -> g -> a
        getRandom (Rand f) g = fst (f g)

        getGen :: RandomGen g => Rand g a -> g -> g
        getGen (Rand f) g = snd (f g)

    When I run the ghci interpreter and give the following command:

        getRandom y (mkStdGen 2000000000)

    I see a memory overflow on my computer (1 GB). That's not expected, and if I delete one guard, it works very fast. Why does it work so slowly in this case? What am I doing wrong?


  • Compiling Objective-C project on Linux (Ubuntu)

    - by Alex
    How do I make an Objective-C project work on Ubuntu? My files are:

    Fraction.h

        #import <Foundation/NSObject.h>

        @interface Fraction: NSObject {
            int numerator;
            int denominator;
        }
        -(void) print;
        -(void) setNumerator: (int) n;
        -(void) setDenominator: (int) d;
        -(int) numerator;
        -(int) denominator;
        @end

    Fraction.m

        #import "Fraction.h"
        #import <stdio.h>

        @implementation Fraction
        -(void) print { printf( "%i/%i", numerator, denominator ); }
        -(void) setNumerator: (int) n { numerator = n; }
        -(void) setDenominator: (int) d { denominator = d; }
        -(int) denominator { return denominator; }
        -(int) numerator { return numerator; }
        @end

    main.m

        #import <stdio.h>
        #import "Fraction.h"

        int main( int argc, const char *argv[] ) {
            // create a new instance
            Fraction *frac = [[Fraction alloc] init];
            // set the values
            [frac setNumerator: 1];
            [frac setDenominator: 3];
            // print it
            printf( "The fraction is: " );
            [frac print];
            printf( "\n" );
            // free memory
            [frac release];
            return 0;
        }

    I've tried two approaches to compile it.

    Pure gcc:

        $ sudo apt-get install gobjc gnustep gnustep-devel
        $ gcc `gnustep-config --objc-flags` -o main main.m -lobjc -lgnustep-base
        /tmp/ccIQKhfH.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction'

    A GNUmakefile:

        include ${GNUSTEP_MAKEFILES}/common.make
        TOOL_NAME = main
        main_OBJC_FILES = main.m
        include ${GNUSTEP_MAKEFILES}/tool.make

    ... and ran:

        $ source /usr/share/GNUstep/Makefiles/GNUstep.sh
        $ make
        Making all for tool main...
        Linking tool main ...
        ./obj/main.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction'

    So in both cases the compiler gets stuck at undefined reference to `__objc_class_name_Fraction'. Do you have any idea how to resolve this issue?
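
    A hedged note: an undefined __objc_class_name_Fraction reference at link time usually means Fraction.m was never compiled into the link, and both builds above list only main.m. A minimal sketch of the fix is to list every .m file:

        $ gcc `gnustep-config --objc-flags` -o main main.m Fraction.m -lobjc -lgnustep-base

    or, in the GNUmakefile:

        main_OBJC_FILES = main.m Fraction.m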


  • Loading Jscript files into Firefox extension

    - by colon3l
    Hi! Let's get directly to the problem: I'm building a Firefox extension in which I would like to use the jWebSocket API in order to build a small chat. I have my main script file, named test.js, and the jWebSocket lib in a js folder. Just so you know, this is my first Firefox extension ever. So in my XUL file I have this (for the script part only, of course; the interface code is not shown):

        <overlay id="test-overlay" xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
            <script type="application/x-javascript" src="chrome://test/content/test.js" />
            <script type="application/x-javascript" src="chrome://test/content/js/jwebsocket.js" />

    jwebsocket.js is the file I need to load according to the jWebSocket website. In my main script file test.js I start with:

        if (jws.browserSupportsWebSockets()) {
            jWebSocketClient = new jws.jWebSocketJSONClient();
        } else {
            var lMsg = jws.MSG_WS_NOT_SUPPORTED;
            alert(lMsg);
        }

    jws is the namespace created in the jwebsocket.js file. Of course I've got the required stand-alone server running in the background, and working. From what I understand from various websites, if a js file is loaded into the JavaScript memory space (with the script tag), all namespaces/functions should be available between files. But that was mostly about HTML-oriented issues, so I'm not sure it applies to the XUL/Firefox environment. The script keeps failing at the first jws call. Any ideas on what goes wrong here? I've been stuck for 2 days now :/
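
    A hedged observation: overlay scripts execute in document order, and in the XUL above test.js is listed before jwebsocket.js, so the jws namespace does not exist yet when test.js runs its top-level code. A minimal sketch: swap the order and defer the first call until the window has loaded.

        <script type="application/x-javascript" src="chrome://test/content/js/jwebsocket.js" />
        <script type="application/x-javascript" src="chrome://test/content/test.js" />

    and in test.js:

        window.addEventListener("load", function () {
            if (jws.browserSupportsWebSockets()) {
                jWebSocketClient = new jws.jWebSocketJSONClient();
            }
        }, false);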


  • Minimal assembler program for CP/M 3.1 (z80)

    - by Andrew J. Brehm
    I seem to be losing the battle against my stupidity. This site explains the system calls under various versions of CP/M. However, when I try to use call 2 (C_WRITE, console output), nothing much happens. I have the following code:

        ORG 100h
        LD E,'a'
        LD C,2
        CALL 5
        CALL 0

    I recite this here from memory. If there are typos, rest assured they were not in the original, since the file did compile and I had a COM file to start. I read the lines as follows:

    1. Make sure this gets loaded at address 100h (0h to FFh being the zero page).
    2. Load ASCII 'a' into the E register for system call 2.
    3. Load the integer 2 into the C register for system call 2.
    4. Make the system call (a JMP to the system call handler is at address 5 in the zero page).
    5. End the program (the exit entry point is at address 0 in the zero page).

    The program starts and exits with no problems. If I remove the last command, it hangs the computer (which I guess is also expected and shows that CALL 0 works). However, it does not print the ASCII character. (It does print an extra newline, but the system might have done that.) How can I get my CP/M program to do what the system call is supposed to do? What am I doing wrong?
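
    A hedged cross-check, not from the original post: BDOS function 9 (C_WRITESTR) prints a whole '$'-terminated string and gives a second data point if single-character output seems broken. Directive and label syntax vary by assembler, so treat DEFM and the label form as assumptions:

        ORG 100h
        LD DE,MSG       ; function 9 takes the string address in DE
        LD C,9          ; C_WRITESTR
        CALL 5          ; BDOS entry point
        RET             ; return to the CCP (stack still intact at entry)
        MSG: DEFM 'Hello from CP/M$'   ; '$' terminates the string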


  • Trouble with Unions in C program.

    - by Jordan S
    I am working on a C program that uses a union. The union definition is in the FILE_A header file and looks like this:

        // FILE_A.h
        xdata union {
            long position;
            char bytes[4];
        } CurrentPosition;

    If I set the value of CurrentPosition.position in FILE_A.c and then call a function in FILE_B.c that uses the union, the data in the union is back to zero. This is demonstrated below.

        // FILE_A.c
        int main(void)
        {
            CurrentPosition.position = 12345;
            SomeFunctionInFileB();
        }

        // FILE_B.c
        void SomeFunctionInFileB(void)
        {
            // After the following lines execute I see all zeros in the flash memory.
            WriteByteToFlash(CurrentPosition.bytes[0]);
            WriteByteToFlash(CurrentPosition.bytes[1]);
            WriteByteToFlash(CurrentPosition.bytes[2]);
            WriteByteToFlash(CurrentPosition.bytes[3]);
        }

    Now, if I pass a long to SomeFunctionInFileB(long temp), store it into CurrentPosition.bytes within that function, and then call WriteBytesToFlash(CurrentPosition.bytes[n]...), it works just fine. It appears as though the CurrentPosition union is not global. So I tried changing the union definition in the header file to include the extern keyword, like this:

        extern xdata union {
            long position;
            char bytes[4];
        } CurrentPosition;

    and then putting this in the source (.c) file:

        xdata union {
            long position;
            char bytes[4];
        } CurrentPosition;

    But this causes a compile error that says:

        C:\SiLabs\Optec Programs\AgosRot\MotionControl.c:76: error 91: extern definition for 'CurrentPosition' mismatches with declaration.
        C:\SiLabs\Optec Programs\AgosRot\/MotionControl.h:48: error 177: previously defined here

    So what am I doing wrong? How do I make the union global?
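
    A hedged sketch of the usual pattern: the two anonymous unions above are distinct types as far as the compiler is concerned, which is what the mismatch error is about. Name the type once (here via a typedef; the name is illustrative), declare the variable extern in the header, and define it in exactly one .c file:

        /* position.h */
        typedef union {
            long position;
            char bytes[4];
        } Position;

        extern xdata Position CurrentPosition;   /* declaration only */

        /* FILE_A.c */
        xdata Position CurrentPosition;          /* the single real definition */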


  • java GC periodically enters into several full GC cycles

    - by Peter
    Environment: Sun JDK 1.6.0_16.

    VM settings:

        -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC
        -Xms1024 -Xmx1024M
        -XX:MaxNewSize=448m -XX:NewSize=448m
        -XX:SurvivorRatio=4 (6 also checked)
        -XX:MaxPermSize=128M

    OS: Windows Server 2003. Processor: 4 cores of Intel Xeon 5130, 2000 MHz.

    My application: a high intensity of concurrent operations (Java 5 concurrency used), each completed by a commit to Oracle. About 20-30 threads run non-stop, doing tasks. The application runs in the JBoss web container.

    My GC starts out working normally: I see a lot of small GCs, the CPU shows a good load with all 4 cores at 40-50%, and the CPU graph is stable. Then, after a minute of good work, the CPU drops to 0% on 2 of the 4 cores and the graph becomes unstable, going up and down ("teeth"). I see that my threads work more slowly (I have monitoring) and that the GC produces a lot of full GCs during that time; the next 4-5 minutes stay that way. Then, for a short period (about a minute), it returns to normal, but shortly after that the whole bad pattern repeats.

    Question: why do I have such frequent full GCs, and how do I prevent them? I played with SurvivorRatio; it does not help. I noticed that the application behaves normally until the first full GC occurs, while I have enough memory, and runs badly after that.

    My GC log (starts well, then a long period with many full GCs; the arrows in the heap transitions are restored here, as they appear in HotSpot logs):

        1027.861: [GC 942200K->623526K(991232K), 0.0887588 secs]
        1029.333: [GC 803279K(991232K), 0.0927470 secs]
        1030.551: [GC 967485K->625549K(991232K), 0.0823024 secs]
        1030.634: [GC 625957K(991232K), 0.0763656 secs]
        1033.126: [GC 969613K->632963K(991232K), 0.0850611 secs]
        1033.281: [GC 649899K(991232K), 0.0378358 secs]
        1035.910: [GC 813948K(991232K), 0.3540375 secs]
        1037.994: [GC 967729K->637198K(991232K), 0.0826042 secs]
        1038.435: [GC 710309K(991232K), 0.1370703 secs]
        1039.665: [GC 980494K->972462K(991232K), 0.6398589 secs]
        1040.306: [Full GC 972462K->619643K(991232K), 3.7780597 secs]
        1044.093: [GC 620103K(991232K), 0.0695221 secs]
        1047.870: [Full GC 991231K->626514K(991232K), 3.8732457 secs]
        1053.739: [GC 942140K(991232K), 0.5410483 secs]
        1056.343: [Full GC 991232K->634157K(991232K), 3.9071443 secs]
        1061.257: [GC 786274K(991232K), 0.3106603 secs]
        1065.229: [Full GC 991232K->641617K(991232K), 3.9565638 secs]
        1071.192: [GC 945999K(991232K), 0.5401515 secs]
        1073.793: [Full GC 991231K->648045K(991232K), 3.9627814 secs]
        1079.754: [GC 936641K(991232K), 0.5321197 secs]


  • Generated HTML word document not displaying image correctly

    - by spiderdijon
    I'm trying to add an image to a generated HTML Word document that is embedded in a classic ASP page. The code looks something like this:

        <% Response.ContentType = "application/msword" %>
        <html xmlns:v="urn:schemas-microsoft-com:vml"
              xmlns:o="urn:schemas-microsoft-com:office:office"
              xmlns:w="urn:schemas-microsoft-com:office:word">
        ...
        <v:shape id="_x0000_s1030" type="#_x0000_t75" style='position:absolute;
            left:0;text-align:left;margin-left:0;margin-top:17.95pt;width:7in;height:116.85pt;
            z-index:2;mso-position-horizontal:center;mso-position-horizontal-relative:page;
            mso-position-vertical-relative:page'>
            <v:imagedata src="http://xxx/image001.gif" o:title="image001"/>
            <w:wrap anchorx="page" anchory="page"/>
            <w:anchorlock/>
        </v:shape><![endif]--><![if !vml]><span style='mso-ignore:vglayout;position:
            absolute;z-index:0;left:0px;margin-left:0px;margin-top:24px;width:672px;
            height:156px'><img width=672 height=156 src="http://xxx/image001.gif"
            v:shapes="_x0000_s1030"></span><![endif]>

    The image URL is correct and can be viewed through a browser. However, when the Word document opens, the image shows a red X with the error message:

        The image cannot be displayed. Your computer may not have enough memory to open the image,
        or the image may be corrupted. Restart your computer, and then open the file again. If the
        red x still appears, you may have to delete the image and then insert it again.

    If I copy the HTML code and open the Word document on my local machine, it displays the image correctly; it just doesn't work when retrieving the document from the server. This happens with any image I try to add. Is there another way to add images to HTML-generated Word documents output from an ASP page? Thanks.


  • Constructor and Destructors in C++ [Not a question] [closed]

    - by Jack
    I am using gcc. Please tell me if I am wrong. Let's say I have two classes, A and B:

        class A {
        public:
            A()  { cout << "A constructor" << endl; }
            ~A() { cout << "A destructor"  << endl; }
        };

        class B : public A {
        public:
            B()  { cout << "B constructor" << endl; }
            ~B() { cout << "B destructor"  << endl; }
        };

    1. The first line in B's constructor should be a call to A's constructor (I assume the compiler automatically inserts it). Also, the last line in B's destructor will be a call to A's destructor (the compiler does it again). Why was it built this way?

    2. When I say A *a = new B(); the compiler creates a new B object and checks whether A is a base class of B; if it is, it allows 'a' to point to the newly created object. I guess that is why we don't need any virtual constructors (with help from @Tyler McHenry and @Konrad Rudolph).

    3. When I write delete a, the compiler sees that a is a pointer of type A*, so it calls A's destructor, leading to a problem which is solved by making A's destructor virtual. As user Little Bobby Tables pointed out to me, all destructors have the same name destroy() in memory, so we can implement virtual destructors, and now the call is made to B's destructor and all is well in C++ land.

    Please comment.
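
    A hedged illustration of point 3 (standard C++ behavior; expected output shown in comments):

        #include <iostream>

        class A {
        public:
            A()          { std::cout << "A constructor\n"; }
            virtual ~A() { std::cout << "A destructor\n";  }  // virtual: delete via A* destroys the B part first
        };

        class B : public A {
        public:
            B()  { std::cout << "B constructor\n"; }
            ~B() { std::cout << "B destructor\n";  }
        };

        int main() {
            A *a = new B();  // prints: A constructor, B constructor
            delete a;        // prints: B destructor, A destructor (without virtual, only A's would run)
            return 0;
        }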


  • c++ function scope

    - by Myx
    I have a main function in A.cpp which contains the following two relevant lines of code:

        B definition(input_file);
        definition.Print();

    In B.h I have the following relevant lines of code:

        class B {
        public:
            // Constructors
            B(void);
            B(const char *filename);
            ~B(void);

            // File input
            int ParseLSFile(const char *filename);

            // Debugging
            void Print(void);

            // Data
            int var1;
            double var2;
            vector<char* > var3;
            map<char*, vector<char* > > var4;
        };

    In B.cpp, I have the following function definitions (sorry for being redundant):

        B::B(void)
            : var1(-1), var2(numeric_limits<double>::infinity())
        {
        }

        B::B(const char *filename)
        {
            B *def = new B();
            def->ParseLSFile(filename);
        }

        B::~B(void)
        {
            // Free memory for var3 and var4
        }

        int B::ParseLSFile(const char *filename)
        {
            // assign var1, var2, var3, and var4 values
        }

        void B::Print(void)
        {
            // print contents of var1, var2, var3, and var4 to stdout
        }

    When I call Print() from within B::ParseLSFile(...), the contents of my structures print correctly to stdout. However, when I call definition.Print() from A.cpp, my structures are empty or contain garbage. Can anyone recommend the correct way to initialize/pass my structures so that I can access them outside the scope of my function definitions? Thanks.
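
    A hedged sketch of the likely fix: B(const char *) above parses into a second, heap-allocated B (which is then leaked), while the object actually being constructed is left uninitialized. Parsing into this instead keeps the data in the object main uses:

        B::B(const char *filename)
            : var1(-1), var2(numeric_limits<double>::infinity())
        {
            ParseLSFile(filename);   // fill in *this*, not a leaked temporary copy
        }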


  • Avoiding a fork()/SIGCHLD race condition

    - by larry
    Please consider the following fork()/SIGCHLD pseudo-code.

        // main program excerpt
        for (;;) {
            if ( is_time_to_make_babies ) {
                pid = fork();
                if (pid == -1) {          /* fail */
                } else if (pid == 0) {    /* child stuff */
                    print "child started"
                    exit
                } else {                  /* parent stuff */
                    print "parent forked new child ", pid
                    children.add(pid);
                }
            }
        }

        // SIGCHLD handler
        sigchld_handler(signo) {
            while ( (pid = wait(status, WNOHANG)) > 0 ) {
                print "parent caught SIGCHLD from ", pid
                children.remove(pid);
            }
        }

    In the above example there's a race condition. It's possible for /* child stuff */ to finish before /* parent stuff */ starts, which can result in a child's pid being added to the list of children after it has exited, and never being removed. When the time comes for the app to close down, the parent will wait endlessly for the already-finished child to finish.

    One solution I can think of to counter this is to have two lists: started_children and finished_children. I'd add to started_children in the same place I'm adding to children now. But in the signal handler, instead of removing from children, I'd add to finished_children. When the app closes down, the parent can simply wait until the difference between started_children and finished_children is zero.

    Another possible solution I can think of is using shared memory, e.g. share the parent's list of children and let the children .add and .remove themselves? But I don't know too much about this.

    EDIT: Another possible solution, which was the first thing that came to mind, is to simply add a sleep(1) at the start of /* child stuff */, but that smells funny to me, which is why I left it out. I'm also not even sure it's a 100% fix.

    So, how would you correct this race condition? And if there's a well-established recommended pattern for this, please let me know! Thanks.
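
    A hedged C sketch of the usual pattern: block SIGCHLD across the fork and the list update so the handler cannot run in between (sigprocmask is standard POSIX; children_add is the hypothetical list operation from the pseudo-code):

        sigset_t block, prev;
        sigemptyset(&block);
        sigaddset(&block, SIGCHLD);

        sigprocmask(SIG_BLOCK, &block, &prev);      /* SIGCHLD now deferred */
        pid_t pid = fork();
        if (pid == 0) {
            sigprocmask(SIG_SETMASK, &prev, NULL);  /* child restores its inherited mask */
            /* child stuff */
            _exit(0);
        } else if (pid > 0) {
            children_add(pid);                      /* guaranteed to run before the handler */
        }
        sigprocmask(SIG_SETMASK, &prev, NULL);      /* any deferred SIGCHLD is delivered here */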


  • How to extract block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML:
        <RootTag>
            <Element>Value</Element>
            <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML:
        <ResponseRoot>
            <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the request and response XML (and ultimately dump them into their own single files). I had a similar parser that used egrep, but there the XML was all on one line, not multiple lines like above. The log files are also somewhat large, hitting 500-600 MB per log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way, using the built-in tools on a Linux box (CentOS in this case), to extract multiple lines, or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?
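
    A hedged sketch using sed's range addressing, which streams the file line by line and so sidesteps the memory concern (assuming the opening and closing tags always begin a line, as in the sample):

        sed -n '/<RootTag>/,/<\/RootTag>/p' app.log > requests.xml
        sed -n '/<ResponseRoot>/,/<\/ResponseRoot>/p' app.log > responses.xml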


  • Old dll.config problem!

    - by user313421
    As I've found by googling, since 2005 it has been a problem for anyone who needs to read an assembly's configuration from its own config file, "*.dll.config", and Microsoft hasn't done anything about it yet.

    The story: if you try to read a setting from a class library (plug-in), you fail. Instead, the config of the main application domain (the EXE which is using the plug-in) is read, and because there is probably no such setting there, your plug-in will use the default value that was hard-coded when you created its settings for the first time. Any change to the .dll.config won't be seen by your plug-in, and you'll wonder why the file is there at all! If you want to replace this behavior and start searching, you may find something like http://stackoverflow.com/questions/594298/c-dll-config-file - but it's just some ideas and one line of code.

    A good replacement for the built-in configuration shouldn't read from the file system each time we need a config value, so we can store the values in memory. Then, what if the user changes the config file? We need a FileSystemWatcher, and we need some design like a singleton... and finally we're at the same point .NET's configuration is, except that ours works. It seems Microsoft built everything but forgot why they created "*.dll.config". No DLL executes by itself; DLLs are referenced from other apps (even on the web), so why is there such a "*.dll.config" file at all?

    I'm not going to argue whether it's good to have multiple config files or not. It's my design (plug-able components). Finally { After all these years, is there any good practice, such as a custom settings class to add to each assembly, that reads from the assembly's own config file? }
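
    A hedged sketch of reading a *.dll.config directly (ExeConfigurationFileMap and OpenMappedExeConfiguration are the standard System.Configuration APIs for pointing at an arbitrary config file; the "MyKey" setting name is illustrative):

        using System.Configuration;
        using System.Reflection;

        var map = new ExeConfigurationFileMap
        {
            // e.g. ...\Plugins\MyPlugin.dll.config, sitting next to the assembly
            ExeConfigFilename = Assembly.GetExecutingAssembly().Location + ".config"
        };
        Configuration config =
            ConfigurationManager.OpenMappedExeConfiguration(map, ConfigurationUserLevel.None);
        string value = config.AppSettings.Settings["MyKey"].Value;  // null-check in real code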


  • C# creating a Class, having objects as member variables? I think the objects are garbage collected

    - by Bryan
    So I have a class with the following member variables, with get and set functions for every piece of data in the class:

        public class NavigationMesh
        {
            public Vector3 node;
            int weight;
            bool isWall;
            bool hasTreasure;

            // default constructor
            public NavigationMesh(int x, int y, int z, bool setWall, bool setTreasure)
            {
                //Console.WriteLine(x + " " + y + " " + z);
                node = new Vector3(x, y, z);
                //Console.WriteLine(node.X + " " + node.Y + " " + node.Z);
                isWall = setWall;
                hasTreasure = setTreasure;
                weight = 1;
            }

            public float getX() { Console.WriteLine(node.X); return node.X; }
            public float getY() { Console.WriteLine(node.Y); return node.Y; }
            public float getZ() { Console.WriteLine(node.Z); return node.Z; }

            public bool getWall() { return isWall; }
            public void setWall(bool item) { isWall = item; }

            public bool getTreasure() { return hasTreasure; }
            public void setTreasure(bool item) { hasTreasure = item; }

            public int getWeight() { return weight; }
        }

    In another class, I have a two-dimensional array that looks like this:

        NavigationMesh[,] mesh;
        mesh = new NavigationMesh[502, 502];

    I use a double for loop to fill this array. My problem is that after I create the objects in the array, I cannot get the data I need out of the Vector3 node object with my getters. I've tried making the Vector3 a static variable, but then I think it refers to the last instance of the object. How do I keep all of these objects in memory? I think they're being garbage collected. Any thoughts?
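
    A hedged sketch (constructor arguments illustrative): new NavigationMesh[502, 502] allocates 502x502 null references, not objects, so each element must be constructed before its getters can be called, and nothing is collected while the array still references it.

        mesh = new NavigationMesh[502, 502];
        for (int x = 0; x < 502; x++)
            for (int y = 0; y < 502; y++)
                mesh[x, y] = new NavigationMesh(x, y, 0, false, false);

        float fx = mesh[10, 20].getX();   // works: the array keeps the object alive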

