Search Results

Search found 12774 results on 511 pages for 'memory corruption'.


  • how to implement a really efficient bitvector sorting in python

    - by xiao
    Hello guys! Actually this is an interesting topic from Programming Pearls: sorting 10-digit telephone numbers in limited memory with an efficient algorithm. You can find the whole story here. What I am interested in is just how fast the implementation could be in Python. I have done a naive implementation with the BitVector module. The code is as follows:

        from BitVector import BitVector
        import random
        import sys
        import time

        def sort(input_li):
            return sorted(input_li)

        def vec_sort(input_li):
            bv = BitVector(size=len(input_li))
            for i in input_li:
                bv[i] = 1
            res_li = []
            for i in range(len(bv)):
                if bv[i]:
                    res_li.append(i)
            return res_li

        if __name__ == "__main__":
            test_data = range(int(sys.argv[1]))
            print 'test_data size is:', sys.argv[1]
            random.shuffle(test_data)

            start = time.time()
            sort(test_data)
            elapsed = time.time() - start
            print "sort function takes " + str(elapsed)

            start = time.time()
            vec_sort(test_data)
            elapsed = time.time() - start
            print "vec_sort function takes " + str(elapsed)

    I have tested array sizes from 100 to 10,000,000 on my MacBook (2 GHz Intel Core 2 Duo, 2 GB SDRAM); the results are as follows:

        test_data size is: 1000
        sort function takes 0.000274896621704
        vec_sort function takes 0.00383687019348
        test_data size is: 10000
        sort function takes 0.00380706787109
        vec_sort function takes 0.0371489524841
        test_data size is: 100000
        sort function takes 0.0520560741425
        vec_sort function takes 0.374383926392
        test_data size is: 1000000
        sort function takes 0.867373943329
        vec_sort function takes 3.80475401878
        test_data size is: 10000000
        sort function takes 12.9204008579
        vec_sort function takes 38.8053860664

    What disappoints me is that even when the test_data size is 10,000,000, the built-in sort function is still faster than vec_sort. Is there any way to accelerate the vec_sort function?
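
    For what it's worth, most of the gap is pure-Python overhead in BitVector's per-bit accessors rather than the algorithm itself. A minimal sketch of the same idea with a plain bytearray as the flag array (one byte per value instead of one bit, so it gives up the Programming Pearls memory trick in exchange for speed):

        def byte_sort(input_li):
            # bytearray indexing is a C-level operation, unlike
            # BitVector's __setitem__/__getitem__ method calls
            n = len(input_li)
            seen = bytearray(n)
            for i in input_li:
                seen[i] = 1
            return [i for i in xrange(n) if seen[i]]

    (xrange, matching the Python 2 code above; use range on Python 3.)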


  • What's the best way to retrieve two pieces of data from an XML file?

    - by Morinar
    I've got an XML document that is in either a pre- or post-FO-transform state, and I need to extract some information from it. In the pre case, I need to pull out two tags that represent the pageWidth and pageHeight; in the post case I need to extract the page-height and page-width parameters from a specific tag (I forget which one it is off the top of my head). What I'm looking for is an efficient, easily maintainable way to grab these two elements, reading the document only once. I initially started writing something that would use BufferedReader + FileReader, but then I'm doing string searching, and it gets messy when the tags span multiple lines. I then looked at DOMParser, which seems like it would be ideal, but I don't want to read the entire file into memory if I can help it, as the files could potentially be large and the tags I'm looking for will nearly always be close to the top of the file. I then looked into SAXParser, but that seems like a big pile of complicated overkill for what I'm trying to accomplish. Anybody have any advice? Or simple implementations that would accomplish my goal? Thanks.
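
    The question is Java-flavored, but the middle ground between DOM and raw SAX is an incremental (pull) parser that can stop early. A sketch of the idea in Python's xml.etree.ElementTree.iterparse, with the tag and attribute names guessed from the description above:

        import xml.etree.ElementTree as ET

        def find_page_size(path):
            width = height = None
            # iterparse streams the file; large documents are never fully in memory
            for _, elem in ET.iterparse(path):        # "end" events by default
                tag = elem.tag.rsplit('}', 1)[-1]     # ignore any namespace prefix
                if tag == 'pageWidth':                # pre-transform case
                    width = elem.text
                elif tag == 'pageHeight':
                    height = elem.text
                elif elem.get('page-width') is not None:   # post-transform case
                    width = elem.get('page-width')
                    height = elem.get('page-height')
                if width is not None and height is not None:
                    break                             # both found near the top; stop reading
            return width, height

    Java's javax.xml.stream (StAX) supports the same read-a-little, stop-early pattern.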


  • document.onclick settimeout function javascript help

    - by Jamex
    Hi, I have a document.onclick function that I would like to delay. I can't seem to get the syntax right. My original code is:

        <script type="text/javascript">
        document.onclick = check;
        function check(e) { /* do something */ }

    I tried the below, but that code is incorrect; the function did not execute and nothing happened:

        <script type="text/javascript">
        document.onclick = setTimeout("check", 1000);
        function check(e) { /* do something */ }

    I tried the next set; the function got executed, but with no delay:

        <script type="text/javascript">
        setTimeout(document.onclick = check, 1000);
        function check(e) { /* do something */ }

    What is the correct syntax for this code? TIA

    Edit: The solutions were all good. My problem was that I use the function check to obtain the id of the element being clicked on, but after the delay there is no "memory" of what was clicked on, so the rest of the function does not get executed. Jimr wrote the short code that preserves the clicked event. The code that is working is:

        // Delay execution of event handler function "f" by "time" ms.
        document.onclick = makeDelayedHandler(check, 250);

        function makeDelayedHandler(f, time) {
            return function (e) {
                setTimeout(function () { f(e); }, time);
            };
        }

        function check(e) {
            var click = (e && e.target) || (event && event.srcElement);
            ...
        }

    Thank you all.
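
    The working version hinges on a closure: the inner function captures e at click time, so the event is still available when the timer fires. The same pattern in Python, as an illustration only, with threading.Timer and a made-up event dict:

        import threading

        def make_delayed_handler(f, delay_s):
            # The closure captures `event` now; f sees it when the timer fires.
            def handler(event):
                threading.Timer(delay_s, f, args=(event,)).start()
            return handler

        def check(event):
            print("clicked:", event["target"])

        on_click = make_delayed_handler(check, 0.25)
        on_click({"target": "button1"})   # prints roughly 250 ms later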


  • MySQL not running on Ubuntu OS - Error 2002.

    - by mgj
    Hi, I am a novice to MySQL. I am trying to run the MySQL server on Ubuntu 10.04. Through Synaptic Package Manager I have installed the MySQL version mysql-client-5.1. I wonder how the database password was set for the mysql-client software that I installed this way; it would be nice if you could enlighten me on this. When I tried running the database, I encountered the error given below:

        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I referred to a similar question posted by another user, but I didn't find a solution through the proposed answers. For instance, when I tried the solutions posted for the similar question I got the following:

        mohnish@mohnish-laptop:/var/lib$ service start mysqld
        start: unrecognized service
        mohnish@mohnish-laptop:/var/lib$ ps -u mysql
        ERROR: User name does not exist.
        [ps then prints its usage screen]
        mohnish@mohnish-laptop:/var/lib$ which mysql
        /usr/bin/mysql
        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I even tried referring to http://forums.mysql.com/read.php?11,27769,84713#msg-84713 but couldn't find anything useful. Please let me know how I could tackle this error. Thank you very much.


  • Django caching seems to be causing problems

    - by Issy
    Hey guys, I have just implemented the Django local-memory cache backend in some of my code, but it seems to be causing a problem. I get the following error when trying to view the site (with DEBUG on):

        Traceback (most recent call last):
          File "/usr/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 279, in run
            self.result = application(self.environ, self.start_response)
          File "/usr/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 651, in __call__
            return self.application(environ, start_response)
          File "/usr/lib/python2.6/dist-packages/django/core/handlers/wsgi.py", line 245, in __call__
            response = middleware_method(request, response)
          File "/usr/lib/python2.6/dist-packages/django/middleware/cache.py", line 91, in process_response
            patch_response_headers(response, timeout)
          File "/usr/lib/python2.6/dist-packages/django/utils/cache.py", line 112, in patch_response_headers
            response['Expires'] = http_date(time.time() + cache_timeout)
        TypeError: unsupported operand type(s) for +: 'float' and 'str'

    I have checked my code, and everything to do with caching seems to be OK. For example, I have the following middleware:

        MIDDLEWARE_CLASSES = (
            'django.contrib.sessions.middleware.SessionMiddleware',
            'django.contrib.auth.middleware.AuthenticationMiddleware',
            'django.middleware.cache.UpdateCacheMiddleware',
            'django.middleware.common.CommonMiddleware',
            'django.middleware.cache.FetchFromCacheMiddleware',
            'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware',
        )

    My settings for the cache:

        CACHE_BACKEND = 'locmem://'
        CACHE_MIDDLEWARE_SECONDS = '3600'
        CACHE_MIDDLEWARE_KEY_PREFIX = 'za'
        CACHE_MIDDLEWARE_ANONYMOUS_ONLY = True

    And some of my code (a template tag):

        def get_featured_images():
            """ provides featured images """
            cache_key = 'featured_images'
            images = cache.get(cache_key)
            if images is None:
                images = FeaturedImage.objects.all().filter(enabled=True)[:5]
                cache.set(cache_key, images)
            return {'images': images}

    Any idea what the problem could be? From the error message it looks like there's an issue in Django's cache.py?
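
    Reading the traceback back against the settings: time.time() + cache_timeout is failing because cache_timeout is a string, and the settings above do set CACHE_MIDDLEWARE_SECONDS to the quoted value '3600'. The likely fix is simply to make it a number:

        # settings.py -- the timeout is added to time.time(), so it must be numeric
        CACHE_BACKEND = 'locmem://'
        CACHE_MIDDLEWARE_SECONDS = 3600          # int, not '3600'
        CACHE_MIDDLEWARE_KEY_PREFIX = 'za'
        CACHE_MIDDLEWARE_ANONYMOUS_ONLY = True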


  • Why is my UITableView not being set?

    - by Jamie L
    I checked using the debugger in the viewDidLoad method, and tracerTableView is 0x0, which I assume means it is nil. I don't understand. I should go ahead and say yes, I have already checked my nib file, and yes, all the connections are correct. Here is the header file and the beginning of the .m file:

        // .h file
        @interface TrackerListController : UITableViewController
        {
            // The mutable (modifiable) dictionary days holds all the data for the days tab
            NSMutableArray *trackerList;
            UITableView *tracerTableView;
        }

        @property (nonatomic, retain) NSMutableArray *trackerList;
        @property (nonatomic, retain) IBOutlet UITableView *tracerTableView;

        // The addPackage: method is invoked when the user taps the add button created at runtime.
        - (void)addPackage:(id)sender;

        @end

        // .m file
        @implementation TrackerListController

        @synthesize trackerList, tracerTableView;

        - (void)viewDidLoad
        {
            [super viewDidLoad];
            self.title = @"Package Tracker";
            self.navigationItem.leftBarButtonItem = self.editButtonItem;

            UIBarButtonItem *addButton = [[UIBarButtonItem alloc]
                initWithBarButtonSystemItem:UIBarButtonSystemItemAdd
                                     target:self
                                     action:@selector(addPackage:)];

            // Set up the Add custom button on the right of the navigation bar
            self.navigationItem.rightBarButtonItem = addButton;
            [addButton release]; // Release the addButton from memory since it is no longer needed
        }


  • Webcam video stream processing.

    - by vikramtheone
    Hi guys, I'm working on an image-processing project. My final goal is to detect features in real-time video and then track those features. I will be working with an embedded processor platform, Freescale's i.MX515; it is a 32-bit media processor running Ubuntu 9.04. Right now I'm working on the algorithms to locate the features, so I'm using still images. When I'm satisfied with the results I will have to start using a video stream, and I don't want to use a video file as the source stream, because then I would have to worry about video decoders. Instead, I would like to plug a USB webcam into the embedded platform (it has USB ports), take the frames directly as they are captured, and send them to my application. I will take care to buy a webcam that is supported on Linux (device driver). But my question is: will I be able to capture the incoming video stream from the webcam and send it to my application? Will I be able to configure the webcam and DMA to write the incoming frames to a particular memory location whose pointer I can simply pass to my application? (Confused!!!) I hope I could convey my doubts. Can anyone guide me through the steps I have to take to achieve all of this easily? Do you foresee any impossibility here? Help!!! Regards, Vikram
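
    In practice the kernel's V4L2 driver owns the DMA and hands your process mapped buffers, so user code just reads frames. A minimal capture loop, sketched with OpenCV's Python bindings on the assumption that a build exists for the target platform:

        import cv2

        cap = cv2.VideoCapture(0)                  # first V4L2 device, /dev/video0
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 320)     # keep frames small on an embedded CPU
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 240)

        while True:
            ok, frame = cap.read()                 # blocks until the driver delivers a frame
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # ... run feature detection on `gray` here ...

        cap.release()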


  • activemessaging with stomp and activemq.prefetchSize=1

    - by Clint Miller
    I have a situation where I have a single ActiveMQ broker with two queues, Q1 and Q2, and two Ruby-based consumers using activemessaging. Let's call them C1 and C2. Both consumers subscribe to each queue. I'm setting activemq.prefetchSize=1 when subscribing to each queue. I'm also setting ack=client. Consider the following sequence of events:

    1) A message that triggers a long-running job is published to queue Q1. Call this M1.
    2) M1 is dispatched to consumer C1, kicking off a long operation.
    3) Two messages that trigger short jobs are published to queue Q2. Call these M2 and M3.
    4) M2 is dispatched to C2, which quickly runs the short job.
    5) M3 is dispatched to C1, even though C1 is still running M1. It's able to dispatch to C1 because prefetchSize=1 is set on the queue subscription, not on the connection, so the fact that a Q1 message has already been dispatched doesn't stop one Q2 message from being dispatched.

    Since activemessaging consumers are single-threaded, the net result is that M3 sits and waits on C1 for a long time, until C1 finishes processing M1. So M3 is not processed for a long time, despite the fact that consumer C2 is sitting idle (having quickly finished with message M2). Essentially, whenever a long Q1 job is run and then a whole bunch of short Q2 jobs are created, exactly one of the short Q2 jobs gets stuck on a consumer waiting for the long Q1 job to finish. Is there a way to set prefetchSize at the connection level rather than at the subscription level? I really don't want any messages dispatched to C1 while it is processing M1. The other alternative is that I could create a consumer dedicated to processing Q1 and then have other consumers dedicated to processing Q2. But I'd rather not do that, since Q1 messages are infrequent--Q1's dedicated consumers would sit idle most of the day, tying up memory.
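
    For reference, this per-subscription scope is visible on the wire: in STOMP, activemq.prefetchSize travels as a header on each SUBSCRIBE frame, so every subscription gets its own one-message window. A sketch with the Python stomp.py client (activemessaging is Ruby, but the frames are the same; exact API details vary between stomp.py versions):

        import stomp

        conn = stomp.Connection([('localhost', 61613)])
        conn.connect(wait=True)

        # Each SUBSCRIBE carries its own prefetch window; with ack='client',
        # the broker counts a message against the window until it is acked.
        conn.subscribe(destination='/queue/Q1', id='q1', ack='client',
                       headers={'activemq.prefetchSize': 1})
        conn.subscribe(destination='/queue/Q2', id='q2', ack='client',
                       headers={'activemq.prefetchSize': 1})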


  • iOS - when to remove subviews from UITableViewCell

    - by Milad Ghattavi
    I have a UITableView, and a UIImageView is added as a subview to each of its cells. The images are transparent PNGs. The problem is that when I scroll through the UITableView, the images get overlapped, and then I receive the memory warning and so on. Here's the current code for configuring a cell:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
        {
            static NSString *CellIdentifier = @"Cell";

            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                              reuseIdentifier:CellIdentifier];
            }

            int cellNumber = indexPath.row + 1;
            NSString *cellImage1 = [NSString stringWithFormat:@"c%i.png", cellNumber];
            UIImage *theImage = [UIImage imageNamed:cellImage1];
            UIImageView *cellImage = [[UIImageView alloc] initWithImage:theImage];
            [cell.contentView addSubview:cellImage];

            return cell;
        }

    I know the code for removing the image subview would be something like:

        [cell.imageView removeFromSuperview];

    But I don't know where to put it. I've placed it between all the lines, and even added an else to the if statement. It didn't seem to work!


  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state, and a client which requests this object graph. The server serializes the graph and sends it to the client (presumably using WCF); the client then makes changes to this graph and sends it back to the server. The server receives the graph and proceeds to make the modifications.

    However, I've learned that object-graph serialization in WCF isn't as simple as I first thought. My objects have a hierarchy, and many have parametrised constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialization is that it requires either XmlSerializer or DataContractSerializer, but DCS places restrictions on the design of my object graph (immutable data seems right out; it also requires parameterless constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the deserialization constructor. That is fine by me.

    I spoke to a friend of mine about this, and he advocates a pure Data Transfer Object route, where I'd have to maintain a separate DataContract object graph for the transport of data and re-implement my server objects on the client. Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile skipping WCF entirely and implementing my own server using sockets.

    So my questions are: Has anyone faced a similar problem before, and if so, are there better approaches? Is it wise to send an entire object graph to the client for processing? Should I instead break it down, so that the client requests a part of the object graph as it needs it and sends back only the bits that have changed (thus reducing concurrency-related risks)? If sending the object graph down is the right way, is WCF the right tool? And if WCF is right, what's the best way to get WCF to serialize my object graph?


  • how to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that differ from their default-constructed value. So sparse_vector would store the index-value pairs for all indices whose value is not T(). I am basing my implementation on existing sparse vectors in numeric libraries, though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and eigen::SparseVector. Both store:

        size_t* indices_;  // a dynamic array
        T* values_;        // a dynamic array
        int size_;
        int capacity_;

    Why don't they simply use:

        vector<pair<size_t, T>> data_;

    My main question is: what are the pros and cons of both layouts, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you and simplifies the accompanying iterator classes; it also has one memory block instead of two, so it incurs half the reallocations and might have better locality of reference. The other solution might search more quickly, since the cache lines fill up with only index data during a search. There might also be some alignment advantages if T is an 8-byte type? It seems to me that the vector of pairs is the better solution, yet both containers chose the other one. Why?
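
    To make the two layouts concrete, here is a rough Python sketch of the parallel-array variant (the ublas/eigen choice), keeping the indices sorted and binary-searching them; swapping the two lists for a single sorted list of (index, value) tuples gives the vector<pair> variant under discussion:

        import bisect

        class SparseVector(object):
            """Stores only entries that differ from the default value."""

            def __init__(self, size, default=0):
                self.size = size
                self.default = default
                self.indices = []   # kept sorted, like indices_
                self.values = []    # parallel to indices, like values_

            def __getitem__(self, i):
                k = bisect.bisect_left(self.indices, i)
                if k < len(self.indices) and self.indices[k] == i:
                    return self.values[k]
                return self.default   # missing entries read as T()

            def __setitem__(self, i, v):
                k = bisect.bisect_left(self.indices, i)
                if k < len(self.indices) and self.indices[k] == i:
                    self.values[k] = v
                else:
                    self.indices.insert(k, i)
                    self.values.insert(k, v)

    The binary search touching self.indices alone is the cache argument from the question: with pairs, each probe would drag the neighboring values into cache as well.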


  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some Dell workstations (running WinXP Pro SP2 and DeepFreeze) for development, but something was recently loaded onto these machines that prevents any OpenGL call from completing (the call locks). I know the code works, as I have tested it on 'clean' machines; I also tested with simple OpenGL apps generated by Dev-C++, which also lock on the Dell machines. I have tried to debug my own apps to see where exactly the GL calls freeze, but there is a global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread (used by ExitThread), preventing me from debugging at all (it causes the debugger, OllyDbg, to go into an access-violation reporting loop, or the program to crash if the exception is passed along). The hook:

        ntdll.ZwQueryInformationProcess
        7C90D7E0   B8 9A000000     MOV EAX,9A
        7C90D7E5   BA 0003FE7F     MOV EDX,7FFE0300
        7C90D7EA   FF12            CALL DWORD PTR DS:[EDX]
        7C90D7EC - E9 0F28448D     JMP 09D50000
        7C90D7F1   9B              WAIT
        7C90D7F2   0000            ADD BYTE PTR DS:[EAX],AL
        7C90D7F4   00BA 0003FE7F   ADD BYTE PTR DS:[EDX+7FFE0300],BH
        7C90D7FA   FF12            CALL DWORD PTR DS:[EDX]
        7C90D7FC   C2 1400         RETN 14
        7C90D7FF   90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000     MOV EAX,9C

    The messed-up function + call:

        ntdll.ZwQueryInformationThread
        7C90D7F0   8D9B 000000BA   LEA EBX,DWORD PTR DS:[EBX+BA000000]
        7C90D7F6   0003            ADD BYTE PTR DS:[EBX],AL
        7C90D7F8   FE              ???             ; Unknown command
        7C90D7F9   7F FF           JG SHORT ntdll.7C90D7FA
        7C90D7FB   12C2            ADC AL,DL
        7C90D7FD   14 00           ADC AL,0
        7C90D7FF   90              NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000     MOV EAX,9C

    So firstly, does anyone know what, if anything, would lead to OpenGL calls causing an infinite lock, and whether there are any ways around it? And what would be creating such a hook in kernel memory?

    Update: After some more fiddling, I have discovered a few more kernel hooks. A lot of them are used to nullify data returned by system-information calls (such as the remote debugging port). I also found out that whatever is doing this is using madCHook.dll (by madshi), and this DLL is also injected into every running process (these seem to be anti-debugging hooks). Also, on the OpenGL side, it seems DirectX is fine/unaffected (I ran one of the DX 9 demos without problems). So could one of these kernel hooks somehow affect OpenGL?


  • Are there pitfalls to using static class/event as an application message bus

    - by Doug Clutter
    I have a static generic class that helps me move events around with very little overhead:

        public static class MessageBus<T> where T : EventArgs
        {
            public static event EventHandler<T> MessageReceived;

            public static void SendMessage(object sender, T message)
            {
                if (MessageReceived != null)
                    MessageReceived(sender, message);
            }
        }

    To create a system-wide message bus, I simply need to define an EventArgs class to pass around any arbitrary bits of information:

        class MyEventArgs : EventArgs
        {
            public string Message { get; set; }
        }

    Anywhere I'm interested in this event, I just wire up a handler:

        MessageBus<MyEventArgs>.MessageReceived += (s, e) => DoSomething();

    Likewise, triggering the event is just as easy:

        MessageBus<MyEventArgs>.SendMessage(this, new MyEventArgs() { Message = "hi mom" });

    Using MessageBus and a custom EventArgs class gives me an application-wide message sink for a specific type of message. This comes in handy when you have several forms that, for example, display customer information, and maybe a couple of forms that update that information. None of the forms know about each other, and none of them need to be wired to a static "super class". I have a couple of questions:

    FxCop complains about using static methods with generics, but this is exactly what I'm after here. I want there to be exactly one MessageBus for each type of message handled. Using a static with a generic saves me from writing all the code that would maintain the list of MessageBus objects.

    Are the listening objects being kept "alive" via the MessageReceived event? For instance, perhaps I have this code in a Form.Load event:

        MessageBus<CustomerChangedEventArgs>.MessageReceived += (s, e) => DoReload();

    When the Form is closed, is the Form being retained in memory because MessageReceived has a reference to its DoReload method? Should I be removing the reference when the form closes?

        MessageBus<CustomerChangedEventArgs>.MessageReceived -= (s, e) => DoReload();
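
    On the lifetime question, the general answer in any language is that a static event holds a strong reference to each subscriber, so a form can't be collected until it unsubscribes. For illustration only, here is the same one-bus-per-message-type idea sketched in Python with weak references, so dead subscribers drop out automatically (not the C# above, just the shape of the idea):

        import weakref
        from collections import defaultdict

        class MessageBus(object):
            """One handler list per message type, keyed by the type object."""
            _handlers = defaultdict(list)

            @classmethod
            def subscribe(cls, msg_type, bound_method):
                # WeakMethod lets the subscriber be garbage-collected; a plain
                # reference here would pin every closed form in memory, which is
                # exactly the question's concern.
                cls._handlers[msg_type].append(weakref.WeakMethod(bound_method))

            @classmethod
            def send(cls, message):
                live = []
                for ref in cls._handlers[type(message)]:
                    handler = ref()
                    if handler is not None:      # subscriber still alive
                        handler(message)
                        live.append(ref)
                cls._handlers[type(message)] = live

    (The C# analogue would be a weak-event pattern, e.g. a WeakReference plus a static forwarding handler.)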


  • Computing "average" of two colors

    - by Francisco P.
    This is only marginally programming related - it has much more to do with colors and their representation. I am working on a very low-level app. I have an array of bytes in memory. Those are characters, rendered with anti-aliasing: they have values from 0 to 255, 0 being fully transparent and 255 totally opaque (alpha, if you wish). I am having trouble conceiving an algorithm for rendering this font. I'm doing the following for each pixel:

        // intensity is the weight I talked about: 0 to 255
        intensity = glyphs[text[i]][x + GLYPH_WIDTH*y];

        if (intensity == 255)
            continue;  // Don't draw it, fully transparent
        else if (intensity == 0)
            setPixel(x + xi, y + yi, color, base);  // Fully opaque, can draw original color
        else {
            // Here's the tricky part:
            // get the pixel in the destination for averaging purposes
            pixel = getPixel(x + xi, y + yi, base);

            // transfer is an int for calculations; this is my attempt at averaging
            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.red + (float) pixel.red)/2);
            newPixel.red = (Byte) transfer;

            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.green + (float) pixel.green)/2);
            newPixel.green = (Byte) transfer;

            // transfer = (int) ((float) ((float) 255.0 - (float) intensity)/255.0 * (((float) color.blue) + (float) pixel.blue)/2);
            transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.blue + (float) pixel.blue)/2);
            newPixel.blue = (Byte) transfer;

            // Set the new pixel in the desired mem. position
            setPixel(x + xi, y + yi, newPixel, base);
        }

    The results are less than desirable: in a very zoomed-in screenshot they look wrong, and at 1:1 scale the text appears to have a green "aura". Any idea of how to properly compute this would be greatly appreciated. Thanks for your time!
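
    For reference, straight alpha compositing does not average with the background; it weights the two colors by the coverage and its complement, per channel: result = a*glyph + (1-a)*dest, with a = intensity/255. A small Python sketch (using the prose convention above, where 255 means fully opaque):

        def blend_pixel(intensity, glyph_color, dest_color):
            """Alpha-blend one pixel; intensity 255 = glyph fully opaque."""
            a = intensity / 255.0
            return tuple(
                int(round(a * g + (1.0 - a) * d))
                for g, d in zip(glyph_color, dest_color)
            )

        # A half-covered green glyph pixel over a white background:
        blend_pixel(128, (0, 255, 0), (255, 255, 255))   # -> (127, 255, 127)

    Note that the code in the question weights only the glyph color and then halves the sum with the unweighted destination, which over-counts the background and tints the glyph edges, consistent with the "aura" described.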


  • smallest mysql type that accommodates single decimal

    - by donpal
    Database newbie here. I'm setting up a MySQL table. One of the fields will accept a value in increments of 0.5: e.g. 0.5, 1.0, 1.5, 2.0, ..., 200.5, etc. I've tried int, but it doesn't capture the decimals:

        `value` int(10),

    What would be the smallest type that can accommodate this value, considering it's only a single decimal? I was also considering that, because the decimal will always be 0.5 if present at all, I could store the half in a separate boolean field, so I would have two fields instead. Is this a stupid or somewhat overcomplicated idea? I don't know if it really saves me any memory, and it might get slower now that I'm accessing two fields instead of one:

        `value` int(10),
        `half` bool,  -- or something similar to boolean

    What are your suggestions, guys? Is the first option better, and what's the smallest data type in that case that would get me the 0.5?
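
    Since the fractional part is only ever .0 or .5, one compact option is to store the value doubled in an integer column (0.5 becomes 1, 200.5 becomes 401, which easily fits a 2-byte SMALLINT) and halve it when reading; DECIMAL(4,1) is the simpler choice if you'd rather let MySQL store the decimal exactly. A sketch of the doubled-integer encoding in Python:

        def encode_half_steps(value):
            # 0.5 -> 1, 1.0 -> 2, ..., 200.5 -> 401; fits in a SMALLINT
            doubled = value * 2
            if doubled != int(doubled):
                raise ValueError("value must be a multiple of 0.5")
            return int(doubled)

        def decode_half_steps(stored):
            return stored / 2.0

        encode_half_steps(200.5)   # -> 401
        decode_half_steps(401)     # -> 200.5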


  • Why are references compacted inside Perl lists?

    - by parkan
    Putting a precompiled regex inside two different hashes referenced in a list:

        my @list = ();
        my $regex = qr/ABC/;
        push @list, { 'one' => $regex };
        push @list, { 'two' => $regex };

        use Data::Dumper;
        print Dumper(\@list);

    I'd expect:

        $VAR1 = [
                  { 'one' => qr/(?-xism:ABC)/ },
                  { 'two' => qr/(?-xism:ABC)/ }
                ];

    But instead we get a circular reference:

        $VAR1 = [
                  { 'one' => qr/(?-xism:ABC)/ },
                  { 'two' => $VAR1->[0]{'one'} }
                ];

    This will happen with indefinitely nested hash references and shallowly copied $regex. I'm assuming the basic reason is that precompiled regexes are actually references, and references inside the same list structure are compacted as an optimization (\$scalar behaves the same way). I don't entirely see the utility of doing this (presumably a reference to a reference has the same memory footprint), but maybe there's a reason based on the internal representation. Is this the correct behavior? Can I stop it from happening? Aside from probably making GC more difficult, these circular structures create pretty serious headaches. For example, iterating over a list of queries that may sometimes contain the same regular expression will crash the MongoDB driver with a nasty segfault (see https://rt.cpan.org/Public/Bug/Display.html?id=58500).


  • Silverlight Socket Constantly Returns With Empty Buffer

    - by Benny
    I am using Silverlight to interact with a proxy application that I have developed, but without the proxy sending a message to the Silverlight application, the application executes the receive-completed handler with an empty buffer (all '\0's). Is there something I'm doing wrong? It is causing a major memory leak.

        this._rawBuffer = new Byte[this.BUFFER_SIZE];
        SocketAsyncEventArgs receiveArgs = new SocketAsyncEventArgs();
        receiveArgs.SetBuffer(_rawBuffer, 0, _rawBuffer.Length);
        receiveArgs.Completed += new EventHandler<SocketAsyncEventArgs>(ReceiveComplete);
        this._client.ReceiveAsync(receiveArgs);

        if (args.SocketError == SocketError.Success && args.LastOperation == SocketAsyncOperation.Receive)
        {
            // Read the current bytes from the stream buffer
            int bytesRecieved = this._client.ReceiveBufferSize;

            // If there are bytes to process, else the connection is lost
            if (bytesRecieved > 0)
            {
                try
                {
                    // Find out what we just received
                    string messagePart = UTF8Encoding.UTF8.GetString(_rawBuffer, 0, _rawBuffer.GetLength(0));

                    // Take out any trailing empty characters from the message
                    messagePart = messagePart.Replace('\0'.ToString(), "");

                    // Concatenate our current message with any leftovers from previous receipts
                    string fullMessage = _theRest + messagePart;

                    int seperator;
                    // While there is a seperator (LINE_END defined & initiated as a private member)
                    while ((seperator = fullMessage.IndexOf((char)Messages.MessageSeperator.Terminator)) > 0)
                    {
                        // Pull out the first message available (up to the seperator index)
                        string message = fullMessage.Substring(0, seperator);

                        // Queue up our new message
                        _messageQueue.Enqueue(message);

                        // Take out our line-end character
                        fullMessage = fullMessage.Remove(0, seperator + 1);
                    }

                    // Save whatever was NOT a full message to the private variable used to store the rest
                    _theRest = fullMessage;

                    // Empty the queue of messages if there are any
                    while (this._messageQueue.Count > 0)
                    {
                        ...
                    }
                }
                catch (Exception e)
                {
                    throw e;
                }

                // Wait for a new message
                if (this._isClosing != true)
                    Receive();
            }
        }

    Thanks in advance.


  • Compiling Objective-C project on Linux (Ubuntu)

    - by Alex
    How do I make an Objective-C project work on Ubuntu? My files are:

    Fraction.h:

        #import <Foundation/NSObject.h>

        @interface Fraction : NSObject {
            int numerator;
            int denominator;
        }

        - (void) print;
        - (void) setNumerator: (int) n;
        - (void) setDenominator: (int) d;
        - (int) numerator;
        - (int) denominator;
        @end

    Fraction.m:

        #import "Fraction.h"
        #import <stdio.h>

        @implementation Fraction
        - (void) print { printf("%i/%i", numerator, denominator); }
        - (void) setNumerator: (int) n { numerator = n; }
        - (void) setDenominator: (int) d { denominator = d; }
        - (int) denominator { return denominator; }
        - (int) numerator { return numerator; }
        @end

    main.m:

        #import <stdio.h>
        #import "Fraction.h"

        int main(int argc, const char *argv[]) {
            // create a new instance
            Fraction *frac = [[Fraction alloc] init];

            // set the values
            [frac setNumerator: 1];
            [frac setDenominator: 3];

            // print it
            printf("The fraction is: ");
            [frac print];
            printf("\n");

            // free memory
            [frac release];

            return 0;
        }

    I've tried two approaches to compile it.

    Pure gcc:

        $ sudo apt-get install gobjc gnustep gnustep-devel
        $ gcc `gnustep-config --objc-flags` -o main main.m -lobjc -lgnustep-base
        /tmp/ccIQKhfH.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction'

    I created a GNUmakefile:

        include ${GNUSTEP_MAKEFILES}/common.make

        TOOL_NAME = main
        main_OBJC_FILES = main.m

        include ${GNUSTEP_MAKEFILES}/tool.make

    ... and ran:

        $ source /usr/share/GNUstep/Makefiles/GNUstep.sh
        $ make
        Making all for tool main...
        Linking tool main ...
        ./obj/main.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction'

    So in both cases the compiler gets stuck at "undefined reference to `__objc_class_name_Fraction'". Do you have any idea how to resolve this issue?


  • Very slow guards in my monadic random implementation (haskell)

    - by danpriduha
    Hi! I tried to write a random number generator implementation, based on a number class. I also added Monad and MonadPlus instances there. What does "MonadPlus" mean, and why did I add this instance? Because I want to use guards, like here:

        -- test.hs --
        import RandomMonad
        import Control.Monad
        import System.Random

        x = Rand (randomR (1 :: Integer, 3)) :: Rand StdGen Integer

        y = do
            a <- x
            guard (a /= 2)
            guard (a /= 1)
            return a

    Here are the contents of the RandomMonad.hs file:

        -- RandomMonad.hs --
        module RandomMonad where

        import Control.Monad
        import System.Random
        import Data.List

        data RandomGen g => Rand g a
            = Rand (g -> (a, g))
            | RandZero

        instance (Show g, RandomGen g) => Monad (Rand g) where
            return x = Rand (\g -> (x, g))
            RandZero >>= _ = RandZero
            (Rand argTransformer) >>= parametricRandom = Rand funTransformer
              where
                funTransformer g
                    | isZero x  = funTransformer g1
                    | otherwise = (getRandom x g1, getGen x g1)
                  where
                    x = parametricRandom val
                    (val, g1) = argTransformer g
                isZero RandZero = True
                isZero _        = False

        instance (Show g, RandomGen g) => MonadPlus (Rand g) where
            mzero = RandZero
            RandZero `mplus` x = x
            x `mplus` RandZero = x
            x `mplus` y = x

        getRandom :: RandomGen g => Rand g a -> g -> a
        getRandom (Rand f) g = fst (f g)

        getGen :: RandomGen g => Rand g a -> g -> g
        getGen (Rand f) g = snd (f g)

    When I run the ghci interpreter and give the following command:

        getRandom y (mkStdGen 2000000000)

    I can see a memory overflow on my computer (1 GB). That's not expected, and if I delete one guard, it works very fast. Why does it work so slowly in this case? What am I doing wrong?


  • Ideas on frameworks in .NET that can be used for job processing and notifications

    - by Rajat Mehta
    Scenario: We have one instance of a WCF Windows service which exposes contracts like AddNewJob(Job job), GetJobs(JobQuery query), etc. This service is consumed by 70-100 instances of a client, which is a Windows Forms based .NET app. Typically the service gets 50-100 inward calls per minute to add or query jobs that are stored in a table on SQL Server. The same service is also responsible for processing these jobs in real time: it queries the database every 5 seconds, picks up the queued jobs, and starts processing them. A job has seven states: Queued, Pre-processing, Processing, Post-processing, Completed, Failed, Locked. Another responsibility of this service is to update all clients on every state change of every job. This means almost 200+ callbacks to clients per second.

    Question: This whole implementation is done using WCF duplex bindings, and it works perfectly fine on a small number of parallel jobs. The problem arises when we scale it up to 1000 jobs at a time: the notifications don't work as expected, it leads to memory overflow, etc. Is there any standard framework that can provide a clean infrastructure for handling this scenario? Apologies for the long explanation!


  • Trouble with unions in a C program.

    - by Jordan S
    I am working on a C program that uses a union. The union definition is in the FILE_A header file and looks like this:

        // FILE_A.h
        xdata union {
            long position;
            char bytes[4];
        } CurrentPosition;

    If I set the value of CurrentPosition.position in FILE_A.c and then call a function in FILE_B.c that uses the union, the data in the union is back to zero. This is demonstrated below.

        // FILE_A.c
        int main(void)
        {
            CurrentPosition.position = 12345;
            SomeFunctionInFileB();
        }

        // FILE_B.c
        void SomeFunctionInFileB(void)
        {
            // After the following lines execute I see all zeros in the flash memory.
            WriteByteToFlash(CurrentPosition.bytes[0]);
            WriteByteToFlash(CurrentPosition.bytes[1]);
            WriteByteToFlash(CurrentPosition.bytes[2]);
            WriteByteToFlash(CurrentPosition.bytes[3]);
        }

    Now, if I pass a long to SomeFunctionInFileB(long temp), store it into CurrentPosition.bytes within that function, and finally call WriteBytesToFlash(CurrentPosition.bytes[n])... it works just fine. It appears as though the CurrentPosition union is not global. So I tried changing the union definition in the header file to include the extern keyword, like this:

        extern xdata union {
            long position;
            char bytes[4];
        } CurrentPosition;

    and then putting this in the source (.c) file:

        xdata union {
            long position;
            char bytes[4];
        } CurrentPosition;

    But this causes a compile error that says:

        C:\SiLabs\Optec Programs\AgosRot\MotionControl.c:76: error 91: extern definition for 'CurrentPosition' mismatches with declaration.
        C:\SiLabs\Optec Programs\AgosRot\/MotionControl.h:48: error 177: previously defined here

    So what am I doing wrong? How do I make the union global?
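
    As an aside on what the union provides: it lets the same four bytes be read either as a long or one byte at a time, which is what the flash writes rely on. The equivalent byte view, sketched with Python's struct module purely to illustrate the layout (assuming a little-endian 4-byte long, as on the 8051-family targets where the xdata keyword appears):

        import struct

        position = 12345
        # '<l' = little-endian 4-byte signed long
        as_bytes = struct.pack('<l', position)
        list(as_bytes)                      # -> [57, 48, 0, 0] on Python 3
        struct.unpack('<l', as_bytes)[0]    # -> 12345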


  • Loading Jscript files into Firefox extension

    - by colon3l
    Hi! Let's get directly to the problem: I'm writing a Firefox extension in which I would like to implement the jWebSocket API in order to build a small chat. I have my main script file, named test.js, and the jWebSocket lib in a js folder. Just so you know, this is my first Firefox extension ever. So in my XUL file I have this (for the script part only, of course; the interface code is not shown):

        <overlay id="test-overlay"
                 xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
            <script type="application/x-javascript" src="chrome://test/content/test.js" />
            <script type="application/x-javascript" src="chrome://test/content/js/jwebsocket.js" />

    jwebsocket.js being the file I need to load according to the jWebSocket website. In my main script file test.js I start with:

        if (jws.browserSupportsWebSockets()) {
            jWebSocketClient = new jws.jWebSocketJSONClient();
        } else {
            var lMsg = jws.MSG_WS_NOT_SUPPORTED;
            alert(lMsg);
        }

    jws being the namespace created in the jwebsocket.js file. Of course I've got the required stand-alone server running in the background, and working. From what I understand from various websites, if a js file is loaded into the JavaScript allocated memory space (with the script tag), all namespaces/functions should be available between files. But that was mostly for HTML-oriented issues, so I'm not sure whether it applies to a XUL/Firefox environment. The script keeps failing at the first jws call. Any ideas on what goes wrong here? I've been stuck for 2 days now :/


  • Minimal assembler program for CP/M 3.1 (z80)

    - by Andrew J. Brehm
    I seem to be losing the battle against my stupidity. This site explains the system calls under various versions of CP/M. However, when I try to use call 2 (C_WRITE, console output), nothing much happens. I have the following code:

        ORG 100h
        LD E,'a'
        LD C,2
        CALL 5
        CALL 0

    I recite this here from memory. If there are typos, rest assured they were not in the original, since the file did compile and I had a COM file to start. I am thinking the lines mean the following:

    1. Make sure this gets loaded at address 100h (0h to FFh being the zero page).
    2. Load ASCII 'a' into the E register for system call 2.
    3. Load integer 2 into the C register for system call 2.
    4. Make the system call (the JMP to the system call is at address 5 in the zero page).
    5. End the program (the exit command is at address 0 in the zero page).

    The program starts and exits with no problems. If I remove the last command, it hangs the computer (which I guess is also expected and shows that CALL 0 works). However, it does not print the ASCII character. (But it does print an extra newline, though the system might have done that.) How can I get my CP/M program to do what the system call is supposed to do? What am I doing wrong?


  • java GC periodically enters into several full GC cycles

    - by Peter
    Environment: Sun JDK 1.6.0_16. VM settings:

        -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC
        -Xms1024 -Xmx1024M
        -XX:MaxNewSize=448m -XX:NewSize=448m
        -XX:SurvivorRatio=4 (6 also checked)
        -XX:MaxPermSize=128M

    OS: Windows Server 2003. Processor: 4 cores of Intel Xeon 5130, 2000 MHz.

    My application description: a high intensity of concurrent operations (Java 5 concurrency used), each completed by a commit to Oracle. About 20-30 threads run non-stop, doing tasks. The application runs in the JBoss web container.

    My GC starts out working normally: I see a lot of small GCs, and all that time the CPU shows a good load, with all 4 cores loaded to 40-50% and a stable CPU graph. Then, after 1 minute of good work, the CPU drops to 0% on 2 of the 4 cores; its graph becomes unstable, going up and down ("teeth"). I see that my threads work more slowly (I have monitoring), and that the GC starts to produce a lot of FULL GCs during that time. For the next 4-5 minutes this situation remains as is; then, for a short period of time, like 1 minute, it gets back to the normal situation, but shortly after that the whole bad thing repeats.

    Question: Why do I have such frequent full GCs? How do I prevent that? I played with SurvivorRatio - it does not help. I noticed that the application behaves normally until the first FULL GC occurs, while I have enough memory. Then it runs badly.

    My GC log starts out good, then enters a long period of FULL GCs (many of them):

        1027.861: [GC 942200K->623526K(991232K), 0.0887588 secs]
        1029.333: [GC 803279K(991232K), 0.0927470 secs]
        1030.551: [GC 967485K->625549K(991232K), 0.0823024 secs]
        1030.634: [GC 625957K(991232K), 0.0763656 secs]
        1033.126: [GC 969613K->632963K(991232K), 0.0850611 secs]
        1033.281: [GC 649899K(991232K), 0.0378358 secs]
        1035.910: [GC 813948K(991232K), 0.3540375 secs]
        1037.994: [GC 967729K->637198K(991232K), 0.0826042 secs]
        1038.435: [GC 710309K(991232K), 0.1370703 secs]
        1039.665: [GC 980494K->972462K(991232K), 0.6398589 secs]
        1040.306: [Full GC 972462K->619643K(991232K), 3.7780597 secs]
        1044.093: [GC 620103K(991232K), 0.0695221 secs]
        1047.870: [Full GC 991231K->626514K(991232K), 3.8732457 secs]
        1053.739: [GC 942140K(991232K), 0.5410483 secs]
        1056.343: [Full GC 991232K->634157K(991232K), 3.9071443 secs]
        1061.257: [GC 786274K(991232K), 0.3106603 secs]
        1065.229: [Full GC 991232K->641617K(991232K), 3.9565638 secs]
        1071.192: [GC 945999K(991232K), 0.5401515 secs]
        1073.793: [Full GC 991231K->648045K(991232K), 3.9627814 secs]
        1079.754: [GC 936641K(991232K), 0.5321197 secs]


  • Generated HTML word document not displaying image correctly

    - by spiderdijon
    I'm trying to add an image to a generated HTML Word document that is embedded in a classic ASP page. The code looks something like this:

        <% Response.ContentType = "application/msword" %>
        <html xmlns:v="urn:schemas-microsoft-com:vml"
              xmlns:o="urn:schemas-microsoft-com:office:office"
              xmlns:w="urn:schemas-microsoft-com:office:word">
        ...
        <v:shape id="_x0000_s1030" type="#_x0000_t75" style='position:absolute;
          left:0;text-align:left;margin-left:0;margin-top:17.95pt;width:7in;height:116.85pt;
          z-index:2;mso-position-horizontal:center;mso-position-horizontal-relative:page;
          mso-position-vertical-relative:page'>
          <v:imagedata src="http://xxx/image001.gif" o:title="image001"/>
          <w:wrap anchorx="page" anchory="page"/>
          <w:anchorlock/>
        </v:shape><![endif]--><![if !vml]><span style='mso-ignore:vglayout;position:
          absolute;z-index:0;left:0px;margin-left:0px;margin-top:24px;width:672px;
          height:156px'><img width=672 height=156 src="http://xxx/image001.gif"
          v:shapes="_x0000_s1030"></span><![endif]>

    The image URL is correct and can be viewed in a browser, but when the Word document opens, the image is a red X with the error message:

        The image cannot be displayed. Your computer may not have enough memory to open the image, or the image may be corrupted. Restart your computer, and then open the file again. If the red x still appears, you may have to delete the image and then insert it again.

    If I copy the HTML code and open the Word document on my local machine, it displays the image correctly; it just doesn't work when retrieving the document from the server. This happens for any image I try to add. Is there another way to add images to HTML-generated Word documents that can be output from an ASP page? Thanks.

