Search Results

Search found 12445 results on 498 pages for 'memory fragmentation'.

Page 437/498 | < Previous Page | 433 434 435 436 437 438 439 440 441 442 443 444  | Next Page >

  • Segmentation in Linux : Segmentation & Paging are redundant?

    - by claws
    Hello, I'm reading "Understanding the Linux Kernel". This is the snippet that explains how Linux uses segmentation, which I didn't understand:

        Segmentation has been included in 80 x 86 microprocessors to encourage programmers to split their applications into logically related entities, such as subroutines or global and local data areas. However, Linux uses segmentation in a very limited way. In fact, segmentation and paging are somewhat redundant, because both can be used to separate the physical address spaces of processes: segmentation can assign a different linear address space to each process, while paging can map the same linear address space into different physical address spaces. Linux prefers paging to segmentation for the following reasons:

        - Memory management is simpler when all processes use the same segment register values, that is, when they share the same set of linear addresses.
        - One of the design objectives of Linux is portability to a wide range of architectures; RISC architectures in particular have limited support for segmentation.

        All Linux processes running in User Mode use the same pair of segments to address instructions and data. These segments are called user code segment and user data segment, respectively. Similarly, all Linux processes running in Kernel Mode use the same pair of segments to address instructions and data: they are called kernel code segment and kernel data segment, respectively. Table 2-3 shows the values of the Segment Descriptor fields for these four crucial segments.

    I'm unable to understand the first and last paragraphs.

    Read the article

  • Putting BigDecimal data into HSQLDB test database using DbUnit

    - by Denise
    Hi everyone, I'm using Hibernate JPA in my backend. I am writing a unit test using JUnit and DbUnit to insert a set of data into an in-memory HSQL database. My dataset contains:

        <order_line order_line_id="1" quantity="2" discount_price="0.3"/>

    This maps to an OrderLine Java object where the discount_price column is defined as:

        @Column(name = "discount_price", precision = 12, scale = 2)
        private BigDecimal discountPrice;

    However, when I run my test case and assert that the discount price returned equals 0.3, the assertion fails and says that the stored value is 0. If I change the discount_price in the dataset to 0.9, it rounds up to 1. I've checked to make sure HSQLDB isn't doing the rounding, and it definitely isn't, because I can insert an order line object using Java code with a value like 5.3 and it works fine. To me, it seems like DbUnit is for some reason rounding the number I've defined. Is there a way I can force this to not happen? Can anyone explain why it might be doing this? Thanks!
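
    A hedged guess, since rounding to whole numbers suggests the HSQLDB column ended up with scale 0: if the test schema is created by a hand-written DDL script rather than generated from the @Column annotation, check that the column declares its scale explicitly. For example (table and column definitions assumed from the dataset, not taken from the question):

        -- DECIMAL with no scale defaults to scale 0 in HSQLDB,
        -- which would truncate 0.3 to 0 and round 0.9 to 1.
        CREATE TABLE order_line (
            order_line_id  INTEGER NOT NULL PRIMARY KEY,
            quantity       INTEGER,
            discount_price DECIMAL(12,2)  -- scale 2 preserves 0.30
        );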

    Read the article

  • Wait on multiple condition variables on Linux without unnecessary sleeps?

    - by Joseph Garvin
    I'm writing a latency-sensitive app that in effect wants to wait on multiple condition variables at once. I've read of several ways to get this functionality on Linux (apparently this is built in on Windows), but none of them seem suitable for my app. The methods I know of are:

    1. Have one thread wait on each of the condition variables you want to wait on, which when woken will signal a single condition variable which you wait on instead.
    2. Cycle through multiple condition variables with a timed wait.
    3. Write dummy bytes to files or pipes instead, and poll on those.

    #1 and #2 are unsuitable because they cause unnecessary sleeping. With #1, you have to wait for the dummy thread to wake up, then signal the real thread, then for the real thread to wake up, instead of the real thread just waking up to begin with -- the extra scheduler quantum spent on this actually matters for my app, and I'd prefer not to have to use a full-fledged RTOS. #2 is even worse: you potentially spend N * timeout time asleep, or your timeout will be 0, in which case you never sleep (endlessly burning CPU and starving other threads is also bad). For #3, pipes are problematic because if the thread being 'signaled' is busy or even crashes (I'm in fact dealing with separate processes rather than threads -- the mutexes and conditions would be stored in shared memory), then the writing thread will be stuck because the pipe's buffer will be full, as will any other clients. Files are problematic because you'd be growing them endlessly the longer the app ran. Is there a better way to do this? Curious for answers appropriate for Solaris as well.
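
    A minimal sketch of one possible alternative on Linux, assuming each "condition" can be modeled as an eventfd (not from the question; eventfd(2) requires Linux 2.6.22+, and unlike a pipe its counter cannot fill up, so a busy or dead reader never blocks the writer):

        #include <sys/eventfd.h>
        #include <poll.h>
        #include <stdint.h>
        #include <unistd.h>

        /* Signal one "condition": adds 1 to the eventfd counter; never
         * blocks the way a full pipe would. */
        void cond_signal(int efd) {
            uint64_t one = 1;
            write(efd, &one, sizeof one);
        }

        /* Wait on several "conditions" at once; returns the index of a fd
         * that became ready, or -1 on error. Sketch assumes n <= 16. */
        int cond_wait_any(const int *efds, int n) {
            struct pollfd pfds[16];
            for (int i = 0; i < n; i++) {
                pfds[i].fd = efds[i];
                pfds[i].events = POLLIN;
            }
            if (poll(pfds, n, -1) < 0)
                return -1;
            for (int i = 0; i < n; i++) {
                if (pfds[i].revents & POLLIN) {
                    uint64_t val;
                    read(efds[i], &val, sizeof val);  /* consume the count */
                    return i;
                }
            }
            return -1;
        }

    Each eventfd would be created with eventfd(0, 0); for cross-process use the fds have to be passed over a Unix socket or inherited, which may or may not fit the shared-memory design described above.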

    Read the article

  • Using ember-resource with couchdb - how can i save my documents?

    - by Thomas Herrmann
    I am implementing an application using ember.js and CouchDB. I chose ember-resource as the database access layer because it nicely supports nested JSON documents. Since CouchDB uses the attribute _rev for optimistic locking in every document, this attribute has to be updated in my application after saving the data to CouchDB. My idea is to reload the data right after saving to the database and get the new _rev back with the rest of the document. Here is my code for this:

        // Since we use CouchDB, we have to make sure that we invalidate and re-fetch
        // every document right after saving it. CouchDB uses an optimistic locking
        // scheme based on the attribute "_rev" in the documents, so we reload it in
        // order to have the correct _rev value.
        didSave: function() {
          this._super.apply(this, arguments);
          this.forceReload();
        },

        // reload resource after save is done, expire to make reload really do something
        forceReload: function() {
          this.expire(); // Everything OK up to this location
          Ember.run.next(this, function() {
            this.fetch() // Sub-document is reset here, and *not* refetched!
              .fail(function(error) { App.displayError(error); })
              .done(function() {
                App.log("App.Resource.forceReload fetch done, got revision " + self.get('_rev'));
              });
          });
        }

    This works for most cases, but if I have a nested model, the sub-model is replaced with the old version of the data just before the fetch is executed! Interestingly enough, the correct (updated) data is stored in the database and the wrong (old) data is in the in-memory model after the fetch, although the _rev attribute is correct (as are all attributes of the main object). Here is a part of my object definition:

        App.TaskDefinition = App.Resource.define({
          url: App.dbPrefix + 'courseware',
          schema: {
            id: String,
            _rev: String,
            type: String,
            name: String,
            comment: String,
            task: { type: 'App.Task', nested: true }
          }
        });

        App.Task = App.Resource.define({
          schema: {
            id: String,
            title: String,
            description: String,
            startImmediate: Boolean,
            holdOnComment: Boolean,
            ..... // other attributes and sub-objects
          }
        });

    Any ideas where the problem might be? Thanks a lot for any suggestion! Kind regards, Thomas

    Read the article

  • Is there a way to get number of connections in Signalr hub group?

    - by pajo
    Here is my problem: I want to track whether a user is online or offline and notify other clients about it. I'm using hubs and have implemented both the IConnected and IDisconnect interfaces. My idea was to send a notification to all clients when the hub detects a connect or disconnect. By default, when a user refreshes the page he gets a new connection id, and eventually the previous connection calls disconnect, notifying other clients the user is offline even though he's actually online. I tried to use my own ConnectionIdFactory returning the username as the connection id, but with multiple tabs open, at some point it detects the user's connection id disconnected, and after that the client-side hub tries unsuccessfully to connect to the hub in an endless loop, wasting memory and CPU and making the browser almost unusable. I needed to fix it fast, so I removed my factory, and now I add every new connection to a group named after the username, so I can easily notify a single user across all his connections. But then I have the problem of detecting whether the user is online or offline, as I don't know how many active connections the user has. So I'm wondering: is there a way to get the number of connections in one group? Or does anybody have a better idea how to track when a user goes offline? I'm using SignalR 0.4
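
    One possible workaround, sketched as an assumption rather than a SignalR API (the hub doesn't appear to expose a group's connection count directly): keep your own reference count per username in a static dictionary, increment it alongside the group add on connect, decrement on disconnect, and treat the user as offline when the count hits zero. Only the counting helpers are shown; UserCameOnline/UserWentOffline are hypothetical names, and this only works within a single server process.

        // Hypothetical helper: not a SignalR API, just per-user reference counting.
        private static readonly ConcurrentDictionary<string, int> ConnectionCounts =
            new ConcurrentDictionary<string, int>();

        // Call from your Connect handler, after adding the connection to the group;
        // true means this is the user's first live connection (user came online).
        private static bool UserCameOnline(string userName)
        {
            return ConnectionCounts.AddOrUpdate(userName, 1, (k, n) => n + 1) == 1;
        }

        // Call from your Disconnect handler; true means the last connection closed.
        private static bool UserWentOffline(string userName)
        {
            return ConnectionCounts.AddOrUpdate(userName, 0, (k, n) => n - 1) <= 0;
        }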

    Read the article

  • How to accommodate for the iPhone 4 screen resolution?

    - by dontWatchMyProfile
    This is a programming question! Read on before you vote to close! According to Apple, the iPhone 4 has a new and better screen resolution: 3.5-inch (diagonal) widescreen Multi-Touch display, 960-by-640-pixel resolution at 326 ppi. This little detail affects our apps in a heavy way. Most of the demo apps on the net have one thing in common: they position views in the belief that the screen has a fixed size of 320 x 480 pixels. So what most -- if not all -- developers do is: they design everything in such a way that a touchable area is, for example, 50 x 50 pixels big -- just enough to tap it. Things are positioned relative to the upper left to reach a specific position on screen -- let's say the center, or somewhere at the bottom. Edit: It seems Apple has integrated a switch that allows you to tell whether an app is high-res or not. Nice. When we develop high-resolution apps, they probably won't work on older devices. And if they do, they would suffer a lot from images four times the size, having to scale them down in memory.
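
    For reference -- and hedged, since this is general UIKit knowledge rather than anything from the question -- UIKit keeps the coordinate system in points (still 320 x 480 on iPhone 4) and reports the pixel density separately, so point-based layouts keep working and only artwork needs @2x variants:

        // Points stay 320x480; scale tells you how many pixels back one point.
        CGFloat scale = 1.0;
        if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
            scale = [[UIScreen mainScreen] scale];   // 2.0 on iPhone 4
        }
        // [UIImage imageNamed:@"icon"] picks icon@2x.png automatically on
        // high-res devices, so older devices never pay for the larger image.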

    Read the article

  • ColdFusion 8 Slow Performance

    - by JoeBob
    We have started a new CF8 app and it is running dog slow. A test where we go around ColdFusion (queries within a database utility) shows normal speed (80ms). CF8 returns the same query in something like 60 to 80 seconds! I have been looking online and seeing lots of posts about CF8 and performance problems, but don't get any overall sense of a solution; just lots of people trying things and saying that they didn't have the problem with CF7. We are also seeing instability on the server, and some errors relating to garbage collection and the memory heap. We have a number of other applications running on CF8 and they perform adequately...our programmer is not an expert or a guru, he just plugs away. We have isolated this down to a single query that takes forever to return, so it is not a complicated test. Are there any known CF8 problems or obvious tweaks that we should consider trying? If we have to start over and learn a new environment, I will never make the deadline. JoeBob

    Read the article

  • Apache module, is it possible to have asynchronous processing

    - by prashant2361
    Hi, I have a requirement where I need to send continuous updates to my clients. The client is a browser in this case. We have some data which updates every second, so once a client connects to our server, we maintain a persistent connection and keep pushing data to the client. I am looking for suggestions for this implementation at the server end. Basically what I need is this:

    1. A client connects to the server. I maintain the socket and metadata about the socket. The metadata contains what updates need to be sent to this client.
    2. The server process now waits for new client connections.
    3. One other process has the list of all the opened sockets, goes through each of them, and sends the updates if required.

    Can we do something like this in an Apache module:

    1. An Apache process gets the new connection. It maintains the state for the connection. It keeps the state in some global memory and returns back to the root process to signify that it is done, so that the root process can accept a new connection.
    2. The Apache process, though it has returned status to the root process, also keeps executing in parallel, going through its global store and sending updates to the clients, if any.

    So can an Apache process do these things:

    1. Have more than one connection associated with it?
    2. Asynchronously wait for new connections and at the same time process the previous connections?

    Regards Prashant

    Read the article

  • How do I store a string longer than 4000 characters in an Oracle Database using Java/JDBC?

    - by Ventrue
    I'm not sure how to use Java/JDBC to insert a very long string into an Oracle database. I have a String which is greater than 4000 characters, let's say it's 6000. I want to take this string and store it in an Oracle database. The way to do this seems to be with the CLOB datatype. Okay, so I declared the column as description CLOB. Now, when it comes time to actually insert the data, I have a prepared statement pstmt. It looks like:

        pstmt = conn.prepareStatement("INSERT INTO Table VALUES(?)");

    So I want to use the method pstmt.setClob(). However, I don't know how to create a Clob object with my String in it; there's no constructor (presumably because it can be potentially much larger than available memory). How do I put my String into a Clob? Keep in mind I'm not a very experienced programmer; please try to keep the explanations as simple as possible. Efficiency, good practices, etc. are not a concern here, I just want the absolute easiest solution. I'd like to avoid downloading other packages if at all possible; right now I'm just using JDK 1.4 and what is labelled "ojdbc14.jar". I've looked around a bit but I haven't been able to follow any of the explanations I've found. If you have a solution that doesn't use Clobs, I'd be open to that as well, but it has to be one column.
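
    A minimal sketch that avoids constructing a Clob at all, using only java.sql and java.io from JDK 1.4 (the table and column names here are assumed): JDBC lets you stream a String into a CLOB column through a Reader.

        import java.io.StringReader;
        import java.sql.Connection;
        import java.sql.PreparedStatement;

        // 'conn' is an open connection and 'text' the 6000-character String.
        PreparedStatement pstmt = conn.prepareStatement(
            "INSERT INTO my_table (description) VALUES (?)");
        // Stream the string into the CLOB column; no Clob object needed.
        pstmt.setCharacterStream(1, new StringReader(text), text.length());
        pstmt.executeUpdate();
        pstmt.close();

    Whether the Oracle thin driver in ojdbc14.jar accepts this for strings of this size without extra configuration is worth verifying; it is the commonly suggested approach, not something confirmed by the question.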

    Read the article

  • Database Functional Programming in Clojure

    - by Ralph
    "It is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." - Abraham Maslow I need to write a tool to dump a large hierarchical (SQL) database to XML. The hierarchy consists of a Person table with subsidiary Address, Phone, etc. tables. I have to dump thousands of rows, so I would like to do so incrementally and not keep the whole XML file in memory. I would like to isolate non-pure function code to a small portion of the application. I am thinking that this might be a good opportunity to explore FP and concurrency in Clojure. I can also show the benefits of immutable data and multi-core utilization to my skeptical co-workers. I'm not sure how the overall architecture of the application should be. I am thinking that I can use an impure function to retrieve the database rows and return a lazy sequence that can then be processed by a pure function that returns an XML fragment. For each Person row, I can create a Future and have several processed in parallel (the output order does not matter). As each Person is processed, the task will retrieve the appropriate rows from the Address, Phone, etc. tables and generate the nested XML. I can use a a generic function to process most of the tables, relying on database meta-data to get the column information, with special functions for the few tables that need custom processing. These functions could be listed in a map(table name -> function). Am I going about this in the right way? I can easily fall back to doing it in OO using Java, but that would be no fun. BTW, are there any good books on FP patterns or architecture? I have several good books on Clojure, Scala, and F#, but although each covers the language well, none look at the "big picture" of function programming design.

    Read the article

  • document.onclick settimeout function javascript help

    - by Jamex
    Hi, I have a document.onclick function that I would like to have a delay. I can't seem to get the syntax right. My original code is:

        <script type="text/javascript">
        document.onclick = check;
        function check(e) { /* do something */ }

    I tried the below, but that code is incorrect; the function did not execute and nothing happened:

        <script type="text/javascript">
        document.onclick = setTimeout("check", 1000);
        function check(e) { /* do something */ }

    I tried the next set; the function got executed, but with no delay:

        <script type="text/javascript">
        setTimeout(document.onclick = check, 1000);
        function check(e) { /* do something */ }

    What is the correct syntax for this code? TIA

    Edit: The solutions were all good. My problem was that I use the function check to obtain the id of the element being clicked on, but after the delay there is no "memory" of what was clicked on, so the rest of the function does not get executed. Jimr wrote the short code below to preserve the clicked event. The code that is working is:

        // Delay execution of event handler function "f" by "time" ms.
        document.onclick = makeDelayedHandler(check, 250);

        function makeDelayedHandler(f, time) {
          return function(e) {
            setTimeout(function() { f(e); }, time);
          };
        }

        function check(e) {
          var click = (e && e.target) || (event && event.srcElement);
          . . .

    Thank you all.

    Read the article

  • What's the best way to retrieve two pieces of data from an XML file?

    - by Morinar
    I've got an XML document that is in either a pre- or post-FO-transform state that I need to extract some information from. In the pre case, I need to pull out two tags that represent the pageWidth and pageHeight, and in the post case I need to extract the page-height and page-width parameters from a specific tag (I forget which one it is off the top of my head). What I'm looking for is an efficient, easily maintainable way to grab these two elements. I'd like to read the document only a single time, fetching the two things I need. I initially started writing something that would use BufferedReader + FileReader, but then I'm doing string searching and it gets messy when the tags span multiple lines. I then looked at the DOMParser, which seems like it would be ideal, but I don't want to have to read the entire file into memory if I can help it, as the files could potentially be large and the tags I'm looking for will nearly always be close to the top of the file. I then looked into SAXParser, but that seems like a big pile of complicated overkill for what I'm trying to accomplish. Anybody have any advice? Or simple implementations that would accomplish my goal? Thanks.
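
    For what it's worth, a SAX handler for this can stay small if it bails out as soon as it has both values; aborting via an exception is the usual (if inelegant) SAX idiom. A sketch, with the element and attribute names assumed rather than taken from the actual FO documents:

        import java.io.File;
        import javax.xml.parsers.SAXParserFactory;
        import org.xml.sax.Attributes;
        import org.xml.sax.SAXException;
        import org.xml.sax.helpers.DefaultHandler;

        public class PageSizeHandler extends DefaultHandler {
            String width, height;

            public void startElement(String uri, String local, String qName,
                                     Attributes atts) throws SAXException {
                // Assumed names: adjust to the real pre/post FO elements.
                if ("simple-page-master".equals(qName) || "pageSize".equals(qName)) {
                    width = atts.getValue("page-width");
                    height = atts.getValue("page-height");
                    // Stop parsing: the rest of a large file is never read.
                    throw new SAXException("done");
                }
            }

            public static String[] read(String path) throws Exception {
                PageSizeHandler h = new PageSizeHandler();
                try {
                    SAXParserFactory.newInstance().newSAXParser()
                                    .parse(new File(path), h);
                } catch (SAXException expected) { /* early exit */ }
                return new String[] { h.width, h.height };
            }
        }

    Since the tags sit near the top of the file, this touches only the first few kilobytes regardless of document size.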

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships:

    - has many Contributions
    - has many Payments

    In my result set, I need the following aggregate values:

    - Number of unique contributors (DonorID on the Contribution table)
    - Total contributed (SUM of Amount on the Contribution table)
    - Total paid (SUM of PaymentAmount on the Payment table)

    Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions and the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options, as shown below.

    Using subqueries:

        SELECT Project.ID AS PROJECT_ID,
            (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
            (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount,
            (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;
        CREATE TEMPORARY TABLE Project_Temp (
            project_id INT NOT NULL,
            total_payments INT,
            total_donors INT,
            total_received INT,
            PRIMARY KEY(project_id)
        ) ENGINE=MEMORY;
        INSERT INTO Project_Temp (project_id, total_payments)
            SELECT `Project`.ID, IFNULL(SUM(PaymentAmount),0)
            FROM `Project` LEFT JOIN `Payment` ON ProjectID = `Project`.ID
            GROUP BY 1;
        INSERT INTO Project_Temp (project_id, total_donors, total_received)
            SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID),0), IFNULL(SUM(Amount),0)
            FROM `Project` LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
            GROUP BY 1
        ON DUPLICATE KEY UPDATE total_donors = VALUES(total_donors), total_received = VALUES(total_received);
        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7 - 0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
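
    A third shape worth benchmarking -- hedged, since it's an assumption about this schema rather than something tested here -- is to pre-aggregate each child table in a derived table and join once, which lets MySQL compute each aggregate in a single pass instead of one correlated subquery per project row:

        SELECT p.ID AS PROJECT_ID,
               IFNULL(pay.total_paid, 0)   AS TotalPaidBack,
               IFNULL(c.donor_count, 0)    AS ContributorCount,
               IFNULL(c.total_received, 0) AS TotalReceived
        FROM Project p
        LEFT JOIN (SELECT ProjectID, SUM(PaymentAmount) AS total_paid
                   FROM Payment GROUP BY ProjectID) pay ON pay.ProjectID = p.ID
        LEFT JOIN (SELECT RecipientID, COUNT(DISTINCT DonorID) AS donor_count,
                          SUM(Amount) AS total_received
                   FROM Contribution GROUP BY RecipientID) c ON c.RecipientID = p.ID;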

    Read the article

  • MYSQL not running on Ubuntu OS - Error 2002.

    - by mgj
    Hi, I am a novice to MySQL. I am trying to run the MySQL server on Ubuntu 10.04. Through Synaptic Package Manager I have installed the mysql version: mysql-client-5.1. I wonder how the database password was set for the mysql-client software that I installed through the above way. It would be nice if you could enlighten me on this. When I tried running the database, I encountered the error given below:

        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
        mohnish@mohnish-laptop:/var/lib$

    I referred to a similar question posted by another user, but I didn't find a solution through the proposed answers. For instance, when I tried the solutions posted for the similar question I got the following:

        mohnish@mohnish-laptop:/var/lib$ service start mysqld
        start: unrecognized service
        mohnish@mohnish-laptop:/var/lib$ ps -u mysql
        ERROR: User name does not exist.
        [ps then prints its usage/help text]
        mohnish@mohnish-laptop:/var/lib$ which mysql
        /usr/bin/mysql
        mohnish@mohnish-laptop:/var/lib$ mysql
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I even tried referring to http://forums.mysql.com/read.php?11,27769,84713#msg-84713 but couldn't find anything useful. Please let me know how I could tackle this error. Thank you very much..
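
    A hedged reading of the output above: only the client package (mysql-client-5.1) was installed, so there is no local server to connect to -- which would also explain why no server password was ever requested, and why the mysql user does not exist. Assuming that is the case, installing and starting the server package should resolve error 2002:

        sudo apt-get install mysql-server   # prompts for a root password on install
        sudo service mysql start            # the Upstart job is "mysql", not "mysqld"
        mysql -u root -p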

    Read the article

  • Django caching seems to be causing problems

    - by Issy
    Hey guys, I have just implemented the Django local-memory cache backend in some of my code, however it seems to be causing a problem. I get the following error when trying to view the site (with debug on):

        Traceback (most recent call last):
          File "/usr/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 279, in run
            self.result = application(self.environ, self.start_response)
          File "/usr/lib/python2.6/dist-packages/django/core/servers/basehttp.py", line 651, in __call__
            return self.application(environ, start_response)
          File "/usr/lib/python2.6/dist-packages/django/core/handlers/wsgi.py", line 245, in __call__
            response = middleware_method(request, response)
          File "/usr/lib/python2.6/dist-packages/django/middleware/cache.py", line 91, in process_response
            patch_response_headers(response, timeout)
          File "/usr/lib/python2.6/dist-packages/django/utils/cache.py", line 112, in patch_response_headers
            response['Expires'] = http_date(time.time() + cache_timeout)
        TypeError: unsupported operand type(s) for +: 'float' and 'str'

    I have checked my code for caching and everything seems to be OK. For example, I have the following in my middleware:

        MIDDLEWARE_CLASSES = (
            'django.contrib.sessions.middleware.SessionMiddleware',
            'django.contrib.auth.middleware.AuthenticationMiddleware',
            'django.middleware.cache.UpdateCacheMiddleware',
            'django.middleware.common.CommonMiddleware',
            'django.middleware.cache.FetchFromCacheMiddleware',
            'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware',
        )

    My settings for the cache:

        CACHE_BACKEND = 'locmem://'
        CACHE_MIDDLEWARE_SECONDS = '3600'
        CACHE_MIDDLEWARE_KEY_PREFIX = 'za'
        CACHE_MIDDLEWARE_ANONYMOUS_ONLY = True

    And some of my code (template tag):

        def get_featured_images():
            """ provides featured images """
            cache_key = 'featured_images'
            images = cache.get(cache_key)
            if images is None:
                images = FeaturedImage.objects.all().filter(enabled=True)[:5]
                cache.set(cache_key, images)
            return {'images': images}

    Any idea what could be the problem? From the error message it looks like there's an issue in Django's cache.py?
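
    Reading the traceback against the settings above: time.time() is a float and cache_timeout is the string '3600' from CACHE_MIDDLEWARE_SECONDS, which is exactly the float + str the TypeError complains about. The setting needs to be a number, not a string:

        CACHE_MIDDLEWARE_SECONDS = 3600   # int, not '3600'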

    Read the article

  • how to implement a really efficient bitvector sorting in python

    - by xiao
    Hello guys! Actually this is an interesting topic from programming pearls, sorting 10 digits telephone numbers in a limited memory with an efficient algorithm. You can find the whole story here What I am interested in is just how fast the implementation could be in python. I have done a naive implementation with the module bitvector. The code is as following: from BitVector import BitVector import timeit import random import time import sys def sort(input_li): return sorted(input_li) def vec_sort(input_li): bv = BitVector( size = len(input_li) ) for i in input_li: bv[i] = 1 res_li = [] for i in range(len(bv)): if bv[i]: res_li.append(i) return res_li if __name__ == "__main__": test_data = range(int(sys.argv[1])) print 'test_data size is:', sys.argv[1] random.shuffle(test_data) start = time.time() sort(test_data) elapsed = (time.time() - start) print "sort function takes " + str(elapsed) start = time.time() vec_sort(test_data) elapsed = (time.time() - start) print "sort function takes " + str(elapsed) start = time.time() vec_sort(test_data) elapsed = (time.time() - start) print "vec_sort function takes " + str(elapsed) I have tested from array size 100 to 10,000,000 in my macbook(2GHz Intel Core 2 Duo 2GB SDRAM), the result is as following: test_data size is: 1000 sort function takes 0.000274896621704 vec_sort function takes 0.00383687019348 test_data size is: 10000 sort function takes 0.00380706787109 vec_sort function takes 0.0371489524841 test_data size is: 100000 sort function takes 0.0520560741425 vec_sort function takes 0.374383926392 test_data size is: 1000000 sort function takes 0.867373943329 vec_sort function takes 3.80475401878 test_data size is: 10000000 sort function takes 12.9204008579 vec_sort function takes 38.8053860664 What disappoints me is that even when the test_data size is 100,000,000, the sort function is still faster than vec_sort. Is there any way to accelerate the vec_sort function?

    Read the article

  • iOS - when to remove subviews from UITableViewCell

    - by Milad Ghattavi
    I have a UITableView which a UIImage is added as a subview to each cell of it. The images are PNG transparent. The problem is when I scroll through the UITableView, the images get overlapped and then I receive the memory warning and stuff. here's the current code for configuring a cell: - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { static NSString *CellIdentifier = @"Cell"; UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier]; if (cell == nil) { cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier]; } int cellNumber = indexPath.row + 1; NSString *cellImage1 = [NSString stringWithFormat:@"c%i.png", cellNumber]; UIImage *theImage = [UIImage imageNamed:cellImage1]; UIImageView *cellImage = [[UIImageView alloc] initWithImage:theImage]; [cell.contentView addSubview:cellImage]; return cell; } I know the code for removing the UIImage subview would be like: [cell.imageView removeFromSuperview]; But I don't know where to put it. I've placed it between all the lines; even added an else, in the if statement. didn't seem to work!

    Read the article

  • Why is my UITableView not being set?

    - by Jamie L
    I checked using the debbuger in the viewDidLoad method and tracerTableView is 0x0 which i assume means it is nil. I don't understand. I should go ahaed say yes I have already checked my nib file and yes all the connections are correct. Here is the header file and the begging of the .m. ///////////// .h file //////////// @interface TrackerListController : UITableViewController { // The mutable (modifiable) dictionary days holds all the data for the days tab NSMutableArray *trackerList; UITableView *tracerTableView; } @property (nonatomic, retain) NSMutableArray *trackerList; @property (nonatomic, retain) IBOutlet UITableView. *tracerTableView; //The addPackage: method is invoked when the user taps the addbutton created at runtime. -(void) addPackage : (id) sender; @end /////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////////////////// .m file ////////////////////////////////////// @implementation TrackerListController @synthesize trackerList, tracerTableView; (void)viewDidLoad { [super viewDidLoad]; self.title = @"Package Tracker"; self.navigationItem.leftBarButtonItem = self.editButtonItem; UIBarButtonItem *addButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemAdd target:self action:@selector(addPackage:)]; // Set up the Add custom button on the right of the navigation bar self.navigationItem.rightBarButtonItem = addButton; [addButton release]; // Release the addButton from memory since it is no longer needed }

    Read the article

  • Webcam video stream processing.

    - by vikramtheone
    Hi Guys, I'm working with an image processing project, my final goal is to detect features on a real time video and finally track those features. I will be working with an Embedded Processor Platform called Freescale's i.MX515, it is a 32-bit media processor running on Ubuntu 9.04. Right now I'm working on the algorithms to locate the features, so, I'm using still images. When I'm satisfied with the results I will have to start using a video stream and I don't want to make use of a video file as a source stream, because then I will have to worry about video decoders then. Instead I would like to plug in a USB Wecam to the embedded platform (It has USB ports on it), directly take the frames as they are captured and send it to my application. I will take care to buy a webcam which will be supported in Linux (Device driver). But my question is will I be able to capture the incoming video stream from the webcam and send it to my application? Will I be able to configure the webcam and DMA to write the incoming frames in a particular memory location whose pointer I can simply pass to my application? (Confused!!!) I hope I could convey my doubts, can anyone guide me with what steps that I have to take to achieve all of these easily? Do you foresee any impossibility here? Help!!! Regards Vikram

    Read the article

  • activemessaging with stomp and activemq.prefetchSize=1

    - by Clint Miller
    I have a situation where I have a single activemq broker with 2 queues, Q1 and Q2. I have two ruby-based consumers using activemessaging. Let's call them C1 and C2. Both consumers subscribe to each queue. I'm setting activemq.prefetchSize=1 when subscribing to each queue. I'm also setting ack=client. Consider the following sequence of events: 1) A message that triggers a long-running job is published to queue Q1. Call this M1. 2) M1 is dispatched to consumer C1, kicking off a long operation. 3) Two messages that trigger short jobs are published to queue Q2. Call these M2 and M3. 4) M2 is dispatched to C2 which quickly runs the short job. 5) M3 is dispatched to C1, even though C1 is still running M1. It's able to dispatch to C1 because prefetchSize=1 is set on the queue subscription, not on the connection. So the fact that a Q1 message has already been dispatched doesn't stop one Q2 message from being dispatched. Since activemessaging consumers are single-threaded, the net result is that M3 sits and waits on C1 for a long time until C1 finishes processing M1. So, M3 is not processed for a long time, despite the fact that consumer C2 is sitting idle (since it quickly finishes with message M2). Essentially, whenever a long Q1 job is run and then a whole bunch of short Q2 jobs are created, exactly one of the short Q2 jobs gets stuck on a consumer waiting for the long Q1 job to finish. Is there a way to set prefetchSize at the connection level rather than at the subscription level? I really don't want any messages dispatched to C1 while it is processing M1. The other alternative is that I could create a consumer dedicated to processing Q1 and then have other consumers dedicated to processing Q2. But, I'd rather not do that since Q1 messages are infrequent--Q1's dedicated consumers would sit idle most of the day tying up memory.

    Read the article

  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some dell workstations(running WinXP Pro SP 2 & DeepFreeze) for development, but something was recenlty loaded onto these machines that prevents any opengl call(the call locks) from completing(and I know the code works as I have tested it on 'clean' machines, I also tested with simple opengl apps generated by dev-cpp, which will also lock on the dell machines). I have tried to debug my own apps to see where exactly the gl calls freeze, but there is some global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread(used by ExitThread), preventing me from debugging at all(it causes the debugger, OllyDBG, to go into an access violation reporting loop or the program to crash if the exception is passed along). the hook: ntdll.ZwQueryInformationProcess 7C90D7E0 B8 9A000000 MOV EAX,9A 7C90D7E5 BA 0003FE7F MOV EDX,7FFE0300 7C90D7EA FF12 CALL DWORD PTR DS:[EDX] 7C90D7EC - E9 0F28448D JMP 09D50000 7C90D7F1 9B WAIT 7C90D7F2 0000 ADD BYTE PTR DS:[EAX],AL 7C90D7F4 00BA 0003FE7F ADD BYTE PTR DS:[EDX+7FFE0300],BH 7C90D7FA FF12 CALL DWORD PTR DS:[EDX] 7C90D7FC C2 1400 RETN 14 7C90D7FF 90 NOP ntdll.ZwQueryInformationToken 7C90D800 B8 9C000000 MOV EAX,9C the messed up function + call: ntdll.ZwQueryInformationThread 7C90D7F0 8D9B 000000BA LEA EBX,DWORD PTR DS:[EBX+BA000000] 7C90D7F6 0003 ADD BYTE PTR DS:[EBX],AL 7C90D7F8 FE ??? ; Unknown command 7C90D7F9 7F FF JG SHORT ntdll.7C90D7FA 7C90D7FB 12C2 ADC AL,DL 7C90D7FD 14 00 ADC AL,0 7C90D7FF 90 NOP ntdll.ZwQueryInformationToken 7C90D800 B8 9C000000 MOV EAX,9C So firstly, anyone know what if anything would lead to OpenGL calls cause an infinite lock,and if there are any ways around it? and what would be creating such a hook in kernal memory ? Update: After some more fiddling, I have discovered a few more kernal hooks, a lot of them are used to nullify data returned by system information calls(such as the remote debugging port), I also managed to find out the what ever is doing this is using madchook.dll(by madshi) to do this, this dll is also injected into every running process(these seem to be some anti debugging code). Also, on the OpenGL side, it seems Direct X is fine/unaffected(I ran one of the DX 9 demo's without problems), so could one of these kernal hooks somehow affect OpenGL?

    Read the article

  • Computing "average" of two colors

    - by Francisco P.
    This is only marginally programming related - has much more to do w/ colors and their representation. I am working on a very low level app. I have an array of bytes in memory. Those are characters. They were rendered with anti-aliasing: they have values from 0 to 255, 0 being fully transparent and 255 totally opaque (alpha, if you wish). I am having trouble conceiving an algorithm for the rendering of this font. I'm doing the following for each pixel: // intensity is the weight I talked about: 0 to 255 intensity = glyphs[text[i]][x + GLYPH_WIDTH*y]; if (intensity == 255) continue; // Don't draw it, fully transparent else if (intensity == 0) setPixel(x + xi, y + yi, color, base); // Fully opaque, can draw original color else { // Here's the tricky part // Get the pixel in the destination for averaging purposes pixel = getPixel(x + xi, y + yi, base); // transfer is an int for calculations transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.red + (float) pixel.red)/2); // This is my attempt at averaging newPixel.red = (Byte) transfer; transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.green + (float) pixel.green)/2); newPixel.green = (Byte) transfer; // transfer = (int) ((float) ((float) 255.0 - (float) intensity)/255.0 * (((float) color.blue) + (float) pixel.blue)/2); transfer = (int) ((float)((float) (255.0 - (float) intensity/255.0) * (float) color.blue + (float) pixel.blue)/2); newPixel.blue = (Byte) transfer; // Set the newpixel in the desired mem. position setPixel(x+xi, y+yi, newPixel, base); } The results, as you can see, are less than desirable. That is a very zoomed in image, at 1:1 scale it looks like the text has a green "aura". Any idea for how to properly compute this would be greatly appreciated. Thanks for your time!

    Read the article

  • smallest mysql type that accomodates single decimal

    - by donpal
    Database newbie here. I'm setting up a mysql table. One of the fields will accept a value in increment of a 0.5. e.g. 0.5, 1.0, 1.5, 2.0, .... 200.5, etc. I've tried int but it doesn't capture the decimals. `value` int(10), What would be the smallest type that can accommodate this value, considering it's only a single decimal. I also was considering that because the decimal will always be 0.5 if at all, I could store it in a separate boolean field? So I would have 2 fields instead. Is this a stupid or somewhat over complicated idea? I don't know if it really saves me any memory, and it might get slower now that I'm accessing 2 fields instead of 1 `value` int(10), `half` bool, //or something similar to boolean What are your suggestions guys? Is the first option better, and what's the smallest data type in that case that would get me the 0.5?

    Read the article

  • Why are references compacted inside Perl lists?

    - by parkan
    Putting a precompiled regex inside two different hashes referenced in a list: my @list = (); my $regex = qr/ABC/; push @list, { 'one' => $regex }; push @list, { 'two' => $regex }; use Data::Dumper; print Dumper(\@list); I'd expect: $VAR1 = [ { 'one' => qr/(?-xism:ABC)/ }, { 'two' => qr/(?-xism:ABC)/ } ]; But instead we get a circular reference: $VAR1 = [ { 'one' => qr/(?-xism:ABC)/ }, { 'two' => $VAR1->[0]{'one'} } ]; This will happen with indefinitely nested hash references and shallowly copied $regex. I'm assuming the basic reason is that precompiled regexes are actually references, and references inside the same list structure are compacted as an optimization (\$scalar behaves the same way). I don't entirely see the utility of doing this (presumably a reference to a reference has the same memory footprint), but maybe there's a reason based on the internal representation Is this the correct behavior? Can I stop it from happening? Aside from probably making GC more difficult, these circular structures create pretty serious headaches. For example, iterating over a list of queries that may sometimes contain the same regular expression will crash the MongoDB driver with a nasty segfault (see https://rt.cpan.org/Public/Bug/Display.html?id=58500)

    Read the article

  • Silverlight Socket Constantly Returns With Empty Buffer

    - by Benny
    I am using Silverlight to interact with a proxy application that I have developed but, without the proxy sending a message to the Silverlight application, it executes the receive completed handler with an empty buffer ('\0's). Is there something I'm doing wrong? It is causing a major memory leak. this._rawBuffer = new Byte[this.BUFFER_SIZE]; SocketAsyncEventArgs receiveArgs = new SocketAsyncEventArgs(); receiveArgs.SetBuffer(_rawBuffer, 0, _rawBuffer.Length); receiveArgs.Completed += new EventHandler<SocketAsyncEventArgs>(ReceiveComplete); this._client.ReceiveAsync(receiveArgs); if (args.SocketError == SocketError.Success && args.LastOperation == SocketAsyncOperation.Receive) { // Read the current bytes from the stream buffer int bytesRecieved = this._client.ReceiveBufferSize; // If there are bytes to process else the connection is lost if (bytesRecieved > 0) { try { //Find out what we just received string messagePart = UTF8Encoding.UTF8.GetString(_rawBuffer, 0, _rawBuffer.GetLength(0)); //Take out any trailing empty characters from the message messagePart = messagePart.Replace('\0'.ToString(), ""); //Concatenate our current message with any leftovers from previous receipts string fullMessage = _theRest + messagePart; int seperator; //While the index of the seperator (LINE_END defined & initiated as private member) while ((seperator = fullMessage.IndexOf((char)Messages.MessageSeperator.Terminator)) > 0) { //Pull out the first message available (up to the seperator index string message = fullMessage.Substring(0, seperator); //Queue up our new message _messageQueue.Enqueue(message); //Take out our line end character fullMessage = fullMessage.Remove(0, seperator + 1); } //Save whatever was NOT a full message to the private variable used to store the rest _theRest = fullMessage; //Empty the queue of messages if there are any while (this._messageQueue.Count > 0) { ... } } catch (Exception e) { throw e; } // Wait for a new message if (this._isClosing != true) Receive(); } } Thanks in advance.

    Read the article

< Previous Page | 433 434 435 436 437 438 439 440 441 442 443 444  | Next Page >