Search Results

Search found 1889 results on 76 pages for 'evolutionary algorithms'.

Page 67/76

  • How to minimize total cost of shortest path tree

    - by Michael
    I have a directed acyclic graph with positive edge-weights. It has a single source and a set of targets (vertices furthest from the source). I find the shortest paths from the source to each target. Some of these paths overlap. What I want is a shortest path tree which minimizes the total sum of weights over all edges. For example, consider two of the targets. Given all edge weights equal, if they share a single shortest path for most of their length, then that is preferable to two mostly non-overlapping shortest paths (fewer edges in the tree equals lower overall cost). Another example: two paths are non-overlapping for a small part of their length, with high cost for the non-overlapping paths, but low cost for the long shared path (low combined cost). On the other hand, two paths are non-overlapping for most of their length, with low costs for the non-overlapping paths, but high cost for the short shared path (also, low combined cost). There are many combinations. I want to find solutions with the lowest overall cost, given all the shortest paths from source to target. Does this ring any bells with anyone? Can anyone point me to relevant algorithms or analogous applications? Cheers!
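
    To make the structure concrete, here is a rough Python sketch of the usual first step (the graph representation and names are my own assumption): run Dijkstra from the source, then keep only the edges that lie on at least one shortest path. Picking, from that pruned DAG, a subtree that reaches every target with minimum total edge weight is then the actual optimization; it resembles a directed Steiner tree problem on the DAG more than a plain shortest-path problem, which may help as a search term.

      import heapq
      from collections import defaultdict

      def shortest_path_dag(graph, source):
          # graph: {u: [(v, weight), ...]} with positive weights (hypothetical representation)
          dist = defaultdict(lambda: float('inf'))
          dist[source] = 0
          pq = [(0, source)]
          while pq:
              d, u = heapq.heappop(pq)
              if d > dist[u]:
                  continue
              for v, w in graph.get(u, ()):
                  if d + w < dist[v]:
                      dist[v] = d + w
                      heapq.heappush(pq, (dist[v], v))
          # keep only the edges that lie on at least one shortest path
          dag = {u: [(v, w) for v, w in graph.get(u, ()) if dist[u] + w == dist[v]]
                 for u in graph}
          return dist, dag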

    Read the article

  • Proving that the distance values extracted in Dijkstra's algorithm are non-decreasing?

    - by Gail
    I'm reviewing my old algorithms notes and have come across this proof. It was from an assignment I had and I got it correct, but I feel that the proof is lacking. The question is to prove that the distance values taken from the priority queue in Dijkstra's algorithm form a non-decreasing sequence. My proof goes as follows: proof by contradiction. First, assume that we pull a vertex from Q with d-value 'i'. Next time, we pull a vertex with d-value 'j'. When we pulled i, we had finalised its d-value and computed the shortest path from the start vertex, s, to i. Since we have positive edge weights, it is impossible for our d-values to shrink as we add vertices to our path. If, after pulling i from Q, we pull j with a smaller d-value, we may not have a shortest path to i, since we may be able to reach i through j. However, we have already computed the shortest path to i. We did not check a possible path, so we no longer have a guaranteed shortest path. Contradiction.
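
    Not a substitute for the proof, but a quick empirical sanity check is easy to write. A small Python sketch (names and graph representation are mine) that records the d-values in the order vertices are settled:

      import heapq

      def settled_d_values(graph, source):
          # graph: {u: [(v, weight), ...]} with positive weights
          dist = {source: 0}
          settled, order = set(), []
          pq = [(0, source)]
          while pq:
              d, u = heapq.heappop(pq)
              if u in settled:
                  continue
              settled.add(u)
              order.append(d)            # d is final here: this is the value "extracted"
              for v, w in graph.get(u, ()):
                  if d + w < dist.get(v, float('inf')):
                      dist[v] = d + w
                      heapq.heappush(pq, (dist[v], v))
          return order

      # with positive weights the sequence should never decrease:
      # order = settled_d_values(g, s); assert all(a <= b for a, b in zip(order, order[1:]))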

    Read the article

  • How to perform spatial partitioning in n-dimensions?

    - by kevin42
    I'm trying to design an implementation of Vector Quantization as a C++ template class that can handle different types and dimensions of vectors (e.g. 16-dimensional vectors of bytes, or 4-d vectors of doubles, etc.). I've been reading up on the algorithms, and I understand most of it (here and here). I want to implement the Linde-Buzo-Gray (LBG) Algorithm, but I'm having difficulty figuring out the general algorithm for partitioning the clusters. I think I need to define a plane (hyperplane?) that splits the vectors in a cluster so there is an equal number on each side of the plane. [edit to add more info] This is an iterative process, but I think I start by finding the centroid of all the vectors, then use that centroid to define the splitting plane, get the centroid of each of the sides of the plane, and continue until I have the number of clusters needed for the VQ algorithm (iterating to optimize for less distortion along the way). The animation in the first link above shows it nicely. My questions are: What is an algorithm to find the plane once I have the centroid? How can I test a vector to see which side of that plane it is on?
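
    For the side test, the usual trick is that a hyperplane through the centroid c with normal n classifies a vector v by the sign of dot(v - c, n). A rough numpy sketch of one split (choosing the principal direction as the normal is my own assumption; classic LBG instead perturbs the centroid by a small vector and reassigns by nearest centroid, which amounts to the same kind of split):

      import numpy as np

      def split_cluster(vectors):
          # vectors: (n, d) array holding the cluster's members
          c = vectors.mean(axis=0)                    # centroid
          centered = vectors - c
          _, _, vt = np.linalg.svd(centered, full_matrices=False)
          normal = vt[0]                              # direction of largest variance
          side = centered @ normal > 0                # sign of dot(v - c, normal)
          return vectors[side], vectors[~side]

      # each half can then get its own centroid and be split again until enough clusters exist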

    Read the article

  • Find all cycles in graph, redux

    - by Shadow
    Hi, I know there are already quite a few answers to this question. However, I found none of them really getting to the point. Some argue that a cycle is (almost) the same as a strongly connected component (see http://stackoverflow.com/questions/546655/finding-all-cycles-in-graph/549402#549402), so one could use algorithms designed for that goal. Some argue that finding a cycle can be done via DFS and checking for back-edges (see the boost graph documentation on file dependencies). I would now like some suggestions on whether all cycles in a graph can be detected via DFS and checking for back-edges. My opinion is that it could indeed work that way, as DFS-VISIT (see the pseudocode of DFS) freshly enters each node that was not yet visited. In that sense, each vertex is a potential start of a cycle. Additionally, as DFS visits each edge once, each edge leading to the starting point of a cycle is also covered. Thus, by using DFS and back-edge checking it should indeed be possible to detect all cycles in a graph. Note that, if cycles with different numbers of participating nodes exist (e.g. triangles, rectangles, etc.), additional work has to be done to discriminate the actual "shape" of each cycle.
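
    For reference, a small Python sketch of the back-edge detection being discussed (the graph representation is my assumption): a DFS that reports the cycle closed by each back edge, i.e. the segment of the recursion stack from the back edge's head onward. Whether this amounts to enumerating all cycles, or just at least one cycle per back edge, is exactly the question.

      def cycles_from_back_edges(graph):
          # graph: {u: [v, ...]} (directed); returns one cycle per back edge found
          color, stack, cycles = {}, [], []

          def dfs(u):
              color[u] = 'grey'
              stack.append(u)
              for v in graph.get(u, ()):
                  if v not in color:
                      dfs(v)
                  elif color[v] == 'grey':             # back edge u -> v: v is on the stack
                      cycles.append(stack[stack.index(v):] + [v])
              stack.pop()
              color[u] = 'black'

          for u in graph:
              if u not in color:
                  dfs(u)
          return cycles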

    Read the article

  • Media recommendation engine - Single user system - How to start

    - by Microkernel
    Hi guys, I want to implement a media recommendation engine. I saw similar posts on this, but I think my requirements are a bit different from those, so I'm posting here. Here is the deal. I want to implement a recommendation engine for media players like VLC, i.e. an engine that has to care about only a single user. It would be embedded in a media player on a PC which is typically used by a single user, and it would start learning the likes and dislikes of that user, gradually learning what the user likes. It will not be able to find similar users and use their data for recommendations, since it is a single-user system. So how do I go about this? Alternatively, you can think of it as a recommendation engine that has to be put in, say, an iPod, which has to learn about a single user and recommend music/movies from the collection it has. I thought of collecting the genre of music/movies (maybe even the artist name) that the user watches and recommending items from the most-watched genre, but that looks very crude, doesn't it? So are there any algorithms I can use, or any resources I can refer to? Regards, MicroKernel :)
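
    The genre/artist-counting idea is essentially content-based filtering, and it is a legitimate starting point rather than a crude hack; common refinements are weighting by how much of an item was actually consumed and decaying old counts so the profile can drift. A toy Python sketch under those assumptions (all names are hypothetical):

      from collections import defaultdict

      class SingleUserProfile:
          # content-based profile: weighted counts of item features (genre, artist, ...),
          # decayed over time so old tastes slowly fade
          def __init__(self, decay=0.99):
              self.weights = defaultdict(float)
              self.decay = decay

          def record_play(self, features, fraction_watched=1.0):
              for f in list(self.weights):
                  self.weights[f] *= self.decay
              for f in features:
                  self.weights[f] += fraction_watched

          def score(self, features):
              return sum(self.weights.get(f, 0.0) for f in features)

      # profile.record_play({'genre:jazz', 'artist:coltrane'})
      # recommendations = sorted(library, key=lambda item: profile.score(item.features), reverse=True)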

    Read the article

  • Returning recursive ternary freaks out

    - by David Titarenco
    Hi, assume the following function:

      int binaryTree::findHeight(node *n) {
          if (n == NULL) {
              return 0;
          } else {
              return 1 + max(findHeight(n->left), findHeight(n->right));
          }
      }

    Pretty standard recursive tree-height function for a given binary search tree binaryTree. Now, I was helping a friend (he's taking an algorithms course), and I ran into a weird issue with this function that I couldn't 100% explain to him. With max defined as max(a,b) ((a)>(b)?(a):(b)) (which happens to be the max definition in windef.h), the recursive function freaks out (it runs something like n^n times, where n is the tree height). This obviously makes checking the height of a tree with 3000 elements take very, very long. However, if max is defined via templating, as std does it, everything is okay. So using std::max fixed his problem. I just want to know why. Also, why does the countLeaves function work fine, using the same recursive pattern?

      int binaryTree::countLeaves(node *n) {
          if (n == NULL) {
              return 0;
          } else if (n->left == NULL && n->right == NULL) {
              return 1;
          } else {
              return countLeaves(n->left) + countLeaves(n->right);
          }
      }

    Is it because, when returning through the ternary macro, the values a => countLeaves(n->left) and b => countLeaves(n->right) would be recursively double-called simply because they are the result? Thank you!
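
    For what it's worth, here is the effect in miniature (a Python imitation, not the original C++): the macro max(a,b) ((a)>(b)?(a):(b)) textually repeats each argument, so after the two recursive calls are compared, the winning call is expanded and executed a second time. That gives three subtree evaluations per node instead of two, which blows up exponentially with the tree height, while a real function such as std::max receives already-computed values and each subtree is evaluated exactly once. countLeaves never passes its recursive calls through the macro, which would be consistent with it behaving fine.

      from collections import namedtuple

      Node = namedtuple('Node', 'left right')    # minimal stand-in for the C++ node type

      def find_height_macro_style(n):
          # imitates 1 + ((findHeight(l)) > (findHeight(r)) ? (findHeight(l)) : (findHeight(r)))
          if n is None:
              return 0
          if find_height_macro_style(n.left) > find_height_macro_style(n.right):
              return 1 + find_height_macro_style(n.left)     # the winning call is evaluated again
          return 1 + find_height_macro_style(n.right)

      def find_height_fn_style(n):
          # imitates std::max: each child is evaluated once, then only the values are compared
          if n is None:
              return 0
          return 1 + max(find_height_fn_style(n.left), find_height_fn_style(n.right))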

    Read the article

  • Is functional GUI programming possible?

    - by eman
    I've recently caught the FP bug (trying to learn Haskell), and I've been really impressed with what I've seen so far (first-class functions, lazy evaluation, and all the other goodies). I'm no expert yet, but I've already begun to find it easier to reason "functionally" than imperatively for basic algorithms (and I'm having trouble going back where I have to). The one area where current FP seems to fall flat, however, is GUI programming. The Haskell approach seems to be to just wrap imperative GUI toolkits (such as GTK+ or wxWidgets) and to use "do" blocks to simulate an imperative style. I haven't used F#, but my understanding is that it does something similar using OOP with .NET classes. Obviously, there's a good reason for this--current GUI programming is all about IO and side effects, so purely functional programming isn't possible with most current frameworks. My question is, is it possible to have a functional approach to GUI programming? I'm having trouble imagining what this would look like in practice. Does anyone know of any frameworks, experimental or otherwise, that try this sort of thing (or even any frameworks that are designed from the ground up for a functional language)? Or is the solution to just use a hybrid approach, with OOP for the GUI parts and FP for the logic? (I'm just asking out of curiosity--I'd love to think that FP is "the future," but GUI programming seems like a pretty large hole to fill.)

    Read the article

  • Need some ignition for learning Embedded Systems

    - by Rahul
    I'm very much interested in building applications for embedded devices. I'm in my 3rd year of Electrical Engineering and I'm passionate about coding, algorithms, the Linux OS, etc. Also, by Googling I found out that Linux is considered one of the better OSes for embedded devices (maybe/maybe not). I want to work for companies which work on mobile applications. I'm a newbie in this domain and my skills include C/C++ & MySQL. I need help to get started in the domain of Embedded Systems: how/where to start, hardware prerequisites, necessary programming skills, and also what kinds of embedded applications there are, etc. I've heard of ARM, firmware, and PIC microcontrollers, but I don't know anything about them and just need a proper introduction. Thanks. P.S: I'm currently reading Bjarne Stroustrup's C++ lectures at Texas A&M University, and one chapter in them describes Embedded Systems Programming.

    Read the article

  • Algorithm for possible amounts (over)paid for a specific price, based on denominations

    - by Wrikken
    In a current project, people can order goods delivered to their door and choose 'pay on delivery' as a payment option. To make sure the delivery guy has enough change, customers are asked to input the amount they will pay (e.g. delivery is 48,13, they will pay with 60,- (3*20,-)). Now, if it were up to me I'd make it a free field, but apparently higher-ups have decided it should be a selection based on available denominations, without giving amounts that would result in a set of denominations which could be smaller. Example:

      denominations = [1, 2, 5, 10, 20, 50]
      price = 78.12
      possibilities:
        79  (multitude of options)
        80  (e.g. 4*20)
        90  (e.g. 50 + 2*20)
        100 (2*50)

    It's international, so the denominations could change, and the algorithm should be based on that list. The closest I have come, which seems to work, is this:

      for all denominations in reversed order (large => small)
          add ceil(price / denomination) * denomination to possibles
          baseprice = floor(price / denomination) * denomination
          for all smaller denominations as subdenomination in reversed order
              add baseprice + (ceil((price - baseprice) / subdenomination) * subdenomination) to possibles
          end for
      end for
      remove doubles
      sort

    It seems to work, but this emerged after wildly trying all kinds of compact algorithms, and I cannot defend why it works, which could lead to edge cases / new countries getting wrong options, and it does generate a serious number of duplicates. As this is probably not a new problem, and Google et al. could not provide me with an answer save for loads of pages calculating how to make exact change, I thought I'd ask SO: have you solved this problem before? Which algorithm? Any proof it will always work?
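
    For what it's worth, the algorithm as described translates to a few lines once prices are kept in integer cents, which also sidesteps the floating-point rounding that tends to produce spurious duplicates, and a set removes the doubles automatically. A Python sketch, assuming cents throughout:

      def overpay_options(price, denominations):
          # price and denominations in integer cents, e.g. 7812 and [100, 200, 500, 1000, 2000, 5000]
          def round_up(amount, d):
              return -(-amount // d) * d            # integer ceiling to a multiple of d
          options = set()
          denoms = sorted(denominations, reverse=True)
          for i, d in enumerate(denoms):
              options.add(round_up(price, d))
              base = (price // d) * d
              for sub in denoms[i + 1:]:            # all smaller denominations
                  options.add(base + round_up(price - base, sub))
          return sorted(options)

      # overpay_options(7812, [100, 200, 500, 1000, 2000, 5000]) -> [7900, 8000, 9000, 10000]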

    Read the article

  • CUDA: When to use shared memory and when to rely on L1 caching?

    - by Roger Dahl
    Since Compute Capability 2.0 (Fermi) was released, I've wondered if there are any use cases left for shared memory. That is, when is it better to use shared memory than to just let L1 perform its magic in the background? Is shared memory simply there to let algorithms designed for CC < 2.0 run efficiently without modification? To collaborate via shared memory, threads in a block write to shared memory and synchronize with __syncthreads(). Why not simply write to global memory (through L1) and synchronize with __threadfence_block()? The latter option should be easier to implement, since it doesn't have to keep track of two different locations for the values, and it should be faster because there is no explicit copying from global to shared memory. Since the data gets cached in L1, threads don't have to wait for data to actually make it all the way out to global memory. With shared memory, one is guaranteed that a value that was put there remains there throughout the duration of the block. This is as opposed to values in L1, which get evicted if they are not used often enough. Are there any cases where it's better to cache such rarely used data in shared memory than to let L1 manage them based on the usage pattern that the algorithm actually has?

    Read the article

  • Matlab - Propagate points orthogonally on to the edge of shape boundaries

    - by Graham
    Hi, I have a set of points which I want to propagate onto the edge of a shape boundary defined by a binary image. The shape boundary is defined by a 1 px wide white edge. I also have the coordinates of these points stored in a 2-row by n-column matrix. The shape forms a concave boundary with no holes within itself, made of around 2500 points. I want to cast a ray from each point in the set in an orthogonal direction and detect the point at which it intersects the shape boundary. What would be the best method to do this? Are there some sort of ray-tracing algorithms that could be used? Or would it be a case of taking an orthogonal unit vector, multiplying it by a scalar, and testing after multiplication whether the end point of the vector is outside the shape boundary; and when the end point is outside the shape, just find the point of intersection? Thank you very much in advance for any help!
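
    One low-tech option, sketched here in Python/numpy rather than Matlab (the equivalent Matlab loop is straightforward): step pixel by pixel along the outward normal from each point and stop at the first white boundary pixel. A proper line-drawing walk (Bresenham/DDA) or sub-pixel interpolation would be more exact, but this conveys the idea; the parameter names are assumptions.

      import numpy as np

      def cast_to_boundary(point, normal, boundary, max_steps=2000):
          # boundary: 2-D boolean array, True on the 1-px-wide white edge
          # point: (row, col); normal: the orthogonal direction to march along
          p = np.asarray(point, dtype=float)
          d = np.asarray(normal, dtype=float)
          d = d / np.linalg.norm(d)
          for _ in range(max_steps):
              p = p + d
              r, c = int(round(p[0])), int(round(p[1]))
              if not (0 <= r < boundary.shape[0] and 0 <= c < boundary.shape[1]):
                  return None                      # walked out of the image without a hit
              if boundary[r, c]:
                  return (r, c)                    # first intersection with the edge
          return None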

    Read the article

  • What does O(log n) mean exactly?

    - by Andreas Grech
    I am currently learning about Big O notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm proportionally... and the same goes for, for example, quadratic time O(n^2), etc., and even algorithms, such as permutation generators, with O(n!) times, that grow by factorials. For example, the following function is O(n) because the algorithm grows in proportion to its input n:

      f(int n) {
          int i;
          for (i = 0; i < n; ++i)
              printf("%d", i);
      }

    Similarly, if there was a nested loop, the time would be O(n^2). But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)? I do know (maybe not in great detail) what a logarithm is, in the sense that log10(100) = 2, but I cannot understand how to identify a function with a logarithmic time.
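
    A sketch of the standard example: an algorithm is O(log n) when it discards a constant fraction of the input at every step, because halving n repeatedly reaches 1 after about log2(n) steps. That is also why a complete binary tree with n nodes has height about log2(n): each level down halves the remaining nodes. Binary search shows the pattern directly (Python):

      def binary_search(sorted_items, target):
          # each iteration halves the remaining range, so a list of n items
          # needs at most about log2(n) iterations: O(log n)
          lo, hi = 0, len(sorted_items) - 1
          while lo <= hi:
              mid = (lo + hi) // 2
              if sorted_items[mid] == target:
                  return mid
              if sorted_items[mid] < target:
                  lo = mid + 1
              else:
                  hi = mid - 1
          return -1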

    Read the article

  • Have I taken a wrong path in programming by being excessively worried about code elegance and style?

    - by Ygam
    I am in a major stump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I observed that I have the following attitude in programming:

      - I tend to be more of a purist, scorning inelegant approaches to solving problems using code
      - I tend to look at anything at a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts
      - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times
      - I am obsessed with good directory structures, file naming conventions, and class, method, and variable naming conventions
      - I tend to always want to study something new, even, as I said, at the cost of missing deadlines
      - I tend to see software development as something to engineer, to architect; that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking
      - I tend to combine OOP and procedural coding whenever I see fit
      - I want my code to execute fast (thus the elegant approaches and refactoring)

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they started programming back in our first year of college). By the other way around I mean: they fire up their editors and start coding, and they get the job done much faster because they don't have to really look at how clean their code is or how elegant their algorithms are. They don't bother with OOP however big their projects are; they mostly use web APIs, piece them together and voila! Working code! Clients are happy, they get paid fast, at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common counter-argument being that internet connections are much faster these days and hardware is more powerful). The excuse I often receive is that clients don't care about how you write the code, but they do care about how long it takes you to deliver it. If it works then all is good. Now, was my "purist" approach the wrong way to start programming? Should I just dump these purist concepts and code the hell up, because, as I have seen, clients don't really care how beautifully coded it is?

    Read the article

  • Algorithm(s) for rearranging simple symbolic algebraic expressions

    - by Gabe Johnson
    Hi, I would like to know if there is a straightforward algorithm for rearranging simple symbolic algebraic expressions. Ideally I would like to be able to rewrite any such expression with one variable alone on the left hand side. For example, given the input:

      m = (x + y) / 2

    ... I would like to be able to ask about x in terms of m and y, or y in terms of x and m, and get these:

      x = 2*m - y
      y = 2*m - x

    Of course we've all done this algorithm on paper for years. But I was wondering if there was a name for it. It seems simple enough but if somebody has already cataloged the various "gotchas" it would make life easier. For my purposes I won't need it to handle quadratics. (And yes, CAS systems do this, and yes I know I could just use them as a library. I would like to avoid such a dependency in my application. I really would just like to know if there are named algorithms for approaching this problem.)
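
    In case it helps, the paper-and-pencil procedure is often just called isolating the variable: walk down the side of the equation containing the unknown and apply the inverse of each operation to the other side. A small Python sketch over tuple-based expression trees (the representation and names are my own, and it only handles the four linear operations, not quadratics):

      def isolate(lhs, rhs, target):
          # lhs, rhs: expression trees such as ('div', ('add', 'x', 'y'), 2);
          # leaves are variable names or numbers; target must occur exactly once in lhs
          def contains(expr, var):
              if isinstance(expr, tuple):
                  return any(contains(part, var) for part in expr[1:])
              return expr == var

          while lhs != target:
              op, a, b = lhs
              if contains(a, target):
                  # peel the outermost operation off: apply its inverse to the other side
                  inverse = {'add': 'sub', 'sub': 'add', 'mul': 'div', 'div': 'mul'}[op]
                  lhs, rhs = a, (inverse, rhs, b)
              else:
                  if op == 'add':   lhs, rhs = b, ('sub', rhs, a)   # a + x = r  ->  x = r - a
                  elif op == 'mul': lhs, rhs = b, ('div', rhs, a)   # a * x = r  ->  x = r / a
                  elif op == 'sub': lhs, rhs = b, ('sub', a, rhs)   # a - x = r  ->  x = a - r
                  elif op == 'div': lhs, rhs = b, ('div', a, rhs)   # a / x = r  ->  x = a / r
          return rhs

      # m = (x + y) / 2, solved for x:
      # isolate(('div', ('add', 'x', 'y'), 2), 'm', 'x') -> ('sub', ('mul', 'm', 2), 'y'), i.e. x = 2*m - y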

    Read the article

  • Classification: Dealing with Abstain/Rejected Class

    - by abner.ayala
    I am asking for your input and/or help on a classification problem; if anyone has references that I can read to help me solve my problem, even better. I have a classification problem with four discrete and very well separated classes. However my input is continuous and has a high frequency (50 Hz), since it is a real-time problem. The circles represent the clusters of the classes, the blue line the decision boundary, and class 5 equals the neutral/resting "do nothing" class. This class is the rejected class. The problem is that when I move from one class to another I trigger a lot of false positives during the transition movements, since the movement is clearly non-linear. For example, every time I move from class 5 (the neutral class) to class 1, I first see a lot of 3's before getting to class 1. Ideally, I would want my decision boundary to look like the one in the picture below, where the rejected class is class 5: it has a higher decision boundary than the other classes to avoid misclassification during transitions. I am currently implementing my algorithm in Matlab using optimized naive Bayes, kNN, and SVM algorithms. Question: What is the best/common way to handle an abstain/rejected class? Should I use fuzzy logic, a loss function, or should I include the resting cluster in the training?
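
    One common baseline for an abstain/reject class is a confidence threshold (in the spirit of Chow's reject rule): predict the resting class unless the winning posterior is clearly dominant, optionally followed by a short majority filter over the 50 Hz stream to suppress the transition spikes. A Python sketch of the thresholding step (the threshold value and array layout are assumptions):

      import numpy as np

      def predict_with_reject(probs, class_labels, rest_label=5, threshold=0.8):
          # probs: (n_samples, n_classes) posterior estimates from any classifier
          # fall back to the resting class unless one class is clearly dominant
          best = np.argmax(probs, axis=1)
          confident = probs[np.arange(len(probs)), best] >= threshold
          return np.where(confident, np.asarray(class_labels)[best], rest_label)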

    Read the article

  • Practical rules for premature optimization

    - by DougW
    It seems that the phrase "Premature Optimization" is the buzz-word of the day. For some reason, iPhone programmers in particular seem to think of avoiding premature optimization as a proactive goal, rather than the natural result of simply avoiding distraction. The problem is, the term is beginning to be applied more and more to cases that are completely inappropriate. For example, I've seen a growing number of people say not to worry about the complexity of an algorithm, because that's premature optimization (e.g. http://stackoverflow.com/questions/2190275/help-sorting-an-nsarray-across-two-properties-with-nssortdescriptor/2191720#2191720). Frankly, I think this is just laziness, and an affront to disciplined computer science. But it has occurred to me that maybe considering the complexity and performance of algorithms is going the way of assembly loop unrolling, and other optimization techniques that are now considered unnecessary. What do you think? Are we at the point now where deciding between an O(n^n) and an O(n!) complexity algorithm is irrelevant? What about O(n) vs O(n*n)? What do you consider "premature optimization"? What practical rules do you use to consciously or unconsciously avoid it? This is a bit vague, but I'm curious to hear other people's opinions on the topic.

    Read the article

  • Log 2 N generic comparison tree

    - by Morano88
    Hey! I'm working on an algorithm for Redundant Binary Representation (RBR), where every two bits represent a digit. I designed a comparator that takes 4 bits and gives out 2 bits. I want to make the comparison in log2(n) stages, so if I have X and Y, I compare every 2 bits of X with every 2 bits of Y. This is smooth if the number of digits is a power of two, i.e. n = 2, 4, 8, 16, 32, ... etc., like this: However, if my input is, say, 6 or 10 digits wide, then it is not smooth and I have to take into account some odd situations, like this: I have only shallow experience with algorithms... is there a generic way to do it, so that at the end I get only 2 bits no matter what the input is? I just need hints or pseudo-code. If my question is not appropriate here, feel free to flag it or tell me to remove it. I'm using VHDL, by the way!
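
    In software terms, the usual way to handle a width that is not a power of two is to pair elements level by level and pass an odd leftover element straight up to the next level (in hardware, a bypass wire or register). A Python sketch of that reduction shape, with the 2-bit comparator abstracted as combine (names are mine, not VHDL):

      def tournament_reduce(values, combine):
          # reduces a list to a single result in ceil(log2(n)) levels;
          # an odd element at any level is passed up unchanged (the "bypass" case)
          values = list(values)
          while len(values) > 1:
              nxt = [combine(values[i], values[i + 1]) for i in range(0, len(values) - 1, 2)]
              if len(values) % 2:
                  nxt.append(values[-1])
              values = nxt
          return values[0]

      # tournament_reduce(list_of_2bit_digits, compare_two_digits) -> final 2-bit result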

    Read the article

  • Simplifying Testing through design considerations while utilizing dependency injection

    - by Adam Driscoll
    We are a few months into a green-field project to rework the Logic and Business layers of our product. By utilizing MEF (dependency injection) we have achieved high levels of code coverage, and I believe that we have a pretty solid product. As we have been working through some of the more complex logic, I have found it increasingly difficult to unit test. We are utilizing the CompositionContainer to query for types required by these complex algorithms. My unit tests are sometimes difficult to follow due to the lengthy mock-object setup process that must take place, just right, to allow for certain circumstances to be verified. My unit tests often take me longer to write than the code that I'm trying to test. I realize this is not only an issue with dependency injection but with the design as a whole. Is poor method design or a lack of composition to blame for my overly complex tests? I've tried base-classing tests, creating commonly used mock objects, and ensuring that I utilize the container as much as possible to ease this issue, but my tests always end up quite complex and hard to debug. What are some tips that you've seen to keep such tests concise, readable, and effective?

    Read the article

  • Automatic tracking algorithm

    - by nico
    Hi everyone, I'm trying to write a simple tracking routine to track some points in a movie. Essentially I have a series of 100-frame-long movies showing some bright spots on a dark background. I have ~100-150 spots per frame, and they move over the course of the movie. I would like to track them, so I'm looking for an efficient (but hopefully not overkill-to-implement) routine to do that. A few more details:

      - the spots are a few (e.g. 5x5) pixels in size
      - the movements are not big; a spot generally does not move more than 5-10 pixels from its original position, and the movements are generally smooth
      - the "shape" of these spots is generally fixed; they don't grow or shrink, BUT they become less bright as the movie progresses
      - the spots don't move in a particular direction; they can move right, then left, then right again
      - the user will select a region around each spot and then this region will be tracked, so I do not need to automatically find the points

    As the videos are b/w, I thought I should rely on brightness. For instance, I thought I could move the region around and calculate the correlation of the region's area in the previous frame with that in the various positions in the next frame. I understand that this is a quite naïve solution, but do you think it may work? Does anyone know specific algorithms that do this? It doesn't need to be super fast; as long as it is accurate I'm happy. Thank you, nico
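
    The correlation idea is not naïve at all; it is essentially template matching by normalized cross-correlation, and subtracting the patch mean makes the score tolerant to the spots dimming over time. A rough Python/numpy sketch for one spot between two frames (the patch and search-window sizes are assumptions):

      import numpy as np

      def track_spot(prev_frame, next_frame, center, patch=7, search=10):
          r, c = center
          h = patch // 2
          template = prev_frame[r - h:r + h + 1, c - h:c + h + 1].astype(float)
          template -= template.mean()                      # makes the score brightness-tolerant
          best_score, best_pos = -np.inf, center
          for dr in range(-search, search + 1):
              for dc in range(-search, search + 1):
                  rr, cc = r + dr, c + dc
                  cand = next_frame[rr - h:rr + h + 1, cc - h:cc + h + 1].astype(float)
                  if cand.shape != template.shape:
                      continue                             # candidate window falls off the frame
                  cand = cand - cand.mean()
                  denom = np.linalg.norm(template) * np.linalg.norm(cand)
                  score = float((template * cand).sum() / denom) if denom else -np.inf
                  if score > best_score:
                      best_score, best_pos = score, (rr, cc)
          return best_pos                                  # new center estimate for this spot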

    Read the article

  • Why is the Stream/lazy val implementation faster than the ListBuffer one?

    - by anrizal
    I coded the following implementation of a lazy sieve algorithm using Stream and a lazy val:

      def primes(): Stream[Int] = {
        lazy val ps = 2 #:: sieve(3)
        def sieve(p: Int): Stream[Int] = {
          p #:: sieve(
            Stream.from(p + 2, 2).
              find(i => ps.takeWhile(j => j * j <= i).
                forall(i % _ > 0)).get)
        }
        ps
      }

    and the following implementation using a (mutable) ListBuffer:

      import scala.collection.mutable.ListBuffer

      def primes(): Stream[Int] = {
        def sieve(p: Int, ps: ListBuffer[Int]): Stream[Int] = {
          p #:: {
            val nextprime = Stream.from(p + 2, 2).
              find(i => ps.takeWhile(j => j * j <= i).
                forall(i % _ > 0)).get
            sieve(nextprime, ps += nextprime)
          }
        }
        sieve(3, ListBuffer(3))
      }

    When I ran primes().takeWhile(_ < 1000000).size, the first implementation was 3 times faster than the second one. What's the explanation for this? (I edited the second version: it should have been sieve(3, ListBuffer(3)) instead of sieve(3, ListBuffer()).)

    Read the article

  • The Immerman-Szelepcsenyi Theorem

    - by Daniel Lorch
    In the Immerman-Szelepcsenyi Theorem, two algorithms are specified that use non-determinism. There is a rather lengthy algorithm using "inductive counting", which determines the number of reachable configurations for a given non-deterministic Turing machine. The algorithm looks like this:

      Let m_{i+1} = 0
      For all configurations C
          Let b = 0, r = 0
          For all configurations D
              Guess a path from I to D in at most i steps
              If found
                  Let r = r + 1
                  If D = C or D goes to C in 1 step
                      Let b = 1
          If r < m_i halt and reject
          Let m_{i+1} = m_{i+1} + b

    I is the starting configuration. m_i is the number of configurations reachable from the starting configuration in i steps. This algorithm only calculates the "next step", i.e. m_{i+1} from m_i. This seems pretty reasonable, but since we have non-determinism, why don't we just write:

      Let m_i = 0
      For all configurations C
          Guess a path from I to C in at most i steps
          If found
              m_i = m_i + 1

    What is wrong with this algorithm?

      - I am using non-determinism to guess a path from I to C, and I verify reachability
      - I am iterating through the list of ALL configurations, so I am sure not to miss any configuration
      - I respect the space bounds
      - I can generate a certificate (the list of reachable configs)

    I believe I have a misunderstanding of the "power" of non-determinism, but I can't figure out where to look next. I have been stuck on this for quite a while and I would really appreciate any help.

    Read the article

  • How to set WS-SecurityPolicy in an inbound CXF service in Mule?

    - by Brakara
    When configuring the service to handle UsernameToken and signatures, it is set up like this:

      <service name="serviceName">
        <inbound>
          <cxf:inbound-endpoint address="someUrl" protocolConnector="httpsConnector">
            <cxf:inInterceptors>
              <spring:bean class="org.apache.cxf.binding.soap.saaj.SAAJInInterceptor" />
              <spring:bean class="org.apache.cxf.ws.security.wss4j.WSS4JInInterceptor">
                <spring:constructor-arg>
                  <spring:map>
                    <spring:entry key="action" value="UsernameToken Timestamp Signature" />
                    <spring:entry key="passwordCallbackRef" value-ref="serverCallback" />
                    <spring:entry key="signaturePropFile" value="wssecurity.properties" />
                  </spring:map>
                </spring:constructor-arg>
              </spring:bean>
            </cxf:inInterceptors>
          </cxf:inbound-endpoint>
        </inbound>
      </service>

    But how is it possible to create a policy for which algorithms are allowed, and which parts of the message should be signed?

    Read the article

  • Generating easy-to-remember random identifiers

    - by Carl Seleborg
    Hi all, as all developers do, we constantly deal with some kind of identifiers as part of our daily work. Most of the time, it's about bugs or support tickets. Our software, upon detecting a bug, creates a package that has a name formatted from a timestamp and a version number, which is a cheap way of creating reasonably unique identifiers to avoid mixing packages up. Example: "Bug Report 20101214 174856 6.4b2". My brain just isn't that good at remembering numbers. What I would love to have is a simple way of generating alphanumeric identifiers that are easy to remember. Examples would be "azil3", "ulmops", "fel2way", etc. I just made these up, but they are much easier to recognize when you see many of them at once. I know of algorithms that perform trigram analysis on text (say you feed them a whole book in German) and that can generate strings that look and feel like German words. This requires lots of data, though, and makes it slightly less suitable for embedding in an application just for this purpose. Do you know of anything else? Thanks! Carl
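
    A cheaper alternative to corpus-trained trigram models is to alternate consonants and vowels, which already yields pronounceable, memorable strings; a digit or two at the end restores some of the lost entropy. A small Python sketch (the alphabet choices are arbitrary and can be tuned):

      import random

      CONSONANTS = "bdfgklmnprstvz"      # a restricted alphabet; adjust to taste
      VOWELS = "aeiou"

      def pronounceable_id(syllables=3, digits=1, rng=random):
          # alternate consonant-vowel pairs, e.g. 'bakote', then append a digit or two
          word = "".join(rng.choice(CONSONANTS) + rng.choice(VOWELS) for _ in range(syllables))
          return word + "".join(rng.choice("23456789") for _ in range(digits))

      # pronounceable_id() -> e.g. 'bakote7'; roughly (14*5)**3 * 8 combinations with the defaults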

    Read the article

  • Divide and conquer of large objects for GC performance

    - by Aperion
    At my work we're discussing different approaches to cleaning up a large amount of managed memory, ~50-100 MB. There are two approaches on the table (read: two senior devs can't agree), and, not having the experience, the rest of the team is unsure which approach is more desirable, performance or maintainability. The data being collected is many small items, ~30000, which in turn contain other items; all objects are managed. There are a lot of references between these objects, including event handlers, but not to outside objects. We'll call this large group of objects and references a single entity, a blob. Approach #1: Make sure all references to objects in the blob are severed and let the GC handle the blob and all the connections. Approach #2: Implement IDisposable on these objects, then call Dispose on them, set references to Nothing, and remove handlers. The theory behind the second approach is that large, longer-lived objects take longer to clean up in the GC, so by cutting the large object graph into smaller bite-size morsels the garbage collector will process them faster, thus a performance gain. So I think the basic question is this: does breaking apart large groups of interconnected objects optimize the data for garbage collection, or is it better to keep them together and rely on the garbage collection algorithms to process the data for you? I feel this is a case of premature optimization, but I do not know enough about the GC to know what helps or hinders it.

    Read the article
