Search Results

Search found 6361 results on 255 pages for 'speed up'.

Page 235 of 255

  • Fixed point math in c#?

    - by x4000
    Hi there, I was wondering if anyone here knows of any good resources for fixed point math in C#? I've seen things like this (http://2ddev.72dpiarmy.com/viewtopic.php?id=156) and this (http://stackoverflow.com/questions/79677/whats-the-best-way-to-do-fixed-point-math), and a number of discussions about whether decimal is really fixed point or actually floating point (update: responders have confirmed that it's definitely floating point), but I haven't seen a solid C# library for things like calculating cosine and sine.

    My needs are simple -- I need the basic operators, plus cosine, sine, arctan2, PI... I think that's about it. Maybe sqrt. I'm programming a 2D RTS game, which I have largely working, but the unit movement when using floating-point math (doubles) has very small inaccuracies over time (10-30 minutes) across multiple machines, leading to desyncs. This is presently only between a 32 bit OS and a 64 bit OS; all the 32 bit machines seem to stay in sync without issue, which is what makes me think this is a floating point issue.

    I was aware of this as a possible issue from the outset, and so have limited my use of non-integer position math as much as possible, but for smooth diagonal movement at varying speeds I'm calculating the angle between points in radians, then getting the x and y components of movement with sin and cos. That's the main issue. I'm also doing some calculations for line segment intersections, line-circle intersections, circle-rect intersections, etc., that also probably need to move from floating-point to fixed-point to avoid cross-machine issues.

    If there's something open source in Java or VB or another comparable language, I could probably convert the code for my uses. The main priority for me is accuracy, although I'd like as little speed loss over present performance as possible. This whole fixed point math thing is very new to me, and I'm surprised by how little practical information on it there is on Google -- most stuff seems to be either theory or dense C++ header files. Anything you could do to point me in the right direction is much appreciated; if I can get this working, I plan to open-source the math functions I put together so that there will be a resource for other C# programmers out there.

    UPDATE: I could definitely make a cosine/sine lookup table work for my purposes, but I don't think that would work for arctan2, since I'd need to generate a table with about 64,000x64,000 entries (yikes). If you know any programmatic explanations of efficient ways to calculate things like arctan2, that would be awesome. My math background is all right, but the advanced formulas and traditional math notation are very difficult for me to translate into code.
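
    Not C# and not a recommendation of any particular library -- just a rough sketch, in Python for brevity, of the table-driven shape such code usually takes: angles and results are 16.16 fixed-point integers, sine comes from a quarter-wave table, and arctan2 is reduced by symmetry to a small atan(y/x) table rather than a 64,000x64,000 one. Table sizes and names here are invented.

        import math

        ONE = 1 << 16                                    # 16.16 fixed point
        PI = int(round(math.pi * ONE))
        HALF_PI = PI // 2
        # built once; could be precomputed and shipped as constants so every machine uses identical values
        SIN = [int(round(math.sin(i * (math.pi / 2) / 256) * ONE)) for i in range(257)]
        ATAN = [int(round(math.atan(i / 256.0) * ONE)) for i in range(257)]

        def fx_sin(angle):                               # angle in 1024ths of a full turn
            quad, idx = divmod(angle & 1023, 256)
            if quad == 0: return  SIN[idx]
            if quad == 1: return  SIN[256 - idx]
            if quad == 2: return -SIN[idx]
            return -SIN[256 - idx]

        def fx_cos(angle):                               # cos(a) = sin(a + quarter turn)
            return fx_sin(angle + 256)

        def fx_atan2(y, x):                              # y, x fixed point; result is 16.16 radians
            if x == 0 and y == 0:
                return 0
            ax, ay = abs(x), abs(y)
            a = ATAN[min(ax, ay) * 256 // max(ax, ay)]   # atan(t) for t in [0, 1]
            if ay > ax:
                a = HALF_PI - a                          # fold the second octant back
            if x < 0:
                a = PI - a
            return -a if y < 0 else a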

    Read the article

  • TVirtualStringTree - resetting non-visual nodes and memory consumption

    - by Remy Lebeau - TeamB
    I have an app that loads records from a binary log file and displays them in a virtual TListView. There are potentially millions of records in a file, and the display can be filtered by the user, so I do not load all of the records in memory at one time, and the ListView item indexes are not a 1-to-1 relation with the file record offsets (list item 1 may be file record 100, for instance). I use the ListView's OnDataHint event to load records for just the items the ListView is actually interested in. As the user scrolls around, the range specified by OnDataHint changes, allowing me to free records that are not in the new range and allocate new records as needed. This works fine, speed is tolerable, and the memory footprint is very low.

    I am currently evaluating TVirtualStringTree as a replacement for the TListView, mainly because I want to add the ability to expand/collapse records that span multiple lines (I can fudge it with the TListView by incrementing/decrementing the item count dynamically, but this is not as straightforward as using a real tree). For the most part, I have been able to port the TListView logic and have everything work as I need. I notice that TVirtualStringTree's virtual paradigm is vastly different, though. It does not have the same kind of OnDataHint functionality that TListView does (I can use the OnScroll event to fake it, which allows my memory buffer logic to continue working), and I can use the OnInitializeNode event to associate nodes with records that are allocated.

    However, once a tree node is initialized, it seems that it remains initialized for the lifetime of the tree. That is not good for me. As the user scrolls around and I remove records from memory, I need to reset those non-visual nodes without removing them from the tree completely, or losing their expand/collapse states. When the user scrolls them back into view, I can re-allocate the records and re-initialize the nodes. Basically, I want to make TVirtualStringTree act as much like TListView as possible, as far as its virtualization is concerned.

    I have seen that TVirtualStringTree has a ResetNode() method, but I encounter various errors whenever I try to use it. I must be using it wrong. I also thought of just storing a data pointer inside each node to my record buffers and, as I allocate and free memory, updating those pointers accordingly. The end effect does not work so well, either.

    Worse, my largest test log file has ~5 million records in it. If I initialize the TVirtualStringTree with that many nodes at one time (when the log display is unfiltered), the tree's internal overhead for its nodes takes up a whopping 260MB of memory (without any records being allocated yet). Whereas with the TListView, loading the same log file and all the memory logic behind it, I can get away with using just a few MBs. Any ideas?
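
    For reference, the record-buffer half of this (independent of any VirtualTree API) is just a sliding window keyed by row index. A minimal, language-neutral sketch in Python, with invented names, of the load/free bookkeeping described above:

        class RecordWindow:
            def __init__(self, load_record, margin=200):
                self.load_record = load_record           # callback: row index -> record
                self.margin = margin
                self.cache = {}                          # row index -> record currently in memory

            def on_range_changed(self, first_visible, last_visible):
                lo = max(0, first_visible - self.margin)
                hi = last_visible + self.margin
                for idx in [i for i in self.cache if i < lo or i > hi]:
                    del self.cache[idx]                  # free records that scrolled out of range
                for idx in range(lo, hi + 1):
                    self.cache.setdefault(idx, self.load_record(idx))

            def get(self, idx):                          # used when painting a row/node
                return self.cache.get(idx) or self.load_record(idx)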

    Read the article

  • Multi-part question about multi-threading, locks and multi-core processors (multi ^ 3)

    - by MusiGenesis
    I have a program with two methods. The first method takes two arrays as parameters, and performs an operation in which values from one array are conditionally written into the other, like so:

        void Blend(int[] dest, int[] src, int offset)
        {
            for (int i = 0; i < src.Length; i++)
            {
                int rdr = dest[i + offset];
                dest[i + offset] = src[i] > rdr ? src[i] : rdr;
            }
        }

    The second method creates two separate sets of int arrays and iterates through them such that each array of one set is Blended with each array from the other set, like so:

        void CrossBlend()
        {
            int[][] set1 = new int[150][75000]; // we'll pretend this actually compiles
            int[][] set2 = new int[25][10000];  // we'll pretend this actually compiles
            for (int i1 = 0; i1 < set1.Length; i1++)
            {
                for (int i2 = 0; i2 < set2.Length; i2++)
                {
                    Blend(set1[i1], set2[i2], 0); // or any offset, doesn't matter
                }
            }
        }

    First question: Since this approach is an obvious candidate for parallelization, is it intrinsically thread-safe? It seems like no, since I can conceive a scenario (unlikely, I think) where one thread's changes are lost because of a different thread's near-simultaneous operation. If no, would this:

        void Blend(int[] dest, int[] src, int offset)
        {
            lock (dest)
            {
                for (int i = 0; i < src.Length; i++)
                {
                    int rdr = dest[i + offset];
                    dest[i + offset] = src[i] > rdr ? src[i] : rdr;
                }
            }
        }

    be an effective fix?

    Second question: If so, what would be the likely performance cost of using locks like this? I assume that with something like this, if a thread attempts to lock a destination array that is currently locked by another thread, the first thread would block until the lock was released instead of continuing to process something. Also, how much time does it actually take to acquire a lock? Nanosecond scale, or worse than that? Would this be a major issue in something like this?

    Third question: How would I best approach this problem in a multi-threaded way that would take advantage of multi-core processors (and this is based on the potentially wrong assumption that a multi-threaded solution would not speed up this operation on a single core processor)? I'm guessing that I would want to have one thread running per core, but I don't know if that's true.
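
    One way to look at the third question (sketched below in Python rather than C#, with made-up sizes, purely as an illustration): if each worker owns a disjoint slice of set1, no two workers ever write to the same dest array, so the inner blend needs no lock at all.

        from concurrent.futures import ThreadPoolExecutor
        import numpy as np

        def blend(dest, src, offset=0):
            view = dest[offset:offset + len(src)]
            np.maximum(view, src, out=view)              # element-wise "keep the larger value", in place

        def cross_blend_slice(dest_arrays, src_arrays):
            for dest in dest_arrays:                     # this worker is the only writer of these arrays
                for src in src_arrays:
                    blend(dest, src)

        set1 = [np.random.randint(0, 100, 75_000) for _ in range(150)]
        set2 = [np.random.randint(0, 100, 10_000) for _ in range(25)]

        workers = 4                                      # e.g. one worker per core
        chunk = (len(set1) + workers - 1) // workers
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for i in range(0, len(set1), chunk):
                pool.submit(cross_blend_slice, set1[i:i + chunk], set2)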

    Read the article

  • C++ vs Matlab vs Python as a main language for Computer Vision Research

    - by Hough
    Hi all, firstly, sorry for a somewhat long question, but I think that many people are in the same situation as me and hopefully they can also gain some benefit from this. I'll be starting my PhD very soon, which involves the fields of computer vision, pattern recognition and machine learning. Currently, I'm using the opencv (2.1) C++ interface and I especially like its powerful Mat class and the overloaded operations available for matrix and image operations and seamless transformations.

    I've also tried (and implemented many small vision projects) using the opencv python interface (new bindings; opencv 2.1) and I really enjoy python's ability to integrate opencv, numpy, scipy and matplotlib. But recently, I went back to the opencv C++ interface because I felt that the official new python bindings were not stable enough and no overloaded operations are available for matrices and images, not to mention the lack of machine learning modules and slow speeds in certain operations.

    I've also used Matlab extensively in the past and although I've used mex files and other means to speed up the program, I just felt that Matlab's performance was inadequate for real-time vision tasks, be it for fast prototyping or not. When the project becomes larger and larger, many tasks have to be re-written in C and compiled into Mex files increasingly, and Matlab becomes nothing more than a glue language.

    Here come the sub-questions:

    1. For carrying out research in these fields (machine learning, vision, pattern recognition), what is your main or ideal programming language for rapid prototyping of ideas and testing algorithms contained in papers?
    2. For computer vision research work, can you list down the pros and cons of using the following languages? C++ (with opencv + gsl + svmlib + other libraries) vs Matlab (with all its toolboxes) vs python (with the incomplete opencv bindings + numpy + scipy + matplotlib).
    3. Are there computer vision PhD/postgrad students here who are using only C++ (with all its available libraries including opencv) without even needing to resort to Matlab or python? In other words, given the current existing computer vision or machine learning libraries, is C++ alone sufficient for fast prototyping of ideas?
    4. If you're currently using Java or C# for your research, can you list down the reasons why they should be used and how they compare to other languages in terms of available libraries?
    5. What is the de facto vision/machine learning programming language and its associated libraries used in your research group?

    Thanks in advance.

    Edit: As suggested, I've opened the question to both academic and non-academic computer vision/machine learning/pattern recognition researchers and groups.

    Read the article

  • TypeError #1009, XML and AS3

    - by VideoDnd
    My animation is advanced forward, but it's frozen. It throws a TypeError #1009. How do I get rid of this error and get it to play?

    ERROR

        TypeError: Error #1009: Cannot access a property or method of a null object reference.
            at _fla::MainTimeline/frame1()
        TypeError: Error #1009: Cannot access a property or method of a null object reference.
            at _fla::MainTimeline/incrementCounter()
            at flash.utils::Timer/_timerDispatch()
            at flash.utils::Timer/tick()

    download: http://sandboxfun.weebly.com/

    XML

        <?xml version="1.0" encoding="utf-8"?>
        <SESSION>
            <TIMER TITLE="speed">1000</TIMER>
            <COUNT TITLE="starting position">10000</COUNT>
        </SESSION>

    FLA

        //DynamicText 'Count'
        var timer:Timer = new Timer(10);
        var count:int = 0;
        var fcount:int = 0;
        timer.addEventListener(TimerEvent.TIMER, incrementCounter);
        timer.start();

        function incrementCounter(event:TimerEvent) {
            count = myXML.COUNT.text();
            count++;
            fcount = int(count*count/1000);
            mytext.text = formatCount(fcount);
        }

        function formatCount(i:int):String {
            var fraction:int = i % 100;
            var whole:int = i / 100;
            return ("0000000" + whole).substr(-7, 7) + "." + (fraction < 10 ? "0" + fraction : fraction);
        }

        //LOAD XML
        var myXML:XML;
        var myLoader:URLLoader = new URLLoader();
        myLoader.load(new URLRequest("time.xml"));
        myLoader.addEventListener(Event.COMPLETE, processXML);

        /*------CHANGED TIMER VALUE WITH XML------*/
        timer = new Timer( Number(myXML.TIMER.text()) );
        //timer.start();

        //PARSE XML
        function processXML(e:Event):void {
            myXML = new XML(e.target.data);
            trace(myXML.COUNT.text());
            trace(myXML.TIMER.text());
        }

        //var count:int = 0;//give it a value type

        /*------CHANGED COUNT VALUE WITH XML------*/
        count = myXML.COUNT.text();

    Read the article

  • Jquery Cycle issue or css issue in Chrome/Safari for Mac

    - by Mark
    Hi, I have used the jQuery Cycle plugin to create multiple simple sliding galleries. In Chrome/Safari on Mac the browser is not loading the images. Here is the link. The JS I am using is here, although it could be a CSS issue..? I am struggling to find the real problem.

        $(document).ready(function() {
            $('.slides').each(function() {
                var $this = $(this),
                    $ss = $this.closest('.slideshow');
                var prev = $ss.find('a.prev'),
                    next = $ss.find('a.next');
                $this.cycle({
                    prev: prev,
                    next: next,
                    fx: 'scrollLeft',
                    speed: 'fast',
                    timeout: 0
                });
            });
        });

    CSS

        .slideshow { width:476px; height:287px; float:left; margin-right:30px; position:relative; z-index:0; margin-bottom:20px; }
        .slides { position:absolute; top:0; left:0; z-index:1; }
        a.prev { display:block; width:23px; height:22px; background:red; position:absolute; z-index:1000; background: url(../images/next_prev.png) no-repeat 0 0; top:133px; left:-11px; }
        a.next { display:block; width:23px; height:22px; background:red; position:absolute; z-index:1000; background: url(../images/next_prev.png) no-repeat -23px 0; top:133px; right:-11px; }

    Markup:

        <div class="slideshow">
            <div class="slides">
                <img src="images/chief_st_1.jpg" alt="CHIEF stationery + literature" />
                <img src="images/chief_st_3.jpg" alt="CHIEF stationery + literature" />
                <img src="images/chief_st_2.jpg" alt="CHIEF stationery + literature" />
            </div>
            <a class="prev" href="#"></a>
            <a class="next" href="#"></a>
        </div>

    Any help would be appreciated. Thanks

    Read the article

  • Is it normal for a programmer with 2 years experience to take a long time to code simple programs?

    - by ajax81
    Hi all, I'm a relatively new programmer (18 months on the scene), and I'm finally getting to the point where I'm comfortable accepting projects and developing solutions under minimal supervision. Unfortunately, this also means that I've become acutely aware of my performance shortfalls, the most prevalent of which is the amount of time it takes me to develop, test, and submit algorithms for review. A great example of what I'm talking about occurred this week when I was tasked with developing a simple XML web service (asp.net 3.5) callable via client-side JavaScript, that accepts a single parameter and returns a dataset output to a modal window (please note this is the first time I've had to develop a web service and have had ZERO experience creating/consuming them...let alone calling them from JS client side). Keeping a long story short -- I worked on it for 4 days straight, all day each day, for a grand total of 36 hours, not including the time I spent dwelling on the problem in the shower, the morning commute, and laying awake in bed at night. I learned a great deal about web services and xml/json/javascript...but was called in for a management review to discuss the length of time it took me to develop the solution. In the meeting, I was praised for the quality of my work and was in fact told that my effort was commendable. However, they (senior leads and pm's) weren't impressed with the amount of time it took me to develop the solution and expressed that they would have liked to see the solution in roughly 1/3 of the time it took me. I guess what concerns me the most is that I've identified this pattern as common for myself. Between online videos, book research, and trial/error coding...if its something I haven't seen before, I can spend up to two weeks on a problem that seems to only take the pros in the videos moments to code up. And of course, knowing that management isn't happy with this pattern has shaken me up a bit. To sum up, I have some very specific questions I'd like to ask, and would greatly appreciate your objective professional feedback. Is my experience as a junior programmer common among new developers? Or is it possible that I'm just not cut out for the work? If you suspect that my experience is not common and that there may be an aptitude issue, do you have any suggestions/solutions that I could propose to management to help bring me up to speed? Do seasoned, professional programmers ever encounter knowledge barriers that considerably delay deliverables? When you started out in the industry, did you know how to "do it all"? If not, how long did it take you to be perceived as "proficient"? Was it a natural progression of trial and error, or was there a particular zen moment when you knew you had achieved super saiyen power level? Anyways, thanks for taking the time to read my question(s). I don't know if this is the right place to ask for professional career guidance, but I greatly appreciate your willingness to help me out. Cheers, Daniel

    Read the article

  • Jquery Automatic Image Slider w/ CSS & jQuery

    - by Jacinto
    This is the Automatic Image Slider w/ CSS & jQuery by Soh Tanaka. I am trying to customize it to show .desc when the mouse hovers over the slider, but it does not seem to work. Any help?

        //Set Default State of each portfolio piece
        $(".paging").show();
        $(".paging a:first").addClass("active");

        //Get size of images, how many there are, then determine the size of the image reel.
        var imageWidth = $(".window").width();
        var imageSum = $(".image_reel ul.examples").size();
        var imageReelWidth = imageWidth * imageSum;

        //Adjust the image reel to its new size
        $(".image_reel").css({'width' : imageReelWidth});

        //Paging + Slider Function
        rotate = function(){
            var triggerID = $active.attr("rel") - 1; //Get number of times to slide
            var image_reelPosition = triggerID * imageWidth; //Determines the distance the image reel needs to slide

            $(".paging a").removeClass('active'); //Remove all active class
            $active.addClass('active'); //Add active class (the $active is declared in the rotateSwitch function)

            //Slider Animation
            $(".image_reel").animate({
                left: -image_reelPosition
            }, 500 );
        };

        //Rotation + Timing Event
        rotateSwitch = function(){
            play = setInterval(function(){ //Set timer - this will repeat itself every 3 seconds
                $active = $('.paging a.active').next();
                if ( $active.length === 0) { //If paging reaches the end...
                    $active = $('.paging a:first'); //go back to first
                }
                rotate(); //Trigger the paging and slider function
            }, 7000); //Timer speed in milliseconds (3 seconds)
        };

        rotateSwitch(); //Run function on launch

        //On Hover
        $(".image_reel").hover(function() {
            clearInterval(play); //Stop the rotation
        }, function() {
            rotateSwitch(); //Resume rotation
        });

        //Hide the togglebox when page loads
        $(".desc").hide();

        //slide up and down when hover over heading 2
        $(".image_reel").hover(function(){
            // slide toggle effect set to slow you can set it to fast too.
            $(this).next(".desc").slideToggle("slow");
            return true;
        });

        //On Click
        $(".paging a").click(function() {
            $active = $(this); //Activate the clicked paging
            //Reset Timer
            clearInterval(play); //Stop the rotation
            rotate(); //Trigger rotation immediately
            rotateSwitch(); // Resume rotation
            return false; //Prevent browser jump to link anchor
        });

    Read the article

  • TeamCity Scheduled Build not getting all files from VSS

    - by Kate
    Within TeamCity, if I trigger a build it all works correctly; however, if the Scheduler triggers a build it does not seem to get all the files from VSS. I have clean checkout directory turned on, so I am not sure how it determines the patch for the VSS root. Does anyone have any suggestions on how I can get it to always get all files, and create a new patch each time?

    I have put the start of two build logs below; as you can see, the first one has the correct 249MB, whereas the second only transfers 2MB. The files it doesn't get from VSS seem sporadic and not in relation to what has changed.

    Manual Trigger

        [23:57:49]: Checking for changes
        [00:09:04]: Clean build enabled: removing old files from C:\Builds\Ab 2.0
        [00:09:04]: Clearing temporary directory: C:\TeamCity\buildAgent\temp\buildTmp
        [00:09:05]: Checkout directory: C:\Builds\Ab 2.0
        [00:09:05]: Updating sources: server side checkout... (24m:53s)
        [00:09:05]: [Updating sources: server side checkout...] Will perform clean checkout
        [00:09:05]: [Updating sources: server side checkout...] Clean checkout reasons
        [00:09:05]: [Clean checkout reasons] Checkout directory is empty or doesn't exist
        [00:09:05]: [Clean checkout reasons] "Clean all files before build" turned on
        [00:09:05]: [Updating sources: server side checkout...] Transferring cached clean patch for VCS root: Ab 2.0
        [00:09:42]: [Updating sources: server side checkout...] Building incremental patch over the cached patch
        [00:31:50]: [Updating sources: server side checkout...] Transferring repository sources: 124.0Mb so far...
        [00:32:18]: [Updating sources: server side checkout...] Repository sources transferred: 249.46Mb total
        [00:32:18]: [Updating sources: server side checkout...] Average transfer speed: 183.40Kb per second

    Triggered by the Scheduler

        [07:45:01]: Checking for changes
        [07:55:09]: Clean build enabled: removing old files from C:\Builds\Ab 2.0
        [07:55:22]: Clearing temporary directory: C:\TeamCity\buildAgent\temp\buildTmp
        [07:55:22]: Checkout directory: C:\Builds\Ab 2.0
        [07:55:22]: Updating sources: server side checkout... (24m:24s)
        [07:55:22]: [Updating sources: server side checkout...] Will perform clean checkout
        [07:55:22]: [Updating sources: server side checkout...] Clean checkout reasons
        [07:55:22]: [Clean checkout reasons] Checkout directory is empty or doesn't exist
        [07:55:22]: [Clean checkout reasons] "Clean all files before build" turned on
        [07:55:22]: [Updating sources: server side checkout...] Building clean patch for VCS root: Ab 2.0
        [08:19:46]: [Updating sources: server side checkout...] Transferring cached clean patch for VCS root: Ab 2.0
        [08:19:47]: [Updating sources: server side checkout...] Repository sources transferred: 2.01Mb total

    Read the article

  • Upload large files via a webpage

    - by Hultner
    What is the best way to let users upload large files from their web browser to a server? I'm talking 200MB+, possibly up to a few gigabytes. I have been thinking of a few possible solutions to the problem (not tried them yet) and these are basically the things I came up with. Server download speed will not be a problem, but the user's connection possibly could be.

    1. Have some sort of applet on the client side written in Java or Flash which sends the file in parts (is this possible with an applet?) to a PHP/other script on the server, plus a checksum and some other info about the file. On the server side, all the parts and the info file are saved in a temporary directory which has a unique name based on the checksum of the file and the IP of the user. When the last chunk is sent, the applet sends a signal to the server saying it's finished and the server puts the file together in the right location. If a chunk doesn't match the checksum for that part, the server will send a response to the applet telling it to re-upload that chunk. I don't know how important the checksum checking is since it's all TCP packages; someone with more insight might be able to answer that.
    2. This is probably the worst way: change the settings on your server to allow huge file uploads via an input field. Do it like a normal transfer.
    3. Use an upload manager which does pretty much the same thing as the applet I mentioned above.

    Pros of the first are probably that it would most likely be rather secure, you could show progress as well, and you could possibly resume an upload if the IP hasn't changed and do a threaded upload of the chunks. Cons of the first are that the user will need Flash/Java for it to work. Pros of the second are that it will pretty much work for everyone, but the cons are big: first, there's no way of resuming an interrupted upload, and if something is wrong the whole file would have to be re-uploaded, to name a few of the cons. For the third one, the pros are pretty much the same as for the first, but the cons are that the user would have to download an application to their computer and run it, and the application will have to be compatible with their computer and OS.

    Another way may be a combination of the two. Let's say an applet for bigger or more files, and a simple input which is rather restricted to maybe max 10-20MB for smaller files and compatibility. There are probably other much smarter ways to tackle this and that's why I'm asking for advice here on SO.
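
    A hypothetical server-side sketch of option 1's bookkeeping (in Python rather than PHP, with invented names): each chunk arrives with its index and a checksum, bad chunks are reported back so only they are resent, and the parts are stitched together once the last one lands.

        import hashlib, os

        def save_chunk(tmp_dir, index, data, expected_md5):
            if hashlib.md5(data).hexdigest() != expected_md5:
                return False                             # tell the client to resend just this chunk
            with open(os.path.join(tmp_dir, f"{index}.part"), "wb") as f:
                f.write(data)
            return True

        def assemble(tmp_dir, out_path, total_chunks):
            # called after the client signals that the final chunk was accepted
            with open(out_path, "wb") as out:
                for i in range(total_chunks):
                    with open(os.path.join(tmp_dir, f"{i}.part"), "rb") as part:
                        out.write(part.read())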

    Read the article

  • Git/Mercurial (hg) opinion

    - by Richard
    First, let me say I'm not a professional programmer, but an engineer who had a need for it and had to learn. I was always working alone, so it was just me and my seven split personalities ... and we worked okay as a team :) Most of my stuff is done in C/Fortran/Matlab and so far I've been learning git to manage it all. However, although I've had no unsolvable problems with it, I've never been "that" happy with it ... for everything I cannot do, I have to look up a book. And, for some time now I've been hearing a lot of good stuff about hg.

    Now, a colleague of mine will have to work with me on a project (I almost feel sorry for him) and he's started learning hg (says he likes it more), and I'm considering the switch myself. We work almost exclusively on the Windows platform (although I manage relatively OK using unix tools and things that come from that part of the world). So, I was wondering, in the described scenario, what problems could I expect with the switch. I heard that hg is rather more user friendly towards Windows users, regarding the user interfaces.

    1. How does it handle repositories? Does it create them the same way as git does (just one subdirectory in a working directory) and can I just copy the whole project directory (including the git repo) and just carry it somewhere with no extra thinking? (I really liked that when I was choosing over git/svn.)
    2. Are there any good books on it that you can recommend (something like Pro Git, only for Hg)?
    3. What are good ways to implement hg into Visual Studio/GVim for Windows, or into Windows Explorer, so I can work relatively easily (I would like to avoid using the command line for everything regarding it, like in git shell)?
    4. Is there something else I should be aware of (please, on this don't point me to other questions ... they just give me a ton of info, and I'm not sure what it is that I should take as important, and what to disregard)? I'm trying to cut some time, since I cannot spend all that time relearning hg, like I did for git.
    5. I've also heard git is a C project, while mercurial is python ... is there any noticeable difference in speed? git was pretty speedy ... will I encounter some waiting while working?

    Notice: All my projects are of, let's say, middle size ... mostly numerical simulations ... 10-15000 lines (medium size?)

    Read the article

  • mysql index optimization for a table with multiple indexes that index some of the same columns

    - by Sean
    I have a table that stores some basic data about visitor sessions on third party web sites. This is its structure: id, site_id, unixtime, unixtime_last, ip_address, uid There are four indexes: id, site_id/unixtime, site_id/ip_address, and site_id/uid There are many different types of ways that we query this table, and all of them are specific to the site_id. The index with unixtime is used to display the list of visitors for a given date or time range. The other two are used to find all visits from an IP address or a "uid" (a unique cookie value created for each visitor), as well as determining if this is a new visitor or a returning visitor. Obviously storing site_id inside 3 indexes is inefficient for both write speed and storage, but I see no way around it, since I need to be able to quickly query this data for a given specific site_id. Any ideas on making this more efficient? I don't really understand B-trees besides some very basic stuff, but it's more efficient to have the left-most column of an index be the one with the least variance - correct? Because I considered having the site_id being the second column of the index for both ip_address and uid but I think that would make the index less efficient since the IP and UID are going to vary more than the site ID will, because we only have about 8000 unique sites per database server, but millions of unique visitors across all ~8000 sites on a daily basis. I've also considered removing site_id from the IP and UID indexes completely, since the chances of the same visitor going to multiple sites that share the same database server are quite small, but in cases where this does happen, I fear it could be quite slow to determine if this is a new visitor to this site_id or not. The query would be something like: select id from sessions where uid = 'value' and site_id = 123 limit 1 ... so if this visitor had visited this site before, it would only need to find one row with this site_id before it stopped. This wouldn't be super fast necessarily, but acceptably fast. But say we have a site that gets 500,000 visitors a day, and a particular visitor loves this site and goes there 10 times a day. Now they happen to hit another site on the same database server for the first time. The above query could take quite a long time to search through all of the potentially thousands of rows for this UID, scattered all over the disk, since it wouldn't be finding one for this site ID. Any insight on making this as efficient as possible would be appreciated :) Update - this is a MyISAM table with MySQL 5.0. My concerns are both with performance as well as storage space. This table is both read and write heavy. If I had to choose between performance and storage, my biggest concern is performance - but both are important. We use memcached heavily in all areas of our service, but that's not an excuse to not care about the database design. I want the database to be as efficient as possible.

    Read the article

  • pathinfo vs fnmatch

    - by zaf
    There was a small debate regarding the speed of fnmatch over pathinfo here: http://stackoverflow.com/questions/2692536/how-to-check-if-file-is-php

    I wasn't totally convinced so decided to benchmark the two functions. Using dynamic and static paths showed that pathinfo was faster. Is my benchmarking logic and conclusion valid? I include a sample of the results, which are in seconds for 100,000 iterations on my machine:

        dynamic path
        pathinfo 3.79311800003
        fnmatch  5.10071492195
        x1.34

        static path
        pathinfo 1.03921294212
        fnmatch  2.37709188461
        x2.29

    Code:

        <?php
        $iterations=100000;

        // Benchmark with dynamic file path
        print("dynamic path\n");
        $i=$iterations;
        $t1=microtime(true);
        while($i-->0){
            $f='/'.uniqid().'/'.uniqid().'/'.uniqid().'/'.uniqid().'.php';
            if(pathinfo($f,PATHINFO_EXTENSION)=='php') $d=uniqid();
        }
        $t2=microtime(true) - $t1;
        print("pathinfo $t2\n");

        $i=$iterations;
        $t1=microtime(true);
        while($i-->0){
            $f='/'.uniqid().'/'.uniqid().'/'.uniqid().'/'.uniqid().'.php';
            if(fnmatch('*.php',$f)) $d=uniqid();
        }
        $t3 = microtime(true) - $t1;
        print("fnmatch $t3\n");
        print('x'.round($t3/$t2,2)."\n\n");

        // Benchmark with static file path
        print("static path\n");
        $f='/'.uniqid().'/'.uniqid().'/'.uniqid().'/'.uniqid().'.php';
        $i=$iterations;
        $t1=microtime(true);
        while($i-->0)
            if(pathinfo($f,PATHINFO_EXTENSION)=='php') $d=uniqid();
        $t2=microtime(true) - $t1;
        print("pathinfo $t2\n");

        $i=$iterations;
        $t1=microtime(true);
        while($i-->0)
            if(fnmatch('*.php',$f)) $d=uniqid();
        $t3=microtime(true) - $t1;
        print("fnmatch $t3\n");
        print('x'.round($t3/$t2,2)."\n\n");
        ?>
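
    For comparison only, here is the same benchmark shape translated to Python (an analogy, not the PHP functions under debate): a suffix check against a glob-style match over one static path, timed over an arbitrary iteration count.

        import fnmatch, os, time

        f = "/a/b/c/d.php"
        N = 100_000

        t0 = time.perf_counter()
        for _ in range(N):
            os.path.splitext(f)[1] == ".php"             # extension check, analogous to pathinfo
        t1 = time.perf_counter()
        for _ in range(N):
            fnmatch.fnmatch(f, "*.php")                  # glob-style match, analogous to fnmatch
        t2 = time.perf_counter()

        print("splitext", t1 - t0)
        print("fnmatch ", t2 - t1)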

    Read the article

  • Data aggregation mongodb vs mysql

    - by Dimitris Stefanidis
    I am currently researching on a backend to use for a project with demanding data aggregation requirements. The main project requirements are the following. Store millions of records for each user. Users might have more than 1 million entries per year so even with 100 users we are talking about 100 million entries per year. Data aggregation on those entries must be performed on the fly. The users need to be able to filter on the entries by a ton of available filters and then present summaries (totals , averages e.t.c) and graphs on the results. Obviously I cannot precalculate any of the aggregation results because the filter combinations (and thus the result sets) are huge. Users are going to have access on their own data only but it would be nice if anonymous stats could be calculated for all the data. The data is going to be most of the time in batch. e.g the user will upload the data every day and it could like 3000 records. In some later version there could be automated programs that upload every few minutes in smaller batches of 100 items for example. I made a simple test of creating a table with 1 million rows and performing a simple sum of 1 column both in mongodb and in mysql and the performance difference was huge. I do not remember the exact numbers but it was something like mysql = 200ms , mongodb = 20 sec. I have also made the test with couchdb and had much worse results. What seems promising speed wise is cassandra which I was very enthusiastic about when I first discovered it. However the documentation is scarce and I haven't found any solid examples on how to perform sums and other aggregate functions on the data. Is that possible ? As it seems from my test (Maybe I have done something wrong) with the current performance its impossible to use mongodb for such a project although the automated sharding functionality seems like a perfect fit for it. Does anybody have experience with data aggregation in mongodb or have any insights that might be of help for the implementation of the project ? Thanks, Dimitris

    Read the article

  • Is my understanding of "select distinct" correct?

    - by paxdiablo
    We recently discovered a performance problem with one of our systems and I think I have the fix, but I'm not certain my understanding is correct. In simplest form, we have a table blah into which we accumulate various values based on a key field. The basic form is:

        recdate  date
        rectime  time
        system   varchar(20)
        count    integer
        accum1   integer
        accum2   integer

    There are a lot more accumulators than that but they're all of the same form. The primary key is made up of recdate, rectime and system. As values are collected to the table, the count for a given recdate/rectime/system is incremented and the values for that key are added to the accumulators. That means the averages can be obtained by using accumN / count.

    Now we also have a view over that table specified as follows:

        create view blah_v (
            recdate, rectime, system, count, accum1, accum2
        ) as select distinct
            recdate, rectime, system, count,
            value (case when count > 0 then accum1 / count end, 0),
            value (case when count > 0 then accum2 / count end, 0)
        from blah;

    In other words, the view gives us the average value of the accumulators rather than the sums. It also makes sure we don't get a divide-by-zero in those cases where the count is zero (these records do exist and we are not allowed to remove them, so don't bother telling me they're rubbish - you're preaching to the choir).

    We've noticed that the time difference between doing:

        select distinct recdate from XX

    varies greatly depending on whether we use the table or the view. I'm talking about the difference being 1 second for the table and 27 seconds for the view (with 100K rows). We actually tracked it back to the select distinct. What seems to be happening is that the DBMS is actually loading all the rows in and sorting them so as to remove duplicates. That's fair enough, it's what we stupidly told it to do. But I'm pretty sure the fact that the view includes every component of the primary key means that it's impossible to have duplicates anyway.

    We've validated the problem since, if we create another view without the distinct, it performs at the same speed as the underlying table. I just wanted to confirm my understanding that a select distinct can not have duplicates if it includes all the primary key components. If that's so, then we can simply change the view appropriately.

    Read the article

  • Reading Source Code Aloud

    - by Jon Purdy
    After seeing this question, I got to thinking about the various challenges that blind programmers face, and how some of them are applicable even to sighted programmers. Particularly, the problem of reading source code aloud gives me pause. I have been programming for most of my life, and I frequently tutor fellow students in programming, most often in C++ or Java. It is uniquely aggravating to try to verbally convey the essential syntax of a C++ expression. The speaker must give either an idiomatic translation into English, or a full specification of the code in verbal longhand, using explicit yet slow terms such as "opening parenthesis", "bitwise and", et cetera. Neither of these solutions is optimal. On the one hand, an idiomatic translation is only useful to a programmer who can de-translate back into the relevant programming code—which is not usually the case when tutoring a student. In turn, education (or simply getting someone up to speed on a project) is the most common situation in which source is read aloud, and there is a very small margin for error. On the other hand, a literal specification is aggravatingly slow. It takes far far longer to say "pound, include, left angle bracket, iostream, right angle bracket, newline" than it does to simply type #include <iostream>. Indeed, most experienced C++ programmers would read this merely as "include iostream", but again, inexperienced programmers abound and literal specifications are sometimes necessary. So I've had an idea for a potential solution to this problem. In C++, there is a finite set of keywords—63—and operators—54, discounting named operators and treating compound assignment operators and prefix versus postfix auto-increment and decrement as distinct. There are just a few types of literal, a similar number of grouping symbols, and the semicolon. Unless I'm utterly mistaken, that's about it. So would it not then be feasible to simply ascribe a concise, unique pronunciation to each of these distinct concepts (including one for whitespace, where it is required) and go from there? Programming languages are far more regular than natural languages, so the pronunciation could be standardised. Speakers of any language would be able to verbally convey C++ code, and due to the regularity and fixity of the language, speech-to-text software could be optimised to accept C++ speech with a high degree of accuracy. So my question is twofold: first, is my solution feasible; and second, does anyone else have other potential solutions? I intend to take suggestions from here and use them to produce a formal paper with an example implementation of my solution.
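
    As a toy illustration of the proposal (Python, with invented pronunciations rather than any proposed standard): a fixed token-to-spoken-form table applied to a token stream, which is all the regularity of the language really requires.

        # Map C++ tokens to short spoken forms; unknown tokens (identifiers) are read as-is.
        SPOKEN = {
            "#include": "include",
            "<": "open angle",
            ">": "close angle",
            "::": "scope",
            "&&": "and",
            ";": "semi",
            "{": "open brace",
            "}": "close brace",
        }

        def speak(tokens):
            return " ".join(SPOKEN.get(t, t) for t in tokens)

        print(speak(["#include", "<", "iostream", ">"]))
        # -> "include open angle iostream close angle"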

    Read the article

  • architecture python question

    - by tom smith
    Hi. I'm creating a distributed crawling Python app. It consists of a master server and associated client apps that will run on client servers. The purpose of the client app is to run across a targeted site, to extract specific data. The clients need to go "deep" within the site, behind multiple levels of forms, so each client is specifically geared towards a given site. Each client app looks something like:

        main:
            parse initial url
            call function level1 (data1)

        function level1 (data)
            parse the url, for data1
            use the required xpath to get the dom elements
            call the next function
            call level2 (data)

        function level2 (data2)
            parse the url, for data2
            use the required xpath to get the dom elements
            call the next function
            call level3

        function level3 (data3)
            parse the url, for data3
            use the required xpath to get the dom elements
            call the next function
            call level4

        function level4 (data)
            parse the url, for data4
            use the required xpath to get the dom elements

        at the final function...
            -- all the data output, and eventually returned to the server
            -- at this point the data has elements from each function...

    My question: given that the number of calls made to the child function by the current function varies, I'm trying to figure out the best approach. Each function essentially fetches a page of content, and then parses the page using a number of different XPath expressions, combined with different regex expressions depending on the site/page.

    If I run a client on a single box, as a sequential process, it'll take a while, but the load on the box is rather small. I've thought of attempting to implement the child functions as threads from the current function, but that could be a nightmare, as well as quickly bring the "box" to its knees! I've thought of breaking the app up in a manner that would allow the master to essentially pass packets to the client boxes, in a way to allow each client/function to be run directly from the master. This process requires a bit of a rewrite, but it has a number of advantages: a bunch of redundancy, and speed. It would detect if a section of the process was crashing and restart from that point. But I'm not sure if it would be any faster...

    I'm writing the parsing scripts in Python. So... any thoughts/comments would be appreciated... I can get into a great deal more detail, but didn't want to bore anyone!! Thanks! tom
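
    One sketch of the "master passes packets" idea, in Python (the work-item shape and names are invented): each fetched page just enqueues the next level's URLs as independent work items, so nothing is nested and any item can be retried or handed to another box.

        from queue import Queue

        work = Queue()
        work.put(("http://example.com/start", 1, {}))    # (url, level, data gathered so far)

        def parse(url, level, data):
            # placeholder for the per-level XPath/regex extraction
            return data, []                              # (updated data, next-level URLs)

        while not work.empty():
            url, level, data = work.get()
            data, next_urls = parse(url, level, data)
            if level < 4:
                for u in next_urls:
                    work.put((u, level + 1, dict(data)))
            else:
                pass                                     # final level: ship `data` back to the master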

    Read the article

  • MySQL table organization and optimization (Rails)

    - by aguynamedloren
    I've been learning Ruby on Rails over the past few months with no prior programming experience. Lately, I've been thinking about database optimization and table organization. I know there are great books on the subject, but I typically learn by example / as I go. Here's a hypothetical situation: Let's say I am building a social network for a niche community with 250,000 members (users). The users have the ability to attend events. Let's say there are 50,000 past/present/future events. Much like Facebook events, a user can attend any number of events and an event can have any number of attendees. In the database, there would be a table for users and a table for events. Somehow I would have to create an association between the users and events. I could create an "events" column in the users table such that each user row would contain a hash of event IDs, or I could create an "attendees" column in the events table such that each event row would contain a hash of user IDs. Neither of these solutions seem ideal, however. On a users profile page, I want to display the list of events they are associated with, which would require scanning the 50,000 event rows for the user ID of said user if I include an "attendees" column in the events table. Likewise, on an event page, I want to display a list of attendees for the event, which would require scanning the 250,000 user rows for the event ID of said event if I include an "events" column in the users table. Option 3 would be to create a third table that contains the attendee information for each and every event - but I don't see how this would solve any problems. Are these non-issues? Rails makes accessing all of this information easy, but I guess I'm worried about scale. It is entirely possible that I am under-estimating the speed and processing power of modern databases / servers / etc. How long would it take to scan 250,000 user rows for specific event IDs - 10ms? 100ms? 1,000ms? I guess that's not that bad. Am I just over-thinking this?
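
    The third-table option is the conventional shape here (in Rails terms, a join model with has_many :through). A small sqlite sketch, with invented table and column names, of why neither lookup needs to scan the big users or events tables:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE attendances (user_id INTEGER, event_id INTEGER);
            CREATE INDEX idx_att_user  ON attendances (user_id);
            CREATE INDEX idx_att_event ON attendances (event_id);
        """)
        db.execute("INSERT INTO attendances VALUES (?, ?)", (42, 7))

        # events for a user's profile page, and attendees for an event page:
        events_for_user = db.execute(
            "SELECT event_id FROM attendances WHERE user_id = ?", (42,)).fetchall()
        users_for_event = db.execute(
            "SELECT user_id FROM attendances WHERE event_id = ?", (7,)).fetchall()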

    Read the article

  • How can i use a duration setting on .animate if it is inside the callback from a .fadeOut effect?

    - by Jannis
    I am trying to just fade the content of section#secondary out, and once the content has been faded out, the parent (section#secondary) should animate 'shut' in a slider animation. All this is working, however the durations are not, and I cannot figure out why. Here is my code:

    HTML

        <section id="secondary">
            <a href="#" class="slide_button">&laquo;</a> <!-- slide in/back button -->
            <article>
                <h1>photos</h1>
                <div class="album_nav"><a href="#">photo 1 of 6</a> | <a href="#">create an album</a></div>
                <div class="bar">
                    <p class="section_title">current image title</p>
                </div>
                <section class="image">
                    <div class="links">
                        <a class="_back album_link" href="#">« from the album: james new toy</a>
                        <nav>
                            <a href="#" class="_back small_button">back</a>
                            <a href="#" class="_next small_button">next</a>
                        </nav>
                    </div>
                    <img src="http://localhost/~jannis/3781_doggie_wonderland/www/preview/static/images/sample-image-enlarged.jpg" width="418" height="280" alt="" />
                </section>
            </article>
            <footer>
                <embed src="http://localhost/~jannis/3781_doggie_wonderland/www/preview/static/flash/secondary-footer.swf" wmode="transparent" width="495" height="115" type="application/x-shockwave-flash" />
            </footer>
        </section> <!-- close secondary -->

    jQuery

        // =============================
        // = Close button (slide away) =
        // =============================
        $('a.slide_button').click(function() {
            $(this).closest('section#secondary').children('*').fadeOut('slow', function() {
                $('section#secondary').animate({'width':'0'}, 3000);
            });
        });

    Because the content of section#secondary is variable I use the * selector. What happens is that the fadeOut uses the appropriate slow speed, but as soon as the callback fires (once the content is faded out) the section#secondary animates to width: 0 within a couple of milliseconds and not the 3000 ms (3 sec) I set the animation duration to. Any ideas would be appreciated.

    PS: I cannot post an example at this point, but since this is more a matter of the theory of jQuery I don't think an example is necessary here. Correct me if I am wrong..

    Read the article

  • Lazy loading the addthis script? (or lazy loading external js content dependent on already fired eve

    - by Keith Bentrup
    I want to have the addthis widget available for my users, but I want to lazy load it so that my page loads as quickly as possible. However, after trying it via a script tag and then via my lazy loading method, it appears to only work via the script tag. In the obfuscated code, I see something that looks like it's dependent on the DOMContentLoaded event (at least for firefox). Since the DOMContentLoaded event has already fired, the widget doesn't render properly. What to do? I could just use a script tag (slower)... or could I fire (in a cross browser way) the DOMContentLoaded (or equivalent) event? I have a feeling this may not be possible b/c I believe that (like jQuery) there are multiple tests of the content ready event, and so multiple simulated events would have to occur. Nonetheless, this is an interesting problem b/c I have seen a couple widgets now assume that you are including their stuff via static script tags. It would be nice if they wrote code that was more useful to developers concerned about speed, but until then, is there a work around?? And/or are any of my assumptions wrong? Edit: Because the 1st answer to the question seemed to miss the point of my problem, I wanted to clarify the situation. This is about a specific problem. I'm not looking for yet another lazy load script or check if some dependencies are loaded script. Specifically this problem deals with external widgets that you do not have control over and may or may not be obfuscated delaying the load of the external widgets until they are needed or at least, til substantially after everything else has been loaded including other deferred elements b/c of the how the widget was written, precludes existing, typical lazy loading paradigms While it's esoteric, I have seen it happen with a couple widgets - where the widget developers assume that you're just willing to throw in another script tag at the bottom of the page. I'm looking to save those 500-1000 ms** though as numerous studies by yahoo, google, and amazon show it to be important to your user's experience. **My testing with hammerhead and personal experience indicates that this will be my savings in this case.

    Read the article

  • Running out of memory with UIImage creation on an offscreen Bitmap Context by NSOperation

    - by sigsegv
    I have an app with multiple UIView subclasses that acts as pages for a UIScrollView. UIViews are moved back and forth to provide a seamless experience to the user. Since the content of the views is rather slow to draw, it's rendered on a single shared CGBitmapContext guarded by locks by NSOperation subclasses - executed one at once in an NSOperationQueue - wrapped up in an UIImage and then used by the main thread to update the content of the views. -(void)main { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc]init]; if([self isCancelled]) { return; } if(nil == data) { return; } // Buffer is the shared instance of a CG Bitmap Context wrapper class // data is a dictionary CGImageRef img = [buffer imageCreateWithData:data]; UIImage * image = [[UIImage alloc]initWithCGImage:img]; CGImageRelease(img); if([self isCancelled]) { [image release]; return; } NSDictionary * result = [[NSDictionary alloc]initWithObjectsAndKeys:image,@"image",id,@"id",nil]; // target is the instance of the UIView subclass that will use // the image [target performSelectorOnMainThread:@selector(updateContentWithData:) withObject:result waitUntilDone:NO]; [result release]; [image release]; [pool release]; } The updateContentWithData: of the UIView subclass performed on the main thread is just as simple -(void)updateContentWithData:(NSDictionary *)someData { NSDictionary * data = [someData retain]; if([[data valueForKey:@"id"]isEqualToString:[self pendingRequestId]]) { UIImage * image = [data valueForKey:@"image"]; [self setCurrentImage:image]; [self setNeedsDisplay]; } // If the image has not been retained, it should be released together // with the dictionary retaining it [data release]; } The drawLayer:inContext: method of the subclass will just get the CGImage from the UIImage and use it to update the backing layer or part of it. No retain or release is involved in the process. The problem is that after a while I run out of memory. The number of the UIViews is static. CGImageRef and UIImage are created, retained and released correctly (or so it seems to me). Instruments does not show any leaks, just the free memory available dip constantly, rise a few times, and then dip even lower until the application is terminated. The app cycles through about 2-300 of the aforementioned pages before that, but I would expect to have the memory usage reach a more or less stable level of used memory after a bunch of pages have been already skimmed at fast speed or, since the images are up to 3MB in size, deplete way earlier. Any suggestion will be greatly appreciated.

    Read the article

  • Is it too early to start designing for Task Parallel Library?

    - by Joe Erickson
    I have been following the development of the .NET Task Parallel Library (TPL) with great interest since Microsoft first announced it. There is no doubt in my mind that we will eventually take advantage of TPL. What I am questioning is whether it makes sense to start taking advantage of TPL when Visual Studio 2010 and .NET 4.0 are released, or whether it makes sense to wait a while longer. Why Start Now? The .NET 4.0 Task Parallel Library appears to be well designed and some relatively simple tests demonstrate that it works well on today's multi-core CPUs. I have been very interested in the potential advantages of using multiple lightweight threads to speed up our software since buying my first quad processor Dell Poweredge 6400 about seven years ago. Experiments at that time indicated that it was not worth the effort, which I attributed largely to the overhead of moving data between each CPU's cache (there was no shared cache back then) and RAM. Competitive advantage - some of our customers can never get enough performance and there is no doubt that we can build a faster product using TPL today. It sounds fun. Yes, I realize that some developers would rather poke themselves in the eye with a sharp stick, but we really enjoy maximizing performance. Why Wait? Are today's Intel Nehalem CPUs representative of where we are going as multi-core support matures? You can purchase a Nehalem CPU with 4 cores which share a single level 3 cache today, and most likely a 6 core CPU sharing a single level 3 cache by the time Visual Studio 2010 / .NET 4.0 are released. Obviously, the number of cores will go up over time, but what about the architecture? As the number of cores goes up, will they still share a cache? One issue with Nehalem is the fact that, even though there is a very fast interconnect between the cores, they have non-uniform memory access (NUMA) which can lead to lower performance and less predictable results. Will future multi-core architectures be able to do away with NUMA? Similarly, will the .NET Task Parallel Library change as it matures, requiring modifications to code to fully take advantage of it? Limitations Our core engine is 100% C# and has to run without full trust, so we are limited to using .NET APIs.

    Read the article

  • How to call Ajax to run a PHP file while maintaining PHP & Javascript variables.

    - by Umar
    Hi Stackoverflowers. I'm using the Facebook php-sdk to get the user's name and friends; right now the loading friends part takes about +3 seconds, so I wanted to do it via Ajax, e.g. so the document can load and jQuery then calls an external PHP script which loads the friends (their names and their profile pictures). So to do this I did:

        $(document).ready(function() {
            var loadUrl = "http://localhost/fb/getFriends.php" ;
            $("#friends")
                .html("Hold on, your friends are loading!")
                .load(loadUrl);
        });

    But I get a PHP error: Fatal error: Call to a member function api() on a non-object. If I do this in the same PHP file (so I don't use Ajax at all to call it) it works fine. Now I think I understand the reason this is happening, but I don't know how to fix it. In my main index.php file I have a bunch of init and session code, e.g.

        FB.init({
            appId   : '<?php echo $facebook->getAppId(); ?>',
            session : <?php echo json_encode($session); ?>, // don't refetch the session when PHP already has it
            status  : true, // check login status
            cookie  : true, // enable cookies to allow the server to access the session
            xfbml   : true  // parse XFBML
        });

    So I'm just wondering what is the best way to treat my new separate PHP file getFriends.php in a way where it has access to all PHP/JavaScript session data/variables?

    If you haven't used the Facebook php-sdk I'll quickly explain what I mean: let's say I have index.php and getUsername.php; from index.php I want to retrieve the getUsername.php file via Ajax using .load. Now the problem is getUsername.php needs to access PHP session data/JavaScript init functions which were created in index.php, so I'm thinking of ways to solve this (I'm new to PHP so sorry if this sounds silly), but I'm thinking maybe I could do a POST in jQuery Ajax and post the session data? Or maybe I could create a PHP class, so something like:

        class getUsername extends index {} /* Yes I'm a newbie */

    If you have a look at the php-sdk example.php link posted at the top maybe you'd better understand what variables exactly need to be accessed from a new file.

    Also on a different note, I'm using PHP to work out page rendering times and it seems that fetching the user's name alone:

        // Session based API call.
        if ($session) {
            try {
                $uid = $facebook->getUser();
                $me = $facebook->api('/me');
            } catch (FacebookApiException $e) {
                error_log($e);
            }
        }

    can take a good 4 seconds; is this normal? Once I get the user's details is it good to cache them or something? Speed isn't as important right now; for now I'm just trying to figure out this Ajax-separating-PHP-files thing. Woah this is a long post. Thanks very much for your time.

    Read the article

  • What is the Fastest Way to Check for a Keyword in a List of Keywords in Delphi?

    - by lkessler
    I have a small list of keywords. What I'd really like to do is akin to:

        case MyKeyword of
          'CHIL': (code for CHIL);
          'HUSB': (code for HUSB);
          'WIFE': (code for WIFE);
          'SEX':  (code for SEX);
          else    (code for everything else);
        end;

    Unfortunately the CASE statement can't be used like that for strings. I could use the straight IF THEN ELSE IF construct, e.g.:

        if MyKeyword = 'CHIL' then (code for CHIL)
        else if MyKeyword = 'HUSB' then (code for HUSB)
        else if MyKeyword = 'WIFE' then (code for WIFE)
        else if MyKeyword = 'SEX' then (code for SEX)
        else (code for everything else);

    but I've heard this is relatively inefficient. What I had been doing instead is:

        P := pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ');
        case P of
          1:  (code for CHIL);
          6:  (code for HUSB);
          11: (code for WIFE);
          17: (code for SEX);
          else (code for everything else);
        end;

    This, of course, is not the best programming style, but it works fine for me and up to now didn't make a difference. So what is the best way to rewrite this in Delphi so that it is both simple, understandable but also fast? (For reference, I am using Delphi 2009 with Unicode strings.)

    Followup: Toby recommended I simply use the If Then Else construct. Looking back at my examples that used a CASE statement, I can see how that is a viable answer. Unfortunately, my inclusion of the CASE inadvertently hid my real question. I actually don't care which keyword it is. That is just a bonus if the particular method can identify it like the POS method can. What I need is to know whether or not the keyword is in the set of keywords. So really I want to know if there is anything better than:

        if pos(' ' + MyKeyword + ' ', ' CHIL HUSB WIFE SEX ') > 0 then

    The If Then Else equivalent does not seem better in this case, being:

        if (MyKeyword = 'CHIL') or (MyKeyword = 'HUSB') or
           (MyKeyword = 'WIFE') or (MyKeyword = 'SEX') then

    In Barry's comment to Kornel's question, he mentions the TDictionary generic. I've not yet picked up on the new generic collections and it looks like I should delve into them. My question here would be whether they are built for efficiency and how would using TDictionary compare in looks and in speed to the above two lines?

    In later profiling, I have found that the concatenation of strings as in (' ' + MyKeyword + ' ') is VERY expensive time-wise and should be avoided whenever possible. Almost any other solution is better than doing this.
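
    The TDictionary idea, sketched in Python rather than Delphi just to show the shape: one hash lookup answers "is this keyword in the set?" with no string concatenation, and the same table can dispatch to per-keyword handlers (handler bodies here are invented placeholders).

        HANDLERS = {
            "CHIL": lambda: "code for CHIL",
            "HUSB": lambda: "code for HUSB",
            "WIFE": lambda: "code for WIFE",
            "SEX":  lambda: "code for SEX",
        }

        def handle(keyword):
            # membership test and dispatch in one hashed lookup
            return HANDLERS.get(keyword, lambda: "code for everything else")()

        print(handle("WIFE"))   # -> "code for WIFE"
        print(handle("NOTE"))   # -> "code for everything else"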

    Read the article

  • Segmenting a double array of labels

    - by Ami
    The Problem: I have a large double array populated with various labels. Each element (cell) in the double array contains a set of labels and some elements in the double array may be empty. I need an algorithm to cluster elements in the double array into discrete segments. A segment is defined as a set of pixels that are adjacent within the double array and one label that all those pixels in the segment have in common. (Diagonal adjacency doesn't count and I'm not clustering empty cells). |-------|-------|------| | Jane | Joe | | | Jack | Jane | | |-------|-------|------| | Jane | Jane | | | | Joe | | |-------|-------|------| | | Jack | Jane | | | Joe | | |-------|-------|------| In the above arrangement of labels distributed over nine elements, the largest cluster is the “Jane” cluster occupying the four upper left cells. What I've Considered: I've considered iterating through every label of every cell in the double array and testing to see if the cell-label combination under inspection can be associated with a preexisting segment. If the element under inspection cannot be associated with a preexisting segment it becomes the first member of a new segment. If the label/cell combination can be associated with a preexisting segment it associates. Of course, to make this method reasonable I'd have to implement an elaborate hashing system. I'd have to keep track of all the cell-label combinations that stand adjacent to preexisting segments and are in the path of the incrementing indices that are iterating through the double array. This hash method would avoid having to iterate through every pixel in every preexisting segment to find an adjacency. Why I Don't Like it: As is, the above algorithm doesn't take into consideration the case where an element in the double array can be associated with two unique segments, one in the horizontal direction and one in the vertical direction. To handle these cases properly, I would need to implement a test for this specific case and then implement a method that will both associate the element under inspection with a segment and then concatenate the two adjacent identical segments. On the whole, this method and the intricate hashing system that it would require feels very inelegant. Additionally, I really only care about finding the large segments in the double array and I'm much more concerned with the speed of this algorithm than with the accuracy of the segmentation, so I'm looking for a better way. I assume there is some stochastic method for doing this that I haven't thought of. Any suggestions?
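
    A minimal sketch of the straightforward approach (Python; the grid below mirrors the example table, and cells hold sets of labels): scan every (cell, label) pair once and flood-fill its 4-connected, same-label neighbours. Because a whole segment is claimed the first time any of its cells is reached, the "one segment to the left and another above" case needs no special handling.

        from collections import deque

        grid = [[{"Jane", "Jack"}, {"Joe", "Jane"}, set()],
                [{"Jane"},         {"Jane", "Joe"}, set()],
                [set(),            {"Jack", "Joe"}, {"Jane"}]]

        rows, cols = len(grid), len(grid[0])
        seen = set()            # (row, col, label) triples already assigned to a segment
        segments = []           # (label, size) per discovered segment

        for r in range(rows):
            for c in range(cols):
                for label in grid[r][c]:
                    if (r, c, label) in seen:
                        continue
                    size, queue = 0, deque([(r, c)])
                    seen.add((r, c, label))
                    while queue:
                        y, x = queue.popleft()
                        size += 1
                        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                            if 0 <= ny < rows and 0 <= nx < cols \
                               and label in grid[ny][nx] and (ny, nx, label) not in seen:
                                seen.add((ny, nx, label))
                                queue.append((ny, nx))
                    segments.append((label, size))

        print(max(segments, key=lambda s: s[1]))   # largest segment, e.g. ('Jane', 4)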

    Read the article
