Search Results

Search found 5298 results on 212 pages for 'marching cubes algorithm'.


  • question about LSD radix sort

    - by davit-datuashvili
    Hello, I have the following code:

        public class LSD {
            public static int R = 1 << 8;
            public static int bytesword = 4;

            public static void radixLSD(int a[], int l, int r) {
                int aux[] = new int[a.length];
                for (int d = bytesword - 1; d >= 0; d--) {
                    int i, j;
                    int count[] = new int[R + 1];
                    for (j = 0; j < R; j++) count[j] = 0;
                    for (i = l; i <= r; i++) count[digit(a[i], d) + 1]++;
                    for (j = 1; j < R; j++) count[j] += count[j - 1];
                    for (i = l; i <= r; i++) aux[count[digit(a[i], d)]++] = a[i];
                    for (i = l; i <= r; i++) a[i] = aux[i - 1];
                }
            }

            public static void main(String[] args) {
                int a[] = new int[]{3, 6, 5, 7, 4, 8, 9};
                radixLSD(a, 0, a.length - 1);
                for (int i = 0; i < a.length; i++) {
                    System.out.println(a[i]);
                }
            }

            public static int digit(int n, int d) {
                return (n >> d) & 1;
            }
        }

    But it shows me this error:

        java.lang.ArrayIndexOutOfBoundsException: -1
            at LSD.radixLSD(LSD.java:19)
            at LSD.main(LSD.java:29)

    Please help me.
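
    Two bugs stand out: the copy-back loop reads aux[i - 1], which is index -1 when i == 0 (exactly the reported exception), and digit() extracts a single bit instead of an 8-bit byte, so the counting sort never sees real byte values. For comparison, a minimal byte-wise LSD radix sort sketch in C# (assuming non-negative inputs; the same two fixes apply to the Java above):

        using System;

        static void RadixSortLSD(int[] a)
        {
            const int R = 256;                        // radix: one byte per pass
            int[] aux = new int[a.Length];
            for (int shift = 0; shift < 32; shift += 8)
            {
                int[] count = new int[R + 1];
                foreach (int v in a)                  // histogram of this byte
                    count[((v >> shift) & 0xFF) + 1]++;
                for (int j = 1; j <= R; j++)          // prefix sums -> start indices
                    count[j] += count[j - 1];
                foreach (int v in a)                  // stable scatter
                    aux[count[(v >> shift) & 0xFF]++] = v;
                Array.Copy(aux, a, a.Length);         // copy back aux[i] -> a[i], not aux[i-1]
            }
        }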

    Read the article

  • Javascript algorithm that calculates week number in Fiscal Year

    - by ForeignerBR
    Hi, I have been looking for a JavaScript algorithm that gives me the week number of a given Date object within a custom fiscal year. The fiscal year of my company starts on 1 September and ends on 31 August. Say today happens to be September 1st and I pass a newly instanced Date object to this function; I would expect it to return 1. Hopefully someone will be able to help me with it. Thanks, fbr
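
    A sketch of the arithmetic (in C#, since that is the language used elsewhere on this page; it ports directly to JavaScript), assuming week 1 starts on 1 September and weeks are plain 7-day blocks with no ISO-style rules:

        using System;

        static int FiscalWeek(DateTime date)
        {
            int fyStartYear = date.Month >= 9 ? date.Year : date.Year - 1;
            DateTime fyStart = new DateTime(fyStartYear, 9, 1);   // 1 September
            return (int)((date.Date - fyStart).TotalDays / 7) + 1;
        }

    For example, 1 September returns 1 and 8 September returns 2.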

    Read the article

  • Algorithm for dragging objects on a fixed grid

    - by FlyingStreudel
    Hello, I am working on a program for mapping and playing the popular tabletop game D&D :D Right now I am working on basic functionality like dragging UI elements around, snapping to the grid and checking for collisions. Currently, every object, when released from the mouse, immediately snaps to the nearest grid point. This causes an issue when something like a player object snaps to a grid point that has a wall (or other object) adjacent: the player winds up with some of the wall covering them. This is fine and working as intended; however, the problem is that my collision detection is now tripped whenever you try to move this player, because it is sitting underneath a wall, and because of this you can't drag the player anymore. Here is the relevant code:

        void UIObj_MouseMove(object sender, MouseEventArgs e)
        {
            blocked = false;
            if (dragging)
            {
                foreach (UIElement o in ((Floor)Parent).Children)
                {
                    if (o.GetType() != GetType() && o.GetType().BaseType == typeof(UIObj) &&
                        Math.Sqrt(Math.Pow(((UIObj)o).cX - cX, 2) + Math.Pow(((UIObj)o).cY - cY, 2)) <
                        Math.Max(r.Height + ((UIObj)o).r.Height, r.Width + ((UIObj)o).r.Width))
                    {
                        double Y = e.GetPosition((Floor)Parent).Y;
                        double X = e.GetPosition((Floor)Parent).X;
                        Geometry newRect = new RectangleGeometry(new Rect(
                            Margin.Left + (X - prevX), Margin.Top + (Y - prevY),
                            Margin.Right + (X - prevX), Margin.Bottom + (Y - prevY)));
                        GeometryHitTestParameters ghtp = new GeometryHitTestParameters(newRect);
                        VisualTreeHelper.HitTest(o, null, new HitTestResultCallback(MyHitTestResultCallback), ghtp);
                    }
                }
                if (!blocked)
                {
                    Margin = new Thickness(
                        Margin.Left + (e.GetPosition((Floor)Parent).X - prevX),
                        Margin.Top + (e.GetPosition((Floor)Parent).Y - prevY),
                        Margin.Right + (e.GetPosition((Floor)Parent).X - prevX),
                        Margin.Bottom + (e.GetPosition((Floor)Parent).Y - prevY));
                    InvalidateVisual();
                }
                prevX = e.GetPosition((Floor)Parent).X;
                prevY = e.GetPosition((Floor)Parent).Y;
                cX = Margin.Left + r.Width / 2;
                cY = Margin.Top + r.Height / 2;
            }
        }

        internal virtual void SnapToGrid()
        {
            double xPos = Margin.Left;
            double yPos = Margin.Top;
            double xMarg = xPos % ((Floor)Parent).cellDim;
            double yMarg = yPos % ((Floor)Parent).cellDim;
            if (xMarg < ((Floor)Parent).cellDim / 2)
            {
                if (yMarg < ((Floor)Parent).cellDim / 2)
                {
                    Margin = new Thickness(xPos - xMarg, yPos - yMarg,
                        xPos - xMarg + r.Width, yPos - yMarg + r.Height);
                }
                else
                {
                    Margin = new Thickness(xPos - xMarg, yPos - yMarg + ((Floor)Parent).cellDim,
                        xPos - xMarg + r.Width, yPos - yMarg + ((Floor)Parent).cellDim + r.Height);
                }
            }
            else
            {
                if (yMarg < ((Floor)Parent).cellDim / 2)
                {
                    Margin = new Thickness(xPos - xMarg + ((Floor)Parent).cellDim, yPos - yMarg,
                        xPos - xMarg + ((Floor)Parent).cellDim + r.Width, yPos - yMarg + r.Height);
                }
                else
                {
                    Margin = new Thickness(xPos - xMarg + ((Floor)Parent).cellDim,
                        yPos - yMarg + ((Floor)Parent).cellDim,
                        xPos - xMarg + ((Floor)Parent).cellDim + r.Width,
                        yPos - yMarg + ((Floor)Parent).cellDim + r.Height);
                }
            }
        }

    Essentially I am looking for a simple way to modify the existing code to allow the movement of a UI element that has another one sitting on top of it. Thanks!
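
    One common fix, sketched below (the helper names are illustrative, not from the code above): when a drag starts, remember every element the object already overlaps, and exempt those from collision tests until the drag has cleared them.

        using System.Collections.Generic;

        // Assumed members on the dragged UIObj; Overlaps() is a hypothetical
        // helper wrapping whatever hit test the class already performs.
        private readonly HashSet<UIObj> ignoredAtDragStart = new HashSet<UIObj>();

        void BeginDrag(IEnumerable<UIObj> neighbours)
        {
            ignoredAtDragStart.Clear();
            foreach (UIObj o in neighbours)
                if (Overlaps(o))                     // already touching before the drag
                    ignoredAtDragStart.Add(o);
        }

        bool ShouldBlockAgainst(UIObj o)
        {
            if (ignoredAtDragStart.Contains(o))
            {
                if (Overlaps(o)) return false;       // still clearing the initial overlap
                ignoredAtDragStart.Remove(o);        // cleared it; test normally from now on
            }
            return true;
        }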

    Read the article

  • how to elegantly duplicate a graph (neural network)

    - by macias
    I have a graph (network) which consists of layers, which contain nodes (neurons). I would like to write a procedure to duplicate the entire graph in the most elegant way possible -- i.e. with minimal or no overhead added to the structure of the node or layer. Or, in other words: the procedure itself can be complex, but the complexity should not "leak" into the structures. They should not become complex just because they are copyable. I wrote the code in C#; so far it looks like this:

    - a neuron has an additional field -- copy_of -- which is a pointer to the neuron it was copied from; this is my additional overhead
    - a neuron has a parameterless method Clone()
    - a neuron has a method Reconnect(), which exchanges a connection from the "source" neuron (parameter) to the "target" neuron (parameter)
    - a layer has a parameterless method Clone() -- it simply calls Clone() for all its neurons
    - the network has a parameterless method Clone() -- it calls Clone() for every layer, then iterates over all neurons, creates the neuron=copy_of mappings, and calls Reconnect() to exchange all the "wiring"

    I hope my approach is clear. The question is: is there a more elegant method? I particularly don't like keeping an extra pointer in the neuron class just in case it gets copied! I would like to gather the data in one place (the network's Clone) and then dispose of it completely (the Clone method cannot take an argument, though).
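
    A sketch of one way to do this (the class shapes here are assumptions, not the original code): keep the old-to-new mapping in a local Dictionary inside the network's Clone(), so the neuron class needs no copy_of field and the bookkeeping disappears when Clone() returns.

        using System.Collections.Generic;
        using System.Linq;

        class Neuron
        {
            public List<Neuron> Inputs = new List<Neuron>();
        }

        class Network
        {
            public List<Neuron> Neurons = new List<Neuron>();

            public Network Clone()
            {
                var map = new Dictionary<Neuron, Neuron>();
                foreach (var n in Neurons)               // pass 1: copy the nodes
                    map[n] = new Neuron();
                foreach (var n in Neurons)               // pass 2: rewire via the map
                    map[n].Inputs = n.Inputs.Select(i => map[i]).ToList();
                return new Network { Neurons = Neurons.Select(n => map[n]).ToList() };
            }
        }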

    Read the article

  • Sync Algorithms

    - by Kristopher Johnson
    Are there any good references out there for sync algorithms? I'm interested in algorithms that synchronize the following kinds of data between multiple users: calendars documents lists and outlines I'm not just looking for synchronization of contents of directories a la rsync; I am interested in merging the data within individual files.

    Read the article

  • Where can I learn more about tricky data structure questions?

    - by Sandbox
    I am relatively new to programming (around 1 year programming C#/WinForms). Also, I come from a non-CS background (no formal degree). Recently, while being interviewed for a job, I was asked about implementing a queue using a stack. I fumbled and wasn't able to answer the question. After the interview I could do it (I had to spend some time on it). I have learnt (and think I know well) the basic algorithms from the book Data Structures: A Pseudocode Approach with C by Richard F. Gilberg. I want to know about sites/books which have such questions along with answers. I think this will allow me to develop my CS-specific problem-solving skills. Any help is appreciated. BOUNTY: I am looking for a blog/website with data structure and algorithm Q&A.
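
    For reference, a minimal sketch of the interview question mentioned above -- a queue built from two stacks. Enqueue pushes onto one stack; dequeue pops from the other, refilling it (which reverses the order) only when it runs empty, so each element is moved at most once:

        using System.Collections.Generic;

        class StackQueue<T>
        {
            private readonly Stack<T> inbox = new Stack<T>();
            private readonly Stack<T> outbox = new Stack<T>();

            public void Enqueue(T item) => inbox.Push(item);

            public T Dequeue()
            {
                if (outbox.Count == 0)
                    while (inbox.Count > 0)
                        outbox.Push(inbox.Pop());    // reverse into FIFO order
                return outbox.Pop();                 // throws if the queue is empty
            }
        }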

    Read the article

  • incremental way of counting quantiles for large set of data

    - by Gacek
    I need to compute quantiles for a large set of data. Let's assume we can get the data only in portions (i.e. one row of a large matrix at a time). To compute the Q3 quantile one needs to get all the portions of the data, store them somewhere, then sort the whole thing and read off the quantile:

        List<double> allData = new List<double>();
        // this is only an example; in fact the portions of data are not rows of some matrix
        foreach (var row in matrix)
        {
            allData.AddRange(row);
        }
        allData.Sort();
        double p = 0.75 * allData.Count;
        int idQ3 = (int)Math.Ceiling(p) - 1;
        double Q3 = allData[idQ3];

    Now, I would like to find a way of computing this without storing all the data in a separate variable. The best solution would be to compute some intermediate results for the first row and then adjust them step by step for the following rows. Notes:

    - These datasets are really big (ca. 5000 elements in each row)
    - Q3 can be estimated; it doesn't have to be an exact value
    - I call the portions of data "rows", but they can have different lengths! Usually it varies not so much (+/- a few hundred samples), but it varies!

    This question is similar to http://stackoverflow.com/questions/1058813/on-line-iterator-algorithms-for-estimating-statistical-median-mode-skewness, but I need quantiles. There are also a few articles on this topic, i.e.:

    - http://web.cs.wpi.edu/~hofri/medsel.pdf
    - http://portal.acm.org/citation.cfm?id=347195&dl

    But before I try to implement these, I wanted to ask whether there are maybe any other, quicker ways of estimating the 0.25/0.75 quantiles?
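
    One simple approach, sketched below, is a stochastic-approximation estimator (a rough heuristic, not the P-squared algorithm of Jain & Chlamtac, which is the usual constant-memory choice when more accuracy is needed): nudge a running estimate up or down with each sample, with step sizes chosen so that the expected drift is zero exactly at the target quantile.

        // Expected movement is zero when P(x < estimate) = p, i.e. at the
        // p-quantile; the 1/n step (Robbins-Monro style) lets it settle.
        class StreamingQuantile
        {
            private readonly double p;       // e.g. 0.75 for Q3
            private readonly double c;       // initial step scale
            private double estimate;
            private long n;

            public StreamingQuantile(double p, double c = 10.0)
            {
                this.p = p;
                this.c = c;
            }

            public void Add(double x)
            {
                n++;
                if (n == 1) { estimate = x; return; }
                double step = c / n;
                if (x > estimate) estimate += step * p;
                else if (x < estimate) estimate -= step * (1 - p);
            }

            public double Estimate => estimate;
        }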

    Read the article

  • simple plot algorithm with autoscale

    - by adrin
    I need to implement a simple plotting component in C# (WPF to be more precise). What I have is a collection of data samples containing time (the X axis) and a value (both of type double). I have a drawing canvas of a fixed size (Width x Height) and a DrawLine method/function that can draw on it. The problem I am facing is how to draw the plot so that it is autoscaled. In other words, how do I map the samples I have to actual pixels on my Width x Height canvas?
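
    A sketch of the usual linear mapping: find the data extents once, then scale each sample into pixel space, flipping Y because screen coordinates grow downward:

        using System;

        static (double px, double py) ToPixel(
            double x, double y,
            double xMin, double xMax, double yMin, double yMax,
            double width, double height)
        {
            double dx = Math.Max(xMax - xMin, double.Epsilon);   // avoid divide-by-zero
            double dy = Math.Max(yMax - yMin, double.Epsilon);   // when all samples are equal
            double px = (x - xMin) / dx * width;
            double py = height - (y - yMin) / dy * height;       // flip Y
            return (px, py);
        }

    Compute xMin/xMax/yMin/yMax with one pass over the samples (or Enumerable.Min/Max), then draw a line between each pair of consecutive mapped points.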

    Read the article

  • Calculating holidays

    - by Ralph Shillington
    A number of holidays move around from year to year. For example, in Canada, Victoria Day (aka the May two-four weekend) is the Monday before May 25th, and Thanksgiving is the 2nd Monday of October. I've been using variations on this LINQ query to get the date of a holiday for a given year:

        var year = 2011;
        var month = 10;
        var dow = DayOfWeek.Monday;
        var instance = 2;
        var day = (from d in Enumerable.Range(1, DateTime.DaysInMonth(year, month))
                   let sample = new DateTime(year, month, d)
                   where sample.DayOfWeek == dow
                   select sample).Skip(instance - 1).Take(1);

    While this works and is easy enough to understand, I can imagine there is a more elegant way of making this calculation than this brute-force approach. Of course, this doesn't touch on holidays such as Easter and the many other lunar-based dates.
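
    For what it's worth, the nth weekday of a month can be computed directly with a little modular arithmetic, no scan required -- a sketch:

        using System;

        static DateTime NthWeekdayOfMonth(int year, int month, DayOfWeek dow, int n)
        {
            DateTime first = new DateTime(year, month, 1);
            int offset = ((int)dow - (int)first.DayOfWeek + 7) % 7;  // days to first match
            return first.AddDays(offset + 7 * (n - 1));
        }

        // Victoria Day (the Monday on or before May 24) steps backward instead:
        static DateTime VictoriaDay(int year)
        {
            DateTime d = new DateTime(year, 5, 24);
            int back = ((int)d.DayOfWeek - (int)DayOfWeek.Monday + 7) % 7;
            return d.AddDays(-back);
        }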

    Read the article

  • Is there a name for the technique of using base-2 numbers to encode a list of unique options?

    - by Lunatik
    Apologies for the rather vague nature of this question; I've never been taught programming, and Google is rather useless to a self-help guy like me in this case as the key words are pretty ambiguous. I am writing a couple of functions that encode and decode a list of options into a Long so they can easily be passed around the application -- you know the kind of thing:

        1 - Apple
        2 - Orange
        4 - Banana
        8 - Plum

    In this case the number 11 would represent Apple, Orange & Plum. I've got it working, but I see this used all the time, so I assume there is a common name for the technique, and no doubt all sorts of best practice and clever algorithms that are at the moment just out of my reach.
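
    This is commonly called bit flags (or a bitmask / bit field); C# supports it directly with the [Flags] attribute on an enum. A small sketch:

        using System;

        [Flags]
        enum Fruit : long
        {
            None   = 0,
            Apple  = 1,      // each option gets its own bit (powers of two)
            Orange = 2,
            Banana = 4,
            Plum   = 8,
        }

        // Combine with |, test with &:
        // Fruit basket = Fruit.Apple | Fruit.Orange | Fruit.Plum;   // == 11
        // bool hasPlum = (basket & Fruit.Plum) != 0;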

    Read the article

  • How can I test if a point lies within a 3d shape with its surface defined by a point cloud?

    - by Ben
    Hi, I have a collection of points which describe the surface of a shape that should be roughly spherical, and I need a method to determine whether any other given point lies within this shape. I've previously been approximating the shape as an exact sphere, but this has proven too inaccurate and I need a more accurate method. Simplicity and speed are favourable over complete accuracy; a good approximation will suffice. I've come across techniques for converting a point cloud to a 3D mesh, but most things I have found have been very complicated, and I am looking for something as simple as possible. Any ideas? Many thanks, Ben.
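
    One simple possibility, assuming the shape is star-shaped around its centroid (a fair bet for "roughly spherical"): compare the test point's distance from the centroid with the radii of the few surface points whose direction from the centroid best matches the test point's direction. A sketch:

        using System;
        using System.Linq;

        static bool IsInside((double x, double y, double z)[] surface,
                             (double x, double y, double z) p, int k = 5)
        {
            var c = (x: surface.Average(s => s.x),
                     y: surface.Average(s => s.y),
                     z: surface.Average(s => s.z));
            var d = (x: p.x - c.x, y: p.y - c.y, z: p.z - c.z);
            double dist = Math.Sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
            if (dist == 0) return true;                 // at the centroid

            double radius = surface
                .Select(s => (x: s.x - c.x, y: s.y - c.y, z: s.z - c.z))
                .Select(v => (r: Math.Sqrt(v.x * v.x + v.y * v.y + v.z * v.z),
                              dot: v.x * d.x + v.y * d.y + v.z * d.z))
                .Where(t => t.r > 0)
                .OrderByDescending(t => t.dot / t.r)    // best-aligned directions first
                .Take(k)
                .Average(t => t.r);                     // local surface radius estimate

            return dist <= radius;
        }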

    Read the article

  • Is there a faster way to parse through a large file with regex quickly?

    - by Ray Eatmon
    Problem: a very, very large file that I need to parse line by line to get 3 values from each line. Everything works, but it takes a long time to parse the whole file. Is it possible to do this within seconds? Typical time is between 1 and 2 minutes. Example file size is 148,208 KB. I am using regex to parse every line. Here is my C# code:

        private static void ReadTheLines(int max, Responder rp, string inputFile)
        {
            List<int> rate = new List<int>();
            double counter = 1;
            try
            {
                using (var sr = new StreamReader(inputFile, Encoding.UTF8, true, 1024))
                {
                    string line;
                    Console.WriteLine("Reading....");
                    while ((line = sr.ReadLine()) != null)
                    {
                        if (counter <= max)
                        {
                            counter++;
                            rate = rp.GetRateLine(line);
                        }
                        else if (max == 0)
                        {
                            counter++;
                            rate = rp.GetRateLine(line);
                        }
                    }
                    rp.GetRate(rate);
                    Console.ReadLine();
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("The file could not be read:");
                Console.WriteLine(e.Message);
            }
        }

    Here is my regex:

        public List<int> GetRateLine(string justALine)
        {
            const string reg = @"^\d{1,}.+\[(.*)\s[\-]\d{1,}].+GET.*HTTP.*\d{3}[\s](\d{1,})[\s](\d{1,})$";
            Match match = Regex.Match(justALine, reg, RegexOptions.IgnoreCase);

            // Here we check the Match instance.
            if (match.Success)
            {
                // Finally, we get the Group value and display it.
                string theRate = match.Groups[3].Value;
                Ratestorage.Add(Convert.ToInt32(theRate));
            }
            else
            {
                Ratestorage.Add(0);
            }
            return Ratestorage;
        }

    Here is an example line to parse, usually around 200,000 lines:

        10.10.10.10 - - [27/Nov/2002:16:46:20 -0500] "GET /solr/ HTTP/1.1" 200 4926 789
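
    Two things usually help here: hoist the pattern into a single static Regex constructed once with RegexOptions.Compiled, or, since the format is fixed, skip regex altogether -- the wanted values are simply the last space-separated fields of each line. A sketch of the split-based version (assuming the three trailing numbers are the targets, as in the sample line):

        using System;

        static (int status, int size, int rate) ParseTail(string line)
        {
            string[] parts = line.Split(' ');
            int n = parts.Length;
            return (int.Parse(parts[n - 3]),    // 200
                    int.Parse(parts[n - 2]),    // 4926
                    int.Parse(parts[n - 1]));   // 789
        }

    A plain Split avoids per-line regex backtracking entirely, which over 200,000 lines is typically a very large saving.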

    Read the article

  • Fast way to manually mod a number

    - by Nikolai Mushegian
    I need to be able to calculate (a^b) % c for very large values of a and b (which individually push the limits of their type and cause overflow errors when you try to calculate a^b). For small enough numbers, using the identity (a^b) % c = ((a%c)^b) % c works, but if c is too large this doesn't really help. I wrote a loop to do the mod operation manually, one a at a time:

        private static long no_Overflow_Mod(ulong num_base, ulong num_exponent, ulong mod)
        {
            long answer = 1;
            for (int x = 0; x < num_exponent; x++)
            {
                answer = (answer * num_base) % mod;
            }
            return answer;
        }

    but this takes a very long time. Is there any simple and fast way to do this operation without actually having to take a to the power of b AND without using time-consuming loops? If all else fails, I can make a bool array to represent a huge data type and figure out how to do this with bitwise operators, but there has to be a better way.
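
    The standard technique is exponentiation by squaring, which reduces the loop from b iterations to about log2(b). .NET also ships this ready-made as BigInteger.ModPow, which sidesteps overflow entirely. A sketch of both:

        using System;
        using System.Numerics;

        // Square-and-multiply; note result * b can still overflow 64 bits
        // when mod exceeds ~2^32, which is why BigInteger is the safe default.
        static ulong ModPow(ulong b, ulong e, ulong mod)
        {
            ulong result = 1 % mod;
            b %= mod;
            while (e > 0)
            {
                if ((e & 1) == 1)
                    result = result * b % mod;
                b = b * b % mod;
                e >>= 1;
            }
            return result;
        }

        // Overflow-safe, arbitrary precision:
        // BigInteger r = BigInteger.ModPow(a, b, c);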

    Read the article

  • algorithm q: Fuzzy matching of structured data

    - by user86432
    I have a fairly small corpus of structured records sitting in a database. Given a tiny fraction of the information contained in a single record, submitted via a web form (so structured in the same way as the table schema; let us call it the test record), I need to quickly draw up a list of the records that are the most likely matches for the test record, as well as provide a confidence estimate of how closely the search terms match a record. The primary purpose of this search is to discover whether someone is attempting to input a record that duplicates one in the corpus. There is a reasonable chance that the test record will be a dupe, and a reasonable chance it will not be.

    The records are about 12000 bytes wide and the total count of records is about 150,000. There are 110 columns in the table schema and 95% of searches will be on the top 5% most commonly searched columns. The data is stuff like names, addresses, telephone numbers, and other industry-specific numbers. In both the corpus and the test record it is entered by hand and is semistructured within an individual field.

    You might at first blush say "weight the columns by hand and match word tokens within them", but it's not so easy. I thought so too: if I get a telephone number, I thought that would indicate a perfect match. The problem is that there isn't a single field in the form whose token frequency does not vary by orders of magnitude. A telephone number might appear 100 times in the corpus or 1 time in the corpus. The same goes for any other field. This makes weighting at the field level impractical. I need a more fine-grained approach to get decent matching.

    My initial plan was to create a hash of hashes, the top level being the fieldname. Then I would select all of the information from the corpus for a given field, attempt to clean up the data contained in it, and tokenize the sanitized data, hashing the tokens at the second level, with the tokens as keys and frequency as value. I would use the frequency count as a weight: the higher the frequency of a token in the reference corpus, the less weight I attach to that token if it is found in the test record.

    My first question is for the statisticians in the room: how would I use the frequency as a weight? Is there a precise mathematical relationship between n, the number of records, f(t), the frequency with which a token t appeared in the corpus, the probability o that a record is an original and not a duplicate, and the probability p that the test record is really a record x, given that the test and x contain the same t in the same field? How about the relationship for multiple token matches across multiple fields? Since I sincerely doubt that there is, is there anything that gets me close but is better than a completely arbitrary hack full of magic factors?

    Barring that, has anyone got a way to do this? I'm especially keen on suggestions that do not involve maintaining another table in the database, such as a token frequency lookup table :). This is my first post on StackOverflow; thanks in advance for any replies you may see fit to give.
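
    The weighting described above is essentially inverse document frequency (IDF), computed per field: rare tokens carry more evidence of a match than common ones. A small sketch of building such weights (the method shape is illustrative):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static Dictionary<string, double> IdfWeights(IEnumerable<string> fieldValues, int recordCount)
        {
            var freq = new Dictionary<string, int>();
            foreach (var value in fieldValues)
                foreach (var token in value.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries))
                    freq[token] = freq.TryGetValue(token, out var c) ? c + 1 : 1;

            // idf(t) = log(N / f(t)): a token seen once in 150,000 records
            // scores far higher than one seen 100 times.
            return freq.ToDictionary(kv => kv.Key, kv => Math.Log((double)recordCount / kv.Value));
        }

    On the statistics question, the classical fully-worked framework for exactly this problem is Fellegi-Sunter record linkage, which derives match/non-match weights from per-field agreement probabilities rather than ad hoc magic factors.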

    Read the article

  • What is it about Fibonacci numbers?

    - by Ian Bishop
    Fibonacci numbers have become a popular introduction to recursion for Computer Science students, and there's a strong argument that they persist within nature. For these reasons, many of us are familiar with them. They also show up elsewhere in Computer Science, in surprisingly efficient data structures and algorithms based upon the sequence. There are two main examples that come to mind: Fibonacci heaps, which have better amortized running time than binomial heaps, and Fibonacci search, which shares O(log N) running time with binary search on an ordered array. Is there some special property of these numbers that gives them an advantage over other numerical sequences? Is it a density quality? What other possible applications could they have? It seems strange to me, as there are many natural number sequences that occur in other recursive problems, but I've never seen a Catalan heap.

    Read the article

  • R: Forecast package: Automatic algorithm for composite model involving ETS and AR

    - by phanikishan
    Hey, I would like to write code that automatically selects the best composite model involving ETS as well as autoregressive models. What criteria should I base my selection on? Also, if I'm using the auto.arima function from the forecast package in R to deduce the number of AR terms and the corresponding coefficients, does my input series necessarily have to be stationary, or will the value for d be selected automatically, thus returning a non-stationary model? Thanks, Phani

    Read the article

  • Algorithm for sentence analysis and tokenization

    - by Andrea Nagar
    I need to analyze a document and compile statistics on how many times each sequence of words is used (so the analysis is not of single words but of recurring batches of words). I read that compression algorithms do something similar to what I want -- creating dictionaries of blocks of text with a piece of information reporting their frequency. It should be something similar to http://www.codeproject.com/KB/recipes/Patterns.aspx. Do you have anything written in C#?
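
    A minimal sketch of the direct approach: slide a window over the token stream and count every 2- and 3-word sequence in a dictionary (compression schemes like LZ78 build similar tables, but for this statistic the brute-force count is simpler):

        using System;
        using System.Collections.Generic;

        static Dictionary<string, int> CountPhrases(string text, int minLen = 2, int maxLen = 3)
        {
            string[] words = text.Split(new[] { ' ', '\t', '\r', '\n' },
                                        StringSplitOptions.RemoveEmptyEntries);
            var counts = new Dictionary<string, int>();
            for (int i = 0; i < words.Length; i++)
                for (int len = minLen; len <= maxLen && i + len <= words.Length; len++)
                {
                    string phrase = string.Join(" ", words, i, len);
                    counts[phrase] = counts.TryGetValue(phrase, out var c) ? c + 1 : 1;
                }
            return counts;
        }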

    Read the article

  • How does Amazon's Statistically Improbable Phrases work?

    - by ??iu
    How does something like Statistically Improbable Phrases work? According to Amazon:

        Amazon.com's Statistically Improbable Phrases, or "SIPs", are the most distinctive phrases in the text of books in the Search Inside!™ program. To identify SIPs, our computers scan the text of all books in the Search Inside! program. If they find a phrase that occurs a large number of times in a particular book relative to all Search Inside! books, that phrase is a SIP in that book.

        SIPs are not necessarily improbable within a particular book, but they are improbable relative to all books in Search Inside!. For example, most SIPs for a book on taxes are tax related. But because we display SIPs in order of their improbability score, the first SIPs will be on tax topics that this book mentions more often than other tax books. For works of fiction, SIPs tend to be distinctive word combinations that often hint at important plot elements.

    For instance, for Joel's first book, the SIPs are: leaky abstractions, antialiased text, own dog food, bug count, daily builds, bug database, software schedules.

    One interesting complication is that these are phrases of either 2 or 3 words. This makes things a little more interesting because these phrases can overlap with or contain each other.
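
    A sketch of the core scoring idea the quote describes: a phrase is "improbable" when its relative frequency in one book is much higher than its relative frequency across the corpus (a likelihood ratio; a production system would add smoothing and significance testing, e.g. log-likelihood or chi-squared):

        // countInBook / bookLength     = how often the phrase occurs in this book
        // countInCorpus / corpusLength = how often it occurs across all books
        static double SipScore(long countInBook, long bookLength,
                               long countInCorpus, long corpusLength)
        {
            double pBook = (double)countInBook / bookLength;
            double pCorpus = (countInCorpus + 1.0) / corpusLength;   // +1 smoothing
            return pBook / pCorpus;                                  // rank descending
        }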

    Read the article

  • What's the "Hello World!" of genetic algorithms good for?

    - by JohnIdol
    I found this very cool C++ sample, literally the "Hello World!" of genetic algorithms. So I decided to re-code the whole thing in C#, and this is the result. Now I am asking myself: is there any practical application along the lines of generating a target string starting from a population of random strings? EDIT: my buddy on Twitter just tweeted that it "is useful for transcription type things such as translation. Does not have to be Monkey's". I wish I had a clue.

    Read the article

  • Constructing colours for maximum contrast

    - by Martin
    I want to draw some items on screen, each item in one of N sets. The number of sets changes all the time, so I need to calculate N different colours which are as different as possible, to make it easy to identify what is in which set. So, for example, with N = 2 my results would be black and white. With three, I guess I would get red, green and blue. With four, it's less obvious what the correct answer is, and this is where I'm having trouble.
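
    One common approach (a sketch, not the only answer -- perceptual spaces like CIELAB give more uniform separation): spread the N hues evenly around the HSV colour wheel at full saturation and brightness:

        using System;

        static (byte r, byte g, byte b) DistinctColour(int index, int n)
        {
            double h = 360.0 * index / n;                    // evenly spaced hue
            double c = 1.0;                                  // chroma (s = v = 1)
            double x = c * (1 - Math.Abs(h / 60.0 % 2 - 1));
            (double r, double g, double b) rgb =
                h < 60  ? (c, x, 0.0) :
                h < 120 ? (x, c, 0.0) :
                h < 180 ? (0.0, c, x) :
                h < 240 ? (0.0, x, c) :
                h < 300 ? (x, 0.0, c) : (c, 0.0, x);
            return ((byte)(rgb.r * 255), (byte)(rgb.g * 255), (byte)(rgb.b * 255));
        }

    Varying brightness as well as hue (say, alternating V between 1.0 and 0.6 on successive indices) stretches the palette further once N grows past a dozen.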

    Read the article
