Search Results

Search found 5298 results on 212 pages for 'marching cubes algorithm'.

Page 105/212

  • Creating a DrawableGameComponent

    - by Christian Frantz
    If I'm going to draw cubes effectively, I need to get rid of the numerous draw calls I have, and what has been suggested is that I create a "mesh" of my cubes. I already have them stored in a single vertex buffer, but the issue lies in my draw method, where I am still looping through every cube in order to draw it. I thought this was necessary as each cube has a set position, but it lowers the frame rate incredibly. What's the easiest way to go about this? I have a class CubeChunk that inherits Microsoft.Stuff.DrawableGameComponent, but I don't know what comes next. I suppose I could just use the chunk of cubes created in my cube class, but that would just keep me going in circles, drawing each cube individually. The goal here is to create a draw method that draws my chunk as a whole, not one that draws individual cubes as I've been doing.
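
    A minimal sketch of the suggested fix, assuming XNA 4.0 and a hypothetical BuildChunkVertices helper that bakes each cube's fixed position into its vertices up front: fill one vertex buffer for the whole chunk, then issue a single draw call in Draw.

        // Sketch only: batches a chunk of cubes so Draw makes one call
        // instead of one call per cube.
        public class CubeChunk : DrawableGameComponent
        {
            private VertexBuffer vertexBuffer;  // every cube's vertices, pre-translated
            private BasicEffect effect;
            private int cubeCount;

            public CubeChunk(Game game) : base(game) { }

            protected override void LoadContent()
            {
                effect = new BasicEffect(GraphicsDevice);
                // Hypothetical helper: 36 vertices per cube, offset by each
                // cube's fixed position, so no per-cube transform is needed.
                VertexPositionColor[] vertices = BuildChunkVertices(out cubeCount);
                vertexBuffer = new VertexBuffer(GraphicsDevice, typeof(VertexPositionColor),
                                                vertices.Length, BufferUsage.WriteOnly);
                vertexBuffer.SetData(vertices);
            }

            public override void Draw(GameTime gameTime)
            {
                GraphicsDevice.SetVertexBuffer(vertexBuffer);
                foreach (EffectPass pass in effect.CurrentTechnique.Passes)
                {
                    pass.Apply();
                    // One call for the whole chunk: 12 triangles per cube.
                    GraphicsDevice.DrawPrimitives(PrimitiveType.TriangleList, 0, cubeCount * 12);
                }
            }
        }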

    Read the article

  • Reporting tool for OLAP, *not* OLTP!

    - by Stefan Moser
    I'm looking for a control that I can put on top of an already existing OLAP star schema to allow the user to define their own "queries" and generate reports. Right now I have some predefined reports built on top of the cubes, but I'd like to allow the user to define their own criteria based on the cubes that I've created. I've found lots of products that will allow you to treat a transactional table like an OLAP cube, but nothing specifically for pre-existing cubes. EDIT: Let me be clear: I know there are countless reporting tools out there that claim to report on OLAP cubes. The problem is they all assume they are looking at transactional data and try to create their own cubes. I have tables that contain tens, if not hundreds, of millions of records. Most tools crash when handling this much data; the others just run incredibly slowly. I don't want a tool that targets business people. I want a tool that understands what star and snowflake schemas are. I want to be able to tell it which are the fact tables and which are the dimension tables, and have it create a UI on top of them. This is an easier problem for the tool vendor to solve because I am spoon-feeding it the cubes. I want to rely on the fact that cubes are a standardized pattern, and I want a tool that takes advantage of this fact. I want a tool that targets developers and starts with the assumption that I actually know how to manage my data; it just needs to build pretty reports for me and not crumble under the weight of my data.

    Read the article

  • Can I use a genetic algorithm for balancing character builds?

    - by Renan Malke Stigliani
    I'm starting to build an online PVP (duel-like, one-on-one) game, where there is leveling, skill points, special attacks and all the common stuff. Since I have never done anything like this, I'm still thinking about the math behind the levels/skills/specials balance. So I thought a good way of testing the best builds/combos would be to implement a genetic algorithm. It'd be like this:

    1. Generate a big group of random characters.
    2. Make them fight, and level them up according to their victories (more XP) and losses (less XP).
    3. Mate the winners, crossing their builds, to try to make even better characters.
    4. Add some more random characters, emulating new players.
    5. Repeat the process for some time, or until I find some characters who can beat everyone's butt.

    I could then play with the math and try to find better balances to make sure that the top x% of characters would be a mix of various build types. So, is it a good idea, or is there some other, easier method for doing the balancing?
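
    A minimal sketch of that loop (all names hypothetical: Character, Fight and Crossover are assumed to exist elsewhere; fitness is just the win count from random duels):

        var rng = new Random();
        List<Character> population = Enumerable.Range(0, 200)
            .Select(_ => Character.Random(rng)).ToList();

        for (int generation = 0; generation < 1000; generation++)
        {
            // Evaluate: each character duels 20 random opponents; wins drive fitness.
            foreach (var c in population)
                c.Wins = Enumerable.Range(0, 20)
                    .Count(_ => Fight(c, population[rng.Next(population.Count)]));

            // Select the top quarter and breed them by crossing builds.
            var winners = population.OrderByDescending(c => c.Wins)
                                    .Take(population.Count / 4).ToList();
            var children = new List<Character>();
            while (children.Count < population.Count - winners.Count - 20)
                children.Add(Character.Crossover(winners[rng.Next(winners.Count)],
                                                 winners[rng.Next(winners.Count)], rng));

            // Inject fresh random characters to emulate new players joining.
            children.AddRange(Enumerable.Range(0, 20).Select(_ => Character.Random(rng)));
            population = winners.Concat(children).ToList();
        }

    Balance checks then become assertions over the survivors, e.g. that no single build type dominates the top x% of the final population.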

    Read the article

  • What alternatives exist of how an agent can follow the path calculated by a path-finding algorithm?

    - by momboco
    What alternatives exist for how an agent can follow the path calculated by a path-finding algorithm? The easiest form I've seen is to head to one point and, when the agent has reached it, discard it and head to the next point. I think this approach has problems when the game has physics with dynamic objects that can block travel between point A and point B: the agent gets pushed off its original trajectory, and sometimes heading straight for the last destination point is not the most natural behavior. In the literature I have always read that the path is only a suggestion of where the agent has to go, but I don't know how this suggested path should be followed. Thanks.
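
    For reference, a minimal sketch of the scheme described (names hypothetical), with one common refinement: instead of blindly steering back to the next waypoint after being knocked off course, skip ahead to the farthest waypoint still directly reachable, which tends to look more natural.

        // Basic waypoint following with a visibility-based skip-ahead.
        // Assumes a LineOfSightClear(from, to) physics query and Vector2 math.
        int current = 0;

        void FollowPath(Agent agent, List<Vector2> path, float arriveRadius)
        {
            if (current >= path.Count) return;

            // Rejoin the path naturally: aim at the farthest visible waypoint.
            for (int i = path.Count - 1; i > current; i--)
                if (LineOfSightClear(agent.Position, path[i])) { current = i; break; }

            agent.MoveToward(path[current]);

            // Reached this waypoint? Discard it and aim for the next one.
            if (Vector2.Distance(agent.Position, path[current]) < arriveRadius)
                current++;
        }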

    Read the article

  • What encryption algorithm/package should I use in a betting game?

    - by user299648
    I have a betting-type site where I publish a number (between 0-100) that is encrypted. Then, after a period of time, I reveal what the number is and publish a key to decrypt the encrypted number, to prove that I'm not cheating. I also want it to be easily verifiable by an average user. What encryption algorithm/technique/package should I use? I'm no expert on cryptography; there seem to be so many options out there and I'm not sure what to use. Python-friendly is a plus.
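
    A minimal sketch of the publish-then-reveal scheme described, using .NET's built-in Aes class (the ciphertext and IV are published up front; the key is withheld until reveal time):

        using System.Security.Cryptography;
        using System.Text;

        // Commit: publish ciphertext + IV now, keep the key secret.
        (byte[] ciphertext, byte[] iv, byte[] key) Commit(int number)
        {
            using var aes = Aes.Create();           // random key, random IV
            using var enc = aes.CreateEncryptor();
            byte[] plain = Encoding.UTF8.GetBytes(number.ToString());
            return (enc.TransformFinalBlock(plain, 0, plain.Length), aes.IV, aes.Key);
        }

        // Reveal: publish the key; anyone can decrypt and verify the number.
        int Reveal(byte[] ciphertext, byte[] iv, byte[] key)
        {
            using var aes = Aes.Create();
            aes.Key = key;
            aes.IV = iv;
            using var dec = aes.CreateDecryptor();
            byte[] plain = dec.TransformFinalBlock(ciphertext, 0, ciphertext.Length);
            return int.Parse(Encoding.UTF8.GetString(plain));
        }

    One caveat: with only 101 possible numbers, the plaintext should also carry a random nonce, and a hash-based commit-reveal (publish SHA-256 of number plus nonce, later publish both) is the more standard tool for exactly this job; Python's hashlib covers that if Python-friendliness matters.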

    Read the article

  • How important is it for a programmer to know how to implement a QuickSort/MergeSort algorithm from memory?

    - by John Smith
    I was reviewing my notes and stumbled across the implementations of different sorting algorithms. As I attempted to make sense of the implementations of QuickSort and MergeSort, it occurred to me that although I do programming for a living and consider myself decent at what I do, I have neither the photographic memory nor the sheer brainpower to implement those algorithms without relying on my notes. All I remembered is that some of those algorithms are stable and some are not. Some take O(n log n) or O(n²) time to complete. Some use more memory than others... I'd feel like I didn't deserve this kind of job, if it weren't for the fact that my position doesn't require that I use any sorting algorithm other than those found in standard APIs. I mean, how many of you have a programming position where it actually is essential that you can remember or come up with this kind of stuff on your own?

    Read the article

  • How do you choose a programming/data structure/algorithm book?

    - by Fanatic23
    I really should not be mentioning the name of the book, but the first time I read it (during my undergrad days) I almost concluded that data structures was a bad course to pick. Which brings me to the question I am asking here: what makes a programming or data structure or algorithm book tick? Clearly, lucid explanation is one thing. But I also realize that organization of the material is very important, and so are diagrams. What else? Some pointers would obviously help when I hang out in my neighborhood computer book shop the next time.

    Read the article

  • Google's new algorithm: my company has 40 sites on different domains, and some of their articles appear on my main website

    - by user5674576
    Hi, my company has 40 sites on different domains, and some of their articles appear on my main website with a reference to their source. Our articles are written by high-level professionals in the fields they cover; we also pay them well. In a recent Google algorithm change, my main site's ranking dropped very seriously. What should we do to restore the company's main site's Google ranking? Our solutions and ideas that are not working well: rel="canonical" pointing to the source website (we already had it before the Google change, without results), and a meta "original-source" tag, which has no ranking influence (we already had it before the Google change, without results). Edit: maybe we should delete rel="canonical" from the main website's articles that refer to our other, smaller websites (because those articles on the main website are not indexed in Google)? Thanks in advance.

    Read the article

  • What data-structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits?

    - by user12365
    I have server objects that have corresponding client objects. The data to be kept in sync is inside the server object's key/value dictionary. To keep the client objects in sync with the server objects, I want the server to send the key/value dictionary every frame for each object. What data structure/algorithm will allow me to send a list of key/value dictionaries using the least amount of bits? Bonus constraint 1: for each type of object, the values of some keys change more often than others. Bonus constraint 2: memory usage on the server side is relatively expensive.
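
    One standard answer, sketched here with hypothetical names, is delta encoding: give each key a small integer id, send the full dictionary once, and afterwards send only the entries whose values changed since the last acknowledged frame; frequently changing keys can get the shortest ids.

        // Per-frame delta sync of a key/value dictionary.
        void WriteDelta(BinaryWriter w,
                        Dictionary<ushort, int> current,
                        Dictionary<ushort, int> lastAcked)
        {
            // Send a count first so the reader knows how many entries follow.
            var changed = current.Where(kv =>
                !lastAcked.TryGetValue(kv.Key, out int old) || old != kv.Value).ToList();

            w.Write((ushort)changed.Count);
            foreach (var kv in changed)
            {
                w.Write(kv.Key);    // 2 bytes: key id
                w.Write(kv.Value);  // variable-length (varint) encoding saves more
            }
        }

    The trade-off against bonus constraint 2: a per-client lastAcked snapshot costs server memory; a cheaper variant keeps only a dirty flag or last-changed frame number per key.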

    Read the article

  • Why isn't my algorithm for finding the biggest and smallest inputs working?

    - by Matt Ellen
    I have started a new job, and with it comes a new language: IronPython. Thankfully a good language :D Before starting I got to grips with Python on the whole, but that was only a week's worth of learning. Now I'm writing actual code. I've been charged with writing an algorithm that finds the best input parameter to collect data with. The basic algorithm is (as I've been instructed):

    1. Set the input parameter to a good guess.
    2. Start collecting data.
    3. When data is available, stop collecting.
    4. Find the highest point.
    5. If the point before this (i.e. for the previous parameter value) was higher, and the point before that was lower, then we've found the max; otherwise the input parameter is increased by the initial guess. Goto 2.

    If the max is found then the min needs to be found. To do this the algorithm carries on increasing the input, but by 1/10 of the max, until the current point is greater than the previous point and the point before that is also greater. Once the min is found then the algorithm stops. Currently I have a simplified data generator outputting the sin of the input, so that I know that the min value should be PI and the max value should be PI/2.

    The main Python code looks like this (don't worry, this is just for my edification, I don't write real code like this):

        import sys
        sys.path.append(r"F:\Programming Source\C#\PythonHelp\PythonHelp\bin\Debug")
        import clr
        clr.AddReferenceToFile("PythonHelpClasses.dll")
        import PythonHelpClasses
        from PHCStruct import Helper
        from System import Math

        helper = Helper()

        def run():
            b = PythonHelpClasses.Executor()
            a = PythonHelpClasses.HasAnEvent()
            b.Input = 0.0
            helper.__init__()

            def AnEventHandler(e):
                b.Stop()
                h = helper
                h.lastLastVal, h.lastVal, h.currentVal = h.lastVal, h.currentVal, e.Number
                if h.lastLastVal < h.lastVal and h.currentVal < h.lastVal and h.NotPast90:
                    h.NotPast90 = False
                    h.bestInput = h.lastInput
                inputInc = 0.0
                if h.NotPast90:
                    inputInc = Math.PI/10.0
                else:
                    inputInc = h.bestInput/10.0
                if h.lastLastVal > h.lastVal and h.currentVal > h.lastVal and h.NotPast180:
                    h.NotPast180 = False
                if h.NotPast180:
                    h.lastInput, b.Input = b.Input, b.Input + inputInc
                    b.Start(a)
                else:
                    print "Best input:", h.bestInput
                    print "Last input:", h.lastInput
                    b.Stop()

            a.AnEvent += AnEventHandler
            b.Start(a)

    PHCStruct.py:

        class Helper():
            def __init__(self):
                self.currentVal = 0
                self.lastVal = 0
                self.lastLastVal = 0
                self.NotPast90 = True
                self.NotPast180 = True
                self.bestInput = 0
                self.lastInput = 0

    PythonHelpClasses has two small classes I wrote in C# before I realised how to do it in IronPython. Executor runs a delegate asynchronously while its running member is true. The important code:

        public void Start(HasAnEvent hae)
        {
            running = true;
            RunDelegate r = new RunDelegate(hae.UpdateNumber);
            AsyncCallback ac = new AsyncCallback(UpdateDone);
            IAsyncResult ar = r.BeginInvoke(Input, ac, null);
        }

        public void Stop()
        {
            running = false;
        }

        public void UpdateDone(IAsyncResult ar)
        {
            RunDelegate r = (RunDelegate)((AsyncResult)ar).AsyncDelegate;
            r.EndInvoke(ar);
            if (running)
            {
                AsyncCallback ac = new AsyncCallback(UpdateDone);
                IAsyncResult ar2 = r.BeginInvoke(Input, ac, null);
            }
        }

    HasAnEvent has a function that generates the sin of its input and fires an event with that result as its argument, i.e.:

        public void UpdateNumber(double val)
        {
            AnEventArgs e = new AnEventArgs(Math.Sin(val));
            System.Threading.Thread.Sleep(1000);
            if (null != AnEvent)
            {
                AnEvent(e);
            }
        }

    The sleep is in there just to slow things down a bit.

    The problem I am getting is that the algorithm is not coming up with the best input being PI/2 and the final input being PI, but I can't see why. Also, the best and final inputs are different each time I run the programme. Can anyone see why? Also, when the algorithm terminates, the best and final inputs are printed to the screen multiple times, not just once. Can someone explain why?

    Read the article

  • Most efficient AABB vs. ray collision algorithms

    - by Asher Einhorn
    Is there a known 'most efficient' algorithm for AABB vs. ray collision detection? I recently stumbled across Arvo's AABB vs. sphere collision algorithm, and I am wondering if there is a similarly noteworthy algorithm for this. One must-have condition for this algorithm is that I need the option of querying the result for the distance from the ray's origin to the point of collision. Having said this, if there is another, faster algorithm which does not return distance, then in addition to posting one that does, posting that algorithm would be very helpful indeed. Please also state what the function's return argument is, and how you use it to return distance or a 'no-collision' case. For example, does it have an out parameter for the distance as well as a bool return value? Or does it simply return a float with the distance, vs. a value of -1 for no collision? (For those that don't know: AABB = axis-aligned bounding box.)
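
    For reference, a minimal sketch of the widely used "slab" method, which naturally yields the distance from the ray origin; this version answers the return-style question with a bool plus an out parameter. invDir is the componentwise reciprocal of the ray direction, precomputed by the caller (infinities from zero components are fine, apart from the usual NaN edge case when the origin lies exactly on a slab boundary).

        // Intersect a ray with an AABB via the three axis-aligned slabs.
        bool RayIntersectsAabb(Vector3 origin, Vector3 invDir,
                               Vector3 boxMin, Vector3 boxMax, out float distance)
        {
            float t1 = (boxMin.X - origin.X) * invDir.X;
            float t2 = (boxMax.X - origin.X) * invDir.X;
            float tmin = Math.Min(t1, t2), tmax = Math.Max(t1, t2);

            t1 = (boxMin.Y - origin.Y) * invDir.Y;
            t2 = (boxMax.Y - origin.Y) * invDir.Y;
            tmin = Math.Max(tmin, Math.Min(t1, t2));
            tmax = Math.Min(tmax, Math.Max(t1, t2));

            t1 = (boxMin.Z - origin.Z) * invDir.Z;
            t2 = (boxMax.Z - origin.Z) * invDir.Z;
            tmin = Math.Max(tmin, Math.Min(t1, t2));
            tmax = Math.Min(tmax, Math.Max(t1, t2));

            // Hit if the slab interval is non-empty and not entirely behind us.
            distance = tmin >= 0 ? tmin : 0f;   // origin inside the box counts as 0
            return tmax >= Math.Max(tmin, 0f);
        }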

    Read the article

  • How can I better implement the A* algorithm with a very large set of nodes?

    - by Stephen
    I'm making a game with nodejs in which many enemies must converge on the player as the player moves around a relatively open space (right now it is an open field with few obstacles, but eventually there may be some small buildings in the field with 1 or 2 rooms). It's a multiplayer game using websockets, so the server needs to keep track of enemies and players. I found this javascript A* library which I've modified to be used on the server as a nodejs module. The library utilizes a Binary Heap to track the nodes for the algorithm, so it should be pretty fast (and indeed, with a small grid, say 100x100 it is lightning fast). The problem is that my game is not really tile-based. As the player moves around the map, he is moving on a more or less 1-to-1 per-pixel coordinate system (the player can move in 8 directions, 1 or 2 pixels at a time). In preliminary tests, on an 800x600 field, the path-finding can take anywhere from 400 to 1000 ms. Multiply that by 10 enemies and the game starts to get pretty choppy. I have already set it up so that each enemy will only do a path-finding call once per second or even as slow as once every 2 seconds (they have to keep updating their path because the players can move freely). But even with this long interval, there are noticeable lag spikes or chops every couple of seconds as the enemies update their paths. I'm willing to approach the problem of path-finding differently, if there's another option. I'm assuming that the real problem is the enormous grid (800x600). It also occurs to me that maybe the large arrays are to blame, as I've read that V8 has trouble with large arrays.
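
    One common fix, sketched below with hypothetical names, is to decouple the search grid from the movement grid: path over coarse tiles (say 16x16 pixels, so 800x600 becomes 50x38 nodes, roughly 250x fewer), then steer pixel-by-pixel between the returned tile centers.

        // Search on coarse tiles, move on pixels.
        const int TileSize = 16;

        Point ToTile(Point pixel) =>
            new Point(pixel.X / TileSize, pixel.Y / TileSize);

        Point ToPixelCenter(Point tile) =>
            new Point(tile.X * TileSize + TileSize / 2, tile.Y * TileSize + TileSize / 2);

        List<Point> FindPath(Point startPixel, Point goalPixel)
        {
            // AStar is the existing grid search, now over ~1,900 nodes
            // instead of 480,000.
            List<Point> tilePath = AStar(ToTile(startPixel), ToTile(goalPixel));
            return tilePath.Select(ToPixelCenter).ToList();
        }

    Staggering the once-per-second repath calls across enemies, so they don't all recompute on the same tick, also smooths out the periodic spikes.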

    Read the article

  • Is the way I'm implementing my genetic algorithm right?

    - by Mhjr
    In my graduation project, I am asked to use a genetic algorithm (any variation of it can be chosen) to generate valid timetables. What I did was make a simple program that generates unique sequences representing genes; the sequence is described below (sorry if it's mathematically incorrect). The only variable in the sequence is the room element, so basically the program takes a tree that goes like this:

        [Course] -(contains)-> [Units] -(contains)-> [Offerings] -(contains)-> [Instructors] -(contains)-> [Rooms]

    Each course can have n units (duplicates). Each unit can have n offerings (lectures, lab sessions, exercises, ...). Each offering has only 1 instructor. Each instructor (or the whole lecture composed from the four elements of the sequence) has multiple rooms. When a timetable is initialized, one of these sequences, differing only in room, will be taken into the timetable, so the difference in genes (sequences) of each timetable will be just the random choice of rooms, and the difference between chromosomes (timetables) will be the time placements of these genes (sequences). My question is, before I proceed in implementing what I described: is it valid? Is the representation used here for chromosomes a permutation representation?
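
    A minimal C# 9 sketch of that encoding (all types hypothetical): a gene fixes course/unit/offering/instructor and varies only in the randomly drawn room; a chromosome is the set of genes plus their time placements.

        record Gene(Course Course, Unit Unit, Offering Offering,
                    Instructor Instructor, Room Room);

        class Chromosome                     // one candidate timetable
        {
            public List<(Gene Gene, int Timeslot)> Placements = new();
        }

        static Chromosome RandomTimetable(IEnumerable<Gene[]> roomVariants,
                                          int timeslots, Random rng)
        {
            var c = new Chromosome();
            foreach (Gene[] variants in roomVariants)      // one group per offering
                c.Placements.Add((variants[rng.Next(variants.Length)],  // random room
                                  rng.Next(timeslots)));                // random slot
            return c;
        }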

    Read the article

  • What is an appropriate language for expressing initial stages of algorithm refinement?

    - by hydroparadise
    First, this is not a homework assignment, but you can treat it as such ;). I found the following question in the published paper The Camel Has Two Humps. I was not a CS major in college (I majored in MIS/Management), but I have a job where I find myself coding quite often.

    For a non-trivial programming problem, which one of the following is an appropriate language for expressing the initial stages of algorithm refinement?

        (a) A high-level programming language.
        (b) English.
        (c) Byte code.
        (d) The native machine code for the processor on which the program will run.
        (e) Structured English (pseudocode).

    What I do know is that you usually want to start your design by writing down pseudocode and then moving to the desired technology (because we all do that, right?). But I never thought about it in terms of refinement. I mean, if you were the original designer, then you might have access to the original pseudocode. But realistically, when I have to maintain/refactor/refine somebody else's code, I just keep trucking with the language it currently resides in. Anybody have a definitive answer to this? As a side note, I did a quick scan of the paper, as I haven't read every single detail. It presents various score statistics, but I can't find where the answers are within the paper.

    Read the article

  • What is an efficient algorithm for randomly assigning a pool of objects to a parent using specific rules

    - by maple_shaft
    I need some expert answers to help me determine the most efficient algorithm for this scenario. Consider the following data structures:

        type B {
            A parent;
        }

        type A {
            set<B> children;
            integer minimumChildrenAllowed;
            integer maximumChildrenAllowed;
        }

    I have a situation where I need to fetch all the orphan children (there could be hundreds of thousands of these) and assign them RANDOMLY to A-type parents based on the following rules:

    1. At the end of the job, there should be no orphans left.
    2. At the end of the job, no object A should have fewer children than its predesignated minimum.
    3. At the end of the job, no object A should have more children than its predesignated maximum.
    4. If we run out of A objects, then we should create a new A with default values for minimum and maximum and assign the remaining orphans to these objects.
    5. The distribution of children should be as even as possible.
    6. There may already be some children assigned to A objects before the job starts.

    I was toying with how to do this, but I am afraid I would just end up looping across the parents sorted from smallest to largest and then grabbing an orphan for each parent. I was wondering if there is a more efficient way to handle this?
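
    A minimal sketch of one two-phase approach (hypothetical member names): shuffle the orphans once, top every parent up to its minimum, then round-robin the remainder over parents with spare capacity, creating default parents when everyone is full. Because the orphans are pre-shuffled, both phases stay random, and the whole job is O(orphans + parents) after the shuffle.

        static void Assign(List<B> orphans, List<A> parents, Random rng)
        {
            // Fisher-Yates shuffle: all the randomness comes from here, once.
            for (int i = orphans.Count - 1; i > 0; i--)
            {
                int j = rng.Next(i + 1);
                (orphans[i], orphans[j]) = (orphans[j], orphans[i]);
            }

            int next = 0;

            // Phase 1: satisfy every parent's minimum (rule 2), counting any
            // children already attached before the job starts (rule 6).
            foreach (A p in parents)
                while (p.Children.Count < p.MinimumChildrenAllowed && next < orphans.Count)
                    Attach(orphans[next++], p);

            // Phase 2: round-robin for evenness (rule 5), respecting maximums
            // (rule 3) and creating default parents when needed (rule 4).
            var open = new Queue<A>(parents.Where(p => p.Children.Count < p.MaximumChildrenAllowed));
            while (next < orphans.Count)
            {
                if (open.Count == 0)
                {
                    var fresh = A.CreateDefault();   // hypothetical factory
                    parents.Add(fresh);
                    open.Enqueue(fresh);
                }
                A p = open.Dequeue();
                Attach(orphans[next++], p);
                if (p.Children.Count < p.MaximumChildrenAllowed) open.Enqueue(p);
            }
        }

        static void Attach(B child, A parent)
        {
            child.Parent = parent;
            parent.Children.Add(child);
        }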

    Read the article

  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
    I'm a planetary science researcher and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice when large structures were seen casting shadows on the nearly edge-on rings. Below is a screenshot of what any given timestep looks like. (Each particle is 2 m in diameter and the simulation cell itself is around 700 m across.)

    The code I'm using already spits out the mean velocity at every timestep. What I need to do is figure out a way to determine the mass of particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but I don't know easily that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that to the human eye is obvious. So, the algorithm I need to write would need to be a code with as few user-entered parameters as possible (for replicability and objectivity) that would go through all the particle positions, figure out what particles belong to clumps, and then calculate the mass. It would be great if it could do it for "each" clump/strand as opposed to everything over the cell, but I don't think I actually need it to separate them out. The only thing I was thinking of was doing some sort of N² distance calculation where I'd calculate the distance between every particle and if, say, the closest 100 particles were within a certain distance, then that particle would be considered part of a cluster. But that seems pretty sloppy and I was hoping that you CS folks and programmers might know of a more elegant solution?

    Edited with My Solution: What I did was to take a sort of nearest-neighbor / cluster approach and do the quick-n-dirty N² implementation first. So, take every particle, calculate distance to all other particles, and the threshold for in a cluster or not was whether there were N particles within d distance (two parameters that have to be set a priori, unfortunately, but as was said by some responses/comments, I wasn't going to get away with not having some of those). I then sped it up by not sorting distances but simply doing an order-N search and incrementing a counter for the particles within d, and that sped stuff up by a factor of 6. Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide up the simulation cell into a set number of grids (best results when grid size ≈ 7d) where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides particles into the grids, and then each particle N only has to have distances calculated to the other particles in that cell. Theoretically, if this were a real tree, I should get order N log(N) as opposed to N² speeds. I got somewhere between the two, where for a 50,000-particle sub-set I got a 17x increase in speed, and for a 150,000-particle cell, I got a 38x increase in speed. 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those are comparable speeds to how long the code takes to run the simulation 1 timestep forward, so that's reasonable at this point. Oh -- and it's fully threaded, so it'll take as many processors as I can throw at it.
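
    For readers with the same problem: the solution described is essentially a friends-of-friends / DBSCAN-style pass with a uniform-grid accelerator. A minimal sketch of the core test (hypothetical names; System.Numerics.Vector3 assumed; the linking distance d and neighbor count minNeighbors are the two a priori parameters discussed above):

        // A particle is a clump member when at least minNeighbors others lie
        // within distance d. A grid of cell size d limits each query to the
        // 27 surrounding cells instead of all N particles.
        static bool[] FindClumpMembers(Vector3[] pos, float d, int minNeighbors)
        {
            (int, int, int) Key(Vector3 p) =>
                ((int)Math.Floor(p.X / d), (int)Math.Floor(p.Y / d), (int)Math.Floor(p.Z / d));

            var grid = new Dictionary<(int, int, int), List<int>>();
            for (int i = 0; i < pos.Length; i++)
            {
                if (!grid.TryGetValue(Key(pos[i]), out var cell))
                    grid[Key(pos[i])] = cell = new List<int>();
                cell.Add(i);
            }

            var inClump = new bool[pos.Length];
            float d2 = d * d;
            for (int i = 0; i < pos.Length; i++)
            {
                var (cx, cy, cz) = Key(pos[i]);
                int count = 0;
                for (int dx = -1; dx <= 1; dx++)
                for (int dy = -1; dy <= 1; dy++)
                for (int dz = -1; dz <= 1; dz++)
                    if (grid.TryGetValue((cx + dx, cy + dy, cz + dz), out var cell))
                        foreach (int j in cell)
                            if (j != i && Vector3.DistanceSquared(pos[i], pos[j]) < d2)
                                count++;
                inClump[i] = count >= minNeighbors;    // clump member vs. stray
            }
            return inClump;
        }

    Summing particle masses where inClump is true gives the aggregate clump mass; a union-find pass over the same neighbor relation separates individual strands if that is ever needed.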

    Read the article

  • Internal Data Masking

    - by ACShorten
    By default, the data in the product is unmasked for authorized users. If particular data within an object is considered a candidate for data masking, then the masking capabilities of the product can be used to mask the data in an appropriate fashion. The inbuilt data masking capabilities of the Oracle Utilities Application Framework use a number of configuration elements: An algorithm, of type F1-MASK, is specified to configure the elements of the data masking, including the masking character, the number of suffix characters left unmasked, characters to ignore in the string, and the application service, security type and authorization levels applicable to the mask. A Data Masking Feature Configuration is created to define where the algorithm applies. The specification of the feature allows you to define the fields to mask using the configured algorithm. The algorithm can be attached to a schema field, table field, characteristic, search field and even a child record (such as an identifier). The appropriate user groups are then connected to the application services with the appropriate service types and level to indicate whether the masking applies to the user group or not. For example, say there is a field called CCNBR in the product which holds the credit card details. I would create an algorithm, say CMformatCC, to mask the credit card number with the last few digits left unmasked (as the standard in most systems dictates). I would specify on the Field Mask the following: field="CCNBR", alg="CMformatCC". On the algorithm CMformatCC, I would specify the mask, application service, security type and the authorization level at which users would see the credit card unmasked. To finish the configuration off and implement it, I would connect the appropriate user groups to the application service I specified, with the security type and the appropriate authorization level for that group. Whenever a user accesses the CCNBR field, it would then be masked on any of the maintenance screens, searches and other screens that use the CCNBR metadata definition, according to the user group that the user is a member of. Refer to the documentation supplied with the F1-MASK algorithm type entry for more examples of what is possible.

    Read the article

  • What methods are used to visualize a 4-dimensional Array?

    - by Atomiton
    An Array (a row of elements):

        [ ][ ][ ][ ][ ][ ]

    A 2-D Array (a table):

        [ ][ ][ ][ ][ ][ ]
        [ ][ ][ ][ ][ ][ ]
        [ ][ ][ ][ ][ ][ ]
        [ ][ ][ ][ ][ ][ ]

    A 3-D Array:

        // Imagine the above table as a cube (a table with depth)

    How does one visualize a 4-D array? The closest I can come is multiple cubes, so for int[,,,], [5,10,2,7] would be cube 5, row 10, column 2, layer (depth) 7. I'm not sure if this is the best way to visualize a 4-D array, though... and I'm not sure it's the best way to teach it... However, it does have the advantage of being extensible: a row of cubes (4-D), a table of cubes (5-D), a cube of cubes (a 6-D array). Cubes through time is another way that I can think of it. Am I on the right track here?
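
    In C# terms, the "multiple cubes" reading is just the leftmost index selecting the cube and the other three indexing within it, for example:

        // 5 cubes, each 10 rows x 2 columns x 7 layers deep.
        int[,,,] data = new int[5, 10, 2, 7];

        // Zero-based: cube 4, row 9, column 1, layer 6 -- the "cube 5,
        // row 10, column 2, layer 7" of the one-based description above.
        data[4, 9, 1, 6] = 42;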

    Read the article

  • Specified key is not a valid size for this algorithm...

    - by phenevo
    Hi, I have a problem with this code:

        RijndaelManaged rijndaelCipher = new RijndaelManaged();

        // Set key and IV
        rijndaelCipher.Key = Convert.FromBase64String("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz012345678912");
        rijndaelCipher.IV = Convert.FromBase64String("1234567890123456789012345678901234567890123456789012345678901234");

    It throws:

        Specified key is not a valid size for this algorithm.
        Specified initialization vector (IV) does not match the block size for this algorithm.

    What's wrong with these strings? Can you give me some example strings that work?
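
    The likely cause: those two Base64 strings each decode to 48 bytes, but Rijndael in .NET accepts keys of 16, 24 or 32 bytes, and the IV must match the 16-byte block size. A minimal working variant (example strings only, not real key material):

        using System.Security.Cryptography;
        using System.Text;

        var rijndael = new RijndaelManaged();

        // 32 ASCII bytes = a valid 256-bit key; 16 bytes = one 128-bit block.
        rijndael.Key = Encoding.ASCII.GetBytes("0123456789abcdef0123456789abcdef");
        rijndael.IV  = Encoding.ASCII.GetBytes("0123456789abcdef");

        // Or let the provider generate correctly sized random values:
        // rijndael.GenerateKey();
        // rijndael.GenerateIV();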

    Read the article

  • Is there an algorithm for converting quaternion rotations to Euler angle rotations?

    - by Will Baker
    Is there an existing algorithm for converting a quaternion representation of a rotation to an Euler angle representation? The rotation order for the Euler representation is known and can be any of the six permutations (i.e. xyz, xzy, yxz, yzx, zxy, zyx). I've seen algorithms for a fixed rotation order (usually the NASA heading, bank, roll convention) but not for arbitrary rotation order. Furthermore, because there are multiple Euler angle representations of a single orientation, this result is going to be ambiguous. This is acceptable (because the orientation is still valid, it just may not be the one the user is expecting to see); however, it would be even better if there was an algorithm which took rotation limits (i.e. the number of degrees of freedom and the limits on each degree of freedom) into account and yielded the 'most sensible' Euler representation given those constraints. I have a feeling this problem (or something similar) may exist in the IK or rigid body dynamics domains.

    Solved: I just realised that it might not be clear that I solved this problem by following Ken Shoemake's algorithms from Graphics Gems. I did answer my own question at the time, but it occurs to me it may not be clear that I did so. See the answer, below, for more detail.

    Just to clarify - I know how to convert from a quaternion to the so-called 'Tait-Bryan' representation - what I was calling the 'NASA' convention. This is a rotation order (assuming the convention that the 'Z' axis is up) of zxy. I need an algorithm for all rotation orders. Possibly the solution, then, is to take the zxy order conversion and derive from it five other conversions for the other rotation orders. I guess I was hoping there was a more 'overarching' solution. In any case, I am surprised that I haven't been able to find existing solutions out there.

    In addition, and this perhaps should be a separate question altogether, any conversion (assuming a known rotation order, of course) is going to select one Euler representation, but there are in fact many. For example, given a rotation order of yxz, the two representations (0,0,180) and (180,180,0) are equivalent (and would yield the same quaternion). Is there a way to constrain the solution using limits on the degrees of freedom? Like you do in IK and rigid body dynamics? i.e. in the example above, if there were only one degree of freedom about the Z axis, then the second representation can be disregarded.

    I have tracked down one paper which could be an algorithm in this pdf, but I must confess I find the logic and math a little hard to follow. Surely there are other solutions out there? Is arbitrary rotation order really so rare? Surely every major 3D package that allows skeletal animation together with quaternion interpolation (i.e. Maya, Max, Blender, etc) must have solved exactly this problem?
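
    For the fixed-order baseline the question mentions (not the arbitrary-order Shoemake solution), the common conversion for a unit quaternion (w, x, y, z) to Tait-Bryan angles in ZYX order looks like this sketch:

        // Unit quaternion -> (yaw about Z, pitch about Y, roll about X).
        static (double yaw, double pitch, double roll) ToEulerZyx(
            double w, double x, double y, double z)
        {
            double yaw = Math.Atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z));

            // Clamp the sine term: numeric drift can push it past +/-1, and
            // |s| = 1 is exactly the gimbal-lock case (pitch = +/-90 degrees).
            double s = 2 * (w * y - z * x);
            double pitch = Math.Abs(s) >= 1 ? Math.Sign(s) * (Math.PI / 2) : Math.Asin(s);

            double roll = Math.Atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y));
            return (yaw, pitch, roll);
        }

    The other five orders follow the same pattern from the corresponding rotation-matrix entries, which is essentially what the Shoemake Graphics Gems code tabulates.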

    Read the article

  • How do I make my encryption algorithm encrypt more than 128 bits?

    - by Ranhiru
    OK, now I have coded an implementation of AES-128 :) It is working fine. It takes in 128 bits, encrypts them, and returns 128 bits. So how do I enhance my function so that it can handle more than 128 bits? How do I make the encryption algorithm handle larger strings? Can the same algorithm be used to encrypt files? :) The function definition is:

        public byte[] Cipher(byte[] input) { }
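
    What turns a 128-bit block cipher into one that handles arbitrary lengths is a mode of operation plus padding. A minimal sketch of CBC mode with PKCS#7 padding layered on the Cipher function above (illustrative only; for production, prefer a vetted library implementation):

        // Pad to a whole number of 16-byte blocks, then chain each block by
        // XORing it with the previous ciphertext block (the IV for the first).
        byte[] EncryptCbc(byte[] data, byte[] iv)
        {
            // PKCS#7: always append 1..16 bytes, each equal to the pad length.
            int pad = 16 - (data.Length % 16);
            byte[] padded = new byte[data.Length + pad];
            Array.Copy(data, padded, data.Length);
            for (int i = data.Length; i < padded.Length; i++) padded[i] = (byte)pad;

            byte[] output = new byte[padded.Length];
            byte[] prev = iv;                   // 16 random bytes, sent alongside
            for (int offset = 0; offset < padded.Length; offset += 16)
            {
                byte[] block = new byte[16];
                for (int i = 0; i < 16; i++)
                    block[i] = (byte)(padded[offset + i] ^ prev[i]);

                prev = Cipher(block);           // the existing 128-bit function
                Array.Copy(prev, 0, output, offset, 16);
            }
            return output;
        }

    Files work the same way, reading 16-byte blocks in a loop; decryption runs the chain in reverse with the inverse cipher and strips the padding.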

    Read the article
