Search Results

Search found 2773 results on 111 pages for 'calculate'.


  • Basket Analysis with #dax in #powerpivot and #ssas #tabular

    - by Marco Russo (SQLBI)
    A few days ago I published a new article on the DAX Patterns web site describing how to implement Basket Analysis in DAX. This is a very classical topic, also covered in the many-to-many revolution white paper. It has also been discussed in several blog posts, listed here in historical order:
    - Simple Basket Analysis in DAX by Chris Webb
    - PowerPivot, basket analysis and the hidden many to many by Alberto Ferrari
    - Applied Basket Analysis in Power Pivot using DAX by Gerhard Brueckl
    As usual, in DAX Patterns we try to present the required DAX formulas in a way that is easy to adapt to specific models. We also try to show a good implementation from a performance point of view. Further optimizations are always possible in DAX; however, in order to keep the model simple to adapt to different scenarios, we avoid presenting optimizations that would require particular assumptions or restrictions on the data model.
    I hope you will find the Basket Analysis pattern useful. Even if you do not need it today, reading the DAX formula is a good exercise to check your knowledge of evaluation contexts in DAX. For example, describing how the following expression works is not a trivial task!
        [Orders with Both Products] :=
        CALCULATE (
            DISTINCTCOUNT ( Sales[SalesOrderNumber] ),
            CALCULATETABLE (
                SUMMARIZE ( Sales, Sales[SalesOrderNumber] ),
                ALL ( Product ),
                USERELATIONSHIP ( Sales[ProductCode], 'Filter Product'[Filter ProductCode] )
            )
        )
    The good news is that you can use the patterns even if you do not fully understand all the details of the DAX formulas you are using! Any feedback on this new pattern is very welcome.

    Read the article

  • Finding direction of travel in a world with wrapped edges

    - by crazy
    I need to find the direction of the shortest distance from one point in my 2D world to another point, where the edges are wrapped (like Asteroids etc.). I know how to find the shortest distance but am struggling to find which direction it is in. The shortest distance is given by:
        int rows = MapY;
        int cols = MapX;
        int d1 = abs(S.Y - T.Y);
        int d2 = abs(S.X - T.X);
        int dr = min(d1, rows-d1);
        int dc = min(d2, cols-d2);
        double dist = sqrt((double)(dr*dr + dc*dc));
    (The original post included an ASCII diagram of the world, drawn with : and - for the edges and showing a wrapped repeat of the world at the top right; it does not reproduce well here.) In that diagram the shortest distance from S is to the top-right repeat of T. But how do I calculate the direction in degrees from S to that repeated T? I know the positions of both S and T, but I suppose I need to find the position of the repeated T, and there is more than one. The world's coordinate system starts at 0,0 at the top left, and 0 degrees for the direction could start at west. It seems like this shouldn't be too hard, but I haven't been able to work out a solution. I hope someone can help. Any websites would be appreciated.
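
    A minimal sketch of one way to get the direction (Python, hypothetical names, assuming the top-left origin and west-based angle described above): compute the signed wrapped offset on each axis, then take the angle of that offset vector.
        import math

        def wrapped_direction(sx, sy, tx, ty, map_w, map_h):
            # signed shortest offset on each axis, allowing for wrap-around
            dx = (tx - sx + map_w / 2.0) % map_w - map_w / 2.0
            dy = (ty - sy + map_h / 2.0) % map_h - map_h / 2.0
            # y grows downward from the top-left origin; measure the angle
            # starting from west (the negative x direction)
            return math.degrees(math.atan2(-dy, -dx)) % 360.0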

    Read the article

  • How can I generate a view or projection matrix for OpenGL 3.+

    - by Ken
    I'm transitioning from OpenGL 2 to OpenGL 3.+ and to GLSL 1.5, and I'm trying to avoid using the deprecated features. My question is: how do we now generate the view or projection matrix? I was using the matrix stack to calculate the projection matrix for me:
        GLfloat ptr[16];
        gluPerspective(...);
        glGetFloatv(GL_MODELVIEW_MATRIX, ptr);
        // then pass ptr via a uniform to the shader
    But obviously the matrix stack is deprecated, so this approach is not an option going forward. I have the 'Red Book', 7th ed., which covers 3.0 & 3.1, and it still uses the deprecated matrix functions in its examples. I could write some utility code myself to generate the matrices, but I don't want to re-invent this particular wheel, especially when this functionality is required for every 3D graphics program. What is the accepted way to generate world, view & projection matrices for OpenGL? Is there an emerging 'standard' library for this? Or is there some other hidden (to me) functionality in OpenGL/GLSL which I have overlooked?
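
    For reference, a minimal sketch of the utility code involved (Python, row-major, hypothetical names; the formula mirrors what gluPerspective computed, and the result would need transposing, or the transpose flag of glUniformMatrix4fv, when uploaded, since OpenGL expects column-major data). In C++, libraries such as GLM provide this ready-made.
        import math

        def perspective(fovy_deg, aspect, z_near, z_far):
            # row-major perspective projection, same maths gluPerspective used
            f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
            return [
                [f / aspect, 0.0, 0.0, 0.0],
                [0.0, f, 0.0, 0.0],
                [0.0, 0.0, (z_far + z_near) / (z_near - z_far), 2.0 * z_far * z_near / (z_near - z_far)],
                [0.0, 0.0, -1.0, 0.0],
            ]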

    Read the article

  • Avoiding the Anaemic Domain - How to decide what single responsibility a class has

    - by thecapsaicinkid
    Even after reading a bunch I'm still falling into the same trap. I have a class, usually an entity, and I need to implement more than one similar operation on this type. It feels wrong to (seemingly arbitrarily) choose one of these operations to belong inside the entity and push the others out to a separate class, so I end up pushing all operations to service classes and am left with an anaemic domain. As a crude example, imagine the typical Employee class with numeric properties holding how many paid days the employee is entitled to for both sickness and holiday, and a collection of days taken for each:
        public class Employee
        {
            public int PaidHolidayAllowance { get; set; }
            public int PaidSicknessAllowance { get; set; }
            public IEnumerable<Holiday> Holidays { get; set; }
            public IEnumerable<SickDays> SickDays { get; set; }
        }
    I want two operations: one to calculate remaining holiday, another for remaining paid sick days. It seems strange to include, say, CalculateRemainingHoliday() in the Employee class and bump CalculateRemainingPaidSick() to some PaidSicknessCalculator class. I would end up with a PaidSicknessCalculator, a RemainingHolidayCalculator and the anaemic Employee entity seen above. The other alternative would be to put both operations in the Employee class and kick Single Responsibility to the curb, which doesn't make for particularly maintainable code. I suppose the Employee class should have some initialisation/validation logic (not accepting negative allowances, etc.). So maybe I just stick to basic initialisation and validation in the entities themselves and be happy with my separate calculator classes. Or maybe I should be asking myself whether the Anaemic Domain is actually causing me any tangible problems with my code.

    Read the article

  • How to create single integer index value based on two integers where first is unlimited?

    - by Jan Doggen
    I have table data containing an integer value X ranging from 1 to an unknown upper bound, and an integer value Y ranging from 1 to 9. The data need to be presented in order 'X then Y'. For one visual component I can set multiple index names: X;Y. But for another component I need a one-dimensional integer value as index (sort order). If X were limited to an upper bound of, say, 100, the one-dimensional value could simply be X*100 + Y. If the one-dimensional value could have been a real, it could be X + Y/10. But if I want to keep X unlimited, is there a way to calculate a single integer 'indexing' value from X and Y?
    [Added] Background information: I have a Gantt/TreeList component where the tasks are ordered on a TaskIndex integer. This does not need to be a real database field; I can make it a calculated field in the underlying client dataset. My table data is e.g. as follows:
        ID  Baseline  ParentID
         1      0         0    (task)
         5      2         1    (baseline)
         8      1         1    (baseline)
         9      0         0    (task)
        12      0         0    (task)
        16      1        12    (baseline)
    Task 1 has two baselines numbered 1 and 2 (IDs 8 and 5). Task 9 has no baselines. Task 12 has one baseline numbered 1 (ID 16). Baselines are numbered 1-9 (the Y variable from my question); 0 or null identifies a task. IDs are unlimited (the X variable). The user plays with the visibility of baselines, e.g. he wants to see all tasks with all baselines labeled 1. This is done by updating a filter on the table. Right now I constantly have to recalculate TaskIndex after changing the filter (looping through records). It would be nice if TaskIndex could be calculated on the fly for each record, knowing only the data in the current record (I work in Delphi, where a client dataset has an OnCalcFields event handler that is triggered for each record when necessary). I have no control over the inner workings of the visual component.
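
    A small sketch of the arithmetic already described above (Python, hypothetical names), assuming Y really never exceeds 9: multiplying X by anything larger than the maximum Y keeps the combined value sorted by 'X then Y', however large X grows (within the integer type's range).
        def task_index(x, y):
            # Y is known to stay in 0..9, so one decimal digit is enough for it;
            # X remains effectively unlimited within the integer type's range
            return x * 10 + y

        # e.g. task_index(5, 2) -> 52; values sort by X first, then Y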

    Read the article

  • Toon/cel shading with variable line width?

    - by Nick Wiggill
    I see a few broad approaches out there to doing cel shading:
    - Duplication & enlargement of the model with flipped normals (not an option for me)
    - Sobel filter / fragment shader approaches to edge detection
    - Stencil buffer approaches to edge detection
    - Geometry (or vertex) shader approaches that calculate face and edge normals
    Am I correct in assuming the geometry-centric approach gives the greatest amount of control over lighting and line thickness, e.g. for terrain where you might see the silhouette line of a hill merging gradually into a plain? What if I didn't need per-pixel lighting on my terrain surfaces? (And I probably won't, as I plan to use cell-based vertex- or texturemap-based lighting/shadowing.) Would I then be better off sticking with the geometry-type approach, or going for a screen-space / fragment approach instead to keep things simpler? If so, how would I get the "inking" of hills within the mesh silhouette, rather than only the outline of the entire mesh (with no "ink" details inside that outline)? Lastly, is it possible to cheaply emulate the flipped-normals approach using a geometry shader? Is that exactly what the GS approaches do? What I want: varying line thickness with intrusive lines inside the silhouette. What I don't want: (images in the original post).
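
    For the screen-space option, a CPU-side sketch of the Sobel idea (Python with NumPy, hypothetical names; in practice this runs in a fragment shader over the depth or normal buffer): strong depth gradients mark where ink would be drawn, which is also how creases inside the silhouette get picked up rather than just the outer outline.
        import numpy as np

        def sobel_ink_mask(depth, threshold=0.02):
            # Sobel gradient magnitude over a depth buffer (H x W array);
            # pixels above the threshold are where an "ink" line would be drawn
            kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
            ky = kx.T
            gx = np.zeros_like(depth)
            gy = np.zeros_like(depth)
            for dy in range(3):
                for dx in range(3):
                    shifted = np.roll(np.roll(depth, dy - 1, axis=0), dx - 1, axis=1)
                    gx += kx[dy, dx] * shifted
                    gy += ky[dy, dx] * shifted
            return np.hypot(gx, gy) > threshold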

    Read the article

  • What are the criteria for judging web development process efficiency?

    - by Ahmed safan
    I'm working as a web developer and I want to be able to determine whether I'm efficient. Does this include how long it takes to accomplish tasks such as:
    - Server-side code for the site logic, in one language or several (PHP, ASP, ASP.NET)
    - Client-side code like JavaScript with jQuery for Ajax, menus and other interactivity
    - Page layout, HTML, CSS (colors, fonts (but I have no artistic sense!))
    - The needs of the site and how it will work (planning)
    How can I judge how long it will take to complete a website? The site has a CMS for adding and editing news, products, and articles on the company's experience. They can also edit team work, add recreational activities and a logo gallery with compressed PSD downloads, and send messages to cPanel and to email. You are starting from scratch except for jQuery and PHPMailer. How can I estimate how long the job will take, and how can I calculate the time required to finish any new project? I'm sorry for the many scattered questions, but this is my first experience and I want to benefit from the great experience of those who have it.

    Read the article

  • Getting an OBB out of another OBB?

    - by Milo
    I'm working on collision resolution for my game. I just need a good way to get an object out of another object if it gets stuck, in this case a car. Here is a typical scenario: the red car is inside the green object. How do I correctly get it out so the car can slide along the edge of the object as it should? I tried:
        if(buildings.size() > 0)
        {
            Entity e = buildings.get(0);
            Vector2D vel = new Vector2D();
            vel.x = vehicle.getVelocity().x;
            vel.y = vehicle.getVelocity().y;
            vel.normalize();
            while(vehicle.getRect().overlaps(e.getRect()))
            {
                vehicle.setCenter(vehicle.getCenterX() - vel.x * 0.1f, vehicle.getCenterY() - vel.y * 0.1f);
            }
            colided = true;
        }
    But that does not work too well. Is there some sort of vector I could calculate to use as the vector to move the car away from the object? Thanks
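
    One common answer to "which vector do I move along" is a minimum translation vector from a separating-axis test. A minimal sketch of the idea for the simpler axis-aligned case (Python, hypothetical names); a full OBB version would project both boxes onto each box's two local axes in the same way:
        def aabb_push_out(ax, ay, ahw, ahh, bx, by, bhw, bhh):
            # centres (ax, ay) / (bx, by) and half-extents; returns the smallest
            # translation that moves box A out of box B, or (0, 0) if separated
            dx = ax - bx
            overlap_x = (ahw + bhw) - abs(dx)
            if overlap_x <= 0:
                return (0.0, 0.0)
            dy = ay - by
            overlap_y = (ahh + bhh) - abs(dy)
            if overlap_y <= 0:
                return (0.0, 0.0)
            if overlap_x < overlap_y:
                return (overlap_x if dx > 0 else -overlap_x, 0.0)
            return (0.0, overlap_y if dy > 0 else -overlap_y)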

    Read the article

  • Automatic Appointment Conflict Resolution

    - by Thomas
    I'm trying to figure out an algorithm for resolving appointment times. I currently have a naive algorithm that pushes down conflicting appointments repeatedly until there are no more conflicts.
        # The appointment list is always sorted on start time
        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 11:00 -> 12:30>,
            <Appointment: 13:00 -> 14:00>,
            <Appointment: 13:30 -> 14:30>,
        ]
    Constraints are that appointments cannot be after 15:00 and cannot be before 9:00. This is the naive algorithm:
        for i, app in enumerate(appointment_list):
            for possible_conflict in appointment_list[i+1:]:
                if possible_conflict.start < app.end:
                    difference = app.end - possible_conflict.start
                    possible_conflict.end += difference
                    possible_conflict.start += difference
                else:
                    break
    This results in the following resolution, which obviously breaks those constraints, and the last appointment will have to be pushed to the following day.
        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 12:00 -> 13:30>,
            <Appointment: 13:30 -> 14:30>,
            <Appointment: 14:30 -> 15:30>,
        ]
    Obviously this is sub-optimal: it performs 3 appointment moves when the conflict could have been resolved with one. If we were able to push the first appointment backwards, we could avoid moving all the subsequent appointments down. I'm thinking that there should be a sort of edit-distance approach that would calculate the least number of appointments that should be moved in order to resolve the scheduling conflict, but I can't get a handle on the methodology. Should it be a breadth-first or depth-first solution search? When do I know if the solution is "good enough"?

    Read the article

  • importing animations in Blender, weird rotations/locations

    - by user975135
    This is for the Blender 2.6 API. There are two problems:
    1. When I import a single animation frame from my animation file into Blender, all bones look fine. But when I import multiple frames (all of them), only the first one looks right; it seems like newer frames are affected by older ones, so you get slightly off positions/rotations. This is true both when assigning PoseBone.matrix and PoseBone.matrix_basis.
        bone_index = 0
        # for each frame:
        for frame_index in range(frame_count):
            # for each pose bone: add a key
            for bone_name in bone_names:  # "bone_names" - a list of bone names I got earlier
                # "animation_matrices" - a nested list of matrices generated from reading a file
                pose.bones[bone_name].matrix = animation_matrices[frame_index][bone_index]
                # create the 'keys' for the Action from the poses
                pose.bones[bone_name].keyframe_insert('location', frame = frame_index+1)
                pose.bones[bone_name].keyframe_insert('rotation_euler', frame = frame_index+1)
                pose.bones[bone_name].keyframe_insert('scale', frame = frame_index+1)
                bone_index += 1
            bone_index = 0
    Again, it seems like previous frames are affecting later ones, because if I import a single frame from the middle of the animation, it looks fine.
    2. I can't assign armature-space animation matrices read from a file to a skeleton with hierarchy (parenting). In Blender 2.4 you could just assign them to PoseBone.poseMatrix and bones would deform perfectly whether the bones had a hierarchy or none at all. In Blender 2.6 there are PoseBone.matrix_basis and PoseBone.matrix. While matrix_basis is relative to the parent bone, matrix isn't; the API says it's in object space. So it should have worked, but doesn't. So I guess we need to calculate a local-space matrix from our armature-space animation matrices from the files. I tried multiplying PoseBone.matrix with PoseBone.parent.matrix.inverted(), in both possible orders, with no luck; I still get weird deformations.

    Read the article

  • Split a 2D scene in layers or have a z coordinate

    - by Bane
    I am in the process of writing a 2D game engine, and a dilemma has emerged. Let me explain the situation. I have a Scene class to which various objects can be added (Drawable, ParticleEmitter, Light2D, etc.), and as this is a 2D scene, things will obviously be drawn over each other. My first thought was that I could have basic add and remove methods, but I soon realized that then there would be no way for the programmer to control the order in which things were drawn. So I came up with two options, each with its pros and cons.
    A) Split the scene into layers. By that I mean instead of having the scene be a container of objects, have it be a container of layers, which are in turn the containers of objects.
    B) Have some kind of z-coordinate, and then have the scene sorted so objects with lower z get drawn first.
    Option A is pretty solid, but the problem is with the lights. In what layer do I add a light? Does it work cross-layer? On all bottom layers? And I still need the z coordinate to calculate the shadow! Option B would require me to change all my code from having Vector2D positions to some kind of class that inherits from Vector2D and adds a z coordinate to it (I don't want it to be a Vector3D because I still need all the same methods the 2D kind has, just with .z clamped on). Am I missing something? Is there an alternative to these methods? I'm working in JavaScript, if that makes a difference.

    Read the article

  • How to indicate reliability when reporting availability of competencies

    - by Jan Doggen
    We have employees with competencies:
    - Pete: welder, carpenter
    - Melissa: carpenter
    Assume they both work 40 hours/week and have not yet been assigned work. We need to report the availability of these competencies, expressed in hours. As far as I can see, we can report this in two ways:
    Method A. When someone has multiple competencies, count them both:
        Welder    40 hours
        Carpenter 80 hours
    Method B. When someone has multiple competencies, count an equal division of hours for each:
        Welder    20 hours
        Carpenter 60 hours
    Method A has our preference: a good planner will know to plan the least available competency first. If 30 hours of welding is planned, we will be left with 10 hours of welder and 50 hours of carpenter. Method B has the disadvantage that the planner thinks he cannot plan the job when 30 hours of welding is required. However, if we report this way we would like to give an estimate of the reliability of the numbers for each competency, i.e. by how much are they over-reported? In my example A, would I say that carpenter is 100% over-reported, or 50%, or maybe another number? How would I calculate this for large numbers of competencies? I'm sure we are not the first ones dealing with this; is there a 'usual' way of doing this in planning? Additionally: would there be an even better method than A or B? Optionally, we also have a preference order of competencies (like: use him/her in this order); Pete could be 1. welder, 2. carpenter. Does this introduce new options?
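
    A small sketch of the two counting methods described above (Python, hypothetical data layout); it reproduces the 40/80 and 20/60 figures from the example:
        from collections import defaultdict

        def availability(employees, method="A"):
            # employees: list of (hours, [competencies]);
            # method A counts full hours per competency, method B splits them evenly
            totals = defaultdict(float)
            for hours, skills in employees:
                share = hours if method == "A" else hours / len(skills)
                for skill in skills:
                    totals[skill] += share
            return dict(totals)

        # availability([(40, ["welder", "carpenter"]), (40, ["carpenter"])], "A")
        # -> {'welder': 40, 'carpenter': 80};  method "B" -> {'welder': 20, 'carpenter': 60}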

    Read the article

  • Resultant Vector Algorithm for 2D Collisions

    - by John
    I am making a Pong-based game where a puck hits a paddle and bounces off. Both the puck and the paddles are circles. I came up with an algorithm to calculate the resultant vector of the puck once it meets a paddle. The game seems to function correctly but I'm not entirely sure my algorithm is correct. Here are my variables for the algorithm:
    Given:
    - velocity = the magnitude of the puck's velocity before the collision
    - x, y = the coordinates of the puck
    - moveX, moveY = the horizontal and vertical speed of the puck
    - otherX, otherY = the coordinates of the paddle
    - piece.horizontalMomentum, piece.verticalMomentum = the horizontal and vertical speed of the paddle before it hits the puck
    - slope = the direction, in radians, of the puck's velocity
    - distX, distY = the horizontal and vertical distance between the center of the puck and the center of the paddle
    The algorithm solves for:
    - impactAngle = the angle of impact, in radians
    - newSpeedX, newSpeedY = the speed of the resultant vector in the X and Y directions
    Here is the code for my algorithm:
        int otherX = piece.x;
        int otherY = piece.y;
        double velocity = Math.sqrt((moveX * moveX) + (moveY * moveY));
        double slope = Math.atan(moveX / moveY);
        int distX = x - otherX;
        int distY = y - otherY;
        double impactAngle = Math.atan(distX / distY);
        double newAngle = impactAngle + slope;
        int newSpeedX = (int)(velocity * Math.sin(newAngle)) + piece.horizontalMomentum;
        int newSpeedY = (int)(velocity * Math.cos(newAngle)) + piece.verticalMomentum;
    For those who are not program-savvy, here it is simplified:
        velocity = √(moveX² + moveY²)
        slope = arctan(moveX / moveY)
        distX = x - otherX
        distY = y - otherY
        impactAngle = arctan(distX / distY)
        newAngle = impactAngle + slope
        newSpeedX = velocity * sin(newAngle) + piece.horizontalMomentum
        newSpeedY = velocity * cos(newAngle) + piece.verticalMomentum
    My question: is this algorithm correct? Is there an easier/simpler way to do what I'm trying to do?

    Read the article

  • Building Queries Systematically

    - by Jeremy Smyth
    The SQL language is a bit like a toolkit for data. It consists of lots of little fiddly bits of syntax that, taken together, allow you to build complex edifices and return powerful results. For the uninitiated, the many tools can be quite confusing, and it's sometimes difficult to decide how to go about the process of building non-trivial queries, that is, queries that are more than a simple SELECT a, b FROM c;

    A System for Building Queries

    When you're building queries, you could use a system like the following:

    1. Decide which fields contain the values you want to use in your output, and how you wish to alias those fields:
       - Values you want to see in your output.
       - Values you want to use in calculations. For example, to calculate margin on a product, you could calculate price - cost and give it the alias margin.
       - Values you want to filter with. For example, you might only want to see products that weigh more than 2Kg or that are blue. The weight or colour columns could contain that information.
       - Values you want to order by. For example you might want the most expensive products first, and the least last. You could use the price column in descending order to achieve that.
    2. Assuming the fields you've picked in point 1 are in multiple tables, find the connections between those tables:
       - Look for relationships between tables and identify the columns that implement those relationships. For example, the Orders table could have a CustomerID field referencing the same column in the Customers table.
       - Sometimes the problem doesn't use relationships but rests on a different field; sometimes the query is looking for a coincidence of fact rather than a foreign key constraint. For example you might have sales representatives who live in the same state as a customer; this information is normally not used in relationships, but if your query is for organizing events where sales representatives meet customers, it's useful in that query. In such a case you would record the names of the columns at either end of such a connection.
       - Sometimes relationships require a bridge, a junction table that wasn't identified in point 1 above but is needed to connect the tables you need; these are used in "many-to-many relationships". In these cases you need to record the columns in each table that connect to similar columns in other tables.
    3. Construct a join or series of joins using the fields and tables identified in point 2 above. This becomes your FROM clause.
    4. Filter using some of the fields in point 1 above. This becomes your WHERE clause.
    5. Construct an ORDER BY clause using values from point 1 above that are relevant to the desired order of the output rows.
    6. Project the result using the remainder of the fields in point 1 above. This becomes your SELECT clause.

    A Worked Example

    Let's say you want to query the world database to find a list of countries (with their capitals) and the change in GNP, using the difference between the GNP and GNPOld columns, and that you only want to see results for countries with a population greater than 100,000,000. Using the system described above, we could do the following:

    1. The Country.Name and City.Name columns contain the name of the country and city respectively. The change in GNP comes from the calculation GNP - GNPOld; both those columns are in the Country table. This calculation is also used to order the output, in descending order. To see only countries with a population greater than 100,000,000, you need the Population field of the Country table. There is also a Population field in the City table, so you'll need to specify the table name to disambiguate. You can also represent a number like 100 million as 100e6 instead of 100000000 to make it easier to read.
    2. Because the fields come from the Country and City tables, you'll need to join them. There are two relationships between these tables: each city is hosted within a country, and the city's CountryCode column identifies that country; also, each country has a capital city, whose ID is contained within the country's Capital column. This latter relationship is the one to use, so the relevant columns and the condition that uses them are represented by the following FROM clause:
           FROM Country JOIN City ON Country.Capital = City.ID
    3. The statement should only return countries with a population greater than 100,000,000. Country.Population is the relevant column, so the WHERE clause becomes:
           WHERE Country.Population > 100e6
    4. To sort the result set in reverse order of difference in GNP, you could use either the calculation or its position in the output (it's the third column):
           ORDER BY GNP - GNPOld    or    ORDER BY 3
    5. Finally, project the columns you wish to see by constructing the SELECT clause:
           SELECT Country.Name AS Country, City.Name AS Capital,
                  GNP - GNPOld AS `Difference in GNP`

    The whole statement ends up looking like this:

        mysql> SELECT Country.Name AS Country, City.Name AS Capital,
            ->        GNP - GNPOld AS `Difference in GNP`
            -> FROM Country JOIN City ON Country.Capital = City.ID
            -> WHERE Country.Population > 100e6
            -> ORDER BY 3 DESC;
        +--------------------+------------+-------------------+
        | Country            | Capital    | Difference in GNP |
        +--------------------+------------+-------------------+
        | United States      | Washington |         399800.00 |
        | China              | Peking     |          64549.00 |
        | India              | New Delhi  |          16542.00 |
        | Nigeria            | Abuja      |           7084.00 |
        | Pakistan           | Islamabad  |           2740.00 |
        | Bangladesh         | Dhaka      |            886.00 |
        | Brazil             | Brasília   |         -27369.00 |
        | Indonesia          | Jakarta    |        -130020.00 |
        | Russian Federation | Moscow     |        -166381.00 |
        | Japan              | Tokyo      |        -405596.00 |
        +--------------------+------------+-------------------+
        10 rows in set (0.00 sec)

    Queries with Aggregates and GROUP BY

    While this system might work well for many queries, it doesn't cater for situations where you have complex summaries and aggregation. For aggregation, you'd start with choosing which columns to view in the output, but this time you'd construct them as aggregate expressions. For example, you could look at the average population, or the count of distinct regions. You could also perform more complex aggregations, such as the average GNP per head of population, calculated as AVG(GNP/Population). Having chosen the values to appear in the output, you must choose how to aggregate those values. A useful way to think about this is that every aggregate query is of the form X, Y per Z. The SELECT clause contains the expressions for X and Y, as already described, and Z becomes your GROUP BY clause. Ordinarily you would also include Z in the query so you can see how you are grouping, so the output becomes Z, X, Y per Z.

    As an example, consider the following, which shows a count of countries and the average population per continent:

        mysql> SELECT Continent, COUNT(Name), AVG(Population)
            -> FROM Country
            -> GROUP BY Continent;
        +---------------+-------------+-----------------+
        | Continent     | COUNT(Name) | AVG(Population) |
        +---------------+-------------+-----------------+
        | Asia          |          51 |   72647562.7451 |
        | Europe        |          46 |   15871186.9565 |
        | North America |          37 |   13053864.8649 |
        | Africa        |          58 |   13525431.0345 |
        | Oceania       |          28 |    1085755.3571 |
        | Antarctica    |           5 |          0.0000 |
        | South America |          14 |   24698571.4286 |
        +---------------+-------------+-----------------+
        7 rows in set (0.00 sec)

    In this case, X is the number of countries, Y is the average population, and Z is the continent. Of course, you could have more fields in the SELECT clause, and more fields in the GROUP BY clause, as you require. You would also normally alias columns to make the output more suited to your requirements.

    More Complex Queries

    Queries can get considerably more interesting than this. You could also add joins and other expressions to your aggregate query, as in the earlier part of this post. You could have more complex conditions in the WHERE clause. Similarly, you could use queries such as these in subqueries of yet more complex super-queries. Each technique becomes another tool in your toolbox, until before you know it you're writing queries across 15 tables that take two pages to write out. But that's for another day...

    Read the article

  • Find points whose pairwise distances approximate a given distance matrix

    - by Stephan Kolassa
    Problem: I have a symmetric distance matrix with entries between zero and one, like this one:
        D = ( 0.0  0.4  0.0  0.5 )
            ( 0.4  0.0  0.2  1.0 )
            ( 0.0  0.2  0.0  0.7 )
            ( 0.5  1.0  0.7  0.0 )
    I would like to find points in the plane that have (approximately) the pairwise distances given in D. I understand that this will usually not be possible with strictly correct distances, so I would be happy with a "good" approximation. My matrices are smallish, no more than 10x10, so performance is not an issue.
    Question: Does anyone know of an algorithm to do this?
    Background: I have sets of probability densities between which I calculate Hellinger distances, which I would like to visualize as above. Each set contains no more than 10 densities (see above), but I have a couple of hundred sets.
    What I did so far: I did consider posting at math.SE, but looking at what gets tagged as "geometry" there, it seems like this kind of computational geometry question would be more on-topic here. If the community thinks this should be migrated, please go ahead. This looks like a straightforward problem in computational geometry, and I would assume that anyone involved in clustering might be interested in such a visualization, but I haven't been able to google anything. One simple approach would be to randomly plonk down points and perturb them until the distance matrix is close to D, e.g., using Simulated Annealing, or to run a Genetic Algorithm. I have to admit that I haven't tried that yet, hoping for a smarter way. One specific operationalization of a "good" approximation in the sense above is Problem 4 in the Open Problems section here, with k=2. Now, while finding an algorithm that is guaranteed to find the minimum l1-distance between D and the resulting distance matrix may be an open question, it still seems possible that there is at least some approximation to this optimal solution. If I don't get an answer here, I'll mail the gentleman who posed that problem and ask whether he knows of any approximation algorithm (and post any answer I get here).
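
    A minimal sketch of the "randomly plonk down points and perturb" idea mentioned above (Python, hypothetical names; a plain hill-climb on the l1 stress rather than full Simulated Annealing):
        import math
        import random

        def layout_points(D, iters=20000, step=0.05):
            # random 2D points; keep a random nudge whenever it lowers the
            # l1 difference between actual pairwise distances and D
            n = len(D)
            pts = [[random.random(), random.random()] for _ in range(n)]

            def stress(p):
                s = 0.0
                for i in range(n):
                    for j in range(i + 1, n):
                        d = math.hypot(p[i][0] - p[j][0], p[i][1] - p[j][1])
                        s += abs(d - D[i][j])
                return s

            best = stress(pts)
            for _ in range(iters):
                i, k = random.randrange(n), random.randrange(2)
                old = pts[i][k]
                pts[i][k] += random.uniform(-step, step)
                new = stress(pts)
                if new < best:
                    best = new
                else:
                    pts[i][k] = old
            return pts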

    Read the article

  • Calculating a child's Position, Rotation and Scale values?

    - by Sergio Plascencia
    I am making my own game editor (just for fun). Anyway, I have a problem that I have spent several days trying to resolve, unsuccessfully. Here goes... I have an object "A":
        Position: (3, 3, 3), Rotation: (45, 10, 0), Scale: (1, 2, 2.5)
    and an object "B":
        Position: (1, 1, 1), Rotation: (10, 34, 18), Scale: (1.5, 2, 1)
    I now make a parent/child relationship: "B" is a child of "A".
        A
        |--B
    When I create the relationship I need to re-calculate the child's ("B") position, rotation and scale such that it maintains its current position, rotation and scale (its location in the world). So the child position of "B" would now be (-2, -2, -2), since "A" is now its center and (-2, -2, -2) keeps the object in the same position. I think I have the position and scale figured out, but not the rotation. So, trying to figure out what to do, I opened Unity and ran the same example, and I noticed that when making an object a child object, the child object did not move at all but had its position, rotation and scale values changed (relative to the parent). For example:
        Unity (parent object "A"): Position: (0, 0, 0), Rotation: (45, 10, 0), Scale: (1, 1, 1)
        Unity (child object "B"): Position: (0, 0, 0), Rotation: (0, 0, 0), Scale: (1, 1, 1)
    When making "B" a child of "A", the child object's rotation values become:
        X: -44.13605, Y: -14.00195, Z: 9.851074
    If I plug the same values into my editor (into the child "B" rotation X, Y, Z values) the object does not move at all. So I basically need to know how Unity arrived at those rotation values for the child (what are the calculations?). If you can help and include all the equations for the position, rotation and scale, I can double-check that I am doing it correctly, but with the rotation I really need help. Thanks!
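
    A small sketch of the general relationship involved (Python with NumPy; this is the standard transform-hierarchy identity, not Unity's internal code): the child's local matrix is the parent's world matrix inverted, multiplied by the child's world matrix, and the local position, rotation and scale fall out of decomposing that matrix.
        import numpy as np

        def child_local_matrix(parent_world, child_world):
            # 4x4 world matrices built from each object's position/rotation/scale;
            # child_world = parent_world @ child_local
            # =>  child_local = inverse(parent_world) @ child_world
            return np.linalg.inv(parent_world) @ child_world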

    Read the article

  • Is the separation of program logic and presentation layer going too far?

    - by Timwi
    In a Drupal programming guide, I noticed this sentence: "The theme hook receives the total number of votes and the number of votes for just that item, but the template wants to display a percentage. That kind of work shouldn't be done in a template; instead, the math is performed here." The math necessary to calculate a percentage from a total and a number is (number/total)*100. Is this application of two basic arithmetic operators within a presentation layer already too much? Is the maintenance of the entire system severely compromised by this amount of mathematics? The WPF (Windows Presentation Framework) and its UI mark-up language, XAML, seem to go to similar extremes. If you try to so much as add two numbers in the View (the presentation layer), you have committed a cardinal sin. Consequently, XAML has no operators for any arithmetic whatsoever. Is this ultra-strict separation really the holy grail of programming? What are the significant gains to be had from taking the separation to such extremes?

    Read the article

  • How to visually "connect" skybox edges with terrain model

    - by David
    I'm working on a simple airplane game where I use a skybox cube rendered with the depth test disabled. Very close to the bottom side of the skybox is my terrain model. What bothers me is that the terrain is not connected to the skybox bottom. This is not visible while the plane flies low, but as it gains some altitude, the terrain looks smaller because of the perspective. Since the skybox center is always the same as the camera position, the skybox moves with the plane, but the terrain recedes into the distance. OK, I think you understand the problem. My question is how to fix it. It's an airplane game, so limiting the maximum altitude is not possible. I thought about some way of stretching the terrain to always cover the whole bottom side of the skybox cube, but that doesn't feel right, and I don't even know how I would calculate new terrain dimensions every frame. Here are some screenshots of games where you can clearly see the problem (I cannot post images yet, so links only):
    - darker brown is the skybox bottom here: http://i.stack.imgur.com/iMsAf.png
    - untextured brown is the skybox bottom here: http://i.stack.imgur.com/9oZr7.png

    Read the article

  • Handling extremely large numbers in a language which can't handle them?

    - by Mallow
    I'm trying to think about how I would go about doing calculations on extremely large numbers (to infinity - integers, no floats) if the language construct is incapable of handling numbers larger than a certain value. I am sure I am not the first nor the last to ask this question, but the search terms I am using aren't giving me an algorithm to handle those situations. Rather, most suggestions offer a language change or variable change, or talk about things that seem irrelevant to my search. So I need a little guidance. I would sketch out an algorithm like this:
    - Determine the max length of the integer variable for the language.
    - If a number is more than half the max length of the variable, split it into an array (to give a little play room).
    - Array order: [0] = the digits most to the right, ..., [n-max] = the digits most to the left. E.g. for the number 29392023: Array[0]: 23, Array[1]: 20, Array[2]: 39, Array[3]: 29.
    Since I established half the length of the variable as the cut-off point, I can then calculate the ones, tens, hundreds, etc. place via the halfway mark, so that if a variable's max length was 10 digits (0 to 9999999999), halving that to five digits gives me some play room. So if I add or multiply, I can have a variable checker function that sees that the sixth digit (from the right) of array[0] is in the same place as the first digit (from the right) of array[1]. Division and subtraction have their own issues which I haven't thought about yet. I would like to know about the best implementations for supporting numbers larger than the program natively can handle.
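
    A minimal sketch of addition over such chunk arrays (Python, hypothetical names; index 0 holds the least-significant chunk, as in the example above):
        def add_chunked(a, b, base=100):
            # schoolbook addition: add chunk by chunk, carrying into the next one
            result, carry = [], 0
            for i in range(max(len(a), len(b))):
                total = (a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0) + carry
                result.append(total % base)
                carry = total // base
            if carry:
                result.append(carry)
            return result

        # 29392023 is stored as [23, 20, 39, 29];
        # add_chunked([23, 20, 39, 29], [99]) -> [22, 21, 39, 29], i.e. 29392122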

    Read the article

  • Pre game loading time vs. in game loading time

    - by Keeper
    I'm developing a game which includes a randomly generated maze. There are some AI creatures lurking in the maze, and I want them to move along paths that follow the maze's shape. There are two ways I could implement this: the first (which I used) is to calculate several of the wanted patrol paths once the maze is created; the second is to calculate a path only when it is needed, i.e. when a creature starts patrolling it. My main concern is loading times. If I calculate many paths when the maze is created, the pre-loading time is a bit long, so I thought about calculating them when needed. At the moment the game is not 'heavy', so calculating paths mid-game is not noticeable, but I'm afraid it will be once the game gets more complicated. Any suggestions, comments or opinions would help.
    Edit: As it stands, let p be the number of pre-calculated paths; a creature has a probability of 1/p of taking a new path (which means a path calculation) instead of an existing one. A creature does not start its patrol until the path is fully calculated, of course, so there is no need to worry about it getting killed in the process.
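
    A tiny sketch of the calculate-on-demand option (Python, hypothetical names): compute a patrol path the first time it is requested and cache it, so the cost is spread over the game session instead of being paid up front.
        path_cache = {}

        def get_patrol_path(start, goal, compute_path):
            # compute_path(start, goal) is whatever pathfinder the maze uses;
            # the result is memoized so each patrol route is only computed once
            key = (start, goal)
            if key not in path_cache:
                path_cache[key] = compute_path(start, goal)
            return path_cache[key]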

    Read the article

  • Weird rotation problem

    - by Phil
    I'm creating a simple tank game. No matter what I do, the turret keeps facing the target with its side. I just can't figure out how to turn it 90 degrees in Y once so it faces the target correctly. I've checked the pivot in Maya and it doesn't matter how I change it. This is the code I use to calculate how to face the target:
        void LookAt()
        {
            var forwardA = transform.forward;
            var forwardB = (toLookAt.transform.position - transform.position);
            var angleA = Mathf.Atan2(forwardA.x, forwardA.z) * Mathf.Rad2Deg;
            var angleB = Mathf.Atan2(forwardB.x, forwardB.z) * Mathf.Rad2Deg;
            var angleDiff = Mathf.DeltaAngle(angleA, angleB);
            //print(angleDiff.ToString());
            if (angleDiff > 20)
            {
                //Rotate to
                transform.Rotate(new Vector3(0, (-turretSpeed * Time.deltaTime), 0));
                //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            }
            else if (angleDiff < 20)
            {
                transform.Rotate(new Vector3(0, (turretSpeed * Time.deltaTime), 0));
                //transform.rotation = new Quaternion(transform.rotation.x, transform.rotation.y + adjustment, transform.rotation.z, transform.rotation.w);
            }
            else
            {
            }
        }
    I'm using Unity3D and would appreciate any help I can get! Thanks!

    Read the article

  • Ray Tracing concerns: Efficient Data Structure and Photon Mapping

    - by Grieverheart
    I'm trying to build a simple ray tracer for specific target scenes. An example of such a scene can be seen below. I'm unsure which accelerating data structure would be most efficient in this case, since all objects are touching but, on the other hand, the scene is uniform. The objects in my ray tracer are stored as collections of triangles, so I also have access to individual triangles. Also, when trying to find the bounding box of the scene, how should infinite planes be handled? Should one instead use the viewing frustum to calculate the bounding box? A few other questions I have are about photon mapping. I've read the original paper by Jensen and much more material. In the compact data structure for the photon they introduce, photon power is stored as 4 chars, which from my understanding is 3 chars for color and 1 for flux. But I don't understand how 1 char is enough to store a flux of the order of 1/n, where n is the number of photons (I'm also a bit confused about flux vs. power). My other question about photon mapping is whether it would be more efficient in my case to store photons per object (or even per triangle of an object) instead of using a balanced kd-tree. The same question applies to the bounding box of the scene, but for photon mapping: how should one find a bounding box from the point of view of the light when infinite planes are involved?

    Read the article

  • looking for a short explanation of fuzzy logic

    - by user613326
    Well, I got the idea that the basics of fuzzy logic are not that hard to grasp, and I have the feeling that someone could explain them to me in about 30 minutes, just like I understand neural networks and am able to re-create the famous XOR problem, and go beyond it to create 3-layer networks of x nodes. I'd like to understand fuzzy logic to a similarly useful level, in C#. However, the problem I face is that, while I'd like to get the concept right, I see many websites whose basic explanations contain lots of errors. For example, they show pictures but then use different numbers from those shown in the pictures in the calculations, as if lots of people just copied stuff without noticing what they wrote down; others go too deep into their math notation for me. That is very annoying to learn from. For me there is no need to re-invent the wheel; AForge already has a fuzzy logic framework. So what I am looking for are some good examples, like the way the neural XOR problem is solved. Is there such an instructional resource out there? Do you know a web page or YouTube video where it is explained concisely? What would you recommend? Note: this article comes close, but it just doesn't nail it for me. I also downloaded a bunch of free PDFs, but most are academic and hard to read for me (I'm not a native English speaker and don't have a special math degree). I've been looking around a lot for this; good starter material about it is hard to find.

    Read the article

  • StyleCop 4.7.34.0 has been released

    - by TATWORTH
    StyleCop 4.7.34.0 was released today, 6 July, at http://stylecop.codeplex.com/releases/view/79972. It is compatible with the Visual Studio 2012 RC (11.0.50522).
    Install order should be:
    - VS2008
    - VS2010
    - VS2012 RC
    - R# 6.1.1 (for VS2010)
    - R# 7.0 (tested with daily build 7.0.83.281) (download from http://confluence.jetbrains.net/display/ReSharper/ReSharper+7+EAP)
    - StyleCop
    This version is now compatible with R# 5.1 (5.1.3000.12), R# 6.0 (6.0.2202.688), R# 6.1 (6.1.37.86), R# 6.1.1 (6.1.1000.82) and R# 7.0 (7.0.83.281). Here are the bug details for issues fixed and closed in 4.7 (over 100 issues fixed since 4.6), and here are the bug details for all issues since 4.3.3.0 that have been fixed and closed (over 450 fixes). Updated release notes are available here. Online rules documentation is available here.
    Fixes for this release are:
    - Update ReSharper 7.0 references to 7.0.83.281.
    - Fix for 7343. Inheritdoc was raising false positives for some partial classes. Added regression test data.
    - Fix for 7351. Remove the correct blank lines for SA1512 on code cleanup.
    - Fix for 7346. Insert documentation fully on cleanup and on bulb items.
    - Ensure that the SuppressMessage bulbItem can calculate the correct element to insert at.
    - Fix for 7352. Module level suppressmessages were not working.
    - Spelling mistake.
    - Add missing typenames to resource file.
    - Spelling fixes.
    - Remove obsolete files.

    Read the article

  • Multiplayer Network Game - Interpolation and Frame Rate

    - by J.C.
    Consider the following scenario. Say, for the sake of example and simplicity, that you have an authoritative game server that sends state to its clients every 45 ms. The clients are interpolating state with an interpolation delay of 100 ms. Finally, the clients are rendering a new frame every 15 ms. When state is updated on the client, the client time is set from the incoming state update. Each time a frame renders, we take the render time (client time - interpolation delay) and identify a previous and a target state to interpolate between. To calculate the interpolation amount/factor, we take the difference of the render time and the previous state time and divide it by the difference of the target state and previous state times:
        var factor = ((renderTime - previousStateTime) / (targetStateTime - previousStateTime))
    Problem: In the example above, we are effectively displaying the same interpolated state for 3 frames before we collect the next server update and a new client (render) time is set. The rendering is mostly smooth, but there is a dash of jaggedness to it.
    Question: Given the example above, I'd like to think that the interpolation amount/factor should increase with each frame render to smooth out the movement. Should this be considered and, if so, what is the best way to achieve this given the information from above?
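
    A small sketch of the factor calculation and the lerp it feeds (Python, hypothetical names). One common way to make the factor grow each frame with the numbers above is to keep advancing the client clock by the 15 ms frame time between snapshots, rather than only resetting it when a state update arrives:
        def interpolation_factor(render_time, prev_time, target_time):
            # how far render_time sits between the two buffered states, clamped to [0, 1]
            span = target_time - prev_time
            t = (render_time - prev_time) / span if span > 0 else 1.0
            return max(0.0, min(1.0, t))

        def lerp(a, b, t):
            # linear interpolation between the previous and target state values
            return a + (b - a) * t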

    Read the article
