Search Results

Search found 2773 results on 111 pages for 'calculate'.


  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route finding routine in VB.NET for an online game I play, and I'm searching for the fastest route finding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches - more of a matrix. What I need is a fast pathfinding algorithm. I have already implemented an A* routine and a Dijkstra's; both find the best path but are too slow for my purposes - a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. This website claims to use D*, which I have looked into. That algorithm seems more appropriate for dynamic maps rather than one that does not change - unless I misunderstand its premise. So is there something faster I can use for a map that is not your typical tile/polygon based map? GBFS? Perhaps a DFS? Or have I likely got some problem with my A* - maybe poorly chosen heuristics or movement cost? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick euclidean calculation from the node to the goal. In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump.

        Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer
            'hCount += 1
            'If hCount Mod 0 = 0 Then
            'Return hCache
            'End If

            'Initialize variables to be filled
            Dim x1, x2, y1, y2, z1, z2 As Integer

            'LINQ queries for solar system data
            Dim systemFromData = From result In jumpDataDB.coordinateDatas
                                 Where result.systemId = startSystem
                                 Select result.x, result.y, result.z
            Dim systemToData = From result In jumpDataDB.coordinateDatas
                               Where result.systemId = endSystem
                               Select result.x, result.y, result.z

            'LINQ execute
            'Fill variables with solar system data for from and to system
            For Each solarSystem In systemFromData
                x1 = (solarSystem.x)
                y1 = (solarSystem.y)
                z1 = (solarSystem.z)
            Next
            For Each solarSystem In systemToData
                x2 = (solarSystem.x)
                y2 = (solarSystem.y)
                z2 = (solarSystem.z)
            Next

            Dim x3 = Math.Abs(x1 - x2)
            Dim y3 = Math.Abs(y1 - y2)
            Dim z3 = Math.Abs(z1 - z2)

            'Calculate distance and round
            'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2)))
            Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3)))
            'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2)
            'hCache = distance
            Return distance
        End Function

    And the main loop, the other 30%:

        'Begin search
        While openList.Count() <> 0
            'Set current system and move node to closed
            currentNode = lowestF()
            move(currentNode.id)
            For Each neighborNode In neighborNodes
                If Not onList(neighborNode.toSystem, 0) Then
                    If Not onList(neighborNode.toSystem, 1) Then
                        Dim newNode As New nodeData()
                        newNode.id = neighborNode.toSystem
                        newNode.parent = currentNode.id
                        newNode.g = currentNode.g + neighborNode.distance
                        newNode.h = findDistance(newNode.id, endSystem)
                        newNode.f = newNode.g + newNode.h
                        newNode.security = neighborNode.security
                        openList.Add(newNode)
                        shortOpenList(OLindex) = newNode.id
                        OLindex += 1
                    Else
                        Dim proposedG As Integer = currentNode.g + neighborNode.distance
                        If proposedG < gValue(neighborNode.toSystem) Then
                            changeParent(neighborNode.toSystem, currentNode.id, proposedG)
                        End If
                    End If
                End If
            Next
            'Check to see if done
            If currentNode.id = endSystem Then
                Exit While
            End If
        End While

    If clarification is needed on my spaghetti code, I'll try to explain.
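    Two likely hot spots stand out in the code above: findDistance re-queries the coordinate table through LINQ on every heuristic call, and lowestF() is presumably a linear scan over the open list. A minimal C# sketch of the usual remedies - cache the coordinates in a dictionary once, and keep the open list in a priority queue - is below. The names are hypothetical stand-ins for the poster's types, and PriorityQueue assumes .NET 6 or later; the same idea can be hand-rolled as a binary heap in VB.NET.

        // Hedged sketch, not the poster's code: cache coordinates up front and
        // pop the lowest-f node from a heap instead of scanning the open list.
        using System;
        using System.Collections.Generic;

        class AStarSketch
        {
            // Loaded once from the DB dump: systemId -> coordinates
            readonly Dictionary<int, (double X, double Y, double Z)> coords = new();

            double Heuristic(int fromSystem, int toSystem)
            {
                var a = coords[fromSystem];
                var b = coords[toSystem];
                double dx = a.X - b.X, dy = a.Y - b.Y, dz = a.Z - b.Z;
                // Straight-line distance stays admissible when edge cost is jump length
                return Math.Sqrt(dx * dx + dy * dy + dz * dz);
            }

            // PriorityQueue (.NET 6+) makes "take the lowest f" O(log n)
            // instead of a linear scan over the whole open list.
            readonly PriorityQueue<int, double> open = new();

            void Push(int systemId, double g, int goal)
                => open.Enqueue(systemId, g + Heuristic(systemId, goal));

            int PopLowestF() => open.Dequeue();
        }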

    Read the article

  • Measuring Code Quality

    - by DotNetBlues
    Several months back, I was tasked with measuring the quality of code in my organization. Foolishly, I said, "No problem." I figured that Visual Studio has a built-in code metrics tool (Analyze -> Calculate Code Metrics) and that would be a fine place to start. I was right, but also very wrong. Visual Studio calculates five primary metrics: Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class Coupling, and Lines of Code. The first two are figured at the method level, the next two at (primarily) the class level, and the last is a simple count. The first question any reasonable person should ask is "Which one do I look at first?" The first question any manager is going to ask is, "What one number tells me about the whole application?" My answer to both, in a way, was "Maintainability Index." Why? Because each of the other numbers represents one element of quality while MI is a composite number that includes Cyclomatic Complexity. I'd be lying if I said no consideration was given to the fact that it was abstract enough that it's harder for some surly developer (I've been known to resemble that remark) to start arguing why high coupling or deep inheritance is no big deal or how complex requirements are to blame for complex code. I should also note that I don't think there is one magic-bullet metric that will tell you objectively how good a code base is. There are a ton of different metrics out there, and each one was created with a specific purpose in mind and has a pet theory behind it. When you've got a group of developers who aren't accustomed to measuring code quality, picking a 0-100 scale, non-controversial metric that can be easily generated by tools you already own really isn't a bad place to start. That sort of answers the question a developer would ask, but what about the management question: how do you dashboard this stuff when Visual Studio doesn't roll up the numbers to the solution level? Since VS does roll up the MI to the project level, I thought I could just figure out what sort of weighting Microsoft used to roll method scores up to the class level and then to the namespace and project levels. I was a bit surprised by the answer: there is no weighting. That means that a class with one 1300-line method (which will score a 0 MI) and one empty constructor (which will score a 100 MI) will have an overall MI of a respectable 50. Throw in a couple of DTOs that are nothing more than getters and setters (which tend to score 95 or better) and the project ends up looking really, really healthy. The next poor bastard who has to work on the application is probably not going to be singing the praises of its maintainability, though. For the record, that 1300-line method isn't hypothetical, either. So, what does one do with that? Well, I decided to weight the average by the Lines of Code per method. For our above example, the formula for the class's MI becomes ((1300 * 0) + (1 * 100)) / 1301 = 0.077, rounded to 0. Sounds about right. Continue the pattern for namespace, project, solution, and even multi-solution application MI scores. This can be done relatively easily by using the "export to Excel" button and running a quick formula against the data; a small sketch of that roll-up follows below. On the short list of follow-up questions would be, "How do I improve my application's score?" That's an answer for another time, though.
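    For concreteness, here is a minimal C# sketch of that LOC-weighted roll-up. The input shape - per-method (MI, lines of code) pairs, e.g. pulled from the Excel export - is an assumption, not something the metrics tool emits directly:

        // Hedged sketch of the LOC-weighted Maintainability Index roll-up
        // described above. Input is assumed to be (MI, LinesOfCode) pairs
        // per method, exported from Visual Studio's metrics tool.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class WeightedMi
        {
            public static double Roll(IReadOnlyList<(double Mi, int Loc)> methods)
            {
                int totalLoc = methods.Sum(m => m.Loc);
                if (totalLoc == 0) return 100; // empty scope: treat as perfectly maintainable
                return methods.Sum(m => m.Mi * m.Loc) / totalLoc;
            }

            static void Main()
            {
                // The post's example: one 1300-line method scoring 0 and an
                // empty constructor scoring 100 roll up to ~0.08, not 50.
                Console.WriteLine(Roll(new[] { (0.0, 1300), (100.0, 1) })); // ~0.0768
            }
        }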

    Read the article

  • Camera frustum calculation coming out wrong

    - by Telanor
    I'm trying to calculate a view/projection/bounding frustum for the 6 directions of a point light and I'm having trouble with the views pointing along the Y axis. Our game uses a right-handed, Y-up system. For the other 4 directions I create the LookAt matrix using (0, 1, 0) as the up vector. Obviously that doesn't work when looking along the Y axis, so for those I use an up vector of (-1, 0, 0) for -Y and (1, 0, 0) for +Y. The view matrix seems to come out correctly (and the projection matrix always stays the same), but the bounding frustum is definitely wrong. Can anyone see what I'm doing wrong? This is the code I'm using:

        camera.Projection = Matrix.PerspectiveFovRH((float)Math.PI / 2, ShadowMapSize / (float)ShadowMapSize, 1, 5);

        for(var i = 0; i < 6; i++)
        {
            var renderTargetView = shadowMap.GetRenderTargetView((TextureCubeFace)i);
            var up = DetermineLightUp((TextureCubeFace) i);
            var forward = DirectionToVector((TextureCubeFace) i);
            camera.View = Matrix.LookAtRH(this.Position, this.Position + forward, up);
            camera.BoundingFrustum = new BoundingFrustum(camera.View * camera.Projection);
        }

        private static Vector3 DirectionToVector(TextureCubeFace direction)
        {
            switch (direction)
            {
                case TextureCubeFace.NegativeX: return -Vector3.UnitX;
                case TextureCubeFace.NegativeY: return -Vector3.UnitY;
                case TextureCubeFace.NegativeZ: return -Vector3.UnitZ;
                case TextureCubeFace.PositiveX: return Vector3.UnitX;
                case TextureCubeFace.PositiveY: return Vector3.UnitY;
                case TextureCubeFace.PositiveZ: return Vector3.UnitZ;
                default: throw new ArgumentOutOfRangeException("direction");
            }
        }

        private static Vector3 DetermineLightUp(TextureCubeFace direction)
        {
            switch (direction)
            {
                case TextureCubeFace.NegativeY: return -Vector3.UnitX;
                case TextureCubeFace.PositiveY: return Vector3.UnitX;
                default: return Vector3.UnitY;
            }
        }

    Edit: Here's what the values are coming out to for the PositiveX and PositiveY directions:

        Constants:
        Position = {X:0 Y:360 Z:0}
        camera.Projection =
            [M11:0.9999999 M12:0 M13:0 M14:0]
            [M21:0 M22:0.9999999 M23:0 M24:0]
            [M31:0 M32:0 M33:-1.25 M34:-1]
            [M41:0 M42:0 M43:-1.25 M44:0]

        PositiveX:
        up = {X:0 Y:1 Z:0}
        target = {X:1 Y:360 Z:0}
        camera.View =
            [M11:0 M12:0 M13:-1 M14:0]
            [M21:0 M22:1 M23:0 M24:0]
            [M31:1 M32:0 M33:0 M34:0]
            [M41:0 M42:-360 M43:0 M44:1]
        camera.BoundingFrustum:
            Matrix =
            [M11:0 M12:0 M13:1.25 M14:1]
            [M21:0 M22:0.9999999 M23:0 M24:0]
            [M31:0.9999999 M32:0 M33:0 M34:0]
            [M41:0 M42:-360 M43:-1.25 M44:0]
            Top = {A:0.7071068 B:-0.7071068 C:0 D:254.5584}
            Bottom = {A:0.7071068 B:0.7071068 C:0 D:-254.5584}
            Left = {A:0.7071068 B:0 C:0.7071068 D:0}
            Right = {A:0.7071068 B:0 C:-0.7071068 D:0}
            Near = {A:1 B:0 C:0 D:-1}
            Far = {A:-1 B:0 C:0 D:5}

        PositiveY:
        up = {X:0 Y:0 Z:-1}
        target = {X:0 Y:361 Z:0}
        camera.View =
            [M11:-1 M12:0 M13:0 M14:0]
            [M21:0 M22:0 M23:-1 M24:0]
            [M31:0 M32:-1 M33:0 M34:0]
            [M41:0 M42:0 M43:360 M44:1]
        camera.BoundingFrustum:
            Matrix =
            [M11:-0.9999999 M12:0 M13:0 M14:0]
            [M21:0 M22:0 M23:1.25 M24:1]
            [M31:0 M32:-0.9999999 M33:0 M34:0]
            [M41:0 M42:0 M43:-451.25 M44:-360]
            Top = {A:0 B:0.7071068 C:0.7071068 D:-254.5585}
            Bottom = {A:0 B:0.7071068 C:-0.7071068 D:-254.5585}
            Left = {A:-0.7071068 B:0.7071068 C:0 D:-254.5585}
            Right = {A:0.7071068 B:0.7071068 C:0 D:-254.5585}
            Near = {A:0 B:1 C:0 D:-361}
            Far = {A:0 B:-1 C:0 D:365}

    When I use the resulting BoundingFrustum to cull regions outside of it, this is the result:

        Pass PositiveX: Drew 3 regions
        Pass NegativeX: Drew 6 regions
        Pass PositiveY: Drew 400 regions
        Pass NegativeY: Drew 36 regions
        Pass PositiveZ: Drew 3 regions
        Pass NegativeZ: Drew 6 regions

    There are only 400 regions to draw and the light is in the center of them. As you can see, the PositiveY direction is drawing every single region. With the near/far planes of the perspective matrix set as small as they are, there's no way a single frustum could contain every single region.
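    One way to sanity-check a frustum without the engine in the loop is to plug points into the printed plane equations: a point (x, y, z) should be inside when A*x + B*y + C*z + D >= 0 for all six planes (the inward-facing sign convention is an assumption - flip the comparison if your library's planes face outward). A small self-contained C# check against the PositiveY planes above:

        // Hedged debugging aid, independent of any engine API: test points
        // against the six printed plane equations.
        using System;

        record Plane4(float A, float B, float C, float D);

        static class FrustumCheck
        {
            static bool Inside((float X, float Y, float Z) p, Plane4[] planes)
            {
                foreach (var pl in planes)
                    if (pl.A * p.X + pl.B * p.Y + pl.C * p.Z + pl.D < 0) return false;
                return true;
            }

            static void Main()
            {
                // PositiveY planes from the question
                var planes = new[] {
                    new Plane4(0f, 0.7071068f, 0.7071068f, -254.5585f),  // Top
                    new Plane4(0f, 0.7071068f, -0.7071068f, -254.5585f), // Bottom
                    new Plane4(-0.7071068f, 0.7071068f, 0f, -254.5585f), // Left
                    new Plane4(0.7071068f, 0.7071068f, 0f, -254.5585f),  // Right
                    new Plane4(0f, 1f, 0f, -361f),                       // Near
                    new Plane4(0f, -1f, 0f, 365f),                       // Far
                };
                // A point just above the light, between near and far, should be
                // inside; a point far outside the 1..5 range should not.
                Console.WriteLine(Inside((0, 362, 0), planes)); // True
                Console.WriteLine(Inside((0, 0, 0), planes));   // False
            }
        }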

    Read the article

  • Replacing ASP.NET Forms Authentication with WIF Session Authentication (for the better)

    - by Your DisplayName here!
    ASP.NET Forms Authentication and WIF Session Authentication (which has *nothing* to do with ASP.NET sessions) are very similar. Both inspect incoming requests for a special cookie that contains identity information; if that cookie is present it gets validated, and if that is successful, the identity information is made available to the application via HttpContext.User/Thread.CurrentPrincipal. The main difference between the two is the identity-to-cookie serialization engine that sits below. FormsAuth can only store the name of the user and an additional UserData string, is limited to a single cookie, and is hardcoded to protection via the machine key. WIF session authentication in turn has these additional features:

    - Can serialize a complete ClaimsPrincipal (including claims) to the cookie(s).
    - Has a cookie overflow mechanism when data gets too big. In total it can create up to 8 cookies (4 KB each) per domain (not that I would recommend round tripping that much data).
    - Supports server side caching (which is an extensible mechanism).
    - Has an extensible mechanism for protection (DPAPI by default, RSA as an option for web farms, and machine key based protection is coming in .NET 4.5).

    So in other words – session authentication is the superior technology, and if done cleverly enough you can replace FormsAuth without any changes to your application code. The only features missing are the redirect mechanism to a login page and an easy to use API to set authentication cookies. But that’s easy to add ;)

    FormsSessionAuthenticationModule
    This module is a sub class of the standard WIF session module, adding the following features:

    - Handles EndRequest to do the redirect on 401s to the login page configured for FormsAuth.
    - Reads the FormsAuth cookie name, cookie domain, timeout and require SSL settings to configure the module accordingly.
    - Implements sliding expiration if configured for FormsAuth. It also uses the same algorithm as FormsAuth to calculate when the cookie needs renewal.
    - Implements caching of the principal on the server side (aka session mode) if configured in an AppSetting.
    - Supports claims transformation via a ClaimsAuthenticationManager.

    As you can see, the whole module is designed to easily replace the FormsAuth mechanism. Simply set the authentication mode to None and register the module. In the spirit of the FormsAuthentication class, there is also now a SessionAuthentication class with the same methods and signatures (e.g. SetAuthCookie and SignOut); a usage sketch follows below. The rest of your application code should not be affected. In addition the session module looks for a HttpContext item called “NoRedirect”. If that exists, the redirect to the login page will *not* happen; instead the 401 is passed back to the client. Very useful if you are implementing services or web APIs where you want the actual status code to be preserved. A corresponding UnauthorizedResult is provided that gives you easy access to the context item. The download contains a sample app, the module and an inspector for session cookies and tokens. Let’s hope that in .NET 4.5 such a module comes out of the box. HTH
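    Here is a minimal usage sketch (ASP.NET MVC flavored). SessionAuthentication and the "NoRedirect" item are named in the post; the exact signatures below are assumptions based only on the stated parity with FormsAuthentication:

        // Hedged sketch: SessionAuthentication mirrors FormsAuthentication
        // per the post; the signatures here are assumed, not documented.
        using System.Web.Mvc;
        using System.Web.Security;

        public class AccountController : Controller
        {
            [HttpPost]
            public ActionResult Login(string user, string password, string returnUrl)
            {
                if (!Membership.ValidateUser(user, password))
                    return View();

                // issues the WIF session cookie instead of a FormsAuth ticket
                SessionAuthentication.SetAuthCookie(user, false);
                return Redirect(returnUrl ?? "~/");
            }

            public ActionResult ApiEndpoint()
            {
                // suppress the login-page redirect so callers see the raw 401
                HttpContext.Items["NoRedirect"] = true;
                if (!User.Identity.IsAuthenticated)
                    return new HttpStatusCodeResult(401);
                return Content("authenticated");
            }
        }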

    Read the article

  • How do developers get rid of silly requirements?

    - by sugar
    Hmm! First of all, let me give you the text of the requirement, so that you can have an idea of what kind of problem I am facing.

    Words from the Project Manager: Hey! Sugar, I am assigning you a task for developing a framework. This framework is supposed to be developed for all iOS applications. Please go through the brief of the required framework.

    - It should be able to detect the thickness of my thumb.
    - It should be able to detect whether the user is using a thumb or fingers.
    - If the user is using thumbs/fingers, the framework should calculate the size of the thumb/fingers.
    - Once the size has been calculated, all elements of the user interface should be arranged & resized automatically. (not specified how & where, as it's a framework - it should be smart enough to arrange automatically)
    - If the thumb size is larger, elements should get arranged near the center area of the iPad/iPhone.
    - If the thumb size is smaller, elements should get arranged near the corners of the iPad/iPhone.
    - If the thumb size is larger, fonts of all elements should get smaller. (assuming a larger thumb = an older person)
    - If the thumb size is smaller, fonts of all elements should get larger. (assuming a smaller thumb = a younger person)

    Summary: This framework is required for creating user-friendly user interfaces programmatically. We need to develop a very developer-friendly framework. The framework should be developed in such a way that we can use it in as many projects as needed.

    Well, I am a developer. What I want to have as an answer is as follows:

    - How do I describe to them that their way of thinking is a bit ridiculous?
    - How do I explain to them that we could better concentrate on developing actual projects?
    - How do I convince them that this kind of thing, even if possible, is not recommended to develop?
    - How do I say NO to this politely, gently & respectfully?
    - What should I say so that they cannot point at my experience? (e.g. "you are a 3 years experienced guy & you must have the ability to develop such things")

    Feeling horror. Please help. Thanks in advance, Sugar.

    Note: Please help me tag this question properly. I am stuck & this is a real situation. Frustrated & tense. You guys might have faced such requirements from the top level; I request your help based on your experience. Well! I came to know that those top-level guys don't have any idea of iPad, iPhone, Apple etc. I would do one thing: "Sir, before we go further with framework development, it is strongly recommended to read the Apple Human Interface Guidelines."

    Read the article

  • SQL analytical mash-ups deliver real-time WOW! for big data

    - by KLaker
    One of the overlooked capabilities of SQL as an analysis engine, because we all just take it for granted, is that you can mix and match analytical features to create some amazing mash-ups. As we move into the exciting world of big data, these mash-ups can really deliver those "wow, I never knew that" moments. While Java is an incredibly flexible and powerful framework for managing big data, there are some significant challenges in using Java and MapReduce to drive your analysis to create these "wow" discoveries. One of these "wow" moments was demonstrated at this year's OpenWorld during Andy Mendelsohn's general keynote session. Here is the scenario - we are looking for fraudulent activities in our big data stream, and in this case we are identifying potentially fraudulent activities by looking for specific patterns. We are using geospatial tagging of each transaction so we can create a real-time fraud map for our business users. Where we start to move towards a "wow" moment is to extend this basic use of spatial and pattern matching, as shown in the above dashboard screen, to incorporate spatial analytics within the SQL pattern matching clause. This will allow us to compute the distance between transactions. Apologies for the quality of this screenshot… hopefully below you can see where we have extended our SQL pattern matching clause to use the location of each transaction and to calculate the distance between each transaction. This allows us to compare the time of the last transaction with the time of the current transaction and see if the distance between the two points is possible given the time frame. Obviously if I buy something in Florida from my favourite bike store (maybe a new carbon saddle for my Trek) and then 5 minutes later the system sees my credit card details being used in Arizona, there is a high probability that the transaction in Arizona is fraudulent (I am fast on my Trek, but not that fast!) and we can flag this up in real-time on our dashboard. In this post I have used the term "real-time" a couple of times, and this is an important point and one of the key reasons why SQL really is the only language to use if you want to analyse big data. One of the most important questions that comes up in every big data project is: how do we do analysis? Many enlightened customers are now realising that using Java-MapReduce to deliver analysis does not result in "wow" moments. These "wow" moments only come with SQL because it offers a much richer environment, it is simpler to use and it is faster - which makes it possible to deliver real-time "Wow!". Below is a slide from Andy's session showing the results of a comparison of Java-MapReduce vs. SQL pattern matching to deliver our "wow" moment during our live demo. You can watch our analytical mash-up "Wow" demo that compares the power of 12c SQL pattern matching + spatial analytics vs. Java-MapReduce here: You can get more information about SQL Pattern Matching on our SQL Analytics home page on OTN, see here http://www.oracle.com/technetwork/database/bi-datawarehousing/sql-analytics-index-1984365.html. You can get more information about our spatial analytics here: http://www.oracle.com/technetwork/database-options/spatialandgraph/overview/index.html If you would like to watch the full Database 12c OOW presentation see here: http://medianetwork.oracle.com/video/player/2686974264001
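    The plausibility rule itself is easy to sketch outside the database. Here is a minimal, self-contained C# version of the idea - flag a card when the implied travel speed between consecutive transactions is impossible. The coordinates and the 900 km/h threshold are illustration values I made up, not figures from the demo:

        // Hedged sketch of the fraud rule described above: if the distance
        // between two card uses divided by the elapsed time implies an
        // impossible travel speed, flag the second transaction.
        using System;

        static class FraudCheck
        {
            const double EarthRadiusKm = 6371.0;

            static double HaversineKm(double lat1, double lon1, double lat2, double lon2)
            {
                double dLat = (lat2 - lat1) * Math.PI / 180;
                double dLon = (lon2 - lon1) * Math.PI / 180;
                double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                           Math.Cos(lat1 * Math.PI / 180) * Math.Cos(lat2 * Math.PI / 180) *
                           Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
                return 2 * EarthRadiusKm * Math.Asin(Math.Sqrt(a));
            }

            static void Main()
            {
                // Florida bike store, then Arizona five minutes later
                double km = HaversineKm(28.5, -81.4, 33.4, -112.1);
                double kmh = km / (5.0 / 60.0);
                Console.WriteLine($"{km:F0} km in 5 min = {kmh:F0} km/h -> " +
                                  (kmh > 900 ? "flag as fraud" : "plausible"));
            }
        }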

    Read the article

  • Project Euler 17: (Iron)Python

    - by Ben Griswold
    In my attempt to learn (Iron)Python out in the open, here’s my solution for Project Euler Problem 17. As always, any feedback is welcome.

        # Euler 17
        # http://projecteuler.net/index.php?section=problems&id=17
        # If the numbers 1 to 5 are written out in words:
        # one, two, three, four, five, then there are
        # 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
        # If all the numbers from 1 to 1000 (one thousand)
        # inclusive were written out in words, how many letters
        # would be used?
        #
        # NOTE: Do not count spaces or hyphens. For example, 342
        # (three hundred and forty-two) contains 23 letters and
        # 115 (one hundred and fifteen) contains 20 letters. The
        # use of "and" when writing out numbers is in compliance
        # with British usage.

        import time
        start = time.time()

        def to_word(n):
            h = { 1 : "one", 2 : "two", 3 : "three", 4 : "four", 5 : "five",
                  6 : "six", 7 : "seven", 8 : "eight", 9 : "nine", 10 : "ten",
                  11 : "eleven", 12 : "twelve", 13 : "thirteen", 14 : "fourteen",
                  15 : "fifteen", 16 : "sixteen", 17 : "seventeen", 18 : "eighteen",
                  19 : "nineteen", 20 : "twenty", 30 : "thirty", 40 : "forty",
                  50 : "fifty", 60 : "sixty", 70 : "seventy", 80 : "eighty",
                  90 : "ninety", 100 : "hundred", 1000 : "thousand" }
            word = ""

            # Reverse the numbers so position (ones, tens,
            # hundreds,...) can be easily determined
            a = [int(x) for x in str(n)[::-1]]

            # Thousands position
            if (len(a) == 4 and a[3] != 0):
                # This can only be one thousand based
                # on the problem/method constraints
                word = h[a[3]] + " thousand "

            # Hundreds position
            if (len(a) >= 3 and a[2] != 0):
                word += h[a[2]] + " hundred"

                # Add "and" string if the tens or ones
                # position is occupied with a non-zero value.
                # Note: routine is broken up this way for [my] clarity.
                if (len(a) >= 2 and a[1] != 0):   # catch 10 - 99
                    word += " and"
                elif len(a) >= 1 and a[0] != 0:   # catch 1 - 9
                    word += " and"

            # Tens and ones position
            tens_position_value = 99
            if (len(a) >= 2 and a[1] != 0):
                # Calculate the tens position value per the
                # first and second element in array
                # e.g. (8 * 10) + 1 = 81
                tens_position_value = int(a[1]) * 10 + a[0]
                if tens_position_value <= 20:
                    # If the tens position value is 20 or less
                    # there's an entry in the hash. Use it and there's
                    # no need to consider the ones position
                    word += " " + h[tens_position_value]
                else:
                    # Determine the tens position word by
                    # dividing by 10 first. E.g. 8 * 10 = h[80]
                    # We will pick up the ones position word later in
                    # the next part of the routine
                    word += " " + h[(a[1] * 10)]

            if (len(a) >= 1 and a[0] != 0 and tens_position_value > 20):
                # Deal with ones position where tens position is
                # greater than 20 or we have a single digit number
                word += " " + h[a[0]]

            # Trim the empty spaces off both ends of the string
            return word.replace(" ","")

        def to_word_length(n):
            return len(to_word(n))

        print sum([to_word_length(i) for i in xrange(1,1001)])
        print "Elapsed Time:", (time.time() - start) * 1000, "millisecs"
        a=raw_input('Press return to continue')

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-26

    - by Bob Rhubart
    Software Architecture for High Availability in the Cloud | Brian Jimerson
    Brian Jimerson looks at the paradigm shifts from machine-based architectures to cloud-based architectures when designing fault tolerance, and how enterprise applications need to be engineered to ensure the highest level of availability in the cloud.

    SOA, Cloud & Service Technology Symposium 2012 London - Special Oracle Discount
    Registration is now open for one of the premier SOA, Cloud, and Service Technology events. Once again, the Oracle community is well-represented in the session schedule. And now you can save on registration with a special Oracle discount code.

    Progress 4GL and DB to Oracle and cloud | Tom Laszewski
    "Getting from client/server based 4GLs and databases where the 4GL is tightly linked to the database to Oracle and the cloud is not easy," says cloud migration expert Tom Laszewski. "The least risky and expensive option...is to use the Progress OpenEdge DataServer for Oracle."

    Embrace 'big data' now or fall behind the competition, analyst warns | TechTarget
    TechTarget's Mark Brunelli's story says, in essence, that Big Data is not your father's Business Intelligence.

    Calculating the Size (in Bytes and MB) of an Oracle Coherence Cache | Ricardo Ferreira
    Ferreira illustrates a programmatic way to use the Oracle Coherence API to calculate the total size of a specific cache that resides in the data grid.

    WebCenter Portal Tutorial Part 7: Integrating Discussions and Link service | Yannick Ongena
    The latest chapter in Oracle ACE Yannick Ongena's ongoing series.

    How to Setup JDeveloper workspace for ADF Fusion Applications to run Business Component Tester? | Jack Desai
    Helpful technical tips from yet another member of the Oracle Fusion Middleware Architecture Team.

    Big Data for the Enterprise; Software Architecture for High Availability in the Cloud; Why Cloud Computing is a Paradigm Shift - And Why It Isn't
    This week on the OTN Solution Architect Homepage, along with an updated events list and this week's list of selected community blog posts.

    Worst Practices for Big Data | Dain Hansen
    Dain Hansen shares some insight on what NOT to do if you want to capitalize on Big Data.

    Free Virtual Developer Day - Oracle Fusion Development | Grant Ronald
    "The online conference will include seminars, hands-on lab and live chats with our technical staff including me!" says Grant Ronald. "And the best bit, it doesn't cost you a single penny. It's free and available right on your desktop."

    Penguin is Getting Ready for Oracle OpenWorld 2012 | Zeynep Koch
    Linux fan? Check out Zeynep Koch's post for a list of Linux-based sessions at Oracle OpenWorld 2012 in San Francisco.

    Amazon Web Services (AWS) Autoscaling | Frank Munz
    "Autoscaling on AWS can only be configured with lengthy commands from the command line but not from the web based AWS console," says Frank Munz. "Getting all the parameters right can be tricky." He demonstrates one easy example in this video.

    Oracle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broin
    "These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight," says O'Broin.

    How Much Data Is Created Every Minute? [INFOGRAPHIC] | Mashable
    Explaining what the "Big" in Big Data really means -- and it's more than a little mind-boggling.

    Thought for the Day
    "Real, though miniature, Turing Tests are happening all the time, every day, whenever a person puts up with stupid computer software." — Jaron Lanier
    Source: SoftwareQuotes.com

    Read the article

  • Converting openGl code to DirectX

    - by Fredrik Boston Westman
    First of all, this is kind of a follow-up question to @byte56's excellent answer on this question concerning picking algorithms. I'm trying to convert one of his code examples to DirectX 11; however, I have run into some problems (I can pick, but the picking is way off), and I wanted to make sure I had done it right before moving on and checking the rest of my code. I am not that familiar with OpenGL, but I can imagine OpenGL has different coordinate systems and functions that alter how you must implement the code a bit. This is his code example:

        public Ray GetPickRay() {
            int mouseX = Mouse.getX();
            int mouseY = WORLD.Byte56Game.getHeight() - Mouse.getY();
            float windowWidth = WORLD.Byte56Game.getWidth();
            float windowHeight = WORLD.Byte56Game.getHeight();

            //get the mouse position in screenSpace coords
            double screenSpaceX = ((float) mouseX / (windowWidth / 2) - 1.0f) * aspectRatio;
            double screenSpaceY = (1.0f - (float) mouseY / (windowHeight / 2));
            double viewRatio = Math.tan(((float) Math.PI / (180.f/ViewAngle) / 2.00f)) * zoomFactor;
            screenSpaceX = screenSpaceX * viewRatio;
            screenSpaceY = screenSpaceY * viewRatio;

            //Find the far and near camera spaces
            Vector4f cameraSpaceNear = new Vector4f((float) (screenSpaceX * NearPlane), (float) (screenSpaceY * NearPlane), (float) (-NearPlane), 1);
            Vector4f cameraSpaceFar = new Vector4f((float) (screenSpaceX * FarPlane), (float) (screenSpaceY * FarPlane), (float) (-FarPlane), 1);

            //Unproject the 2D window into 3D to see where in 3D we're actually clicking
            Matrix4f tmpView = Matrix4f(view);
            Matrix4f invView = (Matrix4f) tmpView.invert();
            Vector4f worldSpaceNear = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceNear, worldSpaceNear);
            Vector4f worldSpaceFar = new Vector4f();
            Matrix4f.transform(invView, cameraSpaceFar, worldSpaceFar);

            //calculate the ray position and direction
            Vector3f rayPosition = new Vector3f(worldSpaceNear.x, worldSpaceNear.y, worldSpaceNear.z);
            Vector3f rayDirection = new Vector3f(worldSpaceFar.x - worldSpaceNear.x, worldSpaceFar.y - worldSpaceNear.y, worldSpaceFar.z - worldSpaceNear.z);
            rayDirection.normalise();
            return new Ray(rayPosition, rayDirection);
        }

    All rights reserved to him of course.

    This is my DirectX 11 code:

        void GraphicEngine::pickRayVector(float mouseX, float mouseY, XMVECTOR& pickRayInWorldSpacePos, XMVECTOR& pickRayInWorldSpaceDir)
        {
            float PRVecX, PRVecY;
            float nearPlane = 0.1f;
            float farPlane = 200.0f;
            float viewAngle = 0.4 * 3.14;

            PRVecX = ((( 2.0f * mouseX) / ClientWidth ) - 1 ) * tan((viewAngle)/2);
            PRVecY = (1-(( 2.0f * mouseY) / ClientHeight)) * tan((viewAngle)/2);

            XMVECTOR cameraSpaceNear = XMVectorSet(PRVecX * nearPlane, PRVecY * nearPlane, -nearPlane, 1.0f);
            XMVECTOR cameraSpaceFar = XMVectorSet(PRVecX * farPlane, PRVecY * farPlane, -farPlane, 1.0f);

            // Transform 3D ray from view space to 3D ray in world space
            XMMATRIX invMat;
            XMVECTOR matInvDeter;
            invMat = XMMatrixInverse(&matInvDeter, cam->getCameraView()); // Inverse of view space matrix is world space matrix

            XMVECTOR worldSpaceNear = XMVector3TransformCoord(cameraSpaceNear, invMat);
            XMVECTOR worldSpaceFar = XMVector3TransformCoord(cameraSpaceFar, invMat);

            pickRayInWorldSpacePos = worldSpaceNear;
            pickRayInWorldSpaceDir = worldSpaceFar - worldSpaceNear;
            pickRayInWorldSpaceDir = XMVector3Normalize(pickRayInWorldSpaceDir);
        }

    A couple of notes:

    - The mouse coordinates are already converted so that the top left corner of the client window would be (0,0) and the bottom right (800,600) (or whatever resolution you would have).
    - I hadn't used any far or near plane before, so I just made up some arbitrary numbers for them. To my understanding it shouldn't matter as long as the object you are trying to pick is within the range of those numbers.
    - The viewAngle is the same angle that I used when setting the camera view with XMMatrixPerspectiveFovLH; I just hadn't made it a member variable of my Camera class yet.
    - I removed the variables aspectRatio and zoomFactor because I assumed that they were related to some specific function of his game.

    Now I'm not sure, but I think the problem lies either within the mouse-to-viewspace conversion, maybe because we use different coordinate systems, or in how I transform the matrices at the end, because I know order is important when it comes to matrices. Any help is appreciated! Thanks in advance.

    Edit: One more note, my code is in C++.

    Read the article

  • Quaternion dfference + time --> angular velocity (gyroscope in physics library)

    - by AndrewK
    I am using the Bullet Physics library to program some functionality, where I have the difference between the orientation from a gyroscope given as a quaternion and the orientation of my object, and the time between each frame in milliseconds. All I want is to set the orientation from my gyroscope on my object in 3D space. But all I can do is set the angular velocity of my object. I have the orientation difference and the time, and from that I calculate the angular velocity vector [Wx, Wy, Wz] from this formula:

        W(t) = 2 * dq(t)/dt * conj(q(t))

    My code is:

        btQuaternion diffQuater = gyroQuater - boxQuater;
        btQuaternion conjBoxQuater = gyroQuater.inverse();
        btQuaternion velQuater = ((diffQuater * 2.0f) / d_time) * conjBoxQuater;

    And everything works well, till I get:

    1. Rotating around the Y axis, angle about 60 degrees; these are the values in the two critical frames:

        x: -0.013220 y: -0.038050 z: -0.021979 w: -0.074250 - diffQuater
        x: 0.120094 y: 0.818967 z: 0.156797 w: -0.538782 - gyroQuater
        x: 0.133313 y: 0.857016 z: 0.178776 w: -0.464531 - boxQuater
        x: 0.207781 y: 0.290452 z: 0.245594 - diffQuater -> euler angles
        x: 3.153619 y: -66.947929 z: 175.936615 - gyroQuater -> euler angles
        x: 4.290697 y: -57.553043 z: 173.320053 - boxQuater -> euler angles
        x: 0.138128 y: 2.823307 z: 1.025552 w: 0.131360 - velQuater
        d_time: 0.058000

        x: 0.211020 y: 1.595124 z: 0.303650 w: -1.143846 - diffQuater
        x: 0.089518 y: 0.771939 z: 0.144527 w: -0.612543 - gyroQuater
        x: -0.121502 y: -0.823185 z: -0.159123 w: 0.531303 - boxQuater
        x: nan y: nan z: nan - diffQuater -> euler angles
        x: 2.985240 y: -76.304405 z: -170.555054 - gyroQuater -> euler angles
        x: 3.269681 y: -65.977966 z: 175.639420 - boxQuater -> euler angles
        x: -0.730262 y: -2.882153 z: -1.294721 w: 63.325996 - velQuater
        d_time: 0.063000

    2. Rotating around the X axis, angle about 120 degrees; the values in the two critical frames:

        x: -0.013045 y: -0.004186 z: -0.005667 w: -0.022482 - diffQuater
        x: -0.848030 y: -0.187985 z: 0.114400 w: 0.482099 - gyroQuater
        x: -0.834985 y: -0.183799 z: 0.120067 w: 0.504580 - boxQuater
        x: 0.036336 y: 0.002312 z: 0.020859 - diffQuater -> euler angles
        x: -113.129463 y: 0.731925 z: 25.415056 - gyroQuater -> euler angles
        x: -110.232368 y: 0.860897 z: 25.350458 - boxQuater -> euler angles
        x: -0.865820 y: -0.456086 z: 0.034084 w: 0.013184 - velQuater
        d_time: 0.055000

        x: -1.721662 y: -0.387898 z: 0.229844 w: 0.910235 - diffQuater
        x: -0.874310 y: -0.200132 z: 0.115142 w: 0.426933 - gyroQuater
        x: 0.847352 y: 0.187766 z: -0.114703 w: -0.483302 - boxQuater
        x: -144.402298 y: 4.891629 z: 71.309158 - diffQuater -> euler angles
        x: -119.515343 y: 1.745076 z: 26.646086 - gyroQuater -> euler angles
        x: -112.974533 y: 0.738675 z: 25.411509 - boxQuater -> euler angles
        x: 2.086195 y: 0.676526 z: -0.424351 w: 70.104248 - velQuater
        d_time: 0.057000

    3. Rotating around the Z axis, angle about 120 degrees; the values in the two critical frames:

        x: -0.000736 y: 0.002812 z: -0.004692 w: -0.008181 - diffQuater
        x: -0.003829 y: 0.012045 z: -0.868035 w: 0.496343 - gyroQuater
        x: -0.003093 y: 0.009232 z: -0.863343 w: 0.504524 - boxQuater
        x: -0.000822 y: -0.003032 z: 0.004162 - diffQuater -> euler angles
        x: -1.415189 y: 0.304210 z: -120.481873 - gyroQuater -> euler angles
        x: -1.091881 y: 0.227784 z: -119.399445 - boxQuater -> euler angles
        x: 0.159042 y: 0.169228 z: -0.754599 w: 0.003900 - velQuater
        d_time: 0.025000

        x: -0.007598 y: 0.024074 z: -1.749412 w: 0.968588 - diffQuater
        x: -0.003769 y: 0.012030 z: -0.881377 w: 0.472245 - gyroQuater
        x: 0.003829 y: -0.012045 z: 0.868035 w: -0.496343 - boxQuater
        x: -5.645197 y: 1.148993 z: -146.507187 - diffQuater -> euler angles
        x: -1.418294 y: 0.270319 z: -123.638245 - gyroQuater -> euler angles
        x: -1.415183 y: 0.304208 z: -120.481873 - boxQuater -> euler angles
        x: 0.017498 y: -0.013332 z: 2.040073 w: 148.120056 - velQuater
        d_time: 0.027000

    The problem is most visible in the diffQuater -> euler angles vector. Can someone tell me why it is like that, and how to solve the problem? All suggestions are welcome.
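    One thing visible in the dumps above: in each broken pair of frames, boxQuater flips sign (e.g. from x: -0.834985 ... to x: 0.847352 ...) while gyroQuater barely moves. Since q and -q encode the same rotation, taking a componentwise difference across that flip produces the spike. A hedged C# sketch of the standard hemisphere fix is below; System.Numerics is used purely for illustration (btQuaternion has equivalent operations), and multiplication-order conventions vary between libraries, so verify against yours:

        // Hedged sketch: force both quaternions into the same hemisphere
        // (positive dot product) before computing angular velocity.
        using System;
        using System.Numerics;

        static class AngularVelocitySketch
        {
            // Small-step form of W(t) = 2 * dq/dt * conj(q)
            public static Vector3 FromOrientations(Quaternion box, Quaternion gyro, float dt)
            {
                // q and -q are the same rotation: flip to the same hemisphere
                if (Quaternion.Dot(gyro, box) < 0f)
                    box = Quaternion.Negate(box);

                // relative rotation taking box to gyro (order convention varies)
                Quaternion dq = gyro * Quaternion.Conjugate(box);

                // small-angle approximation: w ≈ 2 * (vector part of dq) / dt
                return new Vector3(dq.X, dq.Y, dq.Z) * (2f / dt);
            }
        }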

    Read the article

  • Determining explosion radius damage - Circle to Rectangle 2D

    - by Paul Renton
    One of the Cocos2D games I am working on has circular explosion effects. These explosion effects need to deal a percentage of their set maximum damage to all game characters (represented by rectangular bounding boxes, as the objects in question are tanks) within the explosion radius. So this boils down to circle-to-rectangle collision and how far the circle's radius is from the closest rectangle edge. I took a stab at figuring this out last night, but I believe there may be a better way. In particular, I don't know the best way to determine what percentage of damage to apply based on the distance calculated. Note: All tank objects have an anchor point of (0,0), so position is according to the bottom left corner of the bounding box. The explosion point is the center point of the circular explosion.

        TankObject * tank = (TankObject*) gameSprite;
        float distanceFromExplosionCenter;

        // IMPORTANT :: All GameCharacter have an assumed (0,0) anchor
        if (explosionPoint.x < tank.position.x) {
            // Explosion to WEST of tank
            if (explosionPoint.y <= tank.position.y) {
                //Explosion SOUTHWEST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, tank.position);
            } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) {
                // Explosion NORTHWEST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x, tank.position.y + tank.contentSize.height));
            } else {
                // Exp center's y is between bottom and top corner of rect
                distanceFromExplosionCenter = tank.position.x - explosionPoint.x;
            } // end if
        } else if (explosionPoint.x > (tank.position.x + tank.contentSize.width)) {
            // Explosion to EAST of tank
            if (explosionPoint.y <= tank.position.y) {
                //Explosion SOUTHEAST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x + tank.contentSize.width, tank.position.y));
            } else if (explosionPoint.y >= (tank.position.y + tank.contentSize.height)) {
                // Explosion NORTHEAST
                distanceFromExplosionCenter = ccpDistance(explosionPoint, ccp(tank.position.x + tank.contentSize.width, tank.position.y + tank.contentSize.height));
            } else {
                // Exp center's y is between bottom and top corner of rect
                distanceFromExplosionCenter = explosionPoint.x - (tank.position.x + tank.contentSize.width);
            } // end if
        } else {
            // Tank is either north or south and is inbetween left and right corner of rect
            if (explosionPoint.y < tank.position.y) {
                // Explosion is South
                distanceFromExplosionCenter = tank.position.y - explosionPoint.y;
            } else {
                // Explosion is North
                distanceFromExplosionCenter = explosionPoint.y - (tank.position.y + tank.contentSize.height);
            } // end if
        } // end outer if

        if (distanceFromExplosionCenter < explosionRadius) {
            /* Collision :: Smaller distance larger the damage */
            int damageToApply;
            if (self.directHit) {
                damageToApply = self.explosionMaxDamage + self.directHitBonusDamage;
                [tank takeDamageAndAdjustHealthBar:damageToApply];
                CCLOG(@"Explosion-> DIRECT HIT with total damage %d", damageToApply);
            } else {
                // TODO adjust this... turning out negative for some reason...
                damageToApply = (1 - (distanceFromExplosionCenter/explosionRadius) * explosionMaxDamage);
                [tank takeDamageAndAdjustHealthBar:damageToApply];
                CCLOG(@"Explosion-> Non direct hit collision with tank");
                CCLOG(@"Damage to apply is %d", damageToApply);
            } // end if
        } else {
            CCLOG(@"Explosion-> Explosion distance is larger than explosion radius");
        } // end if

    Questions:
    1) Can this circle-to-rect collision algorithm be done better? Do I have too many checks?
    2) How do I calculate the percentage-based damage? My current method generates negative numbers occasionally and I don't understand why (maybe I need more sleep!). But in my if statement, I ask if distance < explosion radius. When control goes through, distance/radius must be < 1, right? So 1 minus that intermediate calculation should not be negative. Appreciate any help/advice!
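    On question 2, the negative numbers look like operator precedence: the posted expression (1 - (distanceFromExplosionCenter/explosionRadius) * explosionMaxDamage) multiplies first, so it computes 1 - (ratio * maxDamage) instead of the intended (1 - ratio) * maxDamage. A hedged C# sketch of the intended falloff (C# for consistency with the other sketches on this page; the Objective-C fix is the same parenthesization):

        // Linear falloff: full damage at the centre, zero at the rim,
        // clamped so it can never go negative.
        using System;

        static class ExplosionDamage
        {
            public static int Damage(float distance, float radius, int maxDamage)
            {
                float ratio = Math.Clamp(distance / radius, 0f, 1f);
                return (int)((1f - ratio) * maxDamage);
            }

            static void Main()
            {
                Console.WriteLine(Damage(0f, 100f, 50));   // 50
                Console.WriteLine(Damage(50f, 100f, 50));  // 25
                Console.WriteLine(Damage(100f, 100f, 50)); // 0
            }
        }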

    Read the article

  • Grid pathfinding with a lot of entities

    - by Vee
    I'd like to explain this problem with a screenshot from a released game, DROD: Gunthro's Epic Blunder, by Caravel Games. The game is turn-based and tile-based. I'm trying to create something very similar (a clone of the game), and I've got most of the fundamentals done, but I'm having trouble implementing pathfinding. Look at the screenshot. The guys in yellow are friendly, and want to kill the roaches. Every turn, every guy in yellow pathfinds to the closest roach, and every roach pathfinds to the closest guy in yellow. By closest I mean the target with the shortest path, not a simple distance calculation. All of this without any kind of slowdown when loading the level or when passing turns. And all of the entities change position every turn. Also (not shown in the screenshot), there can be doors that open and close and change the level's layout. Impressive. I've tried implementing pathfinding in my clone. First attempt was making every roach find a path to a yellow guy every turn, using a breadth-first search algorithm. Obviously incredibly slow with more than a single roach, and it would get exponentially slower with more than a single yellow guy. Second attempt was making every yellow guy generate a pathmap (still breadth-first search) every time he moved. Worked perfectly with multiple roaches and a single yellow guy, but adding more yellow guys made the game slow and unplayable. Last attempt was implementing JPS (jump point search). Every entity would individually calculate a path to its target. Fast, but with a limited number of entities. Having even less than half the entities in the screenshot would make the game slow. And also, I had to pick the "closest" enemy by calculating distance, not shortest path. I've asked on the DROD forums how they did it, and a user replied that it was breadth-first search. The game is open source, and I took a look at the source code, but it's C++ (I'm using C#) and I found it confusing. I don't know how to do it. Every approach I tried isn't good enough. And I believe that DROD generates global pathmaps, somehow, but I can't understand how every entity finds the best individual path to other entities that move every turn. What's the trick? This is a reply I just got on the DROD forums: Without having looked at the code I'd wager it's two (or so) pathmaps for the whole room: One to the nearest enemy, and one to the nearest friendly for every tile. There's no need to make a separate pathmap for every entity when the overall goal is "move towards nearest enemy/friendly"... just mark every tile with the number of moves it takes to the nearest target and have the entity choose the move that takes it to the tile with the lowest number. To be honest, I don't understand it that well.
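    The quoted reply describes a multi-source breadth-first flood fill: seed the queue with every target tile at distance 0, expand outward once, and every entity of that faction reads the same finished map. A minimal C# sketch of the idea - the grid representation and the 8-way movement are assumptions on my part:

        // Hedged sketch of a per-faction pathmap: one BFS flood fill seeded
        // from all targets at once, rebuilt once per turn. Each entity then
        // steps to the neighbouring tile with the lowest value.
        using System;
        using System.Collections.Generic;

        static class Pathmap
        {
            public static int[,] Build(bool[,] walkable, IEnumerable<(int X, int Y)> targets)
            {
                int w = walkable.GetLength(0), h = walkable.GetLength(1);
                var dist = new int[w, h];
                for (int x = 0; x < w; x++)
                    for (int y = 0; y < h; y++)
                        dist[x, y] = int.MaxValue;

                var queue = new Queue<(int X, int Y)>();
                foreach (var t in targets) { dist[t.X, t.Y] = 0; queue.Enqueue(t); }

                int[] dx = { 1, -1, 0, 0, 1, 1, -1, -1 };
                int[] dy = { 0, 0, 1, -1, 1, -1, 1, -1 }; // 8-way movement assumed

                while (queue.Count > 0)
                {
                    var (cx, cy) = queue.Dequeue();
                    for (int i = 0; i < 8; i++)
                    {
                        int nx = cx + dx[i], ny = cy + dy[i];
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        if (!walkable[nx, ny] || dist[nx, ny] != int.MaxValue) continue;
                        dist[nx, ny] = dist[cx, cy] + 1;
                        queue.Enqueue((nx, ny));
                    }
                }
                return dist; // every roach reads this same map; no per-entity search
            }
        }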

    Read the article

  • Seamless STP with Oracle SOA Suite

    - by user12339860
    STP stands for “Straight Through Processing”. Wikipedia describes STP as a solution that enables “the entire trade process for capital markets and payment transactions to be conducted electronically without the need for re-keying or manual intervention, subject to legal and regulatory restrictions”. I will deal with the latter part of the definition, i.e. “payment transactions without manual intervention”, in this article. The STP that I am writing about involves the interaction between a bank and its corporate customers; to that extent this business case is also called “Corporate Payments”. Simply put, a Corporate Payments STP solution needs to connect the payment transaction right from the corporate ERP into the bank's payment hub. A SOA based STP solution can do a lot more than just process transactions. But before I get to the solution, let me describe the perspectives of the two primary parties in this interaction: the corporate customer and the bank.

    Corporate's interaction with the bank: Typically it is the treasury department of an enterprise which interacts with the bank on a daily basis. Here is how a day of interaction would look from the treasury department of a corporate:

    - Corporate Cash: retrieve beginning-of-day totals; monitor cash accounts; send or receive cash between accounts; supply chain payments
    - Payment Settlements: calculate settlement positions; retrieve end-of-day totals; assess transaction financial impact
    - Short Term Investment Desk: retrieve current account information; conduct investment activities

    Bank's interaction with the corporate: From the bank's perspective, the interaction starts from the point of on-boarding a corporate customer and runs through billing the corporate for the value added services it provides. Once the corporate is on-boarded, the daily interaction involves:

    - Handling the various formats of data arriving from customers
    - Processing beginning-of-day & end-of-day reporting requests from customers
    - Meeting compliance requirements
    - Processing payments
    - Transmitting payment status

    Challenges with this interaction: Both the bank & the corporate face many challenges from these interactions. Some of the challenges include:

    - Keeping a consistent view of transaction data for various LOBs of the corporate & the bank
    - Corporate customers use different ERPs, hence the data formats are bound to be different
    - Can the bank's IT systems convert the data formats so that they can be easily mapped to the corporate ERP?
    - How does the bank manage the communication profiles of these customers?
    - Corporate customers are demanding near real time visibility into their corporate accounts
    - Corporate customers can make better cash management decisions if they can analyse the impact
    - Can the bank create opportunities to sell its products to the investment desks at corporate houses & manage their orders?
    - How will the bank bill the corporate customer for the value added services it provides?

    What does a SOA based seamless STP solution bring to the table?

    Highlights of the Oracle SOA based STP solution:

    For the corporate customer:
    - No manual or paper based banking transactions
    - Secure delivery of payment data to the bank from multiple ERPs without customization
    - Single portal for monitoring & administering payment transactions
    - Rule based validation of payments
    - Customer has the data necessary for more effective handling of payment and cash management decisions
    - Business measurements track progress toward payment cost goals

    For the bank:
    - Reduces time & complexity of transactions
    - Simplifies the process of introducing new products to corporate customers
    - Single payment hub for all corporate ERP payments across multiple instruments
    - New revenue sources by delivering value added services to customers
    - Leverages existing payment infrastructure
    - Removes inconsistent data formats and interchange between bank and corporate systems
    - Compliance and many other benefits

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide more streamlined insight into individual records or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services.

    Example - Order History Constancy: Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see.

    Creating the Queries: Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.

        SELECT CustomerID, CompanyName,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) As Orders,
            DATEDIFF(mm,
                (SELECT Max(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                getdate()) AS MonthsSinceLastOrder
        FROM Customers

    Creating Views: To turn this or any query into a view, just put CREATE VIEW ViewName AS before it. If you want to change it, use ALTER VIEW ViewName AS.

    Creating Computed Columns: If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification or check out this article here.

    Advanced Scoring and Ranking: One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow up. You could devise a simple formula that shows the continuity of an account. If they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks to make this happen. We are extracting the count of unique months and then dividing by the months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus:

    - Number of orders
    - First order date
    - Last order date
    - Percentage of months an order was placed since the first order

        SELECT CustomerID,
            (SELECT COUNT(OrderID) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) As Orders,
            (SELECT Max(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder,
            (SELECT Min(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder,
            DATEDIFF(mm,
                (SELECT Min(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                getdate()) AS MonthsSinceFirstOrder,
            100 * (SELECT COUNT(DISTINCT 100 * DATEPART(yy, OrderDate) + DATEPART(mm, OrderDate))
                   FROM Orders
                   WHERE Orders.CustomerID = Customers.CustomerID)
                / DATEDIFF(mm,
                    (SELECT Min(OrderDate) FROM Orders
                     WHERE Orders.CustomerID = Customers.CustomerID),
                    getdate()) As OrderPercent
        FROM Customers

    Read the article

  • Calculating for leap year [migrated]

    - by Bradley Bauer
    I've written this program using Java in Eclipse. I was able to utilize a formula I found, which I explain in the commented section. Using the for loop I can iterate through each month of the year, which I feel good about; that code seems clean and smooth to me. Maybe I could give the variables full names to make everything more readable, but I'm just using the formula in its basic essence :) Well, my problem is it doesn't calculate correctly for years like 2008... leap years. I know that if (year % 400 == 0 || (year % 4 == 0 && year % 100 != 0)) then we have a leap year. Maybe if the year is a leap year I need to subtract a certain amount of days from a certain month. Any solutions or some direction would be great, thanks :)

        package exercises;

        public class E28 {
            /*
             * Display the first days of each month
             * Enter the year
             * Enter first day of the year
             *
             * h = (q + (26 * (m + 1)) / 10 + k + k/4 + j/4 + 5j) % 7
             *
             * h is the day of the week (0: Saturday, 1: Sunday ......)
             * q is the day of the month
             * m is the month (3: March 4: April.... January and February are 13 and 14)
             * j is the century (year / 100)
             * k is the year of the century (year % 100)
             */
            public static void main(String[] args) {
                java.util.Scanner input = new java.util.Scanner(System.in);
                System.out.print("Enter the year: ");
                int year = input.nextInt();

                int j = year / 100; // Find century for formula
                int k = year % 100; // Find year of century for formula

                // Loop iterates 12 times. Guess why.
                // Make m = i. So loop processes formula once for each month
                for (int i = 1, m = i; i <= 12; i++) {
                    if (m == 1 || m == 2) m += 12; // Formula requires that Jan and Feb are represented as 13 and 14
                    else m = i;                    // if not jan or feb, then set m to i

                    int h = (1 + (26 * (m + 1)) / 10 + k + k/4 + j/4 + 5 * j) % 7; // Formula created by a really smart man somewhere
                    // I let the control variable i steer the direction of the formula's m value

                    String day;
                    if (h == 0) day = "Saturday";
                    else if (h == 1) day = "Sunday";
                    else if (h == 2) day = "Monday";
                    else if (h == 3) day = "Tuesday";
                    else if (h == 4) day = "Wednesday";
                    else if (h == 5) day = "Thursday";
                    else day = "Friday";

                    switch (m) {
                        case 13: System.out.println("January 1, " + year + " is " + day); break;
                        case 14: System.out.println("February 1, " + year + " is " + day); break;
                        case 3: System.out.println("March 1, " + year + " is " + day); break;
                        case 4: System.out.println("April 1, " + year + " is " + day); break;
                        case 5: System.out.println("May 1, " + year + " is " + day); break;
                        case 6: System.out.println("June 1, " + year + " is " + day); break;
                        case 7: System.out.println("July 1, " + year + " is " + day); break;
                        case 8: System.out.println("August 1, " + year + " is " + day); break;
                        case 9: System.out.println("September 1, " + year + " is " + day); break;
                        case 10: System.out.println("October 1, " + year + " is " + day); break;
                        case 11: System.out.println("November 1, " + year + " is " + day); break;
                        case 12: System.out.println("December 1, " + year + " is " + day); break;
                    }
                }
            }
        }
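    The formula in the program is Zeller's congruence, and its usual statement carries one requirement the code skips: when January and February are treated as months 13 and 14, they belong to the previous year, so j and k must be derived from year - 1 for those two months. That is exactly what goes wrong around leap years. A hedged C# sketch of the adjustment (C# for consistency with the other sketches on this page; the Java fix is the same two lines):

        // Zeller's congruence with the January/February year adjustment the
        // question's version is missing: months 13 and 14 use year - 1.
        using System;

        static class Zeller
        {
            // Day of week for the 1st of `month` (1-12) in `year`;
            // 0 = Saturday, 1 = Sunday, ... 6 = Friday.
            public static int FirstOfMonth(int year, int month)
            {
                int m = month, y = year;
                if (m == 1 || m == 2) { m += 12; y -= 1; } // Jan/Feb belong to the previous year
                int j = y / 100, k = y % 100;
                return (1 + (26 * (m + 1)) / 10 + k + k / 4 + j / 4 + 5 * j) % 7;
            }

            static void Main()
            {
                // 1 January 2008 was a Tuesday -> expect 3
                Console.WriteLine(FirstOfMonth(2008, 1));
            }
        }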

    Read the article

  • How do I cap rendering of tiles in a 2D game with SDL?

    - by farmdve
    I have some boilerplate code working. I basically have a tile based map composed of just 3 colors and some walls, and I render it with SDL. The tiles are in a bmp file, but each tile inside it corresponds to an internal number for the type of tile (color, or wall). I have pretty basic collision detection and it works; I can also detect continuous presses, which allows me to move pretty much anywhere I want. I also have a moving camera, which follows the object. The problem is that the tile based map is bigger than the resolution, thus not all of the map can be displayed on the screen, but it's still rendered. I would like to cap it, but since this is new to me, I pretty much have no idea how. Although I cannot post all the code (even though I am a newbie and the code is pretty basic, it's already quite a few lines), I can post what I tried to do:

        void set_camera()
        {
            //Center the camera over the dot
            camera.x = ( player.box.x + DOT_WIDTH / 2 ) - SCREEN_WIDTH / 2;
            camera.y = ( player.box.y + DOT_HEIGHT / 2 ) - SCREEN_HEIGHT / 2;

            //Keep the camera in bounds.
            if(camera.x < 0 )
            {
                camera.x = 0;
            }
            if(camera.y < 0 )
            {
                camera.y = 0;
            }
            if(camera.x > LEVEL_WIDTH - camera.w )
            {
                camera.x = LEVEL_WIDTH - camera.w;
            }
            if(camera.y > LEVEL_HEIGHT - camera.h )
            {
                camera.y = LEVEL_HEIGHT - camera.h;
            }
        }

    set_camera() is the function which calculates the camera position based on the player's position. I won't pretend I know much about it.

        Rectangle box = {0,0,0,0};
        for(int t = 0; t < TOTAL_TILES; t++)
        {
            if(box.x < (camera.x - TILE_WIDTH) || box.y > (camera.y - TILE_HEIGHT))
                apply_surface(box.x - camera.x, box.y - camera.y, surface, screen, &clips[tiles[t]]);

            box.x += TILE_WIDTH;
            //If we've gone too far
            if(box.x >= LEVEL_WIDTH)
            {
                //Move back
                box.x = 0;
                //Move to the next row
                box.y += TILE_HEIGHT;
            }
        }

    This is basically my render code. The for loop loops over 192 tiles stored in an int array, each with its own unique value describing the tile type (wall or one of three possible colored tiles). box is an SDL_Rect containing the current position of the tile, which is calculated on render. TILE_HEIGHT and TILE_WIDTH are of value 80. So the cap is determined by if(box.x < (camera.x - TILE_WIDTH) || box.y > (camera.y - TILE_HEIGHT)). However, this is just me playing with the values to see what doesn't break it. I pretty much have no idea how to calculate it. My screen resolution is 1024/768, and the tile map is of size 1280/960.
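    The usual way to cap this is to derive the first and last visible tile row/column from the camera rectangle and loop over only that range instead of all TOTAL_TILES. A hedged sketch of the arithmetic, written in C# for consistency with the other sketches on this page (the SDL/C++ version is the same integer math):

        // Hedged sketch: compute the visible tile range from the camera
        // rectangle, then draw only those tiles. Uses the question's
        // 80px tiles and 1280x960 level; draw call is a stand-in.
        using System;

        static class TileCulling
        {
            const int TileWidth = 80, TileHeight = 80;
            const int LevelCols = 1280 / TileWidth;  // 16
            const int LevelRows = 960 / TileHeight;  // 12

            public static void Render(int camX, int camY, int camW, int camH, int[] tiles)
            {
                int firstCol = Math.Max(0, camX / TileWidth);
                int firstRow = Math.Max(0, camY / TileHeight);
                int lastCol = Math.Min(LevelCols - 1, (camX + camW) / TileWidth);
                int lastRow = Math.Min(LevelRows - 1, (camY + camH) / TileHeight);

                for (int row = firstRow; row <= lastRow; row++)
                    for (int col = firstCol; col <= lastCol; col++)
                    {
                        int t = row * LevelCols + col;        // index into the flat tile array
                        int x = col * TileWidth - camX;       // screen-space position
                        int y = row * TileHeight - camY;
                        // apply_surface(x, y, surface, screen, &clips[tiles[t]]);
                        Console.WriteLine($"draw tile {tiles[t]} at ({x},{y})");
                    }
            }
        }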

    Read the article

  • Incorrect lighting results with deferred rendering

    - by Lasse
    I am trying to render a light-pass to a texture which I will later apply on the scene. But I seem to calculate the light position wrong. I am working in view-space. In the image above, I am outputting the attenuation of a point light which is currently covering the whole screen. The light is at position (0, 10, 0), and I transform it to view-space first:

        Vector4 pos;
        Vector4 tmp = new Vector4 (light.Position, 1);
        // Transform light position for shader
        Vector4.Transform (ref tmp, ref Camera.ViewMatrix, out pos);
        shader.SendUniform ("LightViewPosition", ref pos);

    Now to me that does not look as it should. What I think it should look like is that the white area should be at the center of the scene. The camera is at the corner of the scene, and it seems as if the light moves along with the camera. Here's the fragment shader code:

        void main(){
            // default black color
            vec3 color = vec3(0);

            // Pixel coordinates on screen without depth
            vec2 PixelCoordinates = gl_FragCoord.xy / ScreenSize;

            // Get pixel position using depth from texture
            vec4 depthtexel = texture( DepthTexture, PixelCoordinates );
            float depthSample = unpack_depth(depthtexel);

            // Get pixel coordinates on camera-space by multiplying the
            // coordinate on screen-space by inverse projection matrix
            vec4 world = (ImP * RemapMatrix * vec4(PixelCoordinates, depthSample, 1.0));

            // Undo the perspective calculations
            vec3 pixelPosition = (world.xyz / world.w) * 3;

            // How far the light should reach from it's point of origin
            float lightReach = LightColor.a / 2;

            // Vector in between light and pixel
            vec3 lightDir = (LightViewPosition.xyz - pixelPosition);
            float lightDistance = length(lightDir);
            vec3 lightDirN = normalize(lightDir);

            // Discard pixels too far from light source
            //if(lightReach < lightDistance) discard;

            // Get normal from texture
            vec3 normal = normalize((texture( NormalTexture, PixelCoordinates ).xyz * 2) - 1);

            // Half vector between the light direction and eye, used for specular component
            vec3 halfVector = normalize(lightDirN + normalize(-pixelPosition));

            // Dot product of normal and light direction
            float NdotL = dot(normal, lightDirN);
            float attenuation = pow(lightReach / lightDistance, LightFalloff);

            // If pixel is lit by the light
            if(NdotL > 0) {
                // I have moved stuff from here to above so I can debug them.

                // Diffuse light color
                color += LightColor.rgb * NdotL * attenuation;

                // Specular light color
                color += LightColor.xyz * pow(max(dot(halfVector, normal), 0.0), 4.0) * attenuation;
            }

            RT0 = vec4(color, 1);
            //RT0 = vec4(pixelPosition, 1);
            //RT0 = vec4(depthSample, depthSample, depthSample, 1);
            //RT0 = vec4(NdotL, NdotL, NdotL, 1);
            RT0 = vec4(attenuation, attenuation, attenuation, 1);
            //RT0 = vec4(lightReach, lightReach, lightReach, 1);
            //RT0 = depthtexel;
            //RT0 = 100 / vec4(lightDistance, lightDistance, lightDistance, 1);
            //RT0 = vec4(lightDirN, 1);
            //RT0 = vec4(halfVector, 1);
            //RT0 = vec4(LightColor.xyz,1);
            //RT0 = vec4(LightViewPosition.xyz/100, 1);
            //RT0 = vec4(LightPosition.xyz, 1);
            //RT0 = vec4(normal,1);
        }

    What am I doing wrong here?

    Read the article

  • Circle-Line Collision Detection Problem

    - by jazzdawg
    I am currently developing a breakout clone and I have hit a roadblock in getting collision detection between a ball (circle) and a brick (convex polygon) working correctly. I am using a circle-line collision detection test where each line represents an edge of the convex polygon brick. For the majority of the time the circle-line test works properly and the points of collision are resolved correctly.

    Collision detection working correctly.

    However, occasionally my collision detection code returns false due to a negative discriminant when the ball is actually intersecting the brick.

    Collision detection failing.

    I am aware of the inefficiency of this method, and I am using axis-aligned bounding boxes to cut down on the number of bricks tested. My main concern is whether there are any mathematical bugs in my code below.

        /*
         * from and to are points at the start and end of the convex polygon's edge.
         * This function is called for every edge in the convex polygon until a
         * collision is detected.
         */
        bool circleLineCollision(Vec2f from, Vec2f to)
        {
            Vec2f lFrom, lTo, lLine;
            Vec2f line, normal;
            Vec2f intersectPt1, intersectPt2;
            float a, b, c, disc, sqrt_disc, u, v, nn, vn;
            bool one = false, two = false;

            // set line vectors
            lFrom = from - ball.circle.centre;  // localised
            lTo = to - ball.circle.centre;      // localised
            lLine = lFrom - lTo;                // localised
            line = from - to;

            // calculate a, b & c values
            a = lLine.dot(lLine);
            b = 2 * (lLine.dot(lFrom));
            c = (lFrom.dot(lFrom)) - (ball.circle.radius * ball.circle.radius);

            // discriminant
            disc = (b * b) - (4 * a * c);

            if (disc < 0.0f)
            {
                // no intersections
                return false;
            }
            else if (disc == 0.0f)
            {
                // one intersection
                u = -b / (2 * a);
                intersectPt1 = from + (lLine.scale(u));
                one = pointOnLine(intersectPt1, from, to);
                if (!one) return false;
                return true;
            }
            else
            {
                // two intersections
                sqrt_disc = sqrt(disc);
                u = (-b + sqrt_disc) / (2 * a);
                v = (-b - sqrt_disc) / (2 * a);
                intersectPt1 = from + (lLine.scale(u));
                intersectPt2 = from + (lLine.scale(v));
                one = pointOnLine(intersectPt1, from, to);
                two = pointOnLine(intersectPt2, from, to);
                if (!one && !two) return false;
                return true;
            }
        }

        bool pointOnLine(Vec2f p, Vec2f from, Vec2f to)
        {
            if (p.x >= min(from.x, to.x) && p.x <= max(from.x, to.x) &&
                p.y >= min(from.y, to.y) && p.y <= max(from.y, to.y))
                return true;
            return false;
        }
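    On inspection the quadratic itself is self-consistent: lLine = lFrom - lTo just parameterizes the edge backwards, so genuine hits correspond to u in [-1, 0] and the bounding-box check accepts them. For comparison, a minimal sketch (not the poster's code) of the same test with the conventional forward parameterization P(u) = from + u * (to - from), u in [0, 1], which needs no separate pointOnLine check:

        #include <cmath>

        struct Vec2f { float x, y; };
        static float dot(Vec2f a, Vec2f b) { return a.x * b.x + a.y * b.y; }

        // Sketch: circle vs. segment with a single forward parameterization.
        bool circleSegment(Vec2f from, Vec2f to, Vec2f centre, float radius)
        {
            Vec2f d = { to.x - from.x, to.y - from.y };         // segment direction
            Vec2f f = { from.x - centre.x, from.y - centre.y }; // localised start
            float a = dot(d, d);
            float b = 2.0f * dot(f, d);
            float c = dot(f, f) - radius * radius;
            float disc = b * b - 4.0f * a * c;
            if (disc < 0.0f) return false;   // infinite line misses the circle
            float s  = std::sqrt(disc);
            float u1 = (-b - s) / (2.0f * a);
            float u2 = (-b + s) / (2.0f * a);
            // Count a hit only when an intersection lies within the segment.
            return (u1 >= 0.0f && u1 <= 1.0f) || (u2 >= 0.0f && u2 <= 1.0f);
        }

    One failure mode to keep in mind: if the ball moves more than roughly its radius in a single step, it can end up overlapping the brick's interior without its circle touching any edge line on the tested frame, which also presents as "intersecting, but negative discriminant."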

    Read the article

  • The Three-Legged Milk Stool - Why Oracle Fusion Incentive Compensation makes the difference!

    - by Richard Lefebvre
    During the London Olympics, we were exposed to dozens of athletes who worked with sports psychologists to maximize their performance. Executives often hire business psychologists to coach their teams to excellence. In the same vein, Fusion Incentive Compensation can be used to get people to change their sales behavior so we can make our numbers. But what about using incentive compensation solutions in a non-sales scenario to drive change?

    Recently, I was working an opportunity where a company was having a low user adoption rate for Salesforce.com, which was causing problems for them. I suggested they use Fusion Incentive Comp to change the reps' behavior. We tossed around the idea of tracking user adoption by creating a variable bonus for reps based on how well they forecasted revenues in the new system. Another thought was to reward the reps for how often they logged into the system or for the percentage of leads that became opportunities and turned into revenue. A new twist on a great product.

    Fusion CRM's Sweet Spot

    I'm excited about the sales performance management (SPM) tools in Fusion CRM. This trio of Incentive Compensation, Territory Management, and Quota Management sets us apart from the competition because Oracle is the only vendor that provides all three of these capabilities on a single tech stack, in a single application, and with a single look and feel. The niche vendors offer standalone territory or incentive compensation solutions, but then the customer has to custom build the other tools and can end up with a Frankenstein-type environment.

    On average, companies overpay sales commissions by three to eight percent. You calculate that number for a company the size of Oracle for one quarter and it makes a pretty air-tight financial case for using SPM tools to figure accurate commissions. Plus when sales reps get the right compensation, they can be out selling rather than spending precious time figuring out what they didn't get paid or looking for another job.

    And one more thing ... Oracle knows incentive comp. We have been a Gartner Market Scope leader in this space for the last five years. Our solution gets high marks because of its scalability and because of its interoperability with other technologies. And now that we're leading with Fusion, our incentive compensation offering includes the innovations that the Fusion team built, plus enhancements from the E-Business Suite Incentive Comp team. It's a case of making a good thing even better. (See product video.)

    The "Wedge" Apps

    In a number of accounts that I'm working on, there is a non-Oracle CRM system of record. That gives me the perfect opportunity to introduce the benefits of our SPM tools and to get the customer using Fusion. Then the door is wide open for the company to uptake more of Fusion CRM, especially since all the integrations they need are out of the box. I really believe that implementing this wedge of SPM tools is the ticket to taking market share away from other vendors. It allows us to insert ourselves in an environment where no other CRM solution in the market has the extending capabilities of Fusion.

    Not Just Your Usual Suspects

    Usually the stakeholders that I talk to for Territory Management are tightly aligned with the sales management team. When I sell the quota planning tool, I'm talking to finance people on the ERP side of the house who are measuring quotas and forecasting revenue.
    And then Incentive Comp is of most interest to the sales operations people, and generally these people roll up to either HR or the payroll department. I think of our Fusion SPM tools as a three-legged stool straddling an organization's Sales, Finance, and HR departments. So when you're prospecting for opportunities -- yes, people with a CRM perspective will be very interested -- but don't limit yourselves to that constituency. You might find stakeholders in accounting, revenue planning, or HR compensation teams. You just might discover, as I did at United Airlines, that the HR organization is spearheading the CRM project because incentive compensation is what they need ... and they're the ones with the budget.

    Jason Loh
    Global Solutions Manager, Fusion CRM Sales Planning
    Oracle Corporation

    Read the article

  • How to convert pitch and yaw to x, y, z rotations?

    - by Aaron Anodide
    I'm a beginner using XNA to try and make a 3D Asteroids game. I'm really close to having my space ship drive around as if it had thrusters for pitch and yaw. The problem is I can't quite figure out how to translate the rotations; for instance, when I pitch forward 45 degrees and then start to turn, rotation should be applied in all three directions to get the "diagonal yaw" - right? I thought I had it right with the calculations below, but they cause a partly pitched-forward ship to wobble instead of turn... :( So my question is: how do you calculate the X, Y, and Z rotations for an object in terms of pitch and yaw?

    Here are my current (almost working) calculations for the rotation acceleration:

        float accel = .75f;

        // Thrust +Y / Forward
        if (currentKeyboardState.IsKeyDown(Keys.I))
        {
            this.ship.AccelerationY += (float)Math.Cos(this.ship.RotationZ) * accel;
            this.ship.AccelerationX += (float)Math.Sin(this.ship.RotationZ) * -accel;
            this.ship.AccelerationZ += (float)Math.Sin(this.ship.RotationX) * accel;
        }

        // Rotation +Z / Yaw
        if (currentKeyboardState.IsKeyDown(Keys.J))
        {
            this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * accel;
            this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * accel;
            this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * accel;
        }

        // Rotation -Z / Yaw
        if (currentKeyboardState.IsKeyDown(Keys.K))
        {
            this.ship.RotationAccelerationZ += (float)Math.Cos(this.ship.RotationX) * -accel;
            this.ship.RotationAccelerationY += (float)Math.Sin(this.ship.RotationX) * -accel;
            this.ship.RotationAccelerationX += (float)Math.Sin(this.ship.RotationY) * -accel;
        }

        // Rotation +X / Pitch
        if (currentKeyboardState.IsKeyDown(Keys.F))
        {
            this.ship.RotationAccelerationX += accel;
        }

        // Rotation -X / Pitch
        if (currentKeyboardState.IsKeyDown(Keys.D))
        {
            this.ship.RotationAccelerationX -= accel;
        }

    I'm combining that with drawing code that applies the rotation to the model:

        public void Draw(Matrix world, Matrix view, Matrix projection, TimeSpan elsapsedTime)
        {
            float seconds = (float)elsapsedTime.TotalSeconds;

            // update velocity based on acceleration
            this.VelocityX += this.AccelerationX * seconds;
            this.VelocityY += this.AccelerationY * seconds;
            this.VelocityZ += this.AccelerationZ * seconds;

            // update position based on velocity
            this.PositionX += this.VelocityX * seconds;
            this.PositionY += this.VelocityY * seconds;
            this.PositionZ += this.VelocityZ * seconds;

            // update rotational velocity based on rotational acceleration
            this.RotationVelocityX += this.RotationAccelerationX * seconds;
            this.RotationVelocityY += this.RotationAccelerationY * seconds;
            this.RotationVelocityZ += this.RotationAccelerationZ * seconds;

            // update rotation based on rotational velocity
            this.RotationX += this.RotationVelocityX * seconds;
            this.RotationY += this.RotationVelocityY * seconds;
            this.RotationZ += this.RotationVelocityZ * seconds;

            Matrix translation = Matrix.CreateTranslation(PositionX, PositionY, PositionZ);
            Matrix rotation = Matrix.CreateRotationX(RotationX) *
                              Matrix.CreateRotationY(RotationY) *
                              Matrix.CreateRotationZ(RotationZ);

            model.Root.Transform = rotation * translation * world;
            model.CopyAbsoluteBoneTransformsTo(boneTransforms);

            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.World = boneTransforms[mesh.ParentBone.Index];
                    effect.View = view;
                    effect.Projection = projection;
                    effect.EnableDefaultLighting();
                }
                mesh.Draw();
            }
        }
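    A common way around this wobble is to stop integrating three separate Euler angles and instead keep the orientation as a single quaternion, applying each frame's pitch and yaw as incremental rotations about the ship's own local axes. A minimal sketch, assuming XNA's Quaternion and Matrix types (the field and method names here are illustrative, not the poster's):

        // Sketch: accumulate one orientation instead of per-axis angles.
        Quaternion orientation = Quaternion.Identity;

        void ApplyPitchYaw(float pitchRadians, float yawRadians)
        {
            // This frame's rotations about the ship's LOCAL axes.
            Quaternion pitch = Quaternion.CreateFromAxisAngle(Vector3.Right, pitchRadians);
            Quaternion yaw   = Quaternion.CreateFromAxisAngle(Vector3.Up, yawRadians);

            // XNA composes the left operand first; if pitch/yaw come out as
            // world-axis rotations instead, swap the operand order.
            orientation = pitch * yaw * orientation;
            orientation.Normalize(); // avoid drift from accumulated float error
        }

        // In Draw(): Matrix rotation = Matrix.CreateFromQuaternion(orientation);

    With this approach the "diagonal yaw" falls out automatically: yawing while pitched rotates about the ship's tilted up-axis, so there is no need to derive combined X/Y/Z angles by hand.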

    Read the article

  • AI to move custom-shaped spaceships (shape affecting movement behaviour)

    - by kaoD
    I'm designing a networked turn-based 3D-6DOF space fleet combat strategy game which relies heavily on ship customization. Let me explain the game a bit, since you need to know a bit about it to set up the question.

    What I aim for is the ability to create your own fleet of ships with custom shapes and attached modules (propellers, tractor beams...) which would give advantages and disadvantages to each ship, so you have lots of different fleet distributions. E.g., a long ship with two propellers at the side would let the ship spin around that plane easily; bigger ships would move slowly unless you place lots of propellers at the back (therefore spending more "construction" points and energy when moving, and it will only move fast towards that direction). I plan to balance the whole game around this feature.

    The game would revolve around two phases: orders and combat. During the orders phase, you command the different ships. When all players finish the orders phase, the combat phase begins and the ship orders get resolved in real time for some time, then the action pauses and there's a new orders phase.

    The problem comes when I think about player input. To move a ship, you need to turn different propellers on or off if you want to steer, travel forward, brake, rotate in place... These propellers don't have to work at their whole power, so you can achieve more movement combinations with fewer propellers. I think this approach is a bit boring: the player doesn't want to fiddle with motors or anything, you just want to MOVE and KILL. The way I intend the player to give orders to these ships is by a destination and a rotation, and then the AI would calculate the correct propeller power to achieve that movement and rotation. Propulsion doesn't have to be the same throughout the entire turn calculation (after the orders have been given), so it would be cool if the ships reacted as they move, adjusting the power of the propellers to their needs dynamically, but that may be too hard to implement and it's not really needed for the game to work.

    In both cases, how would that AI decide which propellers to activate for the best (or at least not worst) trajectory to be achieved? I thought about some approaches:

    Learning AI: The ship types would learn about their movement by trial and error, adjusting their behaviour with more uses, and finally becoming "smart". I don't want to get involved THAT far in AI coding, and I think it could be frustrating for the player (even if you can let it learn without playing).

    Pre-calculated timestep movement: Upon ship creation, ALL possible movements are calculated for each propeller configuration and power for a given delta-time. Memory intensive, ugly, bad.

    Pre-calculated trajectories: The same as above, but not for each delta-time but for the whole trajectory, which would then be fitted as closely as possible. Requires a fixed propeller configuration for the whole combat phase and is still memory intensive, ugly and bad.

    Continuous brute forcing: The AI continuously checks ALL possible propeller configurations throughout the entire combat phase, precalculates a few time steps and decides which is the best one based on that. Con: what's good now might not be that good later, and it's too CPU intensive, ugly, and bad too.

    Single brute forcing: Same as above, but only brute forcing at the beginning of the simulation, so it needs a constant propeller configuration throughout the entire combat phase.

    Continuous angle check: This is not a full movement method, but maybe a way to discard "stupid" propeller configurations (see the sketch below). Given the current propeller's normal vector and the final one, you can approximate the power needed for the propeller based on the angle. You must do this continuously throughout the whole combat phase. I figured this one out recently, so I haven't put too much thought into it. A priori, it has the "what's good now might not be that good later" drawback too, and it doesn't care about the other propellers, which may act together to make a better propelling configuration.

    I'm really stuck here. Any ideas?
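    For what the "angle check" heuristic might look like in code, a minimal sketch (the names and the unit-thrust-direction assumption are mine, not the poster's): drive each propeller in proportion to how well its thrust direction lines up with the desired acceleration, and shut off the ones pointing the wrong way. As noted above, this ignores how propellers combine and says nothing about torque.

        #include <algorithm>

        struct Vec3 { float x, y, z; };
        static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        // Sketch: per-propeller power from alignment with the desired
        // acceleration direction (both assumed to be unit vectors).
        float propellerPower(Vec3 thrustDir, Vec3 desiredDir)
        {
            float alignment = dot(thrustDir, desiredDir); // cos of the angle
            return std::clamp(alignment, 0.0f, 1.0f);     // backwards-facing -> off
        }

    A fuller treatment would stack every propeller's thrust direction (and torque arm) into a matrix and solve a small constrained least-squares problem for the power vector each timestep, which stays cheap for a handful of propellers.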

    Read the article

  • Texture displays on Android emulator but not on device

    - by Rob
    I have written a simple UI which takes an image (256x256) and maps it to a rectangle. This works perfectly on the emulator; however, on the phone the texture does not show - I see only a white rectangle. This is my code:

        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
            byteBuffer.order(ByteOrder.nativeOrder());
            vertexBuffer = byteBuffer.asFloatBuffer();
            vertexBuffer.put(cardshape);
            vertexBuffer.position(0);

            byteBuffer = ByteBuffer.allocateDirect(shape.length * 4);
            byteBuffer.order(ByteOrder.nativeOrder());
            textureBuffer = byteBuffer.asFloatBuffer();
            textureBuffer.put(textureshape);
            textureBuffer.position(0);

            // Set the background color to black ( rgba ).
            gl.glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
            // Enable Smooth Shading, default not really needed.
            gl.glShadeModel(GL10.GL_SMOOTH);
            // Depth buffer setup.
            gl.glClearDepthf(1.0f);
            // Enables depth testing.
            gl.glEnable(GL10.GL_DEPTH_TEST);
            // The type of depth testing to do.
            gl.glDepthFunc(GL10.GL_LEQUAL);
            // Really nice perspective calculations.
            gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);

            gl.glEnable(GL10.GL_TEXTURE_2D);
            loadGLTexture(gl);
        }

        public void onDrawFrame(GL10 gl) {
            gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
            gl.glDisable(GL10.GL_DEPTH_TEST);

            gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection
            gl.glPushMatrix();                   // Push The Matrix
            gl.glLoadIdentity();                 // Reset The Matrix
            gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);
            gl.glMatrixMode(GL10.GL_MODELVIEW);  // Select Modelview Matrix
            gl.glPushMatrix();                   // Push The Matrix
            gl.glLoadIdentity();                 // Reset The Matrix

            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
            gl.glLoadIdentity();
            gl.glTranslatef(card.x, card.y, 0.0f);
            gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); // activates texture to be used now
            gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
            gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
            gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        }

        public void onSurfaceChanged(GL10 gl, int width, int height) {
            // Sets the current view port to the new size.
            gl.glViewport(0, 0, width, height);
            // Select the projection matrix
            gl.glMatrixMode(GL10.GL_PROJECTION);
            // Reset the projection matrix
            gl.glLoadIdentity();
            // Calculate the aspect ratio of the window
            GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
            // Select the modelview matrix
            gl.glMatrixMode(GL10.GL_MODELVIEW);
            // Reset the modelview matrix
            gl.glLoadIdentity();
        }

        public int[] texture = new int[1];

        public void loadGLTexture(GL10 gl) {
            // loading texture
            Bitmap bitmap;
            bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.image);

            // generate one texture pointer
            gl.glGenTextures(0, texture, 0); // adds texture id to texture array
            // ...and bind it to our array
            gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]); // activates texture to be used now

            // create nearest filtered texture
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);

            // Use Android GLUtils to specify a two-dimensional texture image from our bitmap
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);

            // Clean up
            bitmap.recycle();
        }

    As per many other similar issues and resolutions on the web, I have tried setting minSdkVersion to 3, loading the bitmap via an input stream (bitmap = BitmapFactory.decodeStream(is)), setting BitmapFactory.Options.inScaled to false, putting the images in the nodpi folder, and putting them in the raw folder - none of which helped. I'm not really sure what else to try.
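    One detail that stands out in loadGLTexture: the first argument to glGenTextures is the number of texture names to generate, and the posted code passes 0, so no name is actually allocated - an emulator can be forgiving about an unallocated id where real hardware is not. A minimal sketch of the usual sequence, for comparison (not a guaranteed fix):

        // Sketch: request ONE texture name, then bind and upload to it.
        int[] texture = new int[1];
        gl.glGenTextures(1, texture, 0); // n = 1, not 0
        gl.glBindTexture(GL10.GL_TEXTURE_2D, texture[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);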

    Read the article

  • Drawing random smooth lines contained in a square [migrated]

    - by Doug Mercer
    I'm trying to write a matlab function that creates random, smooth trajectories in a square of finite side length. Here is my current attempt at such a procedure:

        function [] = drawroutes( SideLength, v, t)
        %DRAWROUTES Summary of this function goes here
        %   Detailed explanation goes here

        %Some parameters intended to help keep the particles in the box
        RandAccel=.01;
        ConservAccel=0;
        speedlimit=.1;
        G=10^(-8);

        %Initialize Matrices
        Ax=zeros(v,10*t);
        Ay=Ax;
        vx=Ax;
        vy=Ax;
        x=Ax;
        y=Ax;
        sx=zeros(v,1);
        sy=zeros(v,1);

        %Define initial position in square
        x(:,1)=SideLength*.15*ones(v,1)+(SideLength*.7)*rand(v,1);
        y(:,1)=SideLength*.15*ones(v,1)+(SideLength*.7)*rand(v,1);

        for i=2:10*t
            %Measure minimum particle distance component wise from boundary
            %for each vehicle
            BorderGravX=[abs(SideLength*ones(v,1)-x(:,i-1)),abs(x(:,i-1))]';
            BorderGravY=[abs(SideLength*ones(v,1)-y(:,i-1)),abs(y(:,i-1))]';
            rx=min(BorderGravX)';
            ry=min(BorderGravY)';

            %Set the sign of the repulsive force
            for k=1:v
                if x(k,i)<.5*SideLength
                    sx(k)=1;
                else
                    sx(k)=-1;
                end
                if y(k,i)<.5*SideLength
                    sy(k)=1;
                else
                    sy(k)=-1;
                end
            end

            %Calculate Acceleration w/ random "nudge" and repulsive force
            Ax(:,i)=ConservAccel*Ax(:,i-1)+RandAccel*(rand(v,1)-.5*ones(v,1))+sx*G./rx.^2;
            Ay(:,i)=ConservAccel*Ay(:,i-1)+RandAccel*(rand(v,1)-.5*ones(v,1))+sy*G./ry.^2;

            %Ad hoc method of trying to slow down particles from jumping
            %outside of feasible region
            for h=1:v
                if abs(vx(h,i-1)+Ax(h,i))<speedlimit
                    vx(h,i)=vx(h,i-1)+Ax(h,i);
                elseif (vx(h,i-1)+Ax(h,i))<-speedlimit
                    vx(h,i)=-speedlimit;
                else
                    vx(h,i)=speedlimit;
                end
            end
            for h=1:v
                if abs(vy(h,i-1)+Ay(h,i))<speedlimit
                    vy(h,i)=vy(h,i-1)+Ay(h,i);
                elseif (vy(h,i-1)+Ay(h,i))<-speedlimit
                    vy(h,i)=-speedlimit;
                else
                    vy(h,i)=speedlimit;
                end
            end

            %Update position
            x(:,i)=x(:,i-1)+(vx(:,i-1)+vx(:,i))/2;
            y(:,i)=y(:,i-1)+(vy(:,i-1)+vy(:,1))/2;
        end

        %Plot position
        clf;
        hold on;
        axis([-100,SideLength+100,-100,SideLength+100]);
        cc=hsv(v);
        for j=1:v
            plot(x(j,1),y(j,1),'ko')
            plot(x(j,:),y(j,:),'color',cc(j,:))
        end
        hold off;

        end

    My original plan was to place particles within a square and move them around by letting their acceleration in the x and y directions be governed by a uniformly distributed random variable. To keep the particles within the square, I tried to create a repulsive force that would push the particles away from the boundaries of the square. In practice, the particles tend to leave the desired "feasible" region after a relatively small number of time steps (say, 1000).

    I'd love to hear your suggestions on either modifying my existing code or considering the problem from another perspective. When reading the code, please don't feel the need to get hung up on any of the ad hoc parameters at the very beginning of the script. They seem to help, but I don't believe any besides the "G" constant should truly be necessary to make this system work.

    Here is an example of the current output: Many of the vehicles have found their way outside of the desired square region, [0,400] x [0,400].
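    Two things worth looking at. First, the y-position update averages vy(:,1) (the first column, which stays at its initial zeros) where the x update uses vx(:,i); that looks like a typo and will under-integrate the y velocity. Second, rather than tuning a repulsive force, the box constraint can be enforced directly by reflecting particles at the walls. A minimal sketch (a helper of my own, not part of the original script):

        function [p, vp] = reflect(p, vp, SideLength)
        %REFLECT Bounce a coordinate vector p back into [0, SideLength],
        %flipping the corresponding velocities, so particles cannot escape.
            low = p < 0;
            p(low) = -p(low);
            vp(low) = -vp(low);
            high = p > SideLength;
            p(high) = 2*SideLength - p(high);
            vp(high) = -vp(high);
        end

    Called once per timestep on (x(:,i), vx(:,i)) and (y(:,i), vy(:,i)), this keeps every particle inside the square no matter how the random nudges accumulate, and trajectories stay smooth because only the wall-crossing step is altered.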

    Read the article

  • Why is this beat detection code failing to register some beats properly?

    - by Quincy
    I made this SoundAnalyzer class to detect beats in songs:

        class SoundAnalyzer
        {
            public SoundBuffer soundData;
            public Sound sound;
            public List<double> beatMarkers = new List<double>();

            public SoundAnalyzer(string path)
            {
                soundData = new SoundBuffer(path);
                sound = new Sound(soundData);
            }

            // C = threshold, N = size of history buffer / 1024, B = bands
            public void PlaceBeatMarkers(float C, int N, int B)
            {
                List<double>[] instantEnergyList = new List<double>[B];
                GetEnergyList(B, ref instantEnergyList);
                for (int i = 0; i < B; i++)
                {
                    PlaceMarkers(instantEnergyList[i], N, C);
                }
                beatMarkers.Sort();
            }

            private short[] getRange(int begin, int end, short[] array)
            {
                short[] result = new short[end - begin];
                for (int i = 0; i < end - begin; i++)
                {
                    result[i] = array[begin + i];
                }
                return result;
            }

            // get an array with a list of energy for each band
            private void GetEnergyList(int B, ref List<double>[] instantEnergyList)
            {
                for (int i = 0; i < B; i++)
                {
                    instantEnergyList[i] = new List<double>();
                }

                short[] samples = soundData.Samples;
                float timePerSample = 1 / (float)soundData.SampleRate;
                int sampleIndex = 0;
                int nextSamples = 1024;
                int samplesPerBand = nextSamples / B;

                // for the whole song
                while (sampleIndex + nextSamples < samples.Length)
                {
                    complex[] FFT = FastFourier.Calculate(getRange(sampleIndex, nextSamples + sampleIndex, samples));

                    // foreach band
                    for (int i = 0; i < B; i++)
                    {
                        double energy = 0;
                        for (int j = 0; j < samplesPerBand; j++)
                            energy += FFT[i * samplesPerBand + j].GetMagnitude();
                        energy /= samplesPerBand;
                        instantEnergyList[i].Add(energy);
                    }

                    if (sampleIndex + nextSamples >= samples.Length)
                        nextSamples = samples.Length - sampleIndex - 1;

                    sampleIndex += nextSamples;
                    samplesPerBand = nextSamples / B;
                }
            }

            // place the actual markers
            private void PlaceMarkers(List<double> instantEnergyList, int N, float C)
            {
                double timePerSample = 1 / (double)soundData.SampleRate;
                int index = N;
                int numInBuffer = index;
                double historyBuffer = 0;

                // Fill the history buffer with n * instant energy
                for (int i = 0; i < index; i++)
                {
                    historyBuffer += instantEnergyList[i];
                }

                // If instantEnergy / samples in buffer < instantEnergy for the
                // next sample then add beatmarker.
                while (index + 1 < instantEnergyList.Count)
                {
                    if (instantEnergyList[index + 1] > (historyBuffer / numInBuffer) * C)
                        beatMarkers.Add((index + 1) * 1024 * timePerSample);

                    historyBuffer -= instantEnergyList[index - numInBuffer];
                    historyBuffer += instantEnergyList[index + 1];
                    index++;
                }
            }
        }

    For some reason it's only detecting beats from 637 sec to around 641 sec, and I have no idea why. I know the beats are being inserted from multiple bands, since I am finding duplicates, and it seems that it's assigning a beat to each instant energy value in between those values. It's modeled after this: http://www.flipcode.com/misc/BeatDetectionAlgorithms.pdf

    So why won't the beats register properly?
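    One knob worth experimenting with: the linked flipcode paper derives the threshold from the variance of the energy history rather than using a fixed C, so quiet and noisy passages get different sensitivities. A sketch of that adjustment using the paper's suggested linear fit (the helper and its name are mine):

        // Sketch: adapt the beat threshold C to how "spiky" the recent
        // energy history is, per the flipcode paper's linear fit.
        static float AdaptiveC(List<double> history, double average)
        {
            double variance = 0;
            foreach (double e in history)
                variance += (e - average) * (e - average);
            variance /= history.Count;
            return (float)(-0.0025714 * variance + 1.5142857);
        }

    That said, a sudden wall of detections in one narrow stretch of the song usually points at the history buffer rather than the threshold, so it is also worth logging historyBuffer / numInBuffer over time to confirm the running average behaves.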

    Read the article

  • Hopping/Tumbling Windows Could Introduce Latency.

    This is a pre-article to one I am going to be writing on adjusting an event's time and duration to satisfy business process requirements, but it is one that I think is really useful for understanding the way that Hopping/Tumbling windows work within StreamInsight. A Tumbling window is just a special shortcut version of a Hopping window where the width of the window is equal to the size of the hop.

    Here is the simplest and most often used definition for a Hopping window (you can find them all here):

        public static CepWindowStream<CepWindow<TPayload>> HoppingWindow<TPayload>(
            this CepStream<TPayload> source,
            TimeSpan windowSize,
            TimeSpan hopSize,
            WindowInputPolicy inputPolicy,
            HoppingWindowOutputPolicy outputPolicy
        )

    And here is the definition for a Tumbling window:

        public static CepWindowStream<CepWindow<TPayload>> TumblingWindow<TPayload>(
            this CepStream<TPayload> source,
            TimeSpan windowSize,
            WindowInputPolicy inputPolicy,
            HoppingWindowOutputPolicy outputPolicy
        )

    These methods allow you to group events into windows of a temporal size. It is a really useful and simple feature in StreamInsight. One of the downsides, though, is that a window cannot be flushed until an event in a following window occurs. This means that you will potentially never see some events, or see them with a delay. Let me explain.

    Remember that a stream is a potentially unbounded sequence of events. Events in StreamInsight are given a StartTime, and it is this StartTime that is used to calculate into which temporal window an event falls. It is best practice to assign a timestamp from the source system and not one from the system clock on the processing server. StreamInsight cannot know when a window is over: it cannot tell whether you have received all events in the window or whether some events have been delayed, which means that StreamInsight cannot flush the stream for you.

    Imagine you have events with the following timestamps:

    12:10:10 PM
    12:10:20 PM
    12:10:35 PM
    12:10:45 PM
    11:59:59 PM

    And imagine that you have defined a 1 minute Tumbling window over this stream using the following syntax:

        var HoppingStream = from shift in inputStream.TumblingWindow(TimeSpan.FromMinutes(1), HoppingWindowOutputPolicy.ClipToWindowEnd)
                            select new WindowCountPayload { CountInWindow = (Int32)shift.Count() };

    The events between 12:10:10 PM and 12:10:45 PM will not be seen until the event at 11:59:59 PM arrives. This could be a real problem if you need to react to windows promptly. This can always be worked around by using a different design pattern, but a lot of the examples I see assume there is a constant, very frequent stream of events, resulting in windows always being flushed.

    Further examples of using windowing in StreamInsight can be found here.
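    To make the flushing behaviour concrete, here is a small self-contained sketch (plain C#, not the StreamInsight API) of which 1-minute tumbling window each StartTime lands in, assuming windows aligned to the start of the day:

        using System;

        static DateTime WindowStart(DateTime eventStart, TimeSpan windowSize)
        {
            // Truncate the timestamp down to its window boundary.
            long ticks = eventStart.Ticks - (eventStart.Ticks % windowSize.Ticks);
            return new DateTime(ticks);
        }

        // All four 12:10:xx PM events above map to the 12:10:00 PM window,
        // which stays open until an event beyond 12:11:00 PM -- here the
        // 11:59:59 PM event -- finally lets the engine move time forward.

    So the count for the 12:10 window is computed correctly; it is just delivered almost twelve hours late.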

    Read the article
